Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., & Zheng, X. (2015). TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. Software available from tensorflow.org. https://www.tensorflow.org/
Balaguer-Ballester, E., Lapish, C. C., Seamans, J. K., & Durstewitz, D. (2011). Attracting dynamics of frontal cortex ensembles during memory-guided decision-making. PLOS Computational Biology, 7(5), 1–19. https://doi.org/10.1371/journal.pcbi.1002057
Barak, O. (2017). Recurrent neural networks as versatile tools of neuroscience research. Current Opinion in Neurobiology, 46, 1–6. https://doi.org/10.1016/j.conb.2017.06.003
Bi, Z., & Zhou, C. (2020). Understanding the computation of time using neural network models. Proceedings of the National Academy of Sciences, 117(19), 10530–10540. https://doi.org/10.1073/pnas.1921609117
Britten, K., Shadlen, M., Newsome, W., & Movshon, J. (1992). The analysis of visual motion: a comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12(12), 4745–4765. https://doi.org/10.1523/JNEUROSCI.12-12-04745.1992
Carnevale, F., de Lafuente, V., Romo, R., Barak, O., & Parga, N. (2015). Dynamic control of response criterion in premotor cortex during perceptual detection under temporal uncertainty. Neuron, 86. https://doi.org/10.1016/j.neuron.2015.04.014
Ceni, A., Ashwin, P., & Livi, L. (2020). Interpreting recurrent neural networks behaviour via excitable network attractors. Cognitive Computation, 12(2), 330–356. https://doi.org/10.1007/s12559-019-09634-2
Cho, K., van Merrienboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. https://doi.org/10.3115/v1/D14-1179
Chollet, F., et al. (2015). Keras. https://keras.io
Chow, T. W. S., & Li, X. -D. (2000). Modeling of continuous time dynamical systems with input by recurrent neural networks. IEEE Transactions on Circuits and Systems–I: Fundamental Theory and Applications, 47(4). https://doi.org/10.1109/81.841860
Cunningham, J. P., & Yu, B. M. (2014). Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17. https://doi.org/10.1038/nn.3776
del Molino, L. C. G., Pakdaman, K., Touboul, J., & Wainrib, G. (2013). Synchronization in random balanced networks. Physical Review E, 88, 042824. https://doi.org/10.1103/PhysRevE.88.042824
Deng, J. (2013). Dynamic neural networks with hybrid structures for nonlinear system identification. Engineering Applications of Artificial Intelligence, 26(1), 281–292. https://doi.org/10.1016/j.engappai.2012.05.003
DePasquale, B., Cueva, C. J., Rajan, K., Escola, G. S., & Abbott, L. F. (2018). full-FORCE: A target-based method for training recurrent networks. PLOS ONE, 13(2), 1–18. https://doi.org/10.1371/journal.pone.0191527
Dinh, H. T., Kamalapurkar, R., Bhasin, S., & Dixon, W. E. (2014). Dynamic neural network-based robust observers for uncertain nonlinear systems. Neural Networks, 60, 44–52. https://doi.org/10.1016/j.neunet.2014.07.009
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179–211. https://doi.org/10.1016/0364-0213(90)90002-E
Funahashi, K. (1989). On the approximate realization of continuous mappings by neural networks. Neural Networks, 2(3), 183–192. https://doi.org/10.1016/0893-6080(89)90003-8
Funahashi, K., & Nakamura, Y. (1993). Approximation of dynamical systems by continuous time recurrent neural networks. Neural Networks, 6(6), 801–806. https://doi.org/10.1016/S0893-6080(05)80125-X
Gal, Y., & Ghahramani, Z. (2015). Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference. arXiv:1506.02158. https://arxiv.org/abs/1506.02158
Gallacher, J. C., & Fiore, J. M. (2000). Continuous time recurrent neural networks: a paradigm for evolvable analog controller circuits. In: Proceedings of the IEEE 2000 National Aerospace and Electronics Conference. NAECON 2000. Engineering Tomorrow (Cat. No.00CH37093). https://doi.org/10.1109/NAECON.2000.894924
Gallicchio, C., Micheli, A., & Pedrelli, L. (2017). Deep reservoir computing: A critical experimental analysis. Neurocomputing, 268, 87–99. https://doi.org/10.1016/j.neucom.2016.12.089
Ganguli, S., Huh, D., & Sompolinsky, H. (2008). Memory traces in dynamical systems. Proceedings of the National Academy of Sciences, 105(48), 18970–18975. https://doi.org/10.1073/pnas.0804451105
Gerstner, W., Sprekeler, H., & Deco, G. (2012). Theory and simulation in neuroscience. Science, 338(6103), 60–65. https://doi.org/10.1126/science.1227356
Girko, V. (1985). Circular law. Theory of Probability & Its Applications, 29(4), 694–706. https://doi.org/10.1137/1129095
Gisiger, T., & Boukadoum, M. (2011). Mechanisms gating the flow of information in the cortex: What they might look like and what their uses may be. Frontiers in Computational Neuroscience, 5, 1. https://doi.org/10.3389/fncom.2011.00001
Goel, A., & Buonomano, D. V. (2014). Timing as an intrinsic property of neural networks: evidence from in vivo and in vitro experiments. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 369(1637), 20120460. https://doi.org/10.1098/rstb.2012.0460
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwinska, A., Colmenarejo, S.G., Grefenstette, E., Ramalho, T., Agapiou, J., Badia, A.P., Hermann, K.M., Zwols, Y., Ostrovski, G., Cain, A., King, H., Summerfield, C., Blunsom, P., Kavukcuoglu, K., & Hassabis, D. (2016). Hybrid computing using a neural network with dynamic external memory. Nature, 538. https://doi.org/10.1038/nature20101
Gulli, A., & Pal, S. (2017). Deep Learning with Keras. Mumbai: Packt Publishing.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
Hoellinger, T., Petieau, M., Duvinage, M., Castermans, T., Seetharaman, K., Cebolla, A.-M., Bengoetxea, A., Ivanenko, Y., Dan, B., & Cheron, G. (2013). Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator. Frontiers in Computational Neuroscience, 7, 70. https://doi.org/10.3389/fncom.2013.00070
Holla, P., & Chakravarthy, S. (2016). Decision making with long delays using networks of flip-flop neurons. In: 2016 International Joint Conference on Neural Networks (IJCNN), pp. 2767–2773. https://doi.org/10.1109/IJCNN.2016.7727548
Hopfield, J. J. (1984). Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences, 81(10), 3088–3092. https://doi.org/10.1073/pnas.81.10.3088
Jarne, C. (2021). Multitasking in RNN: an analysis exploring the combination of simple tasks. Journal of Physics: Complexity, 2(1), 015009. https://doi.org/10.1088/2632-072x/abdee3
Jazayeri, M., & Shadlen, M. N. (2010). Temporal context calibrates interval timing. Nature Neuroscience, 13(8), 1020–1026. https://doi.org/10.1038/nn.2590
Jin, L., Gupta, M. M., & Nikiforuk, P. N. (1995). Universal approximation using dynamic recurrent neural networks: discrete-time version. In: Proceedings of ICNN’95 - International Conference on Neural Networks, 1, 403–408. https://doi.org/10.1109/ICNN.1995.488134
Kar, K., Kubilius, J., Schmidt, K., Issa, E. B., & DiCarlo, J. J. (2019). Evidence that recurrent circuits are critical to the ventral stream’s execution of core object recognition behavior. Nature Neuroscience, 22(6), 974–983. https://doi.org/10.1038/s41593-019-0392-5
Kimura, M., & Nakano, R. (1995). Learning Dynamical Systems from Trajectories by Continuous Time Recurrent Neural Networks. In: Proceedings of ICNN’95 - International Conference on Neural Networks. https://doi.org/10.1109/ICNN.1995.487258
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. CoRR, abs/1412.6980. http://arxiv.org/abs/1412.6980
Kuroki, S., & Isomura, T. (2018). Task-related synaptic changes localized to small neuronal population in recurrent neural network cortical models. Frontiers in Computational Neuroscience, 12, 83. https://doi.org/10.3389/fncom.2018.00083
Laje, R., & Buonomano, D. V. (2013). Robust timing and motor patterns by taming chaos in recurrent neural networks. Nature Neuroscience, 16, 925–933. https://doi.org/10.1038/nn.3405
Landau, I. D., & Sompolinsky, H. (2018). Coherent chaos in a recurrent neural network with structured connectivity. PLOS Computational Biology, 14(12), 1–27. https://doi.org/10.1371/journal.pcbi.1006309
Le, Q. V., Jaitly, N., & Hinton, G. E. (2015). A Simple Way to Initialize Recurrent Networks of Rectified Linear Units. arXiv:1504.00941. https://arxiv.org/abs/1504.00941
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521. https://doi.org/10.1038/nature14539
Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11), 2531–2560. https://doi.org/10.1162/089976602760407955
Maheswaranathan, N., Williams, A. H., Golub, M. D., Ganguli, S., & Sussillo, D. (2019). Universality and individuality in neural dynamics across large populations of recurrent networks. Advances in Neural Information Processing Systems, 32.
Mante, V., Sussillo, D., Shenoy, K. V., & Newsome, W. T. (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature, 503, 78–84. https://doi.org/10.1038/nature12742
Michaels, J. A., Dann, B., & Scherberger, H. (2016). Neural population dynamics during reaching are better explained by a dynamical system than representational tuning. PLOS Computational Biology, 12(11), 1–22. https://doi.org/10.1371/journal.pcbi.1005175
Mohajerin, N., & Waslander, S. L. (2017). State initialization for recurrent neural network modeling of time-series data. In: 2017 International Joint Conference on Neural Networks (IJCNN), pp. 2330–2337. https://doi.org/10.1109/IJCNN.2017.7966138
Molano-Mazón, M., Barbosa, J., Pastor-Ciurana, J., Fradera, M., Zhang, R.-Y., Forest, J., del Pozo Lérida, J., Ji-An, L., Cueva, C. J., de la Rocha, J., et al. (2022). NeuroGym: An open resource for developing and sharing neuroscience tasks. PsyArXiv. https://doi.org/10.31234/osf.io/aqc9n
Nakamura, Y., & Nakagawa, M. (2009). Approximation Capability of Continuous Time Recurrent Neural Networks for Non-autonomous Dynamical Systems. In: Alippi C., Polycarpou M., Panayiotou C., Ellinas G. (eds) Artificial Neural Networks - ICANN 2009. Lecture Notes in Computer Science, Vol 5769. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04277-5_60
Orhan, A. E., & Ma, W. J. (2019). A diverse range of factors affect the nature of neural representations underlying short-term memory. Nature Neuroscience, 22(2), 275–283. https://doi.org/10.1038/s41593-018-0314-y
Pascanu, R., Mikolov, T., & Bengio, Y. (2012). Understanding the exploding gradient problem. CoRR, abs/1211.5063. http://arxiv.org/abs/1211.5063
Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., & Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825–2830.
Pehlevan, C., Ali, F., & Ölveczky, B. P. (2018). Flexibility in motor timing constrains the topology and dynamics of pattern generator circuits. Nature Communications, 9. https://doi.org/10.1038/s41467-018-03261-5
Remington, E. D., Egger, S. W., Narain, D., Wang, J., & Jazayeri, M. (2018). A dynamical systems perspective on flexible motor timing. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2018.07.010
Remington, E. D., Narain, D., Hosseini, E. A., & Jazayeri, M. (2018). Flexible sensorimotor computations through rapid reconfiguration of cortical dynamics. Neuron, 98(5), 1005–1019. https://doi.org/10.1016/j.neuron.2018.05.020
Hahnloser, R. H. R., Sarpeshkar, R., Mahowald, M. A., Douglas, R. J., & Seung, H. S. (2000). Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405, 947–951. https://doi.org/10.1038/35016072
Rivkind, A., & Barak, O. (2017). Local dynamics in trained recurrent neural networks. Physical Review Letters, 118, 258101. https://doi.org/10.1103/PhysRevLett.118.258101
Rojas, R. (1996). Neural Networks: A Systematic Introduction. Berlin: Springer. https://page.mi.fu-berlin.de/rojas/neural/
Russo, A. A., Bittner, S. R., Perkins, S. M., Seely, J. S., London, B. M., Lara, A. H., Miri, A., Marshall, N. J., Kohn, A., Jessell, T. M., Abbott, L. F., Cunningham, J. P., & Churchland, M. M. (2018). Motor cortex embeds muscle-like commands in an untangled population response. Neuron, 97.