Perspectives and Challenges for Recurrent Neural Network Training

Title: Perspectives and Challenges for Recurrent Neural Network Training
Publication Type: Journal Article
Year of Publication: 2010
Authors: Marco Gori, Barbara Hammer, Pascal Hitzler, Guenther Palm
Journal: Logic Journal of the IGPL
Pagination: 617-619
Keywords: neural network training challenges
Abstract

Recurrent neural networks (RNNs) offer flexible machine learning tools that share the learning abilities of feedforward networks and extend their expressive abilities through dynamical equations. Hence, they can directly process complex spatiotemporal data and model complex dynamical systems. Since temporal and spatial data arise in many domains, such as environmental time series processing, financial market modelling, speech and language processing, robotics, bioinformatics and medical informatics, RNNs constitute promising candidates for a variety of applications. Further, their rich dynamic repertoire as time-dependent systems makes them suitable candidates for modelling brain phenomena or mimicking large-scale distributed computation and argumentation. Thus, RNNs carry the promise of efficient, biologically plausible signal processing models well suited to a wide range of industrial applications on the one hand, and of an explanation of cognitive phenomena in the human brain on the other.
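The "dynamical equations" referred to in the abstract are, in the simplest case, the standard recurrence h_t = tanh(W h_{t-1} + U x_t + b) of an Elman-style RNN. The following minimal sketch (not taken from the editorial) illustrates that recurrence; the dimensions, random weights and toy input sequence are arbitrary assumptions for illustration only.

import numpy as np

# Minimal illustration of the recurrence h_t = tanh(W h_{t-1} + U x_t + b)
# underlying a simple (Elman-style) RNN; all sizes are arbitrary choices.
rng = np.random.default_rng(0)
n_in, n_hidden, seq_len = 3, 5, 10

W = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights
U = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input weights
b = np.zeros(n_hidden)                                # bias

x_seq = rng.normal(size=(seq_len, n_in))  # a toy spatiotemporal input sequence
h = np.zeros(n_hidden)                    # initial hidden state

states = []
for x_t in x_seq:
    # The hidden state evolves as a discrete-time dynamical system
    # driven by the current input.
    h = np.tanh(W @ h + U @ x_t + b)
    states.append(h)

print(np.stack(states).shape)  # (10, 5): one hidden state per time step

Training such a network (e.g. by backpropagation through time) is precisely the topic whose challenges and perspectives the editorial surveys.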

Full Text

Marco Gori, Barbara Hammer, Pascal Hitzler and Guenther Palm. 'Perspectives and challenges for recurrent neural network training.' Logic Journal of the IGPL, 18(5):617-619, 2010.
Research Center: Knowledge Engineering Lab

Publisher: Oxford University Press