Rüger, Stefan (1998). DOI: https://doi.org/10.1007/PL00013830
Abstract
A stability criterion for learning is given. For the case of learning-rate adaptation in backpropagation, a class of asymptotically stable algorithms is presented and studied, including a convergence proof. Simulations demonstrate its relevance and limitations.
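The record itself contains no code, so the following is a minimal, hypothetical sketch of a generic multiplicative learning-rate adaptation rule for gradient descent, in the spirit of the topic the abstract describes: grow the step size while the loss keeps decreasing, shrink it and reject the step otherwise. It is not the algorithm or the stability criterion from the paper; the function name, the grow/shrink factors, and the quadratic test problem are all illustrative assumptions.

```python
import numpy as np

def adaptive_gradient_descent(loss, grad, w0, eta0=0.1,
                              grow=1.1, shrink=0.5,
                              max_iter=1000, tol=1e-8):
    """Gradient descent with a simple multiplicative learning-rate
    adaptation (a generic 'bold driver' style heuristic, not the
    class of algorithms analysed in the paper)."""
    w, eta = np.asarray(w0, dtype=float), eta0
    prev = loss(w)
    for _ in range(max_iter):
        step = w - eta * grad(w)
        cur = loss(step)
        if cur <= prev:            # successful step: accept it and grow eta
            w, prev, eta = step, cur, eta * grow
        else:                      # overshoot: reject the step and shrink eta
            eta *= shrink
        if np.linalg.norm(grad(w)) < tol:
            break
    return w, eta

# Usage on a convex quadratic, where such schemes are easiest to study:
A = np.array([[3.0, 0.5], [0.5, 1.0]])
loss = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
w_opt, eta_final = adaptive_gradient_descent(loss, grad, w0=[1.0, -2.0])
print(w_opt, eta_final)
```

In this sketch a step is accepted only if it does not increase the loss, which is one crude way to keep the iteration stable; the paper's contribution is a precise stability criterion and a convergence proof, which this toy heuristic does not provide.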
About
- Item ORO ID
- 11955
- Item Type
- Journal Item
- ISSN
- 1432-0541
- Keywords
- convex optimization; neural learning; learning-rate adaptation; line search; stability criterion
- Academic Unit or School
- Faculty of Science, Technology, Engineering and Mathematics (STEM) > Knowledge Media Institute (KMi); Faculty of Science, Technology, Engineering and Mathematics (STEM)
- Copyright Holders
- © 1998 Springer-Verlag