Hamilton-Hopkins, Graham M. (2002).
DOI: https://doi.org/10.21954/ou.ro.0000fd26
Abstract
The literature abounds with papers on the subject of function approximation using neural networks. Many of these attempt to improve some aspect of the approximation by imposing a constraint or introducing a new algorithm.
Our underlying strategy has been to draw some of these strands together, with the overall aim of establishing the parameters of networks that will be more robust function approximators.
Rather than exploring a few topics in depth, we have kept the scope broad, searching for more general properties and behaviour of different networks, with a view to designing more focused studies in the future.
We studied the ability of neural networks with different architectures, and with either sigmoid or radial basis node activation functions, to approximate a variety of target functions. Some of the properties of these networks were investigated by studying the behaviour of the networks’ variables during training.
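To illustrate the two node activation families compared above, here is a minimal sketch (not taken from the thesis; the function names and parameter choices are assumptions for illustration only). The key contrast is that a sigmoid node responds monotonically and globally to its input, while a radial basis node responds locally, peaking at its centre and decaying in both directions.

```python
import math

def sigmoid(x):
    # Sigmoid node activation: monotone, saturating towards 0 and 1,
    # with a global (non-local) response to the input.
    return 1.0 / (1.0 + math.exp(-x))

def rbf(x, centre=0.0, width=1.0):
    # Gaussian radial basis activation: localised response, maximal at
    # `centre` and decaying symmetrically with distance from it.
    return math.exp(-((x - centre) ** 2) / (2.0 * width ** 2))

# Far from the origin the sigmoid saturates, whereas the RBF decays
# towards zero away from its centre on both sides.
```

This local-versus-global difference in node response is one reason the two architectures can behave quite differently as function approximators during training.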