Abstract
In this paper we prove convergence rates for the problem of approximating a function f by neural networks and similar constructions. We show that, provided f admits an integral representation, the smoother the activation function, the better the rate of convergence. We give error bounds not only in Hilbert spaces but in general Sobolev spaces. Finally, we apply our results to a class of perceptrons and present a sufficient smoothness condition on f that guarantees the integral representation.
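The flavor of such results can be sketched numerically: if f has an integral representation f(x) = ∫ σ(wx) dμ(w), then averaging n samples of the integrand (an n-term "network") approximates f at a Monte Carlo-type rate. The example below is an illustration under assumed choices, not the paper's construction: it takes σ = cos and μ the standard Gaussian, for which the representation has the closed form f(x) = E[cos(wx)] = exp(-x²/2), and measures the sup-norm error on a grid as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target with a known integral representation over a
# Gaussian measure: f(x) = E_{w ~ N(0,1)}[cos(w x)] = exp(-x**2 / 2).
def f(x):
    return np.exp(-x**2 / 2)

x = np.linspace(-2.0, 2.0, 201)

errors = {}
for n in (10, 100, 1000, 10000):
    w = rng.standard_normal(n)                 # sample the representing measure
    f_n = np.cos(np.outer(x, w)).mean(axis=1)  # n-term average: a "network" with cos activation
    errors[n] = float(np.max(np.abs(f_n - f(x))))  # sup-norm error on the grid
```

The observed error shrinks roughly like n^{-1/2}; sharper rates, as the paper shows, require exploiting the smoothness of the activation function rather than plain sampling.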
| Original language | English |
|---|---|
| Pages (from-to) | 235-250 |
| Number of pages | 16 |
| Journal | Journal of Approximation Theory |
| Volume | 112 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2001 |
Fields of science
- 101 Mathematics
- 101020 Technical mathematics