Error bounds for approximation with neural networks

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper we prove convergence rates for the problem of approximating functions f by neural networks and similar constructions. We show that the rates improve as the activation functions become smoother, provided that f satisfies an integral representation. We give error bounds not only in Hilbert spaces but also in general Sobolev spaces. Finally, we apply our results to a class of perceptrons and present a sufficient smoothness condition on f that guarantees the integral representation.
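
As an illustrative sketch only (the abstract does not reproduce the paper's formulas, so the symbols below are assumptions): integral representations of the kind referenced here typically express the target f as an integral over translates and dilations of the activation, and the network approximant as a finite sum of such terms, e.g.

% Illustrative assumption, not the paper's stated representation:
% f is written as an integral over the activation sigma with a measure mu,
% and f_n is the n-term neural network approximant.
\[
  f(x) \;=\; \int \sigma\bigl(\langle a, x\rangle + b\bigr)\, d\mu(a,b),
  \qquad
  f_n(x) \;=\; \sum_{k=1}^{n} c_k\, \sigma\bigl(\langle a_k, x\rangle + b_k\bigr),
\]

with the error \(\lVert f - f_n \rVert\) then bounded in a Hilbert or Sobolev norm at a rate in n that improves with the smoothness of \(\sigma\).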
Original language: English
Pages (from-to): 235-250
Number of pages: 16
Journal: Journal of Approximation Theory
Volume: 112
Issue number: 2
DOIs
Publication status: Published - 2001

Fields of science

  • 101 Mathematics
  • 101020 Technical mathematics
