The paper provides approximation guarantees for neural networks trained with gradient flow, with the error measured in the continuous $L_2(S^{d-1})$-norm on the $d$-dimensional unit sphere and with targets that
are Sobolev smooth. The networks are fully connected, of constant depth and increasing width. We establish gradient flow convergence via a neural tangent kernel (NTK) argument for the non-convex optimization of the second-to-last layer. Unlike standard NTK analyses, the continuous error norm places the problem in an under-parametrized regime, which is made possible by the natural smoothness assumption required for approximation. The typical over-parametrization re-enters the results in the form of a loss in approximation rate relative to established
approximation methods for Sobolev smooth functions.
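As a brief sketch of the setting (the exact norm convention and smoothness scale are illustrative assumptions, not the paper's definitions), the error of the gradient-flow trained network $f_{\theta(t)}$ against a Sobolev smooth target $f$ is measured as
\[
  \| f - f_{\theta(t)} \|_{L_2(S^{d-1})}^2
  \;=\; \int_{S^{d-1}} \bigl( f(x) - f_{\theta(t)}(x) \bigr)^2 \, dx,
  \qquad f \in H^s(S^{d-1}), \; s > 0,
\]
where $\theta(t)$ denotes the network parameters at gradient-flow time $t$ and $H^s(S^{d-1})$ a Sobolev space on the sphere.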