Why bigger is not always better: on finite and infinite neural networks
Infinite-width networks lack representation learning, or equivalently kernel learning: their kernel is fixed once the architecture is chosen, so it cannot adapt to the data. This loss of flexibility is why they can perform worse than their finite counterparts.
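To make the fixed-kernel point concrete, here is a minimal NumPy sketch (not from the original post) of the NNGP kernel of an infinite-width ReLU network, using the standard arc-cosine recursion; the `depth`, `sigma_w2`, and `sigma_b2` hyperparameters are illustrative assumptions. The kernel is a fixed function of the inputs alone, whereas a finite network's feature kernel changes as training reshapes its hidden layers.

```python
import numpy as np

def nngp_relu_kernel(X, depth=3, sigma_w2=2.0, sigma_b2=0.0):
    """NNGP kernel of an infinite-width ReLU MLP (arc-cosine recursion).

    The result is fully determined by the inputs and hyperparameters:
    labels never enter, so no representation (kernel) learning occurs.
    """
    # Layer-0 kernel: scaled inner products of the raw inputs.
    K = sigma_b2 + sigma_w2 * (X @ X.T) / X.shape[1]
    for _ in range(depth):
        d = np.sqrt(np.diag(K))                       # per-point std dev
        cos_theta = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(cos_theta)
        # Expected ReLU covariance under the previous layer's Gaussian.
        K = sigma_b2 + sigma_w2 / (2 * np.pi) * np.outer(d, d) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta)
        )
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))   # 5 points in 10 dimensions
print(np.round(nngp_relu_kernel(X), 3))
```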
Video
References
Aitchison, L. (2020). Why bigger is not always better: on finite and infinite neural networks. ICML 2020.