PhD Candidate, School of Mathematical Sciences, Peking University (PKU) | 理科5号楼257室 | Science Building No. 5, Room 257 | Email: tongyuli [at] pku [dot] edu [dot] cn
Most common statistical tests (the t-test, correlation, ANOVA, chi-square, etc.) are special cases of linear models or very close approximations of them. This beautiful simplicity means there is less to learn.
Unfortunately, introductory statistics courses are usually taught as if each test were an independent tool, needlessly complicating life for students and teachers alike. The complexity multiplies when students rote-learn the parametric assumptions underlying each test separately rather than deducing them from the linear model.
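As a minimal sketch of this equivalence (using only NumPy, with made-up data), the classic two-sample t-test gives exactly the same t-statistic as the slope coefficient in a linear regression of the outcome on a group dummy:

```python
import numpy as np

rng = np.random.default_rng(0)
g1 = rng.normal(1.0, 1.0, 40)   # group A (hypothetical data)
g2 = rng.normal(0.4, 1.0, 35)   # group B

# Classic two-sample t-test (equal variances, pooled)
n1, n2 = len(g1), len(g2)
sp2 = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
t_classic = (g1.mean() - g2.mean()) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

# The same test as a linear model: y = b0 + b1 * is_group_A
y = np.concatenate([g1, g2])
x = np.concatenate([np.ones(n1), np.zeros(n2)])  # group dummy
X = np.column_stack([np.ones_like(x), x])        # design matrix with intercept
beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = rss[0] / (len(y) - 2)                   # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)            # coefficient covariance
t_lm = beta[1] / np.sqrt(cov[1, 1])              # t-statistic for the slope

print(t_classic, t_lm)  # identical up to floating-point error
```

The same pattern extends to ANOVA (several dummies) and correlation (a continuous predictor), which is why the assumptions of those tests are just the assumptions of the linear model.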
Recent accumulating evidence that test error does not follow a U-shaped curve in neural networks suggests that it may not be necessary to trade bias for variance.
The bias-variance decomposition does not by itself imply a tradeoff: in neural networks, for example, both bias and variance decrease with network width in the over-parameterized regime.
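The decomposition itself is just an identity, and it can be checked numerically. The sketch below (a hypothetical setup: a degree-3 polynomial fit to noisy samples of a sine function, repeated over many training sets) estimates bias² and variance by Monte Carlo and confirms that they sum exactly to the expected squared error against the true function; whether they trade off is a separate empirical question:

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x)       # true function (assumed for illustration)
x_test = np.linspace(0.0, 1.0, 50)
degree, n_train, n_reps, noise = 3, 30, 500, 0.3

# Fit the same model class to many independently drawn training sets
preds = np.empty((n_reps, len(x_test)))
for r in range(n_reps):
    x = rng.uniform(0.0, 1.0, n_train)
    y = f(x) + rng.normal(0.0, noise, n_train)
    coef = np.polyfit(x, y, degree)        # least-squares polynomial fit
    preds[r] = np.polyval(coef, x_test)

bias2 = np.mean((preds.mean(axis=0) - f(x_test)) ** 2)  # squared bias
var = preds.var(axis=0).mean()                          # variance over training sets
mse = np.mean((preds - f(x_test)) ** 2)                 # expected squared error

print(bias2, var, mse)  # mse == bias2 + var, up to floating point
```

The identity E[(f̂ − f)²] = bias² + variance holds regardless of the model; the tradeoff only appears when decreasing one term forces the other up, which the cited evidence shows need not happen as network width grows.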