Most of the common statistical models (t-test, correlation, ANOVA, chi-square, etc.) are special cases of linear models or a very close approximation. This beautiful simplicity means that there is less to learn.
Unfortunately, introductory statistics courses are usually taught as if each test were an independent tool, needlessly complicating life for students and teachers alike. This complexity multiplies when students rote-learn the parametric assumptions underlying each test separately rather than deducing them from the linear model.
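As a minimal illustration of this equivalence (a sketch in Python, assuming NumPy, SciPy, and statsmodels are available; the specific data and function choices are illustrative, not from the original text), the independent-samples t-test yields the same t statistic and p-value, up to sign, as a linear model with a single dummy-coded group predictor:

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Two illustrative samples (hypothetical data).
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=1.0, size=50)
group_b = rng.normal(loc=0.5, scale=1.0, size=50)

# Classic two-sample t-test (equal variances, matching the linear model's assumption).
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Equivalent linear model: outcome regressed on an intercept plus a 0/1 group indicator.
y = np.concatenate([group_a, group_b])
group = np.concatenate([np.zeros(50), np.ones(50)])
fit = sm.OLS(y, sm.add_constant(group)).fit()

# The slope's t statistic and p-value match the t-test (sign may flip
# depending on which group is coded 1).
print(t_stat, p_value)
print(fit.tvalues[1], fit.pvalues[1])
```

The same pattern carries over to the other tests: a correlation is a linear model with one continuous predictor, and an ANOVA is a linear model with several dummy-coded predictors.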
Recent accumulating evidence that test error in neural networks does not follow a U-shaped curve suggests that it may not be necessary to trade bias for variance.
The bias-variance decomposition does not by itself imply a tradeoff; in neural networks, for example, both bias and variance decrease with network width in the over-parameterized regime.
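For reference, the standard squared-error decomposition is shown below (a sketch assuming the usual setup: $y = f(x) + \varepsilon$ with zero-mean noise of variance $\sigma^2$, and $\hat f_D$ the model fit on training set $D$). It is an identity that separates error into bias and variance terms, but nothing in it forces one term to rise when the other falls:

```latex
\mathbb{E}_{D,\varepsilon}\!\left[\big(y - \hat f_D(x)\big)^2\right]
  = \underbrace{\big(\mathbb{E}_D[\hat f_D(x)] - f(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[\big(\hat f_D(x) - \mathbb{E}_D[\hat f_D(x)]\big)^2\right]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```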
Treating e-cigarettes as equivalent to cigarettes could lead to misguided research and policy initiatives, argue experts in a new commentary, which distills articles and published studies comparing e-cigarettes with cigarettes and underscores the importance of investigating e-cigarettes as a unique nicotine delivery system.