The simplest case: estimating the mean of some data
Drag the slider or click on the curve to set your estimate. Can you find the maximum?
Tip: Watch the log-likelihood value — higher is better!
For the Gaussian mean, we can solve directly: the MLE is simply the sample mean $\bar{y}$. No iteration needed!
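To see why, here is a sketch of the standard derivation, assuming $n$ observations $y_1, \dots, y_n$ from a Gaussian with mean $\theta$ and known variance $\sigma^2$: setting the derivative of the log-likelihood to zero recovers the sample mean.

$$
\ell(\theta) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \theta)^2,
\qquad
\frac{d\ell}{d\theta} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(y_i - \theta) = 0
\;\;\Rightarrow\;\;
\hat{\theta} = \frac{1}{n}\sum_{i=1}^{n} y_i = \bar{y}.
$$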
The curve shows the log-likelihood function $\ell(\theta)$ for different values of the parameter $\theta$. Higher values mean the parameter explains the data better.
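As a rough sketch of what the curve represents (the data values, grid, and $\sigma$ below are made up for illustration and will differ from the demo), the log-likelihood at each candidate $\theta$ is just the sum of Gaussian log-densities of the data:

```python
import numpy as np

# Illustrative data; the actual values in the demo will differ.
y = np.array([1.2, 2.8, 2.1, 3.5, 1.9])
sigma = 1.0  # assume a known, fixed standard deviation

def log_likelihood(theta, y, sigma):
    """Gaussian log-likelihood of the data for a candidate mean theta."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (y - theta)**2 / (2 * sigma**2))

# Evaluate the curve on a grid of candidate means, as the plot does.
thetas = np.linspace(0, 5, 201)
curve = np.array([log_likelihood(t, y, sigma) for t in thetas])

print("grid maximiser:", thetas[np.argmax(curve)])
print("sample mean:   ", y.mean())
```

The grid maximiser lands on the sample mean (here 2.3), which is exactly the closed-form answer above.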
The red dot shows the current estimate. Watch how different algorithms navigate to the peak:
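The interactive panel animates the specific algorithms; as a minimal stand-in, here is a plain gradient-ascent sketch on the same log-likelihood (the starting point and step size are arbitrary choices for illustration):

```python
import numpy as np

y = np.array([1.2, 2.8, 2.1, 3.5, 1.9])  # same illustrative data as above
sigma = 1.0

def grad_log_likelihood(theta):
    """Derivative of the Gaussian log-likelihood with respect to the mean."""
    return np.sum(y - theta) / sigma**2

theta = 0.0          # arbitrary starting estimate (the red dot)
step_size = 0.05     # fixed step size; real optimisers tune this adaptively
for _ in range(100):
    theta += step_size * grad_log_likelihood(theta)

print(theta)         # approaches the sample mean, y.mean() = 2.3
```

Each update nudges the estimate uphill along the curve, which is the behaviour the animation is showing.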
In 1D, finding the maximum is trivial — you can see it! But real models have many parameters, creating surfaces we can't visualise. Understanding how algorithms work in 1D builds intuition for what happens in higher dimensions.