1D Optimisation: Estimating a Single Parameter

The simplest case: estimating the mean of some data

Finding the Maximum Likelihood Estimate

[Interactive demo ("Find the Peak!"): two panels, "Data + Current Fit" and "Log-Likelihood Surface". Drag the slider or click on the curve to set your estimate. Can you find the maximum? Readouts track the current estimate, its log-likelihood, the iteration count, and the true MLE. Tip: Watch the log-likelihood value — higher is better!]

Analytic Solution

For the Gaussian mean, we can solve directly: the MLE is simply the sample mean $\bar{y}$. No iteration needed!
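
To see why, write the Gaussian log-likelihood with known variance $\sigma^2$ and set its derivative to zero:

$$\ell(\theta) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \theta)^2$$

$$\ell'(\theta) = \frac{1}{\sigma^2}\sum_{i=1}^{n}(y_i - \theta) = 0 \quad\Longrightarrow\quad \hat{\theta} = \frac{1}{n}\sum_{i=1}^{n} y_i = \bar{y}$$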

What You're Seeing

The curve shows the log-likelihood function $\ell(\theta)$ for different values of the parameter $\theta$. Higher values mean the parameter explains the data better.

$$\ell(\theta) = \sum_{i=1}^{n} \log f(y_i \mid \theta)$$
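
As a concrete illustration, here is a minimal Python sketch that evaluates this sum for a Gaussian model over a grid of candidate means and compares the grid maximiser with the sample mean. The data values and $\sigma$ are invented for the example:

```python
import numpy as np

# Hypothetical sample; the variance is treated as known in this 1D example.
y = np.array([148.2, 151.7, 149.9, 152.4, 150.1, 147.8])
sigma = 2.0

def log_likelihood(theta, y, sigma):
    """ell(theta): sum over i of log N(y_i | theta, sigma^2)."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (y - theta)**2 / (2 * sigma**2))

# Brute-force search over a grid, mimicking dragging the slider.
grid = np.linspace(140, 160, 2001)
ll = np.array([log_likelihood(t, y, sigma) for t in grid])
print("Grid maximiser:", grid[np.argmax(ll)])
print("Sample mean:   ", y.mean())  # the analytic MLE
```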

The red dot shows the current estimate. Watch how different algorithms navigate to the peak (the two iterative updates are sketched after this list):

  • Analytic: Jumps directly to the solution (when a formula exists)
  • Newton-Raphson: Uses curvature information for fast convergence
  • Gradient Descent: Follows the slope uphill, step by step
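
Below is a minimal Python sketch of the two iterative updates for the Gaussian-mean problem above. The data, starting point, and step size are illustrative choices, not values from the demo:

```python
import numpy as np

# Hypothetical sample, same as the grid-search sketch above; sigma known.
y = np.array([148.2, 151.7, 149.9, 152.4, 150.1, 147.8])
sigma = 2.0
n = len(y)

def grad(theta):
    """First derivative of the Gaussian log-likelihood at theta."""
    return np.sum(y - theta) / sigma**2

def hess(theta):
    """Second derivative; a constant -n/sigma^2 on this quadratic surface."""
    return -n / sigma**2

# Newton-Raphson: theta <- theta - grad/hess. The log-likelihood is exactly
# quadratic in theta here, so a single step lands on the MLE.
theta = 140.0
theta = theta - grad(theta) / hess(theta)
print("Newton-Raphson after one step:", theta)

# Gradient ascent: follow the slope uphill with a fixed step size.
theta, step = 140.0, 0.5
for _ in range(50):
    theta += step * grad(theta)
print("Gradient ascent after 50 steps:", theta)
print("Sample mean (analytic MLE):   ", y.mean())
```

Note the direction of travel: we are maximising, so the code performs gradient *ascent*; the name "gradient descent" matches the equivalent convention of minimising the negative log-likelihood.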

Why This Matters

In 1D, finding the maximum is trivial — you can see it! But real models have many parameters, creating surfaces we can't visualise. Understanding how algorithms work in 1D builds intuition for what happens in higher dimensions.