Probabilistic Robotics · SIR Particle Filter

How do robots know
where they are?

GPS fails indoors. Odometry drifts. A robot navigating a warehouse or hospital must answer a fundamental question every millisecond: where am I? The particle filter is the probabilistic algorithm that makes this possible, and this page is your interactive guide to understanding exactly how it works.

01 · The Problem

Dead reckoning breaks down

The naive approach to localization is dead reckoning: track every wheel rotation and gyroscope reading and integrate them into a position. It works for a few seconds. Over minutes, small errors accumulate: a 1% wheel slip becomes meters of position error, and the robot is lost.

The robot can also observe its environment. It knows where certain landmarks are on a map (walls, beacons, distinctive features) and can measure its distance or bearing to them. But knowing "I'm 3.2 m from beacon A and 5.1 m from beacon B" doesn't pinpoint a single location; it constrains it to a ring, or an arc, with measurement noise making even that fuzzy.

The core insight: Rather than trying to track a single "best guess" position, we track a distribution of possible positions. Uncertainty is a first-class citizen, not an afterthought to be minimized away.

02 · Representation

Particles as hypotheses

A particle is a single hypothesis about the robot's pose: a triple (x, y, θ), a 2D position and a heading. We maintain a set of N particles. Each particle is a bet: "maybe the robot is here, facing this direction."

At startup, if we have no prior information, particles are drawn uniformly at random across the map. The particle cloud represents maximum uncertainty: we're saying every location is equally plausible.
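This uniform initialization can be sketched in a few lines (a numpy-based illustration, not the page's actual implementation; the function name and map bounds are placeholders):

```python
import numpy as np

def init_particles(n, x_range, y_range, rng):
    """Draw n pose hypotheses (x, y, theta) uniformly over the map."""
    x = rng.uniform(*x_range, size=n)
    y = rng.uniform(*y_range, size=n)
    theta = rng.uniform(-np.pi, np.pi, size=n)
    return np.column_stack([x, y, theta])  # shape (n, 3): one row per hypothesis

particles = init_particles(300, (0.0, 10.0), (0.0, 10.0), np.random.default_rng(0))
```

Every row is one bet about the pose; with no prior, no bet is favored over any other.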

As the robot moves and senses, we update the particles. The cloud morphs: hypotheses consistent with observations survive and multiply; inconsistent ones die out. After several iterations, the cloud collapses to a tight cluster around the true pose.

// 300 particles drawn uniformly. Each dot is one hypothesis (x, y, θ). The robot appears; the filter has no idea where it is yet.
03 · Predict Step

Motion model: uncertainty grows

When the robot executes a motion command (move forward v meters, turn ω radians), every particle applies the same nominal motion. But we add Gaussian noise sampled from the motion model:

x' = x + v·cos(θ)·Δt + ε_x,   ε_x ~ N(0, σ²_motion)
y' = y + v·sin(θ)·Δt + ε_y,   ε_y ~ N(0, σ²_motion)
θ' = θ + ω·Δt + ε_θ,          ε_θ ~ N(0, σ²_turn)

This noise models wheel slip, uneven floors, and motor imprecision. A larger σ_motion encodes less reliable actuation, so the cloud spreads faster with each motion step. After the predict step, the cloud is wider: uncertainty has grown.
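The predict step applies those three equations to every particle at once. A minimal numpy sketch (illustrative names; zero-σ noise is allowed, which makes the update purely deterministic):

```python
import numpy as np

def predict(particles, v, omega, dt, sigma_motion, sigma_turn, rng):
    """Apply the nominal motion (v, omega) to every particle, plus Gaussian noise."""
    x, y, theta = particles.T
    x = x + v * np.cos(theta) * dt + rng.normal(0.0, sigma_motion, x.shape)
    y = y + v * np.sin(theta) * dt + rng.normal(0.0, sigma_motion, y.shape)
    theta = theta + omega * dt + rng.normal(0.0, sigma_turn, theta.shape)
    return np.column_stack([x, y, theta])
```

With σ_motion = σ_turn = 0 every particle moves identically; with realistic σ values the cloud fans out, which is exactly the "uncertainty grows" behavior described above.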

// A cluster of 120 particles moves right. Noise spreads the cloud with each step; higher σ_motion means faster spreading. The robot's true position (white) stays sharp; the belief cloud grows.
04 · Update Step

Observation model: sensing collapses uncertainty

After predicting, the robot takes a sensor reading. It measures its distances to each visible landmark: z = [d₁, d₂, ..., dₖ]. For each particle i, we ask: if the robot were at this particle's pose, how likely would we observe those measurements?

Assuming independent Gaussian measurement noise with standard deviation σ_sensor, the weight of particle i becomes:

w_i ∝ ∏_j exp( −½ · (z_j − d(x_i, lm_j))² / σ²_sensor )

Particles near the true position will have small discrepancies and thus high weights. Particles far away will have large discrepancies and thus exponentially low weights. After normalization, the weights form a new probability distribution over particles.
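In code, the product of exponentials is best computed in log space to avoid underflow when discrepancies are large. A vectorized sketch (illustrative names; assumes range-only measurements to known landmark positions):

```python
import numpy as np

def update_weights(particles, z, landmarks, sigma_sensor):
    """Weight each particle by the likelihood of the range measurements z."""
    # Distances from each particle to each landmark, shape (N, K).
    diff = particles[:, None, :2] - landmarks[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    # Log-likelihood of all K ranges; subtract the max before exponentiating
    # so the largest weight is exp(0) = 1 and nothing underflows to zero.
    log_w = -0.5 * np.sum((z - d) ** 2, axis=1) / sigma_sensor**2
    w = np.exp(log_w - log_w.max())
    return w / w.sum()  # normalized: a probability distribution over particles
```

A particle at the true pose reproduces the measurements almost exactly and ends up with the highest weight; a particle a few σ_sensor away is penalized exponentially.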

// Particles colored by weight after an observation update. Dark gray means low weight, blue means high. The true robot (white) is surrounded by the highest-weight particles.
05 · Resample Step

SIR resampling: survival of the fittest

After the update step, most particles have near-zero weight. They're bad hypotheses. If we keep them, we waste computation on hopeless guesses. Sequential Importance Resampling (SIR) fixes this: draw a new set of N particles proportional to the current weights.

1 Compute the cumulative weight distribution over all particles.
2 Draw N evenly-spaced samples with a single random offset (systematic resampling). This is O(N) and has low variance compared to multinomial resampling.
3 Replace the old set with the new set. High-weight particles appear multiple times; zero-weight particles vanish.
4 Reset all weights to 1/N. The new particles are equally weighted and ready for the next predict step.
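The four steps above map almost line-for-line onto code. A sketch of systematic resampling (illustrative names, numpy-based; the cumulative-sum guard handles floating-point rounding):

```python
import numpy as np

def systematic_resample(particles, weights, rng):
    """Draw N samples at evenly-spaced positions with one random offset: O(N)."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n  # one offset, n even strides
    cumulative = np.cumsum(weights)                # step 1: cumulative distribution
    cumulative[-1] = 1.0                           # guard against rounding < 1.0
    idx = np.searchsorted(cumulative, positions)   # step 2: one stride per sample
    return particles[idx], np.full(n, 1.0 / n)     # steps 3-4: copies, reset weights
```

High-weight particles own wide slices of the cumulative distribution, so several evenly-spaced positions land inside them and they are copied multiple times; near-zero-weight particles own slices narrower than 1/N and are usually skipped entirely.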

We trigger resampling when effective particle count N_eff = 1 / Σ(w_i²) drops below N/2. This metric detects weight degeneracy (when a few particles dominate) without resampling unnecessarily.
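The effective particle count is a one-liner; a small sketch of the trigger (function name is illustrative):

```python
import numpy as np

def effective_n(weights):
    """N_eff = 1 / sum(w_i^2): equals N for uniform weights, 1 when one dominates."""
    return 1.0 / np.sum(weights**2)

# Typical trigger: resample only when weight degeneracy sets in.
# if effective_n(w) < n / 2: particles, w = systematic_resample(particles, w, rng)
```

With perfectly uniform weights N_eff = N (no degeneracy, no need to resample); when a single particle carries all the weight, N_eff collapses to 1.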

The kidnapped robot problem: If the robot is suddenly moved to a new location, all particles become wrong. The filter can recover if σ_motion is large enough to spread particles broadly, but there's a fundamental trade-off: a more diffuse motion model localizes less precisely. Try it in the Playground.

06 · Putting It Together

The full algorithm

Particle filter localization repeats the three steps (predict, update, resample) on every sensor tick. Starting from uniform uncertainty, the filter converges, usually within 5–15 iterations for a well-configured system.

// Live particle filter: 200 particles, 4 landmarks, auto-pilot robot. Watch the cloud collapse from uniform chaos to a tight estimate. Blue arrow = estimated pose. White = ground truth.
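The full loop can be assembled into a self-contained simulation. This is an illustrative numpy sketch, not the page's implementation: the landmark layout, noise levels, and control inputs are assumptions chosen to mirror the demo above (200 particles, 4 landmarks, range-only sensing).

```python
import numpy as np

rng = np.random.default_rng(1)
landmarks = np.array([[0.0, 0.0], [8.0, 0.0], [0.0, 8.0], [8.0, 8.0]])
sigma_motion, sigma_turn, sigma_sensor = 0.1, 0.02, 0.3
n = 200

# Uniform initial belief over an assumed 8 x 8 map; fixed true starting pose.
particles = np.column_stack([rng.uniform(0, 8, n), rng.uniform(0, 8, n),
                             rng.uniform(-np.pi, np.pi, n)])
w = np.full(n, 1.0 / n)
true_pose = np.array([2.0, 2.0, 0.0])

for step in range(15):
    v, omega, dt = 0.3, 0.1, 1.0
    # --- predict: move the true robot and every particle (noise on particles only)
    true_pose += [v * np.cos(true_pose[2]) * dt, v * np.sin(true_pose[2]) * dt,
                  omega * dt]
    particles[:, 0] += v * np.cos(particles[:, 2]) * dt + rng.normal(0, sigma_motion, n)
    particles[:, 1] += v * np.sin(particles[:, 2]) * dt + rng.normal(0, sigma_motion, n)
    particles[:, 2] += omega * dt + rng.normal(0, sigma_turn, n)
    # --- update: weight by likelihood of noisy range measurements (log space)
    z = np.linalg.norm(landmarks - true_pose[:2], axis=1) + rng.normal(0, sigma_sensor, 4)
    d = np.linalg.norm(particles[:, None, :2] - landmarks[None], axis=2)
    log_w = -0.5 * np.sum((z - d) ** 2, axis=1) / sigma_sensor**2
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    # --- resample (systematic) only when N_eff drops below N/2
    if 1.0 / np.sum(w**2) < n / 2:
        cum = np.cumsum(w); cum[-1] = 1.0
        particles = particles[np.searchsorted(cum, (rng.random() + np.arange(n)) / n)]
        w = np.full(n, 1.0 / n)
    estimate = np.average(particles, axis=0, weights=w)

print(np.linalg.norm(estimate[:2] - true_pose[:2]))  # position error after 15 ticks
```

The weighted mean is used as the point estimate here; the demo's blue arrow plays the same role. With this geometry the four well-spread landmarks constrain position strongly, so the cloud collapses within the first few ticks.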

The speed of convergence depends on: N (more particles → more robust, slower), σ_sensor (lower noise → sharper weights → faster collapse, but less robustness to model mismatch), and landmark geometry (well-spread landmarks constrain position better than clustered ones).

When the filter is well-converged, N_eff stays high and the particle spread σ is small. When the robot is moved suddenly (kidnapped), N_eff drops sharply, a reliable early warning that the filter has lost the robot.

Ready to experiment?
// Interactive playground. HUD readouts: iteration, N=300, particle spread σ, N_eff, position error, mode (autopilot). Legend: true robot, estimate, low-weight particle, high-weight particle, landmarks.