GPS fails indoors. Odometry drifts. A robot navigating a warehouse or hospital must answer a fundamental question every millisecond: where am I? The particle filter is the probabilistic algorithm that makes this possible, and this page is your interactive guide to understanding exactly how it works.
The naive approach to localization is dead reckoning: track every wheel rotation and gyroscope reading and integrate them into a position. It works for a few seconds. Over minutes, small errors accumulate: a 1% wheel slip becomes meters of position error, and the robot is lost.
The robot can also observe its environment. It knows where certain landmarks are on a map (walls, beacons, distinctive features) and can measure its distance or bearing to them. But knowing "I'm 3.2 m from beacon A and 5.1 m from beacon B" doesn't pinpoint a single location; it constrains it to a ring, or an arc, with measurement noise making even that fuzzy.
The core insight: Rather than trying to track a single "best guess" position, we track a distribution of possible positions. Uncertainty is a first-class citizen, not an afterthought to be minimized away.
A particle is a single hypothesis about the robot's pose: a triple (x, y, θ), a 2D position and a heading. We maintain a set of N particles. Each particle is a bet: "maybe the robot is here, facing this direction."
At startup, if we have no prior information, particles are drawn uniformly at random across the map. The particle cloud represents maximum uncertainty: we're saying every location is equally plausible.
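Concretely, uniform initialization might look like this (a minimal NumPy sketch; the function name, map bounds, and particle count are illustrative, not taken from the demo's code):

```python
import numpy as np

def init_particles(n, x_range, y_range, rng=None):
    """Draw n pose hypotheses (x, y, theta) uniformly over the map."""
    rng = np.random.default_rng() if rng is None else rng
    particles = np.empty((n, 3))
    particles[:, 0] = rng.uniform(*x_range, size=n)      # x position
    particles[:, 1] = rng.uniform(*y_range, size=n)      # y position
    particles[:, 2] = rng.uniform(0, 2 * np.pi, size=n)  # heading theta
    return particles
```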
As the robot moves and senses, we update the particles. The cloud morphs: hypotheses consistent with observations survive and multiply; inconsistent ones die out. After several iterations, the cloud collapses to a tight cluster around the true pose.
When the robot executes a motion command (move forward v meters, turn ω radians), every particle applies the same nominal motion. But we add Gaussian noise sampled from the motion model.
This noise models wheel slip, uneven floors, and motor imprecision. A larger σ_motion means the robot's motors are less reliable, so the cloud spreads faster after each motion step. After the predict step, the cloud is wider: uncertainty grew.
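One common form of the noisy predict step can be sketched as follows. Two assumptions here are not from the text: the same σ_motion perturbs both the forward distance and the turn, and each particle turns before it moves.

```python
import numpy as np

def predict(particles, v, omega, sigma_motion, rng=None):
    """Move every particle by the nominal command plus Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    v_noisy = v + rng.normal(0, sigma_motion, size=n)      # per-particle slip
    w_noisy = omega + rng.normal(0, sigma_motion, size=n)  # per-particle turn error
    particles[:, 2] = (particles[:, 2] + w_noisy) % (2 * np.pi)  # turn first (assumed)
    particles[:, 0] += v_noisy * np.cos(particles[:, 2])
    particles[:, 1] += v_noisy * np.sin(particles[:, 2])
    return particles
```

Because every particle draws its own noise, even identical hypotheses spread apart after one step, which is exactly how the cloud widens.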
After predicting, the robot takes a sensor reading. It measures its distances to each visible landmark: z = [d₁, d₂, ..., dₖ]. For each particle i, we ask: if the robot were at this particle's pose, how likely would we observe those measurements?
Assuming independent Gaussian measurement noise with standard deviation σ_sensor, the weight of particle i becomes w_i ∝ Π_k exp(−(d̂_k − d_k)² / (2σ_sensor²)), where d̂_k is the distance from particle i's hypothesized pose to landmark k and d_k is the measured distance.
Particles near the true position will have small discrepancies and thus high weights. Particles far away will have large discrepancies and thus exponentially low weights. After normalization, the weights form a new probability distribution over particles.
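The weighting step vectorizes cleanly; this sketch assumes range-only measurements to landmarks at known positions:

```python
import numpy as np

def update_weights(particles, z, landmarks, sigma_sensor):
    """Weight each particle by the Gaussian likelihood of range readings z."""
    # Predicted distance from every particle to every landmark: shape (N, K).
    d_pred = np.linalg.norm(particles[:, None, :2] - landmarks[None, :, :], axis=2)
    # Independent Gaussian noise: the log-likelihood is a sum over landmarks.
    log_w = -((d_pred - z) ** 2).sum(axis=1) / (2 * sigma_sensor ** 2)
    w = np.exp(log_w - log_w.max())  # subtract the max for numerical stability
    return w / w.sum()               # normalize into a distribution
```

Working in log space before exponentiating avoids the underflow that a raw product of many tiny Gaussian factors would cause.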
After the update step, most particles have near-zero weight: they're bad hypotheses, and keeping them wastes computation on hopeless guesses. Sequential Importance Resampling (SIR) fixes this: draw a new set of N particles with probability proportional to the current weights, then reset every weight to 1/N. The new particles are equally weighted and ready for the next predict step.
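A minimal multinomial resampler (one common SIR scheme; systematic or stratified resampling are lower-variance alternatives):

```python
import numpy as np

def resample(particles, weights, rng=None):
    """Draw N new particles with probability proportional to their weights."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)             # multinomial draw
    return particles[idx].copy(), np.full(n, 1.0 / n)  # weights reset to 1/N
```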
We trigger resampling when the effective particle count N_eff = 1 / Σ(w_i²) drops below N/2. This metric detects weight degeneracy (when a few particles dominate) without resampling unnecessarily.
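The degeneracy check itself is one line:

```python
import numpy as np

def effective_n(weights):
    """N_eff = 1 / sum(w_i^2) for normalized weights."""
    return 1.0 / np.sum(np.asarray(weights) ** 2)

# Uniform weights give N_eff = N (healthy cloud); a single dominant
# weight gives N_eff close to 1 (degenerate: time to resample).
```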
The kidnapped robot problem: If the robot is suddenly moved to a new location, all particles become wrong. The filter can recover if σ_motion is large enough to spread particles broadly, but there's a fundamental trade-off: a more diffuse motion model localizes less precisely. Try it in the Playground.
Particle filter localization repeats three steps (predict, update, resample) on every sensor tick. Starting from uniform uncertainty, the filter converges, usually within 5–15 iterations for a well-configured system.
The speed of convergence depends on: N (more particles → more robust, slower), σ_sensor (lower noise → sharper weights → faster collapse, but less robustness to model mismatch), and landmark geometry (well-spread landmarks constrain position better than clustered ones).
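Tying the three steps together, here is a self-contained end-to-end sketch that simulates a robot driving past known beacons and runs the filter against it. Every number (landmark layout, noise levels, particle count, the trajectory) is illustrative, not taken from the demo:

```python
import numpy as np

def localize(landmarks, commands, sigma_motion=0.1, sigma_sensor=0.3,
             n=2000, bounds=((0.0, 10.0), (0.0, 10.0)), seed=0):
    """Run the full filter along a command sequence; return final position error."""
    rng = np.random.default_rng(seed)
    # Uniform initialization: maximum uncertainty over the map.
    p = np.column_stack([rng.uniform(*bounds[0], n),
                         rng.uniform(*bounds[1], n),
                         rng.uniform(0, 2 * np.pi, n)])
    w = np.full(n, 1.0 / n)
    truth = np.array([1.0, 1.0, 0.0])  # hidden true pose (x, y, theta)
    for v, omega in commands:
        # Simulate the real robot and its noisy range readings.
        truth[2] += omega
        truth[:2] += v * np.array([np.cos(truth[2]), np.sin(truth[2])])
        z = (np.linalg.norm(landmarks - truth[:2], axis=1)
             + rng.normal(0, sigma_sensor, len(landmarks)))
        # Predict: each particle applies a noisy copy of the command.
        p[:, 2] += omega + rng.normal(0, sigma_motion, n)
        step = v + rng.normal(0, sigma_motion, n)
        p[:, 0] += step * np.cos(p[:, 2])
        p[:, 1] += step * np.sin(p[:, 2])
        # Update: multiply weights by the Gaussian likelihood of z.
        d = np.linalg.norm(p[:, None, :2] - landmarks[None], axis=2)
        loglik = -((d - z) ** 2).sum(axis=1) / (2 * sigma_sensor ** 2)
        w = w * np.exp(loglik - loglik.max())
        w /= w.sum()
        # Resample only when N_eff drops below N/2.
        if 1.0 / np.sum(w ** 2) < n / 2:
            p = p[rng.choice(n, n, p=w)]
            w = np.full(n, 1.0 / n)
    estimate = np.average(p[:, :2], axis=0, weights=w)
    return float(np.linalg.norm(estimate - truth[:2]))
```

With four well-spread corner beacons, a few thousand particles, and moderate noise, the final position error typically lands well under the map scale after a dozen or so ticks.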
When the filter is well-converged, N_eff stays high and the particle spread σ is small. When the robot is moved suddenly (kidnapped), N_eff drops sharply, a reliable early warning that the filter has lost the robot.
This live demo is built for a keyboard and a larger screen so you can drive the robot and use the panels comfortably.
The Learn tab works great on your phone.