Markov Localisation
This is the process of using landmark observations to reduce the uncertainty in a robot's location estimate.
-
We start with some distribution across all states:
- This includes heading as well as position.
-
This is typically uniform if the initial robot position is unknown.
In that case every state is equally likely, since we have no information about where the robot is (see the sketch below).
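A minimal sketch of this initial belief, assuming the state space is discretised into a grid of x, y positions and a fixed set of headings (the grid resolution here is a made-up example):

```python
import numpy as np

# Hypothetical discretisation: 20 x 20 grid cells and 8 headings.
NUM_X, NUM_Y, NUM_HEADINGS = 20, 20, 8

# Uniform prior: with no information, every (x, y, heading) state is
# equally likely, so each cell gets 1 / (number of states).
belief = np.full((NUM_X, NUM_Y, NUM_HEADINGS),
                 1.0 / (NUM_X * NUM_Y * NUM_HEADINGS))

assert np.isclose(belief.sum(), 1.0)  # a valid probability distribution
```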
-
This distribution tells us the probability of the robot being at each state:
- The belief is proportional to the observation likelihood (see the update equation below).
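In the standard Bayes-filter notation used for Markov localisation (with $x_t$ the state, $z_t$ the observation, $\overline{bel}$ the belief before the observation, and $\eta$ a normalising constant):

$$bel(x_t) = \eta \, p(z_t \mid x_t) \, \overline{bel}(x_t)$$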
-
As the robot observes the environment, it spots a landmark:
- The distribution is updated to reflect a Gaussian around every instance of that landmark in the map (see the sketch after this list).
- The other probabilities fall, so that the total across the state space remains equal to one.
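A minimal sketch of this observation update, continuing the grid belief from the sketch above; the landmark positions and the sensor noise SIGMA are made-up values:

```python
SIGMA = 1.5                                   # assumed sensor noise, in grid cells
landmark_cells = [(3, 4), (15, 7), (10, 16)]  # hypothetical map positions of this landmark type

# Likelihood of the observation: a Gaussian bump around every instance
# of the spotted landmark.
xs, ys = np.meshgrid(np.arange(NUM_X), np.arange(NUM_Y), indexing="ij")
likelihood = np.zeros((NUM_X, NUM_Y))
for lx, ly in landmark_cells:
    dist_sq = (xs - lx) ** 2 + (ys - ly) ** 2
    likelihood += np.exp(-dist_sq / (2 * SIGMA ** 2))

# Multiply the prior belief by the likelihood (broadcast over headings),
# then renormalise so the total across the state space is one again.
belief *= likelihood[:, :, np.newaxis]
belief /= belief.sum()
```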
-
The robot then moves:
-
- Find the convolution of the probability distribution with the motion model.
- This shifts the distribution in the direction of the move and smooths it, reflecting the uncertainty in the motion (see the sketch below).
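A minimal sketch of this motion update, continuing the same grid belief; the motion kernel below is a made-up example for a move of one cell in +x with some probability mass spread onto neighbouring cells, and heading is left unchanged for brevity:

```python
from scipy.ndimage import convolve

# Kernel indexed by (dx, dy) offsets: most mass lands one cell ahead,
# the rest spreads to neighbours to model motion uncertainty.
motion_kernel = np.array([
    [0.00, 0.05, 0.00],
    [0.05, 0.10, 0.05],
    [0.05, 0.65, 0.05],   # the row in the direction of motion carries most mass
])
motion_kernel /= motion_kernel.sum()

# Convolving each heading slice shifts the belief in the move direction
# and smooths it at the same time.
for h in range(NUM_HEADINGS):
    belief[:, :, h] = convolve(belief[:, :, h], motion_kernel, mode="constant")

belief /= belief.sum()  # renormalise (some mass is lost at the grid border)
```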
-
A new observation is made:
- We multiply the post-motion distribution by the observation likelihood and renormalise to update the belief (see the sketch below).
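A minimal sketch of this correction step, continuing the sketches above; the second landmark's map positions are again made up:

```python
# Likelihood of the new observation, built the same way as before but for a
# (hypothetical) second landmark type seen after the move.
new_landmark_cells = [(4, 5), (11, 17)]
new_likelihood = np.zeros((NUM_X, NUM_Y))
for lx, ly in new_landmark_cells:
    dist_sq = (xs - lx) ** 2 + (ys - ly) ** 2
    new_likelihood += np.exp(-dist_sq / (2 * SIGMA ** 2))

# Multiply the post-motion belief by the new likelihood and renormalise.
# States consistent with both the motion and the new observation keep
# most of the probability mass.
belief *= new_likelihood[:, :, np.newaxis]
belief /= belief.sum()
```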