Building Maps
We have seen a couple of types of maps in prior lectures; this lecture covers how to build them.
The General Mapping Problem
We can create a map from a run of:
- Controls (poses)
- Observations
Given the sensor data and poses, we want to calculate the map that has the maximum likelihood given the data.
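In other words (one standard way of writing this, with $m$ the map, and $z_{1:t}$ and $x_{1:t}$ the observations and poses used in the formulas below), we look for
\[m^{*}=\arg\max_m\ p(m\mid z_{1:t},x_{1:t})\]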
2D Volumetric Maps
This map assumes the world is a discrete set of cells:
- Each cell is either white (empty) or black (occupied).
Counting Model
You drive around, keeping track of where you are, and looking at each cell:
- You keep a running count of obstacle detections for each cell.
- You increase the count each time you detect an object in the cell.
- You decrease the count each time you detect that the cell is clear.
By also counting the total number of times you have observed each cell, you can estimate the probability that the cell is occupied, as sketched below.
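A minimal sketch of the counting model in Python (the grid size and the names `M`, `C` and `observe_cell` are illustrative, not from the lecture):

```python
import numpy as np

GRID_W, GRID_H = 100, 100
M = np.zeros((GRID_W, GRID_H))  # running count: +1 per "occupied" reading, -1 per "clear" reading
C = np.zeros((GRID_W, GRID_H))  # total number of times each cell has been observed

def observe_cell(x, y, occupied):
    """Record one observation of cell (x, y)."""
    C[x, y] += 1
    M[x, y] += 1 if occupied else -1

def occupancy_probability(x, y):
    """Estimate the probability that cell (x, y) is occupied (see the formula below)."""
    if C[x, y] == 0:
        return 0.5  # unobserved cell: no information either way
    return (M[x, y] + C[x, y]) / (2 * C[x, y])
```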
Given that $M$ is the matrix of running counts (detections minus clear readings) and $C$ is the matrix of total observation counts, we can calculate the probability that a cell is occupied like so:
\[P(m_{x,y})=\frac{M_{x,y}+C_{x,y}}{2C_{x,y}}\]
Binary Bayes Filters with Static State
Many problems can be addressed if we assume a binary, static state:
- The state doesn’t change during the observations.
- The state of an object is binary.
Given that the state doesn't change, the map can be estimated from:
\[p(m\mid z_{1:t},x_{1:t})=\prod_i p(m_i\mid z_{1:t},x_{1:t})\]
where $z_{1:t}$ are the observations and $x_{1:t}$ the corresponding poses.
This gives us a binary Bayes filter for a static state.
Binary Random Variable
The area in the environment corresponding to a cell is modelled as:
- either completely empty or completely occupied.
A cell is therefore a binary random variable.
Cell Independence
If we assume cell independence, the posterior is determined by the product of independent cell posteriors:
\[\prod_i p(m_i)\]
Log Odds
Odds are the ratio of the probability of an event $p(x)$ over the probability of its negated form $p(\neg x)$:
\[\text{odds}=\frac{p(x)}{p(\neg x)}=\frac{p(x)}{1-p(x)}\]
Logarithms are useful when odds are frequently multiplied or divided, since:
\[\log ab=\log a +\log b\]
It is more efficient to add than to multiply.
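For example, $p(x)=0.8$ gives odds $0.8/0.2=4$ and log odds $\log 4\approx 1.39$ (using the natural logarithm); fusing two such independent pieces of evidence then becomes the addition $1.39+1.39$ rather than the multiplication $4\times 4$.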
The log odds ratio is defined like so:
\[l(x)=\log\frac{p(x)}{1-p(x)}\]
We can then retrieve the probability from the log odds form using the exponential function:
\[p(x)=1-\frac{1}{1+\exp l(x)}\]
The log odds model is given like so:
\[l(m_i\mid z_{1:t},x_{1:t})=\underbrace{l(m_i\mid z_t,x_t)}_\text{inverse sensor model}+\underbrace{l(m_i\mid z_{1:t-1},x_{1:t-1})}_\text{recursive term}-\underbrace{l(m_i)}_\text{prior}\]
There is a full example of how to derive this on slides 28-42.
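As a minimal sketch, these formulas translate to Python like so (the function names and the parameter `p_occ`, standing for the output of the inverse sensor model as a probability, are illustrative):

```python
import math

def log_odds(p):
    """l(x) = log(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def probability(l):
    """p(x) = 1 - 1 / (1 + exp(l)), the inverse of log_odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

def update_cell(l_prev, p_occ, p_prior=0.5):
    """One log odds update for a single cell:
    inverse sensor model + recursive term - prior."""
    return log_odds(p_occ) + l_prev - log_odds(p_prior)
```

For example, starting from $l=0$ (i.e. $p(x)=0.5$) and calling `update_cell` twice with `p_occ = 0.7` gives $l\approx1.69$, i.e. $p\approx0.84$.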
Occupancy Mapping Update Rule
There is a simplified implementation of an update function using log odds on slide 48.
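A rough sketch of what such an update function might look like in Python (this is not the code from the slides; `cells_in_field_of_view` and `inverse_sensor_model` are hypothetical helpers supplied by the caller):

```python
import numpy as np

L_PRIOR = 0.0  # log odds of the prior p(m_i) = 0.5

def occupancy_grid_update(l_map, z_t, x_t, cells_in_field_of_view, inverse_sensor_model):
    """Update a 2D grid of per-cell log odds with one measurement z_t taken from pose x_t."""
    for (i, j) in cells_in_field_of_view(z_t, x_t, l_map.shape):
        # inverse sensor model + recursive term - prior
        l_map[i, j] = inverse_sensor_model(i, j, z_t, x_t) + l_map[i, j] - L_PRIOR
    # cells outside the measurement's field of view keep their previous value
    return l_map

# The map can be initialised at the prior, e.g. l_map = np.zeros((100, 100)).
```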