IROS 2009 Tutorial on Filtering and Planning in Information Spaces

Presenter: Steve LaValle

[Summary by: Kostas Bekris]

The tutorial took place on October 11, one day before the start of the main IROS conference. Approximately 60 people attended, overflowing the available space. The entire presentation, close to six hours, was delivered by Steve LaValle, and the audience had plenty of questions and comments about the material.

Slides from the presentation, as well as a tutorial article on the material, can be found on Steve LaValle's webpage.

The objective was to introduce the audience to the idea of Information Spaces (I-spaces), an abstraction analogous to the Configuration Space (C-space), but for tasks that involve sensing, such as tracking, monitoring, navigation, pursuit-evasion, exploration and mapping. It was emphasized that for such sensor-based challenges it is important to start with the particular task at hand and then determine its information requirements:

"What critical pieces of information about the world do we need to maintain, while leaving everything else ambiguous?"

Then filtering and planning problems can be solved more effectively in the appropriate I-space defined for the task. The hope is that I-spaces can provide the framework for building a theory of sensing, similar to how Turing machines provide an important abstraction for the theory of computation.

After a short introduction to the notion of I-spaces and the objective of the meeting, the tutorial was split into four sections:

  1. The first section included a review of the characteristics and a categorization of many physical sensors (e.g., spatial, temporal, electromagnetic, mechanical, etc.), as well as a review of applications where sensing is critical. At an abstract level, however, a sensor was viewed as a transfer function from physical phenomena to sensor readings.
  2. Given examples of physical sensors, the second section focused on defining multiple virtual sensor models. These are mathematical models of the information obtained from sensing systems, i.e., mappings from the physical state space of a task to the set of possible sensor observations. The review of such models started with:
    • simple sensors (e.g., the dummy sensor, which returns a constant, or the bijective sensor, whose observation uniquely identifies the state) and then moved to:
    • depth sensors, which measure the distance to the boundary in a certain direction,
    • detection sensors, which detect whether a body is within a detection region (e.g., a floor sensor that causes a door to open automatically),
    • relational sensors, which measure relations between different bodies, such as their order (e.g., body B1 is in front of body B2),
    • gap sensors, which measure information about the boundary of the sensing region, such as identifying discontinuities in depth,
    • and field sensors, which measure the properties of vector fields over a sensing region.

    About 26 different virtual sensor models were covered in the presentation. This discussion led to the notion of a preimage: the preimage of an observation is the set of states in the physical state space that produce that observation, and the preimages of all observations together partition the state space. It was argued that this partition can be used to compare the power of sensors (or at least of virtual sensor models). If one sensor's preimage partition is a refinement of a second sensor's preimage partition, then the first sensor is more powerful. Given this property, it is possible to define a sensor lattice, where the dummy sensor and the bijective sensor are at the extremes of the lattice and all other sensor models fall in between.
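
    To make these definitions concrete, here is a minimal sketch in Python (the corridor state space and all names are illustrative assumptions, not from the tutorial): two virtual sensor mappings over a finite state space, the preimage partitions they induce, and the refinement test that orders them in the sensor lattice.

      from collections import defaultdict

      # Hypothetical finite state space: 10 cells of a one-dimensional corridor.
      X = list(range(10))

      def detection_sensor(x):
          # Reports only whether the body lies inside a fixed detection region.
          return 3 <= x <= 6

      def bijective_sensor(x):
          # Reveals the state exactly; its preimages are singletons.
          return x

      def preimage_partition(h, states):
          # Group states by observation; each block is a preimage h^{-1}(y).
          blocks = defaultdict(set)
          for x in states:
              blocks[h(x)].add(x)
          return list(blocks.values())

      def refines(partition_a, partition_b):
          # Sensor A is at least as powerful as sensor B if every block of
          # A's partition lies inside some block of B's partition.
          return all(any(a <= b for b in partition_b) for a in partition_a)

      # The bijective sensor refines the detection sensor, but not vice versa:
      pa = preimage_partition(bijective_sensor, X)
      pb = preimage_partition(detection_sensor, X)
      print(refines(pa, pb), refines(pb, pa))  # True False

    Note that two sensors whose partitions do not refine each other are simply incomparable, which is why the models form a lattice rather than a total order.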

  3. After the lunch break the tutorial continued with a discussion of the realistic complications that arise in sensing, such as:
    • non-deterministic or probabilistic disturbances (e.g., faulty detectors),
    • temporal variations that force us to reason about the state-time space (e.g., a clock as a sensor becomes important),
    • and cases where the output depends on a history of previous states (e.g., an odometer).

    Another 13 virtual sensor models were introduced in order to represent these complications. Once all these virtual sensor models were defined, the tutorial progressed to a discussion of filtering. Filters were classified into:

    • spatial filters, which combine simultaneous observations from multiple sensors, e.g., triangulation (see the sketch after this list),
    • and temporal ones, which incrementally incorporate observations from a sensor at discrete stages to answer questions such as: which state trajectories might have produced the sensor readings, or what is the set of possible current states.
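
    As a concrete instance of a spatial filter, the following sketch (an illustration, not code from the tutorial) triangulates a planar target from two simultaneous bearing observations taken at known sensor positions:

      import numpy as np

      def triangulate(p1, theta1, p2, theta2):
          # p1, p2 are known 2-D sensor positions; theta1, theta2 are the
          # measured bearings (in radians) from each sensor to the target.
          d1 = np.array([np.cos(theta1), np.sin(theta1)])
          d2 = np.array([np.cos(theta2), np.sin(theta2)])
          # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters; this
          # fails (singular matrix) when the two bearings are parallel.
          A = np.column_stack((d1, -d2))
          t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
          return np.asarray(p1, float) + t1 * d1

      # Sensors at (0,0) and (10,0) sight the target at 45 and 135 degrees,
      # respectively; the two rays intersect at (5, 5).
      print(triangulate((0.0, 0.0), np.pi / 4, (10.0, 0.0), 3 * np.pi / 4))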

    In the context of temporal filters, the notion of an I-space was defined as the set of I-states. A temporal filter in an I-space then has two components: an initial I-state and a transition function that, given an I-state and an observation, outputs a new I-state. This formulation naturally leads to traditional Bayesian filters (e.g., the Kalman filter) once probabilistic disturbances are taken into account.
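
    As a minimal sketch of such a filter (the corridor model below is an illustrative assumption), consider the nondeterministic case in which the I-state is the set of states consistent with the history; the transition function predicts one motion step and then prunes by the observation. Replacing sets of states with probability distributions turns the same loop into a Bayesian filter.

      def filter_step(istate, observation, motion, sensor):
          # One application of the filter's transition function: predict all
          # states reachable in one step from the current I-state, then keep
          # only those consistent with the new observation.
          predicted = {x2 for x in istate for x2 in motion(x)}
          return {x for x in predicted if sensor(x) == observation}

      # Illustrative corridor model: the body drifts at most one cell per
      # stage, and the sensor only detects presence in cells 3..6.
      motion = lambda x: {x - 1, x, x + 1} & set(range(10))
      sensor = lambda x: 3 <= x <= 6

      istate = set(range(10))      # initial I-state: complete ignorance
      for y in [False, True]:      # observation history
          istate = filter_step(istate, y, motion, sensor)
      print(istate)                # prints {3, 6}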

  4. The fourth section started by pointing out that the I-spaces that result when considering these additional complications (e.g., probabilistic disturbances) are huge. Motivated by the objective of starting from a specific task and identifying what information can be discarded by the filter, the presentation introduced combinatorial filters. Such reduced-complexity filters should define small I-spaces that maintain the geometric, topological or combinatorial parameters that are critical for the solution of the specific task. Combinatorial filters were introduced through their application in three specific examples that Steve LaValle and his group have worked on in the past:
    • Obstacles and beams: A point body moves in a known environment, and we observe the sequence of detection beams that the body has crossed; the objective is to compute the possible state trajectories (a minimal filter of this kind is sketched after this list).
    • Shadow information spaces: Consider several robots carrying detection sensors and moving through a common environment. The objective is to analyze the connected components of the "shadow region", the part of the environment that is not visible to the sensors. For example, this makes it possible to compute how many moving point bodies are or are not observed by the robots, or to solve pursuit-evasion problems in general.
    • Gap navigation trees: Consider a robot with a gap sensor that moves in a polygonal environment. It can be shown that if the robot explores enough of the environment, it can construct a data structure that encodes a portion of the reduced visibility graph sufficient for optimal navigation in the environment.
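
    To illustrate the first example, here is a minimal beam-crossing filter (the four-region corridor and all names are illustrative assumptions): the I-state is the set of regions the body could currently occupy, and each reported crossing maps that set across the corresponding beam.

      # Hypothetical environment: four regions in a row, separated by three
      # beams; crossing a beam moves the body between the regions it separates.
      beam_edges = {
          'b1': {('r1', 'r2'), ('r2', 'r1')},
          'b2': {('r2', 'r3'), ('r3', 'r2')},
          'b3': {('r3', 'r4'), ('r4', 'r3')},
      }

      def beam_filter(possible_regions, beam_sequence):
          # The I-state is the set of regions consistent with the observed
          # crossings; each crossing maps the set across the named beam.
          istate = set(possible_regions)
          for beam in beam_sequence:
              istate = {dst for (src, dst) in beam_edges[beam] if src in istate}
          return istate

      # With no prior knowledge of the body's region, observing the
      # crossings b1 then b2 narrows the I-state down to {'r3'}:
      print(beam_filter({'r1', 'r2', 'r3', 'r4'}, ['b1', 'b2']))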

    The section closed with a quick discussion of planning in I-spaces. This discussion focused on four recurring issues for planning problems in I-spaces:

    • Predictability: Are the effects of actions predictable in the I-space? If yes, then a path can be computed in the I-space (traditional combinatorial or sampling-based planning may be appropriate); otherwise, feedback is necessary (feedback-based control will be needed).
    • Reachability: Is the goal region even reachable? Or can the goal even be easily expressed?
    • Optimality: By which criteria should different solution plans be compared? Do optimal plans even exist?
    • Computability: Can an algorithm be devised that automatically computes a useful plan? What is the complexity and the implementation difficulty of a solution plan (perhaps one designed by a human)?

    Once these issues are taken into account, the proposed overall process for planning in I-spaces is the following:
    a. Design the system (environment, bodies, sensors).
    b. Define the models (state space, sensor mapping, transition function).
    c. Select an I-space for which a low-complexity filter can be defined.
    d. Express the goal in terms of the I-space formulation.
    e. Compute a plan that achieves the goal.
    In the above process, it might be necessary to backtrack and reconsider earlier choices. The application of planning in I-spaces to specific examples (maze searching, navigation with a gap sensor, bug algorithms, sensorless manipulation) was also briefly discussed.

The overall tutorial concluded with a short review of the material and a discussion of open problems and future challenges, together with questions from the audience.

For more details, visit Steve LaValle's webpage, which contains slides from the presentation, as well as a tutorial article on this material.