ARS2: Robotic Vision

The goal of this module is to introduce students to the concepts of computer vision applied to mobile robotics. It covers a description of the sensors and the basics of the associated processing approaches.

To complement the theoretical studies, several sensors will be presented in demos and practical sessions: camera, thermal camera, event camera, ToF camera, lidar…

Teachers: J. Moreau, Ph. Xu

Course plan

  1. Intro and different camera modalities [J. Moreau]
  2. Depth estimation [J. Moreau]
  3. Image features [Ph. Xu]
  4. VSLAM [J. Moreau]
  5. Visual servoing [Ph. Xu]
  6. Reinforcement learning for control [Ph. Xu]
  7. Deep learning for robotics [Ph. Xu]

A full schedule with dates will be provided on the ARS2 Moodle page.

Practical sessions

Exercises will be done in Python with the NumPy library (and a few others).
Development can be done on the computers available in the classroom, or on your own machine:
=> Python setup tutorial

Each lecture is associated with a short practical session:

  1. Python image processing refresher + binning of event camera data to build frame representations (see the sketch after this list)
  2. Kinect sensor + Ground Plane Fitting on 3D rotating lidar street data (see the sketch after this list)
  3. Image features
  4. 2D-2D motion estimation on KITTI data (see the sketch after this list)
  5. Visual Servoing
  6. Reinforcement learning
  7. Deep reinforcement learning
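
As an illustration of session 1, here is a minimal sketch of binning event camera data into frame representations with NumPy. The event layout (separate x, y, t, polarity arrays, with integer pixel coordinates and polarity in {0, 1}) and the fixed-duration bins are assumptions for illustration only, not necessarily the exact format used in the session.

```python
import numpy as np

def events_to_frames(x, y, t, p, height, width, dt):
    """Accumulate events into signed frames of temporal width dt (same unit as t)."""
    t = np.asarray(t) - np.min(t)                    # start the clock at zero
    bins = (t // dt).astype(int)                     # temporal bin index of each event
    frames = np.zeros((bins.max() + 1, height, width), dtype=np.int32)
    signs = 2 * np.asarray(p, dtype=np.int32) - 1    # polarity {0, 1} -> {-1, +1}
    np.add.at(frames, (bins, np.asarray(y), np.asarray(x)), signs)  # unbuffered accumulation
    return frames
```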
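
For session 2, a ground plane can be fitted to a lidar scan with a simple RANSAC plane fit. The sketch below assumes the scan is given as an (N, 3) NumPy array; it is only one possible approach, not necessarily the Ground Plane Fitting algorithm studied in class.

```python
import numpy as np

def fit_ground_plane(points, n_iters=200, dist_thresh=0.2, rng=None):
    """RANSAC plane fit: returns (normal, d) such that normal . p + d ~ 0 on the plane."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                              # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ sample[0]
        inliers = int((np.abs(points @ normal + d) < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model
```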
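
For session 4, 2D-2D motion estimation between two calibrated views amounts to estimating the essential matrix from matched keypoints and recovering the relative pose. The sketch below uses OpenCV ORB features and RANSAC essential-matrix estimation as one possible pipeline; the relative_pose function name and the camera matrix K are placeholders, and the exercise on KITTI data may follow a different pipeline.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative rotation R and translation direction t between two frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t  # monocular 2D-2D: t is recovered only up to scale
```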

Assessment modalities

  • Written exam: 50%
  • Bibliographical project: 50%
    • Overall paper (25%)
    • Understanding of the individually assigned references (25%)

Bibliography / State of the art project details

  • Select a topic from the proposed pool (other proposals related to the course may be approved on request)
  • Written review paper:
    • Rules similar to those of a scientific call for papers
    • Part of the evaluation process is done by peers, i.e. by fellow Master's students

Other information

Requirements: basic skills in mathematics and linear algebra, 2D and 3D geometry, and statistics. Signal processing is a plus.
Scientific Python (NumPy, etc.) and Linux are needed for the practical sessions.

Level: Master 2

Number of hours per week: 2h of lectures, 2h of exercises.