Autonomous Learning Robot Lab

«Learning as the core principle in autonomous systems which operate in complex and changing environments.»

The Robotics and Mechatronics Center (RMC) is DLR’s competence center for research and development in the areas of robotics, mechatronics and optical systems. Mechatronics is the tight integration of mechanics, electronics and information technology for the realization of “intelligent mechanisms” that interact with their environment.

The core competence of the RMC is the interdisciplinary design, computer-aided optimization and simulation, and implementation of complex mechatronic systems and human-machine interfaces. Within the robotics community, the RMC is considered one of the world’s leading institutions.

Recommended Products

Vantage is Vicon’s flagship range of cameras. The sensors have resolutions of 5, 8 and 16 megapixels, with sample rates of up to 2,000 Hz, which allows you to capture fast movements with very high accuracy. The cameras also have built-in temperature and bump sensors, as well as a clear display, to warn you if a camera has moved physically or shifted due to thermal expansion. High-powered LEDs and sunlight filters also make the Vantage the best choice for outdoor use and large volumes.

The compact Vero cameras have sensor resolutions of either 1.3 or 2.2 megapixels. Each camera has a variable zoom lens, which makes it especially suited to smaller capture volumes, where an optimal field of view matters most. The Vero’s attractive price, combined with its light weight and small size, makes it a great choice for smaller labs and studios.

Vicon Tracker has been designed for the requirements and workflow of engineering users who want to track the position and orientation of objects with as little effort and as low latency as possible. Perfect for many applications in robotics, UAV tracking, VR and human-machine interaction, Tracker lets you define what you want to track with a couple of mouse clicks and then leave it tracking in the background. A simple SDK lets you connect the output data stream to your own software.
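
As a rough illustration of how that output stream can be consumed, the sketch below polls object poses from Tracker using the Python bindings shipped with the Vicon DataStream SDK (the vicon_dssdk package). The host address and the stream mode are typical defaults, and the printed subject/segment names depend entirely on what you have defined in Tracker; treat the whole snippet as a minimal example rather than a reference implementation.

```python
# Minimal sketch: stream tracked object poses from Vicon Tracker.
# Assumes the Python bindings of the Vicon DataStream SDK (vicon_dssdk).
from vicon_dssdk import ViconDataStream

client = ViconDataStream.Client()
client.Connect("localhost:801")      # Tracker's default DataStream endpoint
client.EnableSegmentData()           # we only need 6-DoF segment poses
client.SetStreamMode(ViconDataStream.Client.StreamMode.EServerPush)

while True:
    try:
        client.GetFrame()            # wait for the next frame
    except ViconDataStream.DataStreamException:
        continue                     # no frame available yet
    for subject in client.GetSubjectNames():
        for segment in client.GetSegmentNames(subject):
            # Position in mm and orientation as an (x, y, z, w) quaternion,
            # both expressed in the Vicon world frame.
            pos, occluded = client.GetSegmentGlobalTranslation(subject, segment)
            quat, _ = client.GetSegmentGlobalRotationQuaternion(subject, segment)
            if not occluded:
                print(subject, segment, pos, quat)
```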

Challenges for Robot Justin

  • Experimental scenario for a robot that autonomously sets up a habitat on Mars before astronauts arrive there.
  • How can a robot achieve human-level performance in complex manipulation tasks in unknown environments?
  • The first prerequisite is the creation of a precise 3D model of its environment in order to plan collision-free actions and locate the objects needed to solve the task.
  • Tactile sensing is then used to enable dexterous fine manipulation.

Ground Truth System

The method for generating precise 3D models in real time is a variant of simultaneous localisation and mapping (SLAM) based on dense depth data from an RGB-D sensor mounted in the robot’s head. The SLAM method estimates the current pose of the head by matching the current depth image from the RGB-D sensor against the part of the model already captured. Once this pose estimate is available, the depth image is used to update the 3D model. The algorithms run in real time at a frame rate of 30 Hz. The resulting models must have an accuracy of better than 1 mm at typical manipulation distances of 1 m to 2 m.
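
DLR’s in-house pipeline is not public, but the frame-to-model idea can be sketched with off-the-shelf components. The example below uses Open3D’s point-to-plane ICP for the localisation step and a TSDF volume for the mapping step; the camera intrinsics, voxel size, frame count and file names are all assumptions made for the example.

```python
# Illustrative frame-to-model SLAM loop: align each new RGB-D frame against
# the model built so far (localisation), then fuse it in (mapping).
# Open3D is used here as a stand-in for DLR's own implementation.
import numpy as np
import open3d as o3d

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)  # assumed

# Truncated signed-distance volume that accumulates the 3D model.
volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.004, sdf_trunc=0.02,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

head_pose = np.eye(4)   # current head (camera-to-world) pose estimate
n_frames = 100          # number of recorded RGB-D frames (assumption)

for i in range(n_frames):
    color = o3d.io.read_image(f"color_{i:05d}.png")   # assumed file layout
    depth = o3d.io.read_image(f"depth_{i:05d}.png")
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=3.0, convert_rgb_to_intensity=False)
    frame = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

    if i > 0:
        # Localisation: register the new frame against the current model.
        model = volume.extract_point_cloud()
        reg = o3d.pipelines.registration.registration_icp(
            frame, model, 0.02, head_pose,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        head_pose = reg.transformation

    # Mapping: fuse the depth image into the model at the estimated pose
    # (integrate() expects the world-to-camera transform).
    volume.integrate(rgbd, intrinsic, np.linalg.inv(head_pose))

mesh = volume.extract_triangle_mesh()   # final 3D model
```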

To verify and debug these high-precision modelling algorithms, ground-truth measurements of the head pose with an absolute accuracy of 0.5 mm are required over a typical manipulation volume of 6 m × 6 m × 2.5 m.
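
One hedged sketch of how such a verification could look: given time-synchronised head positions from the SLAM system and from the Vicon ground truth, first align the two world frames with a least-squares rigid fit, then report per-frame translation errors against the sub-millimetre requirement. The file names and data layout below are assumptions, and only positions (not orientations) are compared.

```python
# Illustrative check of SLAM head-pose estimates against Vicon ground truth.
import numpy as np

def align_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst.
    src, dst: (N, 3) arrays of corresponding positions in metres."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Time-synchronised head positions in metres (both N x 3); assumed files.
slam_pos = np.loadtxt("slam_head_positions.txt")
vicon_pos = np.loadtxt("vicon_head_positions.txt")

# The two systems use different world frames, so align them first.
R, t = align_rigid(slam_pos, vicon_pos)
residual = vicon_pos - (slam_pos @ R.T + t)

errors_mm = np.linalg.norm(residual, axis=1) * 1000.0
print(f"RMS error: {np.sqrt(np.mean(errors_mm**2)):.2f} mm, "
      f"max error: {errors_mm.max():.2f} mm")
```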

Only with the support of prophysics AG and Vicon could this precision be achieved reliably under all circumstances.

Are you interested in a similar solution?

Ask here!