About the project

Author: Dominik Belter

Abstract: The next step in the development of perception systems is to infer the properties and meaning of objects. Autonomous robots should detect objects in the environment and understand their meaning in order to operate without human supervision. One example scenario is the use of neural networks to estimate the properties of articulated objects such as doors, drawers, and switches. In this scenario, a robot that uses a two-dimensional representation of the environment (an RGB image) together with a depth image should infer the potential motion of articulated objects, their kinematic limitations, and their current state. Another example is the reconstruction of 3D objects. Robots, unlike humans, are not able to reconstruct 3D objects from a single RGB-D image. A modern robot perception system should also reconstruct surfaces that are invisible because they are occluded by other objects or lie on the far side of the object. Such capabilities can be obtained by using artificial neural networks and by aggregating knowledge about the robot's environment and object properties through learning mechanisms. We also develop methods for the neural representation of robot motion constraints during motion planning. In this talk, I am going to present how we use machine learning methods to increase the autonomy of mobile robots, with example applications on mobile, walking, and mobile-manipulating robots.
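
To make the articulated-object scenario above concrete, the sketch below shows one way an RGB-D network could predict the joint type, motion axis, kinematic limit, and current state of an object such as a door or a drawer. This is a minimal illustrative sketch, not the project's actual model: the `ArticulationNet` name, the network layout, and all layer sizes are assumptions made here for illustration.

```python
# Minimal sketch (not the project's actual model): a two-branch RGB-D
# encoder with separate heads for articulation properties. All layer
# sizes and head definitions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArticulationNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate convolutional encoders for the RGB and depth images.
        self.rgb_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        fused = 128  # concatenated RGB and depth feature vectors
        # Heads: joint type (revolute vs. prismatic), motion axis,
        # kinematic limit, and current state (e.g., opening angle).
        self.joint_type = nn.Linear(fused, 2)
        self.axis = nn.Linear(fused, 3)
        self.limit = nn.Linear(fused, 1)
        self.state = nn.Linear(fused, 1)

    def forward(self, rgb, depth):
        f = torch.cat([self.rgb_encoder(rgb), self.depth_encoder(depth)], dim=1)
        return {
            "joint_type_logits": self.joint_type(f),       # door hinge vs. drawer slide
            "axis": F.normalize(self.axis(f), dim=1),      # unit motion axis
            "limit": self.limit(f),                        # kinematic limit
            "state": self.state(f),                        # current articulation state
        }

# Example query on a single 480x640 RGB-D frame.
net = ArticulationNet()
out = net(torch.rand(1, 3, 480, 640), torch.rand(1, 1, 480, 640))
print(out["joint_type_logits"].shape)  # torch.Size([1, 2])
```

Given such predictions, a planner could in principle use the estimated axis and limit to generate a pulling or pushing motion for the object; how the predictions are consumed downstream is likewise only sketched here.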

Related funding:

  1. Opus NCN, Neural Environment Model and Motion Constraints in Robot Motion Planning, budget: 902 080 PLN, 01.10.2024-30.09.2027
  2. Sonata NCN, Enhanced Robotic Perception with Deep Neural Networks, budget: 467 400 PLN, 02.10.2020-01.10.2023
  3. Lider NCBR, New Localization, Mapping and Motion Planning Methods with RGB-D Sensing for Industrial Flexible Manufacturing System, budget: 1 198 705 PLN, 01.01.2018-30.06.2021

Related publications:

  1. J. Wietrzykowski, D. Belter, Stereo Plane R-CNN: Accurate Scene Geometry Reconstruction Using Planar Segments and Camera-Agnostic Representation, IEEE Robotics and Automation Letters, Vol. 7(2), pp. 4345-4352, 2022
  2. D. Belter, J. Wietrzykowski, P. Skrzypczyński, Employing Natural Terrain Semantics in Motion Planning for a Multi-Legged Robot, Journal of Intelligent & Robotic Systems, Vol. 93(3-4), pp. 723-743, 2019
  3. R. Staszak, B. Kulecki, W. Sempruch, D. Belter, What’s on the Other Side? A Single-View 3D Scene Reconstruction, Proceedings of the 17th International Conference on Control, Automation, Robotics and Vision (ICARCV), December 11-13, 2022, Singapore, pp. 173-180, 2022
  4. J. Bednarek, N. Maalouf, M.J. Pollayil, M. Garabini, M.G. Catalano, G. Grioli, D. Belter, CNN-based Foothold Selection for Mechanically Adaptive Soft Foot, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 10225-10232, 2020
  5. M.S. Kopicki, D. Belter, J.L. Wyatt, Learning better generative models for dexterous, single-view grasping of novel objects, The International Journal of Robotics Research, Vol. 38(10-11), pp. 1246-1267, 2019

 


Partners taking part in this project


Poznań University of Technology
