RGBManip: Monocular Image-based Robotic Manipulation through Active Object Pose Estimation

1 School of CS, Peking University and National Key Laboratory for Multimedia Information Processing; 2 Beijing Academy of Artificial Intelligence (BAAI); 3 Department of Computer Science and Engineering, Chinese University of Hong Kong
Teaser

An eye-on-hand camera captures multiple RGB images to estimate the object pose in the manipulation process.

Abstract

Robotic manipulation requires accurate perception of the environment, which poses a significant challenge due to its inherent complexity and constantly changing nature. In this context, RGB images and point clouds are two commonly used modalities in vision-based robotic manipulation, but each has its own limitations. Commercial point-cloud observations often suffer from sparse sampling and noisy output due to the limits of the emission-reception imaging principle. RGB images, on the other hand, are rich in texture information but lack the depth and 3D information crucial for robotic manipulation. To mitigate these challenges, we propose an image-only robotic manipulation framework that leverages an eye-on-hand monocular camera installed on the robot's parallel gripper. By moving with the gripper, this camera can actively perceive the object from multiple perspectives during the manipulation process, enabling the estimation of 6D object poses for manipulation. While obtaining images from more diverse viewpoints typically improves pose estimation, it also increases the manipulation time. To address this trade-off, we employ a reinforcement learning policy to synchronize the manipulation strategy with active perception, achieving a balance between 6D pose accuracy and manipulation efficiency. Our experimental results in both simulated and real-world environments demonstrate the state-of-the-art effectiveness of our approach. We believe that our method will inspire further research on real-world-oriented robotic manipulation.
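The abstract's active perception loop can be sketched in a few lines. This is a toy illustration, not the released RGBManip code: the function names, the averaging "estimator", and the threshold-based stopping rule are all stand-ins (the paper learns the stop/continue decision with reinforcement learning, and the pose estimator is a learned multi-view model).

```python
import numpy as np

# Hypothetical sketch of the active perception loop (illustrative names,
# not RGBManip's actual API). The eye-on-hand camera yields one noisy
# single-view 6D pose estimate per viewpoint; each extra view refines
# the fused estimate, and a policy decides when the estimate is good
# enough to start manipulating instead of spending time on more views.

rng = np.random.default_rng(0)
TRUE_POSE = np.array([0.5, 0.0, 0.3, 0.0, 0.0, 1.0])  # x, y, z, roll, pitch, yaw


def observe():
    """Simulate one noisy single-view 6D pose estimate from the camera."""
    return TRUE_POSE + rng.normal(scale=0.05, size=6)


def fuse(view_estimates):
    """Fuse single-view estimates by averaging (stand-in for a learned
    multi-view pose estimator)."""
    return np.mean(view_estimates, axis=0)


def should_stop(n_views, spread, max_views=5, tol=0.03):
    """Toy stopping policy: stop once the per-view estimates agree
    (low spread) or the view budget is spent. RGBManip instead learns
    this accuracy-vs-time trade-off with reinforcement learning."""
    return n_views >= max_views or (n_views >= 2 and spread < tol)


views = []
while True:
    views.append(observe())
    # Disagreement between views, used as a crude confidence proxy.
    spread = float(np.mean(np.std(views, axis=0))) if len(views) > 1 else np.inf
    if should_stop(len(views), spread):
        break

pose = fuse(views)
error = float(np.linalg.norm(pose - TRUE_POSE))
print(f"views used: {len(views)}, pose error: {error:.3f}")
```

Averaging more views shrinks the estimation noise roughly as 1/sqrt(k), which is exactly the trade-off the learned policy navigates: each extra view buys accuracy at the cost of manipulation time.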


Video

Other Material

Pose Estimation from RGBManip

RGBManip Mug1
RGBManip Mug2
RGBManip Mug3


Pose Estimation from GAPartNet

GAPartNet Mug1
GAPartNet Mug2
GAPartNet Mug3


Pose Estimation with Different Numbers of Steps

1 Step
1 step
2 Steps
2 steps
3 Steps
3 steps


Observation for Point-cloud Baselines

Sim 1 PC 1
Scene 1 and its corresponding observation
Sim 2 PC 2
Scene 2 and its corresponding observation
Sim 3 PC 3
Scene 3 and its corresponding observation


Sample 3D Models for Dataset

PC 1
Sample objects from PartNet-Mobility
Sim 2
Sample objects from ShapeNet