Deep Reinforcement Learning with Real-World Robots
Consider a real-world robot such as a robotic arm that can pick up objects from a basket. Machine learning makes this possible, but it requires a large amount of data to train the robot to pick things up. We mount a camera above the arm, and from its RGB images we train a neural network to learn which commands to send to the robot to successfully grasp objects. We want to solve this task with as few assumptions as possible: importantly, we give the model no information about the geometry of the objects being picked up and no depth map of the scene. To succeed, the model must therefore learn hand-eye coordination on its own; it has to see where the arm is within the camera image, infer how far objects are from the gripper, and combine all of this information to figure out how the arm should move.

To build a dataset, we can start with simulation, which lets us collect millions of samples in a few hours. However, a model that grasps objects 90% of the time in simulation only grasps them 23% of the time when deployed on the real robot, which is a very large performance drop. To close this gap we can use sim-to-real transfer, a transfer-learning technique that uses simulated data to improve real-world sample efficiency.
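To make the "pixels in, commands out" idea concrete, here is a minimal sketch of an end-to-end grasping policy. Everything here is illustrative: a real system would use a deep convolutional network trained with reinforcement learning, while this sketch stands in a single random-weight linear layer, a tiny 4x4 fake image, and a hypothetical 4-dimensional command (dx, dy, dz, gripper). Note that the policy receives only raw RGB pixels, with no object geometry or depth input, matching the assumptions above.

```python
import random

# Illustrative only: a linear layer stands in for the deep network,
# and the image/command sizes are made up for readability.
IMG_H, IMG_W, CHANNELS = 4, 4, 3   # tiny "camera image"
CMD_DIM = 4                        # dx, dy, dz, gripper open/close

random.seed(0)
WEIGHTS = [
    [random.uniform(-0.01, 0.01) for _ in range(IMG_H * IMG_W * CHANNELS)]
    for _ in range(CMD_DIM)
]

def policy(image):
    """Map raw RGB pixels directly to a motor command (no depth input)."""
    flat = [ch for row in image for pixel in row for ch in pixel]
    return [sum(w * x for w, x in zip(row, flat)) for row in WEIGHTS]

# One fake camera frame: all mid-gray pixels.
frame = [[[0.5, 0.5, 0.5] for _ in range(IMG_W)] for _ in range(IMG_H)]
command = policy(frame)
print(len(command))  # number of values the controller sends to the arm
```

The point of the sketch is the interface, not the network: the policy never sees depth or object models, so any hand-eye coordination must be learned from the pixels alone.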
When we compare the results, we find that using only simulated data gives very low grasp success in the real world. Using only real data improves the results somewhat, but the best results come from training on both simulated and real-world data together.
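One common way to combine the two data sources is to build each training batch mostly from plentiful simulated grasps plus a small slice of scarce real-robot grasps. The sketch below shows only this batch-mixing idea; the dataset sizes, the 10% real fraction, and all names are assumptions for illustration, not details of the original system.

```python
import random

# Illustrative sim-to-real data mixing: simulated grasps are cheap and
# plentiful, real-robot grasps are expensive and scarce, so each batch
# mixes the two at a fixed (assumed) ratio.
random.seed(1)
sim_data = [("sim", i) for i in range(1000)]   # plentiful simulated grasps
real_data = [("real", i) for i in range(50)]   # scarce real-robot grasps

def mixed_batch(batch_size=32, real_fraction=0.1):
    """Draw a batch that is mostly simulated data plus some real data."""
    n_real = max(1, int(batch_size * real_fraction))
    batch = random.sample(real_data, n_real)               # without replacement
    batch += random.choices(sim_data, k=batch_size - n_real)  # with replacement
    random.shuffle(batch)
    return batch

batch = mixed_batch()
n_real_in_batch = sum(1 for src, _ in batch if src == "real")
print(len(batch), n_real_in_batch)
```

Training on such mixed batches lets the network see the visual statistics of the real world while still benefiting from the sheer volume of simulated experience.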