From Robot Simulation to the Real World

https://www.infoq.com/presentations/robot-simulation-real-world/


Summary

Louise Poubel overviews Gazebo’s architecture with examples of projects using Gazebo, describing how to bridge virtual robots to their physical counterparts.

About the conference

QCon.ai is a practical AI and machine learning conference bringing together software teams working on all aspects of AI and machine learning.

Bio

Louise Poubel is a software engineer at Open Robotics working on free and open source tools for robotics, like the robot simulator Gazebo and the Robot Operating System (ROS).


Transcript

Poubel: Let’s get started. About six years ago, there was this huge robotics competition going on. The stakes were really high, the prizes were in the millions of dollars, and robots had to do tasks like this: driving vehicles in a disaster scenario, handling the tools a human would handle in that kind of scenario, and also traversing some tough terrain. The same robot had to do these tasks one after the other, in sequence. There were teams from all around the world competing, and as you can imagine (those pictures are from the finals in 2015), these were really hard tasks at the time, and they’re still tough tasks for robots to do today.

The competition didn’t start right there straightaway with, “Let’s do it with the physical robots.” It actually had a first phase that was inside simulation. The robots had to do the same things in simulation: they had to drive a little vehicle inside the simulator, they had to handle tools inside the simulator just like they would later on in the physical competition, and they also had to traverse some tough terrain. The way the competition was structured, the teams that did best in this simulated competition would be granted a physical robot to compete in the physical competition later. So teams that couldn’t afford their own physical robots, or didn’t have a mechanical design of their own, could just use the robots they got from the competition.

You can imagine the stakes were really high; these robots cost millions of dollars, and the simulation phase, which started in 2013, was a fierce competition as well. Teams were being very creative about how they solved things inside the simulation, and some teams had very interesting solutions to some of the problems. You can see that this is a very creative solution; it works and it got the team qualified, but there is one very important little detail: you can’t do that with the physical robot. Its arms are not strong enough to withstand the robot’s weight like that, and the hands are actually very delicate, so you can’t be banging them on the floor like this.

You would never try to do this with the physical robot, but they did it in simulation and they qualified to compete later on with the physical robot. It’s not like they didn’t know; it’s not like they tried this with the real robot and broke a million-dollar robot. They knew that there is this gap between the reality of the simulation and the reality of the physical world, and there always will be.

Today, I’ll be talking to you a little bit about this process of going from simulation to real life, to the real robot interacting with the physical environment, and some of the things we have to be aware of when we make this transition, when we train things in simulation and then put the same code we developed in simulation on the real robot. We have to be aware of the compromises made during the simulation, and we have to be aware of the simplifying assumptions that were made while designing that simulation.

I’ll be talking about this in the context of a simulator called Gazebo, which is what I’m running this presentation in right now, and which has been around for over 15 years. It’s open source and free, and people have been using it for a variety of different use cases all around the world. The reason I’m focusing on Gazebo is that I am one of the core developers, and have been for the past five years. I work at Open Robotics as a software engineer, and that’s why today I’ll be focusing on Gazebo. This picture here is from my master’s thesis, back when I still dealt with physical robots, not just with robots that are virtual inside the screen. Later on, I’ll also talk a little bit about my experience from that work, when I used simulation and then went to the physical robot.

At Open Robotics, we work on open source software for robots, and Gazebo is one of the projects. Another project that we have, which some people here may have heard of, is ROS, the Robot Operating System, and I’ll mention it a little bit later as well. We are a team of around 30 people all around the world; I’m right here in California at the headquarters, which is where I work from. All of us are split between Gazebo, ROS, and some other projects, and everything that we do is free and open source.


Why use Simulation?

For people here who are not familiar with robotics simulation, you may be wondering: why would you even use simulation? Why not just do your whole development directly on the physical robot, since that’s the final goal; you want to control that physical robot. There are many different reasons, and I selected a few that I think are important for this kind of crowd, people interested in AI. The first important reason is that you can get very fast iterations when dealing with a simulation that is always inside your computer.

Imagine you’re dealing with a drone that is flying one kilometer away inside a farm, and every time you change one line of code, you have to fly the drone, the drone falls, and you have to run and pick it up, fix it, and then put it up to fly again; that doesn’t scale. Everybody who’s a software engineer knows that you don’t get things right the first time; you keep trying and keep tweaking your code. With simulation, you can iterate much more quickly than you would on a physical robot.

You can also spare the hardware; hardware can be very expensive, and mistakes can be very expensive too. If you have a one-million-dollar robot, you don’t want to be wearing out its parts, and you don’t want to risk it falling and breaking parts all the time. In simulation, the robots are free; you just reset, and the robot is back in one piece. There is also the safety matter: if you’re developing with a physical robot and you’re not sure exactly what the robot is going to do yet, you’re in danger, depending on the size of the robot, what the robot is doing, and how the robot is moving in that environment. It’s much safer to do the risky things in simulation first, and then go to the physical robot.

Related to all of this is scalability: in simulation, it’s free. You can have 1,000 simulations running in parallel, while having 1,000 physical robots training and doing things in parallel costs much more money. Your whole team might have just one robot, and if all the developers are trying to use that same robot, they are not going to move as fast as if each of them were working in a separate simulation.


When is Simulation Being Used?

When are people using simulation? I think the part most people here would be interested in is machine learning training. For training, you usually need thousands or millions of repetitions for your robot to learn how to perform a task. You don’t want to do that on the real hardware, for all the reasons I mentioned before. This is a big one, and people are using Gazebo and other simulators for this goal. Besides that, there’s also development: people just good old-fashioned sending commands to the robot to make it do what they want, to follow a line, or to pick up an object using some computer vision.

All this development people are doing in simulation, for the reasons I said before. But there’s also prototyping: sometimes you don’t even have the physical robot yet, and you want to create the robot in simulation first, see how things work, and tweak the physical parameters of the robot even before you manufacture it. There’s also testing: a lot of people are already running CI on their robots, where every time you make a change to your robot’s code, maybe nightly or at every pull request, you run that simulation to see if your robot’s behavior is still what it should be.


What Can You Simulate?

What can people simulate inside Gazebo? These are some examples that I took from the ignitionrobotics.org website, which is a website where you can get free models to use in robotics simulation. You can see that there are some ground vehicles here; all these examples are wheeled, but you can also have legged robots, either bipeds with two legs, or quadrupeds, or any other kind of legged robot. You can see that there are some smaller robots, some self-driving cars with sensors, and some other form factors. There are also flying robots, both fixed-wing aircraft and quadcopters, hexacopters, you name it. Some more humanoid-like robots: this one is from NASA, and this one is the PR2 robot. This one is on wheels, but you could have a robot like Atlas, which I showed before, that has legs. Besides these, people are also simulating industrial robots and underwater robots. There are all sorts of robots being simulated inside Gazebo.

It all starts with how you describe your model. For all those models that I showed you before, I showed you the visual appearance, and you may think, “This is just a 3D mesh.” There’s so much more to it. For simulation, you need all the physics information about the robot, like its dynamics: where is the center of mass, what’s the friction between each part of the robot and the external world, how bouncy is it, where exactly are the joints connected, are they springy? All this information has to be embedded into that robot model.

All those models that I showed you before are described in a format called the Simulation Description Format, SDF. This format doesn’t just describe the robot; it also describes everything else in your scene: the visual appearance, where the lights are positioned, the characteristics of the lights and the colors, whether there is wind, whether there is a magnetic field. Every single thing inside your simulation world is described using this format. It is an XML format, so everything is described with XML tags; you have a tag for the specular color of your materials, for example, or a tag for their friction.
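To make that concrete, here is a rough, hand-written sketch of what a small SDF world can look like; the names and numeric values are invented for illustration, not taken from any real model:

    <?xml version="1.0"?>
    <sdf version="1.6">
      <world name="example_world">
        <!-- World-wide properties mentioned above: wind and a magnetic field -->
        <wind>
          <linear_velocity>1 0 0</linear_velocity>
        </wind>
        <magnetic_field>6e-06 2.3e-05 -4.2e-05</magnetic_field>

        <!-- A light, with its position and color characteristics -->
        <light name="sun" type="directional">
          <pose>0 0 10 0 0 0</pose>
          <diffuse>0.8 0.8 0.8 1</diffuse>
          <specular>0.2 0.2 0.2 1</specular>
        </light>

        <!-- A one-link model carrying both physics and visual information -->
        <model name="box_bot">
          <link name="base">
            <inertial>
              <mass>1.0</mass>
            </inertial>
            <collision name="collision">
              <geometry>
                <box><size>0.5 0.5 0.5</size></box>
              </geometry>
              <surface>
                <friction>
                  <ode><mu>0.6</mu></ode>  <!-- friction coefficient -->
                </friction>
              </surface>
            </collision>
            <visual name="visual">
              <geometry>
                <box><size>0.5 0.5 0.5</size></box>
              </geometry>
              <material>
                <specular>0.1 0.1 0.1 1</specular>  <!-- specular color tag -->
              </material>
            </visual>
          </link>
        </model>
      </world>
    </sdf>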

But there’s only so far you can go with XML; sometimes you need more flexibility to express more complex behavior, some more complex logic. For that, you use C++ plugins. Gazebo provides a variety of different interfaces that you can use to change things in simulation. On the rendering side, you can write a C++ plugin that implements different visual characteristics, making things blink in ways that you wouldn’t be able to do with just the XML. The same goes for the physics: you can implement different sensor noise models that you wouldn’t be able to with just the SDF description.

The main programming interface to Gazebo is C++ right now, but I’ll talk a little bit later about how you can use some other languages to also interact with the simulation in very meaningful ways.
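As a sketch of what such a plugin looks like, here is a minimal model plugin following the classic Gazebo plugin pattern; the class name and the velocity value are made up for the example:

    #include <functional>
    #include <gazebo/gazebo.hh>
    #include <gazebo/physics/physics.hh>
    #include <ignition/math/Vector3.hh>

    namespace gazebo
    {
      // Illustrative plugin that nudges its model forward on every step.
      class PushPlugin : public ModelPlugin
      {
        public: void Load(physics::ModelPtr _model, sdf::ElementPtr /*_sdf*/) override
        {
          this->model = _model;
          // Call OnUpdate() at the beginning of every simulation iteration.
          this->updateConn = event::Events::ConnectWorldUpdateBegin(
              std::bind(&PushPlugin::OnUpdate, this));
        }

        private: void OnUpdate()
        {
          // Apply a small constant forward velocity (illustrative value).
          this->model->SetLinearVel(ignition::math::Vector3d(0.1, 0, 0));
        }

        private: physics::ModelPtr model;
        private: event::ConnectionPtr updateConn;
      };

      // Register the plugin so Gazebo can load it from an SDF <plugin> tag.
      GZ_REGISTER_MODEL_PLUGIN(PushPlugin)
    }

The compiled library is then referenced from the model’s SDF with a <plugin> tag, and Gazebo loads it at run time.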


Physics

When people think about robot simulation, the first thing they think about is the physics: how is the robot colliding with other things in the world? How is gravity pulling the robot down? That’s indeed the main, most important part of the simulation. In Gazebo, unlike some other simulators, we don’t implement our own physics engine. Instead, we have an abstraction layer that other people can use to integrate other physics engines. Right now, if you download the latest version of Gazebo, which is Gazebo 10, you’re going to get the four physics engines that we support at the moment. The default is the Open Dynamics Engine, ODE, but we also support DART, Bullet, and Simbody. These are all external projects that are also open source, but they are not part of the core Gazebo code.

Instead, we have this abstraction layer. What happens is that you describe your world only once, you write your SDF file once, you write your C++ plugins only once, and at run time, you can choose which physics engine you’re going to run with. Depending on your use case, you might prefer one engine or another, according to how many robots you have and the kinds of interactions you have between objects, whether you’re doing manipulation or more robot locomotion. All these things will affect which physics engine you choose to use in Gazebo.
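For example, in classic Gazebo the engine can be picked on the command line when you launch the simulator; a sketch, where the world file name is just a placeholder:

    # Same world, different physics engines chosen at launch time:
    gazebo my_world.world -e ode       # the default, Open Dynamics Engine
    gazebo my_world.world -e bullet
    gazebo my_world.world -e dart
    gazebo my_world.world -e simbody

The world’s SDF can also declare a preferred engine through the type attribute of its <physics> element.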

Let’s look a little bit at my little assistant for today. This is NAO. Let’s see some of the characteristics of the physics simulation that you should be aware of when you’re planning to use simulation and then bring the code to the physical world. One of the simplifying assumptions you can see, for example, is if I visualize the collisions of the model here (let me make it transparent): these orange boxes you’re seeing are what the physics engine is actually seeing. The physics engine doesn’t care about these blue and white parts; for collision purposes, it’s only calculating these boxes. It’s not that you couldn’t do it with the more complex parts, it would just be very computationally expensive and not really worth it. It really depends on your final use case.

If you’re really interested in the details of how the parts are colliding with each other, then you want to use a more complex mesh, but for most use cases, you’re only interested in when the robot really bumps into something, and for that, an approximation is much better, and you gain so much in simulation performance. You have to be aware of this before you put the code on a physical robot, and you have to be aware of how much you can tune this. Depending on what you’re using the robot for, you may want to choose these collisions a little bit differently.

Some of the things you can see here, for example: I didn’t put collisions on the fingers, so the fingers just go through here. If you’re doing manipulation, you obviously need collisions for the fingers, but if you’re just making the robot play soccer, for example, you don’t care about the collisions of the fingers; just remove them and gain a little bit of performance in your simulation. You can also see here, for example, that the collision is actually hitting this box, but if you ignore the collision and look only at the complex part itself, the robot looks like it’s floating a little bit. For most use cases, you really want the simplified shapes, but you have to keep that in mind before you go to the physical robot.
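In SDF terms, this is just a link whose <visual> and <collision> elements use different geometries: a detailed mesh for what you see, and a cheap primitive for what the physics engine computes. A sketch, with an invented mesh path and sizes:

    <link name="forearm">
      <!-- What you see: a detailed 3D mesh -->
      <visual name="visual">
        <geometry>
          <mesh><uri>model://my_robot/meshes/forearm.dae</uri></mesh>
        </geometry>
      </visual>
      <!-- What the physics engine sees: a simple box approximation -->
      <collision name="collision">
        <geometry>
          <box><size>0.25 0.08 0.08</size></box>
        </geometry>
      </collision>
    </link>

Dropping the <collision> element entirely is how you get fingers that pass through things, as described above, trading fidelity for performance.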

Another simplification that you usually make: let’s take a look at the joints and at the center of mass of the robot. This is the center of mass for each of the parts of the robot, and you can see here the axes of the joints. Up here on the neck, you can see that there is a joint that lets the neck go up and down, and then the neck can do like this. I think the robot has a total of 25 joints, and this description is made to spec; this is what the perfect robot would be like, and that’s what you put in simulation. In reality, your physically manufactured robot is going to deviate a lot from this: the joints are not going to be perfectly aligned on both sides of the robot, one arm is going to be a little bit heavier than the other, and the center of mass may not be exactly in the center. Maybe the battery moved inside it and it’s a little bit to the side. If you train your algorithms with the perfect robot inside the simulation, once you take that to the physical robot, if it’s overfitting to the perfect model, it’s not going to work on the real robot.

One thing people usually do is randomize this a little bit while training their algorithms: for each iteration, you move that center of mass a little bit, you decrease or increase the mass a little bit, you change the joints, you change all the parameters of the robot. The idea is not that you’re going to find the real robot, because that doesn’t exist. Physical robots are manufactured differently; from one to the other, they’re going to be different. Even a single robot will change over time: it loses a screw and suddenly the center of mass has shifted. The idea of randomization is not to find the real world; it’s to be robust across a wide enough range of variation that, once you put the code on a real robot, that robot falls somewhere in that range.
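As a very rough sketch of how such randomization could look with Gazebo’s C++ API (the link name, the plus or minus 10% range, and the exact inertial calls are assumptions and may differ between Gazebo versions):

    #include <random>
    #include <gazebo/gazebo.hh>
    #include <gazebo/physics/physics.hh>

    namespace gazebo
    {
      // Hypothetical plugin: re-samples a link's mass around its nominal
      // value, so training never sees exactly the same "perfect" robot.
      class RandomizePlugin : public ModelPlugin
      {
        public: void Load(physics::ModelPtr _model, sdf::ElementPtr /*_sdf*/) override
        {
          this->link = _model->GetLink("torso");  // illustrative link name
          this->nominalMass = this->link->GetInertial()->Mass();
        }

        // A training harness would call this between episodes.
        public: void Randomize()
        {
          std::uniform_real_distribution<double> scale(0.9, 1.1);
          this->link->GetInertial()->SetMass(this->nominalMass * scale(this->gen));
          this->link->UpdateMass();  // push the new mass to the physics engine
        }

        private: physics::LinkPtr link;
        private: double nominalMass{0.0};
        private: std::mt19937 gen{std::random_device{}()};
      };

      GZ_REGISTER_MODEL_PLUGIN(RandomizePlugin)
    }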

These are some of the interesting things; there are a bunch of others. There’s inertia, which is nice to look at too, but that’s better seen with the robot not transparent anymore. Here is a little clip from my master’s thesis, which I did with the NAO robot, and I did most of the work inside simulation. Only when I had it working in simulation did I go and put the code on the real robot. A good rule of thumb is: if it works in simulation, it may work on the real robot; if it doesn’t work in simulation, it most probably is not going to work on the real robot. So at least you can rule out all the cases that wouldn’t work.

By the time I got here, I had put enough tolerances in the code, and I had also tested it a lot with the physical robot, because it’s important to periodically test with the physical robot too, so I was confident that the algorithm was working. You can see that someone’s hands are there in case something goes wrong, and this is mainly because of something we just hadn’t modeled in simulation. On the physical robot, I was putting so much strain onto one of the feet all the time, because I was trying to balance, that the motors in the ankles were getting very hot, and the robot comes with a built-in safety mechanism where it just screams, “Motor hot,” and turns off all of its joints. The poor thing had its forehead all scratched, so the hands are there for that kind of case.
