All right, I think we are good to go. Good early evening, everybody, and thank you very much for joining us here. I had asked about serving drinks, but they said no, so unfortunately that's not going to work. But thank you very much.
If you've come to talk about simulations and learn more about AWS SimSpace Weaver, which was mentioned this morning and launched in Adam's keynote, you are in the right place.
I am joined by Roland and Mike from two of our partners, who over the next hour or so will walk through two of the projects they've been working on with Weaver.
We're going to explore the problem space that Weaver was designed to solve and discuss some of the service's new and exciting features. But before I do that, I'm going to start with a bit of a confession. Call this confessions of an AWS product manager.
I gave a talk last year at re:Invent, a bit smaller than this one, where we introduced two examples of scaling simulations up to over a million objects. During that talk I said things like: here are the instances that power this simulation, and the 160 Unreal servers running on those instances; we have our 10 instances here. So I was talking about 10-instance clusters running massive simulations, and I never once mentioned SimSpace Weaver.
And then everybody gave me feedback. They said: it's really cool, but tell us more about the distributed computing part. What's going on with the distributed computing cluster? Can you please tell us what the heck is really going on here?
At the time I couldn't. But that distributed computing cluster we had set up last year was, in fact, SimSpace Weaver. And so here we are with AWS SimSpace Weaver, the first of our simulation-specific services. It is designed to help developers take their spatial simulations to new levels of scale and complexity, and to create dynamic, seamless virtual worlds with millions of dynamic objects across multiple instances in the cloud.
You can use your own simulators or popular 3D tools such as Unity and Unreal. On the surface, SimSpace Weaver addresses a simple premise for a rather complex problem: developers simply want to build bigger simulations.
I want to build a bigger simulation, one that resembles an entire city. But that is actually rather difficult if you want to simulate all the people, the cars, and everything else that goes on in that particular city.
So to understand why that problem is actually rather hard, let's make sure we're all level set on the basics.
Number one: a simulation is the evolution of a model over time, much like this car, whose position and velocity change as it rolls down the inclined plane. Physics professors everywhere will rejoice that we're using their examples here at re:Invent. Simulations come in many forms: fluid dynamics, weather, structural simulations.
Adam talked a lot about these in his keynote this morning, and apparently there was some structural simulation missing on that bridge, which could have used more of it. Spatial simulation is a bit different in that we are modeling objects with time and place.
Like the spheres moving through this environment: each object has a position at a specific point in time, all the objects share the same space, and they all interact with one another. As you saw there, the spheres actually dodge each other.
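To make "position at a specific point in time" concrete, here's a minimal sketch of a spatial simulation tick loop; the Entity type, the 10 Hz rate, and the numbers are illustrative, not anything from the SimSpace Weaver SDK.

```cpp
#include <cstdio>
#include <vector>

// Illustrative only: each entity has a position at a specific point in
// time, and the simulation evolves that state one fixed tick at a time.
struct Entity {
    double x, y;   // position in meters
    double vx, vy; // velocity in meters per second
};

int main() {
    const double dt = 0.1; // a 10 Hz tick, one tenth of a second
    std::vector<Entity> entities = {{0.0, 0.0, 1.0, 0.5},
                                    {5.0, 5.0, -0.5, 0.0}};
    for (int tick = 0; tick < 100; ++tick) {
        for (auto& e : entities) { // evolve the model over time
            e.x += e.vx * dt;
            e.y += e.vy * dt;
        }
    }
    std::printf("entity 0 ended at (%.1f, %.1f)\n",
                entities[0].x, entities[0].y);
}
```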
So if you're going to build really large spatial simulations that seek to reach real-world scale, where do you start? The first step is creating your environment, essentially what the simulation looks like, and defining what the simulation does and how it behaves: the code that actually runs the behaviors of the entities. You then go to AWS and say: OK, now I actually need to run this simulation application.
So I grab an EC2 instance, deploy my simulation, and push it as far as I can, and maybe I get 100,000 entities. Then I say: OK, let me scale up the compute. It's the cloud, after all, right?
So we grab another, bigger instance; I'm sure there's a bigger one in the data center. We get to 200,000. We try again and get to 400,000. And at some point there just isn't a bigger instance, and you have to think: OK, at this point I just need another instance.
And that brings up questions. How are you actually going to do that? If I need another instance, now I need to think about the hardware, the networking, synchronizing state between the instances, replicating data between them, and how to divide up my entire simulation space. All told, it's enough to make you want to flip your desk over and light it on fire. Or there's the other option, which you have now: SimSpace Weaver. With SimSpace Weaver, we take the simulation that used to be confined to a single instance and distribute it across multiple instances for you in the cloud.
We handle all the infrastructure, the networking, and setting up a secure AWS environment in which you can deploy your simulation applications and scale across up to 10 instances that are woven together into a single, seamless simulation environment.
Now, where does it fit in the grand scheme of a simulation? Understanding this is key to understanding where Weaver fits in. SimSpace Weaver itself is the scalable simulation infrastructure; you remain in complete control over what the simulation does and what it looks like.
The simulation logic, whether you write it in your own custom engine or use something like Unreal or Unity, is totally up to you. If you want to create a city from GIS or geospatial data that looks like Vegas, go for it. If you want to make one that looks like New Orleans, go for it; that too is totally up to you. Or you can create a fully synthetic city, and that works just as well.
The key to this is the SimSpace Weaver App SDK, which is how you integrate with the service, and it reflects one of the key service principles. So let's dig into this a little. You build SimSpace Weaver apps by integrating the C++ SDK with your simulation code.
The first type of app is what we like to call a spatial app. These are the apps that read and write entity data and entity state as the simulation progresses; they are the core simulation logic.
Then you also have a class of custom applications that do other things and interface with your simulation. Usually these are used to analyze data from the simulation, or to read data out of the simulation for visualization, which we will see examples of. You can launch these custom apps on every instance, or launch them manually; that's your choice.
Now, what are we actually doing with Weaver? Let's take a closer look at the spatial apps. Simulations today look a lot like this picture: a single simulation application that manages all of the entity state, essentially everything in the sim, at any one time.
If you need more, you go get more compute until you can't, and then you're stuck. We are also going to throw more compute at this problem, but in a slightly different way: we take the simulation space and divide it up into what we call spatial partitions.
Each of these partitions, of which I have four here, all color coded, runs its own SimSpace Weaver app. Instead of taking our compute power and throwing it at one giant area, we throw it at each of these smaller sections, assigning a set amount of compute and memory resources to each, and each app manages all of the entities within its particular area.
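As a rough sketch of what "dividing the space" means, here's how a world position could be mapped to one of four grid partitions; the PartitionGrid type is hypothetical, since the service itself handles this assignment for you.

```cpp
#include <cstdio>

// Illustrative only: assign a world position to one of four partitions
// arranged as a 2x2 grid. The real service manages this for you.
struct PartitionGrid {
    double worldSize; // square world, worldSize x worldSize meters
    int cellsPerAxis; // 2 -> 4 partitions

    int partitionFor(double x, double y) const {
        const double cell = worldSize / cellsPerAxis;
        int col = static_cast<int>(x / cell);
        int row = static_cast<int>(y / cell);
        if (col >= cellsPerAxis) col = cellsPerAxis - 1; // far-edge clamp
        if (row >= cellsPerAxis) row = cellsPerAxis - 1;
        return row * cellsPerAxis + col;
    }
};

int main() {
    PartitionGrid grid{1000.0, 2};
    std::printf("partition: %d\n", grid.partitionFor(750.0, 200.0)); // -> 1
}
```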
This behaves like a single simulation application does today, except that SimSpace Weaver is maintaining the global state of all your entities under the hood for you. Invariably, as things move through the simulation, because these simulations are intended to be dynamic, cars drive on the road and people move from place to place, entities need to cross these spatial partitions, or boundaries, and they also need to cross instances.
We help you with that as well. Entity state is stored in shared memory segments, the blue rectangles in each of the partitions. And here I have one particular rectangle representing my orange sphere, which is going to transition from one partition to another.
Under the hood, we transfer the shared memory segments and the data for those entities for you, so we're handling all of that data handoff. This happens across instances just as easily as across partitions. From your standpoint, and from the standpoint of the entities within the environment, it's one single, seamless world.
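Here's a toy illustration of that handoff, under the simplifying assumption of two partitions split at x = 500; the types are invented, and the real transfer of shared memory segments is done by the service, not by your code.

```cpp
#include <cstdio>
#include <unordered_map>
#include <utility>
#include <vector>

// Illustrative only: when an entity moves past a partition boundary,
// its state is handed off to the new owner. SimSpace Weaver performs
// this transfer (including across instances) under the hood.
struct Entity { int id; double x, y; };

int ownerOf(double x) { return x < 500.0 ? 0 : 1; } // two partitions

int main() {
    std::unordered_map<int, std::vector<Entity>> partitions;
    partitions[0] = {{42, 499.0, 10.0}};
    partitions[1] = {};

    for (auto& e : partitions[0]) e.x += 2.0; // one tick: crosses boundary

    std::vector<Entity> keep;
    for (const auto& e : partitions[0]) {
        if (ownerOf(e.x) != 0) partitions[1].push_back(e); // handoff
        else keep.push_back(e);
    }
    partitions[0] = std::move(keep);
    std::printf("partition 1 now owns %zu entities\n", partitions[1].size());
}
```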
Now we can move across the world, but we also need to be able to view it. How do you do that? SimSpace Weaver has a concept called subscriptions. Using the custom applications that can read simulation data from the sim, we can create subscriptions on different areas of the simulation and read that data out to a client for visualization purposes.
That is how we create the nice videos, the nice visualizations, and the interactivity that you'll see in some of the videos later. Subscriptions also work for spatial apps. I often get asked: can entities interact across boundaries? How does one entity know that there's another entity directly on the other side of the partition?
The best practice is to have the spatial apps subscribe to a boundary area on either side of them, so that you can actively simulate and acknowledge what's on the other side and preserve the interactivity.
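A minimal sketch of that best practice, assuming one-dimensional regions for simplicity: the owned area is widened by a margin, and the extra strip is the read-only subscription that lets you see across the boundary.

```cpp
#include <cstdio>

// Illustrative only: a spatial app that owns [x0, x1) subscribes to an
// extra margin on each side so entities just across the boundary are
// visible for collision avoidance. Names are hypothetical.
struct Region { double x0, x1; };

Region subscriptionFor(Region owned, double margin) {
    // the widened strip is read-only awareness, not owned entity state
    return {owned.x0 - margin, owned.x1 + margin};
}

int main() {
    Region owned{0.0, 500.0};
    Region sub = subscriptionFor(owned, 25.0); // see 25 m into each neighbor
    std::printf("subscribed to [%.0f, %.0f)\n", sub.x0, sub.x1);
}
```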
OK, those are the basics of the service. So I'm going to stop talking for a bit and turn it over to Roland and Mike to explain how developers are actively taking advantage of SimSpace Weaver's tools to create virtual worlds at unprecedented scale. Doctor, here you go.
Roland: Thank you very much. I'm Roland Geraerts, and I'm the founder of uCrowds, a company I founded five years ago as a spin-off of Utrecht University in the Netherlands. It all started here, with the Tour de France, a sporting event being held in Utrecht. Utrecht is a small city, and the municipality of Utrecht asked us: can you host 800,000 people in such a small city?
In order to answer that, we made some simulations to get some insights. Here are some pictures; you can see a simulation going on. The simulations showed that at certain places there was not enough capacity, so we had to broaden the road a bit, or install one-way bridges or one-way roads.
Another example is Schiphol Airport. They wanted to know: suppose it becomes 30% more crowded, can we handle that in this environment? Here too, we could run simulations to answer these questions.
In order to answer these simulation questions, we have been studying and building a framework that can simulate big crowds. This framework consists of a few elements. One of them is a representation of the environment that a computer can work with quickly; that representation then talks to a framework consisting of five levels of movement.
These levels comprise, or describe, how people move and behave in a virtual environment. At the highest level we have AI that decomposes a task into subtasks, so that for each task we know the start and goal positions in the environment.
Then at level four, we compute an indicative route: a route that indicates how you globally want to walk from your start to your goal while moving through a corridor in a building, or at street level.
Once we have this route, what we do, iteratively, ten times per second, is pick a point on the route to follow. This gives us a preferred velocity, a direction and a speed at which you want to move. But when there is a crowd, you cannot always move in that preferred direction, because you have to take into account everything that happens around you.
For instance, there may be an imminent collision, and you have to keep it from happening. Or maybe you are moving in a social group, like two parents and two kids: you want to stay coherent, to stay together while moving from A to B.
All these adjustments are made at level two, and this yields a final velocity that we can feed into an animation engine or a physics engine, which eventually drives the visualization on your screen.
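To tie levels three and two together, here's a small sketch of one agent update: a point on the indicative route yields a preferred velocity, and a stand-in "adjustment" perturbs it into the final velocity. All names and numbers are invented for illustration.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: one agent update through the lower levels of the
// framework. Level 3 turns a point on the indicative route into a
// preferred velocity; level 2 then adjusts it (avoidance, groups) into
// the final velocity.
struct Vec { double x, y; };

Vec preferredVelocity(Vec pos, Vec routePoint, double speed) {
    Vec d{routePoint.x - pos.x, routePoint.y - pos.y};
    double len = std::sqrt(d.x * d.x + d.y * d.y);
    if (len < 1e-9) return {0.0, 0.0};
    return {d.x / len * speed, d.y / len * speed}; // direction times speed
}

int main() {
    const double dt = 0.1; // ten updates per second
    Vec pos{0.0, 0.0};
    Vec v = preferredVelocity(pos, {10.0, 0.0}, 1.4); // ~walking speed, m/s
    v.y += 0.3; // stand-in for a level-2 sidestep around a collision
    pos.x += v.x * dt;
    pos.y += v.y * dt;
    std::printf("new position: (%.2f, %.2f)\n", pos.x, pos.y);
}
```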
We created an implementation of this framework in an engine written in 100% Rust. We also built plugins for the popular game engines Unity and Unreal, and, obviously, with AWS we have been working on the SimSpace Weaver integration.
We also have another product, with a very nice user interface, called SimCrowds.
This engine has been built for performance and interactivity, and these are some of the tricks we used. We started off with a C++ engine, but it had some memory bugs; we wanted to make it safer and able to scale on machines with many computing cores.
So we moved from C++ to Rust, and we also moved from an object-oriented way of programming to a data-driven one. We looked at how memory behaved on real machines and transformed our algorithms so that they became cache-aware.
Then we looked carefully at memory management, making sure we didn't do too many allocations and deallocations, because that really destroys high-performance computation.
Some computations are too expensive to finish during one time tick, which typically takes 0.1 seconds. So these heavy tasks are decomposed into subtasks and distributed over multiple frames.
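A minimal sketch of that idea, with invented numbers: a big scan is preallocated once and then processed in slices, one slice per frame, so each 0.1-second tick stays within budget and the hot loop never allocates.

```cpp
#include <cstdio>
#include <vector>

// Illustrative only: a computation too heavy for one 0.1 s tick is cut
// into slices and spread over several frames. The buffer is allocated
// once up front, so the per-frame loop never allocates.
int main() {
    std::vector<double> work(1'000'000, 1.0);  // preallocated once
    const std::size_t slice = work.size() / 4; // finish over four frames
    std::size_t cursor = 0;
    double sum = 0.0;

    for (int frame = 0; frame < 4; ++frame) {
        const std::size_t end = cursor + slice;
        for (; cursor < end; ++cursor) // contiguous, cache-friendly scan
            sum += work[cursor];
        std::printf("frame %d done, partial sum %.0f\n", frame, sum);
    }
}
```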
The result is that we could simulate 400,000 agents on a consumer PC, or 150,000 agents in the browser, because Rust allows you to compile to WebAssembly.
But there was still a need for speed, because 400,000 is not enough; we wanted to do 1 million. Working together with AWS unlocked an interactive digital twin featuring a cloud-based simulation of a persistent world. Persistent means it stays alive.
I have been talking to many gaming companies, and they said that when you are creating a game, often the environment you are walking around in is created around you, and the behaviors of the NPCs are created around you as well. But here, persistent means the whole world keeps running. Interactive means that you can make changes to the world, change its properties, and have those changes taken into account while the simulation is going on.
Then there are real-time NPCs. We have a realistic photogrammetry environment that we can render in real time, visualized and animated on a client, and we worked together with Epic Games to animate and visualize 1 million visible NPCs on that client.
But there were some challenges, and one way to solve a big problem is to eat the elephant in slices. The first thing we did was split the whole environment into cells, like Ritz explained, and then increase the cell size so there would be an overlap between the different parts of the simulations. Since our simulations are deterministic, that is OK.
This overlap allows you to smoothly transition from one virtual machine to the other. We then create a navigation mesh structure and a simulation per cell, and by doing careful synchronization, we made sure it all happens correctly.
The creation of the navigation mesh structures needed for the simulations can be quite difficult when you are working with GIS data, because that data is generally view-ready but not simulation-ready. So we looked at three examples.
The first is that we can create a simulation anywhere on Earth based on OpenStreetMap, open data, together with some satellite data. We took the footprints of the buildings and used them as obstacles in our environment.
For the second environment, we got photogrammetry data from Mecs R, and here too we took the footprints of the buildings to create a representation of where people can walk.
In the third example, we got data from Aerometrex, in this case a representation of Las Vegas. They gave us huge images in TIFF format, and we extracted information from them: each pixel denoted the meaning of a certain area, such as street, sidewalk, or something else. We extracted the polygons denoting the places where you could walk and entered those into our simulation as input.
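As a toy version of that pipeline's first step, here's how labeled pixels could be turned into a walkable mask; the label values are invented, and the real polygon extraction and navigation mesh construction are much more involved.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative only: each pixel of the labeled TIFF marks what a spot
// of ground is. A first step toward "simulation ready" data is masking
// the walkable pixels; the label values here are invented.
enum Label : std::uint8_t { kBuilding = 0, kStreet = 1, kSidewalk = 2 };

int main() {
    const int w = 4, h = 2;
    std::vector<std::uint8_t> pixels = {0, 1, 2, 2,
                                        0, 0, 2, 1};
    std::vector<bool> walkable(w * h);
    for (int i = 0; i < w * h; ++i)
        walkable[i] = (pixels[i] == kSidewalk); // keep pedestrian areas
    // a real pipeline would then trace this mask into polygons and hand
    // those to the navigation mesh builder
    std::printf("%ld walkable pixels\n",
                (long)std::count(walkable.begin(), walkable.end(), true));
}
```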
Another challenge: we could simulate 1 million people in real time in the cloud, no problem, but then we needed to get the data to a client. If we did nothing about the data, no compression and so on, we would need a 10-gigabit connection, which is obviously way too much.
So we compressed the data, came up with a customized protocol for sending UDP packets, and applied temporal and spatial sampling. An example of temporal sampling: for agents, or entities, that are very far away, you don't need to send all the data every frame, because it's just a pixel moving on the screen, and if the movement is less than a pixel, you won't see it at all.
These are some of the tricks for compressing the data stream. Here is an example; you can see the picture there. With at most 10 megabytes per second we could send over 1 million visible entities and render them. That may still be a lot of data, but in most cases you are not seeing 1 million people at once, because they are hidden by buildings and other shapes.
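Here's a minimal sketch of the temporal-sampling rule just described, assuming a simple pinhole-style scaling of apparent size with distance; the constants are invented.

```cpp
#include <cstdio>

// Illustrative only: temporal sampling. An entity's update is worth
// sending only if its on-screen movement since the last update would
// exceed one pixel; apparent size shrinks roughly with distance.
bool worthSending(double movedMeters, double distanceToCamera,
                  double pixelsPerMeterAtOneMeter) {
    double movedPixels =
        movedMeters * pixelsPerMeterAtOneMeter / distanceToCamera;
    return movedPixels >= 1.0;
}

int main() {
    std::printf("near: %d, far: %d\n",
                worthSending(0.14, 10.0, 800.0),    // ~11 px   -> send
                worthSending(0.14, 2000.0, 800.0)); // ~0.06 px -> skip
}
```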
Next, we started working on the visualization and animation of 1 million NPCs with the help of Epic Games. We used Unreal Engine 5 with Blueprints, and we added vertex-based animation and rendering on the GPU using their new Niagara particle system, which was surprisingly fast.
We added smooth interpolation of positions and animations. For instance, when a character is standing still and wants to start walking, there needs to be a smooth transition; otherwise it doesn't look right. We also integrated a few levels of detail into the rendering for high performance.
Basically, agents that are very far away can be rendered with far fewer polygons, and agents that are just one pixel on the screen can be rendered with as little as two triangles, if you do it carefully.
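A tiny sketch of that level-of-detail selection; the pixel thresholds are invented, but the shape of the decision is as described: distant agents get cheaper and cheaper representations, down to a two-triangle impostor.

```cpp
#include <cstdio>

// Illustrative only: choose a level of detail from an agent's projected
// screen size. Thresholds are invented; the point is that a one-pixel
// agent can be drawn as little more than two triangles.
enum class Lod { FullMesh, LowPolyMesh, TwoTriangleImpostor };

Lod lodFor(double screenHeightPixels) {
    if (screenHeightPixels > 100.0) return Lod::FullMesh;
    if (screenHeightPixels > 8.0) return Lod::LowPolyMesh;
    return Lod::TwoTriangleImpostor; // effectively a textured quad
}

int main() {
    std::printf("3 px tall -> impostor? %d\n",
                lodFor(3.0) == Lod::TwoTriangleImpostor); // 1
}
```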
So we eventually managed to create a simulation of 1 million people in real time, including the visualization. But the world is not enough; we want to do more. We want to make simulations at world scale, so now we are working towards simulating a whole country, which is technically possible.
We are looking for partners to collaborate with. We want improved labeling of photogrammetry data, so that the simulations become more and more realistic. And we want to do not only real-time simulations but also super-real-time simulations.
For instance, we did a project at London St Pancras station, where 32 3D cameras measure the crowd flow. With a three-second delay, we can feed that information into our simulation system, and we then typically have three minutes to predict what will happen 15 minutes into the future, because that gives us enough time to make changes in the real world: trains arriving on a different track, or telling passengers to go 50 m to their left and take their train there. So there is a need for super-real-time simulations in the cloud as well.
Next, we want collaborations with companies working on other simulation components, like a traffic simulator. OK, I'll pass this on to the next speaker. Good luck.
Mike: All right, it looks like we're operational. Hi, everybody. I'm Mike Taylor, one of the co-founders at Duality. At Duality, we have built a digital twin integration platform to help our customers build robots, AI systems, even metaverse experiences. And I know that if you follow the news and the shenanigans on some of those sites, "metaverse" has got a bit of a negative connotation at the moment. If you give me a few minutes, I'd like to try to rehab its reputation.
We might have a video compression issue, but I want to start by talking about what our customers need in simulation. Our customers need large, complex, realistic simulations that reflect their operating domain, because they build robots and AI systems for the real world. As someone who used to do that in his career, I know how hard this is. It's painful to screw up in the real world; that's why you want to develop, train, and test in simulation as much as possible.
When it comes to the environments they need: in a city environment, our users need a large variety of scenarios on which they can test and train their systems. That means, from a city perspective, we need something very, very large and highly variable from top to bottom. From an environment-quality perspective, we have a similar need.
Our customers range from undersea robotics to aerial systems to everything in between: ground robots, even indoor systems for retail and manufacturing. The common thread for all of them is high-quality environments. So when we think about a city, it needs to look good at street level and at the tops of the roofs. Drones fly over a lot of different environments, and at cruising altitude a lot of things look good to their sensors, but when you start to land, it needs to be accurate.
When we're talking about a city, though, what we really care about are the dynamic obstacles. At least, that's what the robots think of them as; we all think of them as cars and people. That's the messy part of this world; that's what's really, really hard. And looking at the people here, I do want to give a little shout-out: all of the humans you see here are made with Epic's MetaHuman system. If you haven't looked at it, I highly recommend that you do.
One of our customers is actually using our system, Falcon, along with MetaHumans to develop images that they've put in front of their consumers, and the consumers can't figure out whether those are pictures of a real human or a synthetic person. Our customers love MetaHumans because of how accurate they look. But they also need them to behave accurately, and at a city level that means their motion models can't be simplistic. They can't simply walk around the same block repeatedly; they need to be able to walk from one end of the city to the other. At the scales we're all talking about here, that's quite the simulation challenge, and that's why we've been working with Weaver.
Now, I said I wanted to rehabilitate the "meta" term, so if you'll give me a moment: a metaverse is simply the virtualization of a physical 3D space, along with the assets and interactions within it. Put that way, it doesn't seem too absurd. And if we look over our history, this is just the next digital transformation.
Starting back when PCs became common and we had the beginnings of the web, that was us learning how to virtualize records: I don't need a stack of papers on my desk; I have Word and Excel to replace them. As our networking and our compute got better, we started to virtualize communication and processes; that's why pretty much all of us here have smartphones in our pockets.
Probably the last time we talked with our bank, it was through a website. Now we're working on virtualizing spaces, because we've hit that point with our tech, our compute, and our networking. Done correctly, in the right domain, you'd be forgiven for thinking that a metaverse is really just a very advanced simulation. And that's because that's exactly what it is.
So we believe at Duality that simulation very much is the engine that powers the metaverse, and that simulation needs to run digital twins. A lot of folks know digital twins, definitely on the IoT side of things. These are real-time, or operational, digital twins that tell you what is happening in the world: where is my robot arm pointed? What is going on on my factory floor? But there's a more powerful kind, the predictive digital twin, that you can run in simulations. These twins let you ask more important questions, the "what if" questions: what would happen if my robot arm was pointed here instead? What would happen if I replaced a computer or a machine on my factory floor?
Now, the problem for our customers is that metaverses do exist. If you have children the same age as mine, you actually know their names: Roblox, Minecraft, Fortnite. These are actual metaverses, and they're fantastic for engagement. I play Fortnite with my son all the time; it's a great time. But from a business perspective, the value tops out at marketing and brand presence.
On the other hand, there are simulators out there, including digital twin simulators. The problem for our customers is that they lack scale: they don't have the ability to pull in a wide variety of assets, of twins. They can't bring in multiple intelligent systems and the humans those systems have to interact with.
"And that's why we've built the metaverse to solve. We call it the real-world problems, the problems of our customers. It's a, a metaverse is composed of digital twins instead of just avatars, it provides accurate and deterministic simulation so that you can generate realistic data data that's indistinguishable from the real world. And the twin simulator that powers this for us is Falcon. So this is our baby. We all think it's pretty wonderful.
Falcon loads your worlds and the assets inside them at runtime. Just as Microsoft Word didn't show up on my computer with my thesis already on it, but can run it, Falcon can load whatever environment you hand to it. The robot, the quadcopter here, the computer vision system: all of that is instantiated at runtime based on a USD scenario system, which means a roboticist like myself can edit human-readable files to change this virtual world and, most importantly, get out of it the data I need to make my system better. This allows us to help our customers end to end: acquiring the digital twins they need, managing them, simulating those twins, and getting out that crucial data, data to train their systems and data to understand how their systems are behaving. That's the view we wanted to bring into this project with SimSpace.
Sorry, it looks like the audio is a bit loud here. We didn't want to simply integrate Falcon and SimSpace and talk to everybody about the stats. We wanted to show how tools like this can answer very important questions, questions like: how can smart cities that are starting to come online, and intelligent vehicles, be combined to tackle problems like reducing the time it takes an ambulance to navigate a city like this to get to someone in distress? Our first step along that path was pulling in a digital twin, and what we brought in was CitySample from Epic Games.
This is an example city that Epic put together to highlight what's possible with Unreal Engine 5, and, spoiler alert, a lot is possible. The city was procedurally generated with Houdini. The end result is a world measuring roughly four by five kilometers, with 260 kilometers of streets and 7,000 buildings. As you drive past the buildings, you can actually see into the windows of the rooms, not just as a picture but with active parallax; it's really quite impressive. There are 100,000 vehicles, realistic top to bottom, and out of the box CitySample comes with 50,000 of Epic's MetaHumans wandering through the world.
What we wanted to do on this project, among other things, was use Weaver to really up that count, and to do so in a way that didn't crush our local machines, leveraging the cloud instead. On this particular project we actually played the role of Duality Bells, as one of our customers. We worked on the integration with SimSpace Weaver, but we also built all of the autonomy systems our customers typically build: the routing system to figure out how an ambulance gets from where it is to a person in need, the autonomy system to drive that ambulance along the path, and the dispatch system to decide which ambulance should be deployed. And we did all of this heavily leveraging Weaver in its local mode, so that our teams could work on both the integration and the simulation itself without necessarily needing to connect to the cloud.
The architecture we put together supports up to 10 workers across this world, evenly distributed across the entire space. Each worker supports up to 16 partitions, again equally distributed across that geographic area. What Weaver allows us to do is treat all these separate simulations, each partition running its own instance of our Falcon simulator, as one seamless, integrated simulation that our clients can talk to. So, as Ritz mentioned, we wrote a custom communication app. Its job is to talk with all of the Weaver workers and partitions and communicate with another instance of Falcon: a client that is running our EMS scenario. Every time step, every tick, it reports the locations of all the entities that SimSpace is managing out to that client.
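For a sense of the shape of such a communication app, here's a hypothetical sketch of the per-tick reporting loop; none of these types or functions are the actual SimSpace Weaver App SDK or Falcon APIs.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative only: the shape of a custom "communication app" that,
// each tick, gathers entity positions from the partitions and reports
// them to a viewing client. Types and names are hypothetical; this is
// not the SimSpace Weaver App SDK.
struct EntityState { std::uint32_t id; float x, y; };

void reportTick(std::uint64_t tick, const std::vector<EntityState>& snapshot) {
    // stand-in for serializing the snapshot into a UDP packet
    std::printf("tick %llu: %zu entities\n",
                (unsigned long long)tick, snapshot.size());
}

int main() {
    std::vector<EntityState> snapshot = {{1, 10.f, 2.f}, {2, 5.f, 7.f}};
    for (std::uint64_t tick = 0; tick < 3; ++tick)
        reportTick(tick, snapshot); // one report per simulation tick
}
```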
So now the client, which in this EMS scenario is running a smart ambulance, can render all of the people, all of the cars, everything that SimSpace is managing for us, both for the viewport and for the sensors on board. However, the client doesn't have to take on any of the load of simulating those complex humans, who in this case can walk from one end of the city to the other. The result is this: we now have a simulation in which we can ask the questions we want to ask. As we look at this world, trigger an emergency, and figure out which ambulance can get there, we can start to ask: I've predicted three minutes and 12 seconds to make this route; how long will it really take? What happens if one of these humans makes a different choice? Or, more interestingly, what happens if a human playing along with me, not a simulated human but a real, live human, does something different?
In this case, you can have a human come in and behave as a traffic officer. There aren't a lot of really good models of the human brain, so if you want to understand what a traffic officer might do, why not ask one to join the simulation and do the job they're trained to do? What we find is that as our intelligent ambulance drives down this route, we can see all of the SimSpace-managed people and render them in our sensors, and on this particular run, instead of the three minutes and 12 seconds we predicted, it took three minutes and five seconds. This is exactly the kind of data we need to get out of this: the latest version of my routing system is overestimating the time it takes to reach the person in need, so I can move on and try to improve it.
I just want to wrap up with a comment about data. All of this is in service of generating the data our customers need to, again, develop, train, and test their systems. They need data that is indistinguishable from the real world, that is information-rich and intentional, with the goal being better ML systems, better robotic systems, more robust systems on our streets.

Ritz: Thank you so much, Mike and Roland. It's really been a pleasure working with both of you and your teams. I think we've just about managed to break every existing simulation tool out there as we try to hit these impressive levels of scale, and this is only the beginning. This has been a lot of fun, and I look forward to continuing to work with you and your teams.

I want to talk about time for a second, because keeping track of time is a bit of a human obsession; after all, we need to know when and where to be so that we can make every re:Invent session we want to attend. And Mike, you gave us a bit of a history lesson, so I get to do one too, because keeping accurate time is not always easy. One story I like goes back to the 1600s and 1700s, and it revolves around the ability to determine your longitude on the Earth. To do this, ships mainly relied on dead reckoning. To do that calculation, you need to know (a) how fast the ship is moving, (b) which direction it's headed, and (c) how much time has passed since the last time you did the calculation. In theory you could treat this as a basic kinematics problem, but keeping accurate time is really hard to do at sea. So what ends up happening is that ships bounce all over the place, not really knowing where they are, and then sink because they crash on the rocks. It's called dead reckoning for a reason. The clocks of the day, pendulum clocks and hourglasses, simply didn't hold up to the extreme conditions at sea: the movement of the ships, the temperature differences, and so on. It wasn't until the 1730s, when John Harrison invented a highly accurate chronometer, that ships were finally able to determine their precise position anywhere on Earth, and an explosion of trade followed.
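Since the story hinges on that calculation, here's the dead-reckoning update as a tiny sketch, with invented numbers: note how an error in the elapsed-time term corrupts the position estimate directly.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative only: the dead-reckoning update the story describes.
// Position is advanced from (a) speed, (b) heading, and (c) elapsed
// time, so any error in the ship's clock becomes a position error.
int main() {
    const double kPi = 3.14159265358979;
    double east = 0.0, north = 0.0; // nautical miles from the last fix
    double speedKnots = 8.0;        // (a) how fast
    double headingRad = kPi / 4.0;  // (b) which direction: north-east
    double hoursElapsed = 6.0;      // (c) what the (unreliable) clock says

    east  += speedKnots * std::sin(headingRad) * hoursElapsed;
    north += speedKnots * std::cos(headingRad) * hoursElapsed;
    std::printf("estimated position: %.1f nmi E, %.1f nmi N\n", east, north);
}
```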
Now, why are we talking about keeping time? Because time is very easy to keep track of when everything is on one instance. If it's all running on one instance with one simulation application, there is always one clock, and it always advances at a predictable rate. As soon as you start to distribute the simulation across multiple instances, or even break it up into partitions, you run into the problem of keeping everything time-synchronized with everything else, because if you don't, the simulation simply stops making sense. For example, and we're going off script here: are you going to catch the water bottle? OK, go for it. There you go. Mike is able to catch the water bottle because we are interacting in real time: he knows exactly when I throw it, and he can respond accordingly. If we were on different servers and Mike was five seconds behind me or five seconds ahead of me, he would have no idea when I actually threw the water bottle, and as a result he could never really respond. If Mike was five seconds behind me and I threw the water bottle into the past, what happens when Mike realizes the water bottle has shown up? Does it come back to me? Does he never see it? It's the kind of paradox that will make your brain explode.
At the end of the day, you don't need to worry about any of this when you distribute your spatial simulation across servers, because we solved it for you. SimSpace Weaver keeps your entire simulation time-synchronized no matter how many servers you distribute it across. Whether you want to run the simulation at 10 Hz, as Roland was saying, at 15, or at 30, you can run the whole thing in lockstep if you want, so you don't end up shipwrecked on the rocks.
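To illustrate what lockstep means, here's a self-contained sketch using a C++20 std::barrier as a stand-in for the service's shared clock; this is not how SimSpace Weaver is implemented, just the concept.

```cpp
#include <barrier>
#include <cstdio>
#include <thread>
#include <vector>

// Illustrative only: "lockstep" means no worker may begin tick N+1
// until every worker has finished tick N. A C++20 std::barrier plays
// the role of the service's shared clock in this sketch.
int main() {
    const int workers = 4, ticks = 3;
    std::barrier sync(workers);
    std::vector<std::thread> pool;

    for (int w = 0; w < workers; ++w) {
        pool.emplace_back([&sync, w, ticks] {
            for (int t = 0; t < ticks; ++t) {
                // ... simulate this worker's partition for tick t ...
                sync.arrive_and_wait(); // every worker aligns on the tick
                if (w == 0) std::printf("tick %d complete everywhere\n", t);
            }
        });
    }
    for (auto& th : pool) th.join();
}
```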
So, from keeping track of time to saving time: let's talk about one of the features of SimSpace Weaver that was a big request from the customers who were testing it early on. During testing, a lot of customers said: the service is great, I can expand my spatial simulations, but it's taking me a lot of time to go from making a code change, to deploying it to the cloud, to seeing the results of that change. And if I don't like it, I have to go all the way back, and this entire process is taking me way too long.
So how does one take a service that is inherently intended to run in the cloud, because you have to run on multiple servers, and run it on a local device? That is essentially what customers asked us to do: "I just want to run the simulation on my laptop, see that it works, and when I'm done with it, upload it to the cloud." So we set out and developed SimSpace Weaver Local, which, as Mike was alluding to, is very useful for teams in active development. With SimSpace Weaver Local, you can run multiple spatial apps, and the client applications to view your simulation, right on a laptop. If you caught my AWS On Air segment earlier today, I was running a simulation in local mode with 16 partitions on a very, very beefy Alienware laptop. But you can run two partitions, four, or as many as your laptop can handle.
Here's how it works. Number one: you integrate with the App SDK we talked about, using the same APIs you will use when you move to the cloud. Develop, test, and iterate locally as many times as you'd like, then package and build your application for the cloud. When you're ready for scale, upload it to the cloud and off you go. You're using the exact same APIs, so you don't have to rewrite your applications. The best part about the whole thing is that it's absolutely free. Use this feature; I can't stress that enough.
Speaking of tools: we've seen how we've been leaning in on the 3D engines to bring these virtual worlds to life. What customers don't want, however, is to have to go through all the work of integrating SimSpace Weaver with the engines just to get started building simulations. After all, the entire premise of the service is to let you spend more time coding and building content and less time worrying about infrastructure and connections and how you're going to get to that point. That's why today I'm also very excited to announce our 3D engine plugins for Unreal Engine and for Unity. These plugins make it possible for anyone to get started building SimSpace Weaver simulations with either engine right away, and we have samples for each engine available in preview starting today.
OK, before we close this out, one question I'm sure you're thinking about: you have this amazing new service; it probably has a very complicated pricing model. What's this thing going to cost me? The answer is that it's actually not that complicated; it's quite simple. You are charged for the instances you use, when you use them. If you need to run a simulation with only two instances, you pay for two for however long you use them; if you need five, pay for five; and if you need to scale out across all 10, pay for 10. No upfront commitments, no fees, no licensing costs. The SDK is free, the integrations are free, and local mode is free. Where to find us here at re:Invent: tomorrow, if you want to get hands-on with the demos we've been talking about, we've got both Roland's and Mike's demos at the EC2 kiosk in the AWS Village, and also at Launch HQ: Launch HQ tomorrow from 1 to 2, and the AWS Village from 2 to 6 and also 10 to 1. So you can get yourselves hands-on. And then, Mike, you're going to do a deeper dive on the actual demo, the integration, and some of the unique challenges you faced; that'll be Thursday at 2 p.m. And with that, I think we will open it up to questions.