How BMW and Qualcomm built an automated driving platform on AWS

Jorg Cribs, BMW: Hello and welcome to our presentation on how BMW and Qualcomm built an automated driving platform on AWS. I'm really excited to be here at re:Invent in Las Vegas in front of this great audience. Thank you for joining us today. My name is Jorg Cribs, I'm with BMW, and I'm a Cloud Solution Architect.

With me on stage today will be Sriram from Qualcomm and Maria from AWS. They will both introduce themselves later during the course of this presentation.

I can imagine a lot of people here in the room dream of having an automated driving car, and of course, you're dreaming of a BMW, right? So let us give you a glimpse of how we work on making your dream come true.

Let's have a look at the agenda and what we want to cover in today's breakout session. First, we want to give you some context: what's behind our next-gen AD platform, answering the questions of why we built it and why we chose AWS.

Then in the second part, we want to give you an overview of the platform itself, its design principles, and the building blocks of the platform.

In the third part, we will talk about the development journey a typical data engineer has on our platform, and we will show you how we store and process hundreds of petabytes of data and how thousands of users can work together on the platform.

Finally, we will give you a demo of how we visualize the data, and then we will close with some key takeaways. So let's get started with the first section.

Let me set the scene and give you some background information. Today at BMW, we already have great premium cars like the 5 Series you can see here on the left side and also the 7 Series. In the 5 Series, we already have advanced Level 2+ driver assistance functions like the hands-off option up to 130 kph that you can use on the highway.

And the 7 Series you can not only see here on the slide but also at the AWS Expo. So go there and check it out if you haven't yet, but please wait until we are finished.

For the 7 Series, we have already announced Level 3 functions to be available at the beginning of next year. Level 3 meaning that the car really takes over the responsibility instead of the driver.

But at BMW, we are never satisfied with what we have achieved so far. Therefore, BMW's product range, which has grown successfully over decades, will be realigned on the basis of the so-called Neue Klasse.

So let me present to you the Neue Klasse. Unfortunately, we couldn't bring a car up here on stage, but of course I have it on a slide. With the Neue Klasse, BMW actually reinvents itself, and it's a perfect play on words that we are now here at AWS re:Invent to show you the significance of this initiative.

I want to quote our CEO Oliver Zipse, who said the Neue Klasse means nothing less than the future of the BMW brand. It's our totally new generation of cars which will start in 2025. And on the left side of the screen, you already see the BMW Vision Neue Klasse as it was presented to the public in Munich at the IAA in September this year.

The BMW Neue Klasse combines and will be characterized by three key aspects which will dominate the mobility of tomorrow: electrification, digitization, and circularity. We will have a completely new, redefined IT and software architecture.

And some of you might now ask: why did you mention Qualcomm at the beginning of your presentation? How do they fit into the picture? Actually, they fit in perfectly and are a very essential part, because to develop this new generation of cars, BMW has partnered with Qualcomm, a top-of-the-class tech company. They bring in the system on a chip, the Snapdragon Ride, for our ECUs, as well as deep experience in the field of computer vision and computer vision software.

We are jointly developing safety features and Level 2+ and Level 3 functions. And this will even reside in a product that Qualcomm can bring to the market again. For the validation and verification of all of those features and functions and the whole system, we actually need to collect a huge amount of data, and this collected data we need to store, process, and then simulate on our platform. This is what Sriram and Maria will tell you about in detail in just a few minutes.

And maybe some of you have already spotted that there's also another company here on the slide. This is Capgemini, and they are operating and maintaining our platform today.

So why did we decide on building an automated driving platform in the cloud? BMW's and Qualcomm's business requirements are based on the learnings we've had from our existing on-prem solutions. From there, we know that we are in a dynamic development process where timelines change and plans get shifted, and we need to be able to adapt to those changes. And so does our IT infrastructure: it needs to be scalable and flexible to adapt to those changing timelines.

And as we are in this agile environment, we of course want to avoid upfront costs and fixed costs. We only want to pay for the storage we currently use and the compute capacity we consume at the moment.

Another point was that we collect data all over the world, and we also have development teams from many different companies in many different countries. Therefore, we needed one place where they could work together and collaborate easily, and this we found in the cloud.

And finally, we wanted to reduce the time to market. We do that with reusable and well-architected modules, blueprints, and services. With that, we shorten the development cycles and increase the innovative power of our dev teams.

On my last slide, I want to briefly talk about why we chose AWS. Of course, they have a large number of managed services with a lot of features, but this was only one aspect. Very convincing for us was that during the tender process, they really showed us that they understood our requirements quickly and could already provide well-architected solutions.

They brought in their hardcore techies from ProServe, and Maria is one of them. They not only have deep knowledge about their own services, of course, but they also bring in great technical know-how and expertise in the field of ADAS. When the contract was signed, AWS brought their whole team together into one place in Munich.

And after aligning the technical architecture and design with us, they started to code like crazy in two so-called hackathons. So they were able to build up the platform within eight weeks after the contract was signed. For that, as I said, they brought in the whole team and coded like crazy, and setting up this platform so quickly was actually only possible because they had existing reference architectures, frameworks, and blueprints right at hand.

I have explained the context and what's behind our platform, and now I will hand it over to Sriram, who will give you more details and insights about the platform itself.

Sriram, Qualcomm: Thanks, Jorg. Hi all. My name is Sriram Pal. I am a director of product management at Qualcomm, and I am responsible for the data and simulation factory and also data-centric AI related products that run on the cloud.

Before I jump in, I just want to get a quick pulse from the audience. How many of you here are from the automotive industry? 20-30%. How many of you here work on ADAS applications? A very small percentage. So we want to make sure that we are not just calling out these acronyms, and we want to do justice to that. We will try our best.

As Jorg mentioned, Qualcomm and BMW are partnering in building an advanced L2+/L3 (Level 2+ / Level 3) ADAS stack. What does that mean? L3 means that a driver can take their hands off and the car actually drives by itself, right? So it's a pretty advanced capability, and that's what we are jointly developing.

In order to take this kind of advanced ADAS product to market, we need to make sure that we have a highly sophisticated end-to-end platform on the cloud to enable time to market for these ADAS applications, so that developers can innovate quickly and get the product to the market.

The goal for Qualcomm here is also to make sure that whatever we are building for BMW, we are able to enable BMW to be successful in getting their product out, but also to take that product to market ourselves and market it for other OEMs to take advantage of all the cool capabilities we are building.

So what are the key design principles by which we approached building this product? One of the key principles is reusability. We wanted to make sure that the product is fully reusable, and we used key IaC blueprints and modules to make sure that it is. And because it is reusable, the developers are able to innovate quickly and their time to market is much, much shorter.

Also, the other OEMs who adopt the product see their time to market improve. In addition to reusability, we made sure that it is a highly collaborative platform, meaning not only the OEMs but also the tier-one partners and an ecosystem of players come together to make an advanced driver assistance system work.

So you have tier-one suppliers, you have partners, and you have OEMs, the OEM here basically being BMW, and they all should be able to test their applications on neutral ground so that all these players work together.

I'll give you an example. Qualcomm, for example, is supplying the computer vision stack for this platform, and then the prediction and the drive policy, the whole L2+ and L3 stack we are jointly developing, is another stack. And BMW may use a parking product from a tier-one supplier. All of these need to be tested effectively on one common ground so that we are able to test the whole end-to-end application and take it to market.

Next, we want to make sure that all the different personas who interact with this product, like a developer, a realm owner, or a test engineer, have a seamless workflow and a very good UI experience as they develop this product.

Finally, we want to make sure that this product is not just focused on North America or Europe but also on China and Asia. So the product supports a highly scalable, global footprint.

Now we will get into a little bit more detail on how this whole end-to-end product is architected, and also what the key features, functions, and capabilities of this product are.

I'm going to talk about the term realm, and we are going to use it a lot in this presentation. The first time I heard our engineering team talk about realms, I thought it was about some sci-fi movie, some Doctor Strange thing. But trust me, it is actually a technical term, and we will talk about it in more detail.

A realm is a logical group of four AWS accounts: CI/CD, development, integration, and production. In addition to that, a realm is also a logical bundle of application code and all the associated governance: people, security roles, and budget for each one of these. Because a large pipeline like this is a massive undertaking, you can go out of control with budgeting, et cetera, so the whole ecosystem needs to be very well set up.

Finally, a good example of realms: when we collect the data from hundreds and hundreds of vehicles, hundreds of petabytes of data come in, and we have to ingest the data first; that's a realm. Then from there, the data needs to be cataloged; that's another realm. And then from there, the data needs to go into advanced AI testing at scale, where hundreds of petabytes of data are tested; that's another realm called reprocessing, and so on and so forth.
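To make the realm concept a bit more concrete, here is a minimal Python sketch of how such a realm could be described as a data structure. The fields (realm type, the four accounts, budget, persona roles) follow the description above, but the names and values are illustrative assumptions, not the platform's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class Realm:
    """Illustrative model of a realm: four AWS accounts plus governance metadata."""
    name: str                                      # e.g. "ingest", "reprocessing"
    realm_type: str                                # selects the blueprint at provisioning time
    accounts: dict = field(default_factory=dict)   # cicd / dev / int / prod account IDs
    monthly_budget_usd: int = 0                    # budget managed by the realm owner
    roles: dict = field(default_factory=dict)      # persona -> IAM role name

# Hypothetical example matching the talk: an ingest realm with its four accounts.
ingest = Realm(
    name="ingest",
    realm_type="ingest",
    accounts={"cicd": "111111111111", "dev": "222222222222",
              "int": "333333333333", "prod": "444444444444"},
    monthly_budget_usd=2500,
    roles={"developer": "RealmDeveloper", "realm_owner": "RealmOwner"},
)
```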

So let's now talk about the different personas: who are we building this application for? Who is going to use this application and take advantage of the platform that is being developed?

First, the most important persona is the ADAS developer. They are developing advanced AI features, very advanced AI capabilities, and they should be able to come into the platform, quickly innovate, and deploy their application. That's an ADAS developer.

A slight variation of a developer is a test engineer. They should be able to run nightly CI, large-scale testing on a nightly basis, and also verification and validation at the end.

Then a realm owner. A realm owner is the keeper of the keys for a specific realm: they define the budget.

They manage the budget, and they have the ability to submit requests and get approvals, et cetera. A tenant admin is a larger keeper of the keys: they are the keeper of the keys for the overall ecosystem and manage the overall budget. And finally, a FinOps manager, who runs all kinds of analytics on the data so that the tenant admin can see how each realm is doing, how we are managing our resources, and how we are running our pipelines.

Now, going a little bit into how we have thoughtfully layered this architecture so that there is a high level of reusability: we start with the basic AWS services, storage, compute, networking, et cetera. On top of that, we leverage some of the AWS open-source capabilities like ADDF, which has the whole CI/CD capability built into it, and some additional capabilities from their autonomous driving open-source bundle.

There, we use, for example, high-speed data streaming with Kinesis Data Firehose, et cetera. And then on top of that, we have a horizontal layer that covers network connectivity, billing functions, the security framework, and logging and monitoring. These are some of the common capabilities used by all the realms.

Then, on top of that, all the independent realms are layered: the labeling framework, our analytics framework, our reprocessing framework, our KPI and analytics framework, et cetera. They all layer on top. And on the other side, the personas, the different user communities, interact with these realms through the self-service UI to get through their day-to-day workflow.

So it is a very seamless architecture that leverages a lot of underlying capabilities from AWS. Now that you've understood how we have layered and architected it, I want to dig a little deeper into what these product capabilities are. What are we building?

So imagine there are vehicles which go out in multiple countries; we gather several hundred thousand kilometers of data. What do we do with that data? The data comes into a pipeline, and we ingest the data into AWS. Then, in the data pipeline, we do some preprocessing of the data. What do we do there? We catalog the data, we glean some metadata, and we do some enrichment using advanced AI techniques; we actually enrich the data.

Why do we need to do all that? So that a developer is able to easily identify data. For example, if a developer is working on the car's interaction with a bike, they should be able to quickly put together a data set of bikes, quickly run a model, and test it. Thanks to the preprocessing and the data enrichment, they can say: get me all the data where there is rain or fog, et cetera.
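As a rough illustration of what such a curation query could look like, here is a small boto3 sketch that runs an Athena query against a hypothetical session-metadata table; the table name, columns, database, and result bucket are assumptions for the example, not the platform's actual catalog.

```python
import boto3

athena = boto3.client("athena", region_name="eu-central-1")

# Hypothetical metadata table: one row per recorded session, enriched with weather
# and object tags produced by the preprocessing/enrichment step.
query = """
    SELECT session_id, s3_uri
    FROM session_metadata
    WHERE weather IN ('rain', 'fog')
      AND contains(detected_objects, 'bicycle')
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "adas_catalog"},                # assumed database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Started curation query:", response["QueryExecutionId"])
```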

They can quickly put together and curate a data set and run these advanced models on top of it. Then we take the data and send it to the labeling pipeline. We also do some automated labeling, again using advanced AI techniques, and then there is an advanced simulation framework where you are able to plug and play all of the different stacks, like the parking stack, the CV stack I talked about, and, for example, the drive policy stack. All of that you are able to simulate at scale. All of this is done at scale.

And finally, the verification and validation framework. That's when all of these are orchestrated together and you run them at scale. For example, before releasing a product, you want to make sure that with all the data we collected from all the countries, the product is working seamlessly. That's where the verification and validation pipeline takes advantage of all the other frameworks and executes a large amount of testing at scale.

And finally, where do we analyze the results of these tests? That's where we use advanced analytics and visualization capabilities, where a developer is able to drill down into one specific failure and from there identify which data and which pipeline were involved, so they very quickly get access to all the different details. In fact, Maria will show a quick demo of that as well.

So this gives an idea of the whole end-to-end pipeline, the massiveness of this pipeline, and how we leverage AWS under the covers.

Regarding the reprocessing of this high volume of data, I'm very happy to share that our Qualcomm AI 100 accelerators are now available on AWS for us to run advanced AI models. An important thing to note here is that this AI accelerator is not only for ADAS applications; of course, we will be leveraging it heavily for ADAS, but this DL2q instance is also capable of handling generative AI or any other capabilities. You can scan the QR code to get more details about that.

And now we are going to quickly show you a demo. Again, our goal is to give you the tip of the iceberg. This is a very massive platform, so we won't be able to show all the capabilities, but we can show how an end-to-end developer workflow would look. That's our goal.

What we are going to show you has four different parts. First, a realm owner requests a realm, saying, OK, I'm going to do a data landing zone realm or a reprocessing realm. Then a realm admin approves that realm: they look at it and say, OK, the budget looks fine, we are still within the overall budget, I'm going to approve it. Then the realm owner jumps back in and says, OK, now my realm is approved, I'm going to go ahead and add users to that realm. A developer user gets added, and that developer then jumps in and starts running a high-end reprocessing graph on our DL2q instance with the AI 100 accelerators. And after that, there will be some KPIs and analytics. That's the overall flow for the demo, and I'll quickly give commentary as we go through.

So here is a realm owner logging in and saying, OK, we need to create a realm here. That particular user is creating a landing zone realm and submitting the request. Then the realm admin looks at the request and says, OK, 2,500 sounds fine, I'm going to approve the request. Once the realm is approved, creation is initiated, and you can see the four different environments being created: CI/CD, development, integration, and production. After the environments are created, everything is quickly set up; it's all a very seamless workflow. Then the realm owner says, OK, I want to go into this realm and add users to it. They go ahead and create multiple users who will be using that realm, a developer or a test engineer, et cetera. These roles get created.

Now a developer logs in and finds a Qualcomm AI 100 instance where they are going to submit a very complex graph, a model graph. They pick the AI 100 instance and submit the graph, and the graph goes through a complex execution across several different instances: CPU instances, GPU instances, DL2q AI 100 instances, all orchestrated through this complex graph to execute a very complex AI model.

Once the graph gets executed, they are able to see the metrics and analytics, et cetera. Here is a quick look at all the KPIs and metrics. For example, if you are running an ingest pipeline, you want to know the throughput, how much data is stored, and what the ingress and egress of the pipeline are. There are analytics and KPIs for every single realm; every realm has its own KPIs and analytics, and all of that is already set up here.

Here they are looking at the data landing zone, looking at how much data has passed through the pipeline, and looking at some of the jobs: which jobs have completed successfully, which jobs are still running, et cetera.

And finally, we'll also show you the FinOps analytics capability, where a FinOps analyst is able to see what kind of money is being spent, which particular realm is over budget, which realm is doing fine on budget, et cetera. You can see that it is a very comprehensive end-to-end ecosystem. That's a quick glimpse of what this product is capable of.

Now that you have understood what the product's capabilities are, we are going to drill down a little bit deeper into the AWS architecture and how this whole end-to-end product is architected. For that, I'll transition to Maria.

Maria, AWS: Hi, everyone. My name is Maria. I am a big data architect from AWS Professional Services, and I was part of the team that designed and developed this platform from the beginning of the project. I focus on developing everything that revolves around the realms, which are the building blocks of the platform.

So now let's put ourselves in the shoes of an ADAS engineer who is using the platform to develop these ADAS features, and let's take a look at how this development journey looks end to end.

Before getting into the technical details, I want to talk about data, because without data it wouldn't be possible to develop these ADAS functionalities. To develop these features, thousands and thousands of kilometers are driven worldwide to try to capture as many traffic situations as possible. This information is captured with specific sensors such as lidars, cameras, and radars. The sensors are all connected to a device in the car called a logger, which stores all this information from the data feeds.

Once the memory of the logger is full of data, it is extracted from the car and plugged into the copy stations. Copy stations are physical locations designed for high-efficiency data upload, and they are connected to the cloud via Direct Connect. From this moment on, the data from the sensors and the signals starts coming into the platform. The data is uploaded into a specific realm called landing zone. Landing zone acts as a gateway between the rest of the realms in the platform and the physical world.

Once the data is successfully and completely uploaded into landing zone, this realm communicates with the rest of the platform saying, hey, a new disk is available to be further processed. Then different realms, at different times and in an asynchronous fashion, interact with this data.

Among the different realms that we have in the platform, I want to highlight, for example, ingest, which takes care of extracting all the sessions on the disk, moving all those sessions to the collected zone, which is the permanent storage realm, and also extracting metadata in this process and, of course, making this information available to the rest of the realms. Another realm that we have in the platform is labeling. Labeling is selecting, extracting, and adding labels on top of this data, to then make this information available to the rest of the realms.

So now, remember, we are ADAS engineers. You might be wondering: OK, this is a high-level overview, but how do we actually start developing our realms? Let's take a look.

To develop a realm, we need two main things in place. On one hand, you need the business logic, the ADAS applications that run in your realm. On the other hand, you need the infrastructure, the AWS services that support these ADAS applications.

For the first thing, the business logic, the platform provides the artifact store. The artifact store is a centralized account that all the realms have access to by default, and its particularity is the IAM user that we see there.

Let's assume that we, as ADAS engineers, already have pipelines that take care of building the ADAS applications. You don't want to spend time migrating your pipeline or creating a new pipeline for this setup. What you can do is take this IAM user and plug it into your pipelines, and thanks to this IAM user, your pipelines will be authorized to push these artifacts. By artifacts, I mean everything around scripts, config files, and Docker images.
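As a rough sketch of what such a pipeline step could do with that IAM user's credentials, here is a minimal boto3 example that uploads a script and a config file to an assumed artifact-store bucket; the bucket name, key prefixes, and environment variable names are illustrative, not the platform's actual values. Docker images would analogously be pushed to a container registry such as ECR with the same credentials.

```python
import os
import boto3

# Credentials of the artifact-store IAM user, injected into the existing CI pipeline
# (environment variable names are illustrative).
s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["ARTIFACT_STORE_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["ARTIFACT_STORE_SECRET_ACCESS_KEY"],
)

ARTIFACT_BUCKET = "example-adas-artifact-store"   # assumed central bucket name

# Push build outputs (scripts, config files) under a per-realm, per-version prefix.
for local_path, key in [
    ("build/reprocessing_job.py", "reprocessing/v1.2.0/reprocessing_job.py"),
    ("build/job_config.yaml",     "reprocessing/v1.2.0/job_config.yaml"),
]:
    s3.upload_file(local_path, ARTIFACT_BUCKET, key)
    print(f"Uploaded {local_path} -> s3://{ARTIFACT_BUCKET}/{key}")
```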

Moving on to the CI/CD and the infrastructure: when you request a realm, it automatically comes with a CI/CD setup which is ready to use with little to no configuration effort. This setup is located in the CI/CD account and is composed of three main services.

The first is CodeCommit, which contains an initialized infrastructure as code application where the services of the realm are defined.

This code repository has three branches, dev, int, and prod, and each has a corresponding CodePipeline connected to it.

CodePipeline is a continuous delivery service designed for automating the release of changes to a code repository. Each of these pipelines has a build step where the code is compiled, and from the CI/CD account it is deployed remotely into the corresponding development environment.

If the change was made on the dev branch, then these changes will be deployed into the dev account; int goes to int, and prod to prod.
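As a rough sketch of how such a per-branch setup could be expressed with the AWS CDK in Python, here is a minimal example that creates one pipeline per branch and passes the matching target account to a build/deploy step. The repository name, stack layout, and account IDs are assumptions for illustration, not the platform's actual code.

```python
# Minimal CDK sketch, assuming the branch-to-account mapping described above.
from aws_cdk import Stack, aws_codecommit as codecommit, aws_codepipeline as codepipeline
from aws_cdk import aws_codepipeline_actions as actions, aws_codebuild as codebuild
from constructs import Construct

BRANCH_TO_ACCOUNT = {"dev": "222222222222", "int": "333333333333", "prod": "444444444444"}

class RealmCicdStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        repo = codecommit.Repository.from_repository_name(self, "Repo", "realm-infrastructure")

        for branch, account in BRANCH_TO_ACCOUNT.items():
            source_output = codepipeline.Artifact()
            pipeline = codepipeline.Pipeline(self, f"Pipeline-{branch}")
            pipeline.add_stage(stage_name="Source", actions=[
                actions.CodeCommitSourceAction(
                    action_name="Source", repository=repo, branch=branch, output=source_output),
            ])
            pipeline.add_stage(stage_name="BuildAndDeploy", actions=[
                actions.CodeBuildAction(
                    action_name="Deploy",
                    project=codebuild.PipelineProject(self, f"Build-{branch}"),
                    input=source_output,
                    # The build step deploys remotely into the matching target account.
                    environment_variables={
                        "TARGET_ACCOUNT": codebuild.BuildEnvironmentVariable(value=account)},
                ),
            ])
```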

So when we request a new realm, the onboarding process is super simple with this setup. The only thing we need to do is log into the CI/CD console, go to the CodeCommit repository, clone it locally, and reference the artifacts that we pushed into the artifact store.

Once we push all these changes into CodeCommit, the setup takes care of pulling the artifacts and deploying everything, together with the infrastructure, into the corresponding development environments.

So the main idea to have in mind is that the onboarding process to a newly provisioned realm is as effortless as possible, because on one hand, you don't need to spend time migrating your existing pipelines that take care of your business logic, thanks to this IAM user.

And on the other hand, you don't need to build a CI/CD setup, because that's already provisioned out of the box by the platform.

Moving on to the second phase: remember, we are ADAS engineers and we are developing a realm. Let's assume that we successfully onboarded into our realm, we have our cloned repository locally, and we want to start actively developing.

What are some key things that the platform offers which accelerate development?

Here I want to introduce the concept of blueprints. Blueprints are infrastructure as code templates that contain pre-composed logic. Each realm type in the platform has a corresponding blueprint, which defines the baseline resources that get automatically deployed at provisioning time.

For example, Sriram was talking about the reprocessing realm. The blueprint template that corresponds to the reprocessing realm has, among other services, the configuration necessary to deploy the EC2 instances that contain the Qualcomm AI 100 accelerators.

So what happens when someone requests a new realm with the web UI that Sriram showed? There are a lot of fields that you need to fill in, and one of them is the realm type. When you select the realm type, what happens under the hood is that the account provisioning framework selects the specific blueprint that corresponds to that realm type and transforms it into the CodeCommit repository that we saw earlier.

What's nice to know is that you are not limited by what was defined in the blueprint. Once the blueprint is transformed into the CodeCommit repository, you can customize it or add as many services as you would like.

So the blueprint is only a baseline to start from. You can even go one step further: if, let's say, you added some customizations that you think bring added value to the platform, you can publish back a new version of the blueprint to the platform. In that regard, if someone else requests the same blueprint in the future, they can select between different versions.
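To illustrate the mechanics just described (a realm type maps to a blueprint, possibly in several versions, which then seeds the realm's starting repository), here is a small hypothetical Python sketch; the registry contents, storage locations, and function name are purely illustrative, not the provisioning framework's real implementation.

```python
# Hypothetical blueprint registry: realm type -> available blueprint versions.
BLUEPRINT_REGISTRY = {
    "reprocessing": {
        "1.0.0": "s3://example-blueprints/reprocessing/1.0.0.zip",
        "1.1.0": "s3://example-blueprints/reprocessing/1.1.0.zip",  # a published customization
    },
    "landing-zone": {
        "1.0.0": "s3://example-blueprints/landing-zone/1.0.0.zip",
    },
}

def resolve_blueprint(realm_type: str, version: str | None = None) -> str:
    """Return the blueprint artifact the provisioning framework would seed the new
    realm's CodeCommit repository with (latest version unless one is requested)."""
    versions = BLUEPRINT_REGISTRY[realm_type]
    chosen = version or max(versions)   # naive "latest" selection for illustration
    return versions[chosen]

print(resolve_blueprint("reprocessing"))   # -> .../reprocessing/1.1.0.zip
```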

The second thing that the platform offers to accelerate development is infrastructure as code modules. This is a collection of modules that are accessible to all the realms by default, so no configuration is needed.

The idea behind this is to provide infrastructure as code modules which implement common ADAS tasks, and then we, the engineers, can use these modules to customize our realm and add services on top of the baseline.

These infrastructure as code modules, together with the blueprints, are all based on ADDF. ADDF, the Autonomous Driving Data Framework, is an open source project that we in Professional Services developed to come up with reusable and modular infrastructure as code artifacts which implement common ADAS tasks.

Some examples of modules that we have in the platform are in the compute group. If we want to, for example, deploy a containerized pipeline, we can use the Batch module to deploy a Batch compute environment. If we want to use Kubernetes, there's an EKS module available, ready to use. Or if we have some Spark applications, we can also use the EMR module.

Another group of modules is the orchestration modules. I'm going to explain these in much more detail in the next slides, but essentially they allow the realms to communicate with the rest of the realms in the platform in a standardized way. We even have more complex examples of how to wire different services together to achieve a common ADAS task, such as a signal extraction pipeline.

So the key message here is that the platform offers a series of infrastructure as code artifacts which help us ADAS engineers make the infrastructure development as easy and as reusable as possible, so we can focus on what we know best, which is the ADAS development.

All right, moving on to the third and final phase of our realm development. Let's assume we are done with our infrastructure and our business logic, and now we want to plug our realm into the mesh of realms in the platform.

The realms in the platform communicate in an event-driven, standardized pub/sub communication model. For this to work, there are two key things that need to be in place.

On one hand, we have overarching orchestration. Overarching orchestration is a realm like any other, but it has some specific capabilities. By means of AppSync and DynamoDB, overarching orchestration provides an interface for the rest of the realms to define which specific events they want to be subscribed to. By means of Lambda and EventBridge,

overarching orchestration is able to receive all the events being published from the realms and forward those events to the target realms. Moving on to the realms themselves: to be able to communicate using this setup, they need to do two things.

First, the realms need to define which specific events they want to be subscribed to, using the interface that overarching orchestration provides. They can do so by making API calls to the AppSync API of overarching orchestration.

The second thing that needs to happen inside the realms is that they need to have the orchestration modules deployed: the event writer and the event listener. These two modules allow the realms to publish events from the realm to overarching orchestration and receive events from overarching orchestration in the realms.
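As an illustration of what the event writer module's job boils down to, here is a minimal boto3 sketch that publishes a realm event to an EventBridge bus; the bus name, source, and detail fields are assumptions about naming, not the platform's actual event schema.

```python
import json
import boto3

events = boto3.client("events")

def publish_realm_event(realm: str, detail_type: str, detail: dict) -> None:
    """Publish an event from a realm toward overarching orchestration
    (bus name and field layout are illustrative)."""
    events.put_events(Entries=[{
        "EventBusName": "overarching-orchestration",   # assumed bus name
        "Source": f"realm.{realm}",
        "DetailType": detail_type,
        "Detail": json.dumps(detail),
    }])

# Example: landing zone announces that a disk finished uploading.
publish_realm_event("landing-zone", "disk uploaded", {"disk_id": "disk-0042"})
```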

So let's take a look at an example of four realms communicating using this setup. Here we have landing zone, ingest, data quality, and signal extraction. As we know, landing zone is the realm where the data is uploaded in the first instance.

As soon as the data is successfully and completely uploaded in landing zone, landing zone sends an event to overarching orchestration saying, hey, this disk is uploaded.

Overarching orchestration knows that ingest is subscribed to that event and therefore forwards this event to that realm. Once ingest receives this event, it triggers internal processes inside ingest which result in some events being published back to overarching orchestration: session ingested and check requested.

Data quality is subscribed to this event that was published by ingest, so overarching orchestration forwards it. Data quality then triggers its internal processes and eventually publishes back an event to overarching orchestration: check succeeded. Last but not least, signal extraction is subscribed to two events coming from two different realms: session ingested from ingest and check succeeded from data quality.

These will be received asynchronously as soon as they are fired from the originating realms. Then signal extraction triggers some workloads inside the realm, and finally publishes back an event to overarching orchestration saying, hey, signals were extracted.
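For the consuming side, here is a minimal sketch of what an event-listener Lambda inside the signal extraction realm could look like: it reacts to the two subscribed events from the example above and kicks off the realm's workload. The event field names mirror the illustrative writer sketch earlier, not the platform's real schema.

```python
# Minimal event-listener sketch for the signal extraction realm (illustrative only).
import json

EXPECTED_EVENTS = {"session ingested", "check succeeded"}

def handler(event, context):
    detail_type = event.get("detail-type")
    detail = event.get("detail", {})
    if detail_type not in EXPECTED_EVENTS:
        return {"status": "ignored", "detail_type": detail_type}

    # In the real realm this would start the signal extraction workload; afterwards
    # the event writer would publish "signals extracted" back to overarching orchestration.
    print(f"Triggering signal extraction for {json.dumps(detail)} ({detail_type})")
    return {"status": "accepted", "detail_type": detail_type}
```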

As you can see, the events in the platform are communicated in a standardized, event-driven pub/sub communication model. The key thing here is that this setup accommodates the flexibility that all the realms need in order to communicate: from realms that are pure publishers like landing zone, to realms that are subscribed to multiple events, and all the different use cases in between, this setup accommodates them all.

All right. Moving on to the last piece of information that we want to talk about regarding realms: data visualization.

We wanted to showcase one of the features of the many realms that we have, and we chose data visualization because we find it pretty interesting and easy to see.

Here I want to present the data view realm. This realm essentially helps ADAS engineers by visualizing different sensors and signals from the car, together with some computer vision and object detection processing.

The way this works is there's an EC2 instance running in the realm that has some software pre-installed on it. On one hand, it has NICE DCV, which allows a remote desktop connection to the instance; that's what we are seeing right now.

On the other hand, there's Foxglove Studio. This software allows us to establish a connection between the EC2 instance and a Rosbridge service that is running in EKS or ECS. This Rosbridge service is in charge of fetching the information from the signals and sensors and streaming it back to the instance via WebSockets.
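For readers curious about what that connection looks like underneath, here is a minimal Python sketch that speaks the rosbridge WebSocket protocol to subscribe to a topic; Foxglove Studio essentially performs this kind of subscription for each visualization panel. The endpoint URL and topic name are assumptions for the example.

```python
# Minimal rosbridge subscription sketch using the `websockets` package
# (pip install websockets); URL and topic are illustrative.
import asyncio
import json
import websockets

ROSBRIDGE_URL = "wss://example-rosbridge.internal:9090"   # assumed load balancer endpoint
TOPIC = "/vehicle/front_camera/compressed"                # assumed topic name

async def stream_topic() -> None:
    async with websockets.connect(ROSBRIDGE_URL) as ws:
        # Standard rosbridge "subscribe" operation.
        await ws.send(json.dumps({"op": "subscribe", "topic": TOPIC}))
        async for raw in ws:
            message = json.loads(raw)
            # Each frame arrives as a JSON-wrapped ROS message.
            print("received message on", message.get("topic"))

if __name__ == "__main__":
    asyncio.run(stream_topic())
```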

Let's see this in action. Here we are opening a window in Foxglove Studio and establishing a new connection to the Rosbridge service. To do that, we need to provide the URL of the application load balancer that is fronting the service. Once we establish a connection, we can see the different visualization panels. Starting from the upper right, we see the lidar, followed by the outside vehicle camera, followed by the geolocation information from the odometry signal of the ECU, and in the bottom right, we see the results of the object detection processing.

We can see that the object detection is classifying different objects on the road: the different lanes, the trucks, the cars, and also some signs by the road. What is also really nice is that the software allows us to explore the three dimensions of the signals that support it, such as the lidar visualization.

We can also play back and forth through the different signals and see everything together in a synchronized way. So that was a little teaser of how we visualize data with the platform.

Let's wrap up the session with some key takeaways.

The next-gen AD platform provides a self-service UI that allows different personas to interact with and manage the life cycle of ADAS applications. From requesting a realm, to giving access to developers, to monitoring cost, everything is done through the self-service UI.

Also, everything that you saw, from the features of the platform to the resources of the realms, is defined with infrastructure as code and is therefore portable and can be easily reused and rolled out to other ADAS projects.

The platform also offloads the undifferentiated heavy lifting of having to configure things yourselves, such as the networking, the security, and the CI/CD setup; all these features are automatically provisioned by default thanks to the platform. The platform also accelerates development by providing infrastructure as code modules which make the life of the developers easier, so they can focus on what they know best, which is the ADAS feature development.

Last but not least, the platform offers the flexibility required by the realms to meet their communication requirements. The realms communicate in an event-driven, pub/sub, standardized way.

All right. That's all we have prepared for today. Thank you very much for being here. We would really appreciate it if you could complete the session survey; feedback is always more than welcome.
