Accelerating IoT product delivery with AWS and open source

Megan: Hi, I'm Megan.

Richard: Hey, I'm Richard Elber, and I'm here to talk about how to accelerate delivering IoT products building on AWS and open source. Today, what we're going to talk about is, first, why customers build products with embedded Linux, and what that means. Then we're going to go through why people use the Yocto Project, then IoT product lifecycle challenges, and then we're going to get into the meat of the talk, which is something open source that we developed to help customers deliver IoT products faster.

So let's first talk about why customers build IoT products with embedded Linux. I'm assuming that there are some people in the audience that use embedded Linux and some people that don't. When you look at the car on the left-hand side, what you can imagine is that you have the complete vehicle: an operating system that comes out and is installed on your target system, such as a laptop or a server. Many times these are very well-curated distributions such as Debian, Ubuntu, SUSE, or whatnot. You not only get to install the operating system easily, but you also get package streams to help you upgrade software easily. But usually when you install these operating systems, a lot more gets installed than you actually use. And the reason they do that is it doesn't necessarily impact the resources on the system: you buy a laptop and it has plenty of disk space and plenty of memory.

OK. But with embedded systems, these are tiny systems, memory constrained, often running on batteries. What we want is something that you can highly constrain, something that's tailored exactly for the type of product that you want to build. This helps you reduce the amount of processing power that you need to run your operating system and applications, and also the amount of memory that you need to run the entire system.

So the real difference between a commercial distribution and an embedded Linux distribution is that one is an operating system you pull directly off the shelf, any of those fancy Linux distributions, and the other is: I want to know exactly every little tune that I can make to the system, to make it operate as efficiently as possible and reduce things like the security footprint on the device, all that sort of stuff.

So what you see on the right-hand side is a kit vehicle, something where you can explicitly specify every single little thing that you want in that system. That is why, for 20 years now or more, folks have been building these embedded Linux systems: to make something that is fit for purpose.

But what I often talk to customers about comes before we even get started on this embedded Linux journey, because all the customers and partners I speak with are dealing with IoT devices and how they can actually connect those devices to the AWS cloud for IoT applications. So what we really start with is visualizing what you want to do. What do you need? How do you want to operate? How efficient do you need to be? And what's really important, oftentimes, is what's the skill set in your team?

One thing that you would often do, for example, on your laptop, is that if you wanted to connect to something like AWS IoT Core, you would just write a Python program, run it, do something with it, and you really don't see any impact on your system. Conversely, for embedded Linux, you want to make something very tight, something very constrained, and having a Python interpreter with a whole bunch of modules, it being a runtime system, does end up being a lot less efficient than something that gets built to machine code, such as a C or C++ application.

So we want to visualize where we want to be, and then take a step back and say, well, how do we get there step by step to build my IoT product? And when we did that — well, I started off in the early days of IoT; I can say I've been working with AWS IoT Core for about seven years. In the beginning, it was all little Raspberry Pis and Arduinos and that sort of thing. And then, when Greengrass version one was being built, I had the opportunity to start working with partners that were doing microprocessor devices.

So I'm talking ST Micro and their product lines, NXP, Digi International, these sorts of companies. And one thing that I found is that people can prototype a little bit on things like a Raspberry Pi, but no one is going to ship on a Raspberry Pi in anger. They're not going to do mass distribution, hundreds of thousands or millions of devices, running on a Raspberry Pi.

So what I found when I was doing that — this was before Greengrass v1 shipped — is that that's when I learned about embedded Linux, and that's when I learned about the Yocto Project. Every single person was using the Yocto Project, but we still found that they needed to start prototyping first.

So they would start on a Raspberry Pi, but they would soon need to pivot to a system that got them to that end state: they needed to have that embedded Linux system. They're not shipping Raspbian; Raspbian is like another Ubuntu. What they need to run is something that's super tight, something that is super constrained.

So what they needed to do then is: well, I prototyped a little bit on Raspberry Pi, I have stuff working, but now, when I visualize that end state, I need to start working in the embedded Linux world. I need to isolate what all my software stacks are. I have my IoT application. Maybe I'm using middleware — and I'm not talking WebSphere, don't get me wrong; it's something like AWS IoT Greengrass or some other framework. Then they need to have an operating system, and underlying that, you have hardware. Some of that correlates to application code; fair enough, I'm building my application, I have my application code. But this is where it gets kind of dicey: if you're building a fit-for-purpose system, which is normally an architecture that's totally different from what you're working on at your desktop, then you have to build all that stuff from scratch. You're not installing applications from a package stream, right?

So you're building all this from source code: you're building your operating system from source code; you're building your hardware support, your drivers, your kernel modifications from source code; all open source. Well, I can't say all, there are always exceptions. But then, finally, you need to have an optimized IoT product. I put binaries there, but this is binaries and configuration. You take all this stuff that you built, you plop it on a root filesystem, you burn it to an eMMC memory segment or something like that, and you start running. Putting all this stuff together and making it into an image is much, much different than installing a commercial operating system.

So, putting that aside, what you need to do, step by step, is build that stuff from source code. You need to build the middleware, build the operating system, build the drivers — usually in the other direction; I'm looking at it top-down, but usually you start at the bottom and go up, starting with even building the compilers for the system. You'll even do that.
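
To make that concrete, here's a rough sketch of that bottom-up build with a plain Poky checkout; one bitbake invocation builds the cross-toolchain first, then the kernel and userspace, then the image (the image name here is just the stock minimal one):

```sh
# Minimal sketch: everything below the image is built from source,
# including the cross-compilers, starting from a fresh Poky clone.
git clone https://git.yoctoproject.org/poky
cd poky
source oe-init-build-env   # creates and enters a build/ directory
bitbake core-image-minimal # toolchain -> kernel -> rootfs -> image
```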

So then you have all this stuff going on. This isn't 20 or 25 years ago, when you were reading a PDF called Linux From Scratch and you had hours to go before you even had a basic root filesystem, just to get something running. We need to be more efficient. So how can we accelerate that delivery? Because you can imagine, even if you have a single makefile that does all this stuff, you're building an entire operating system. You need a way to optimize how you build it, or else your cycles are just going to be way too long.

So what we found is that more than 70% of the IoT products that are commercially out there are using the Yocto Project and OpenEmbedded. And we knew this was going on because we're speaking to people like Arm and Intel and the sort of people that make processor IP, so we know the breadth and extent to which the Yocto Project is being used.

So when we were looking at ways of optimizing processes for customers, so they can build and deliver and run products on the AWS cloud, we needed to use the Yocto Project and OpenEmbedded. Sure, there are a few other ones, such as Buildroot, for example, or OpenWrt, but this is really the primary embedded Linux build system today, and it's been that way for almost 14 years.

So with that, to learn a little bit more about the Yocto Project and OpenEmbedded, I'm going to let Megan take over.

Megan: Thanks. Thanks so much. Yes. So, 70%, that's a pretty high number. And I'd love to jump into why we have found customers are choosing the Yocto Project and OpenEmbedded over and over again.

And before that, let's just talk high level about the Yocto Project as a whole. So if you're developing these types of devices, again, the chances are you're probably going to run into it. It's just sort of become the de facto framework and set of tools for building customized embedded Linux operating systems, and the flexibility of the Yocto Project and OpenEmbedded is really what has led to its success in being used across IoT device makers.

So, as we've mentioned, the Yocto Project builds and maintains validated tools and components associated with embedded Linux. And the history of this actually started with OpenEmbedded, right? OE-Core and BitBake. They are the base of the Yocto Project, and they form the base of the toolkit.

So OpenEmbedded and the Yocto Project combine to define Poky, and that's the Yocto Project reference distribution.

So, with all of these components together, there are quite a few IoT product lifecycle challenges that the Yocto Project and OpenEmbedded are able to solve, or at least alleviate a little bit of the tension there. So we're going to spend some time talking about these and how the Yocto Project can help you with these challenges.

So, the first one being license compliance, right? Are we complying with software licenses? And how do I track license changes when they happen?

Then risk management. Customers ask: how can I check if we are exposed to vulnerabilities? And is someone monitoring that for me regularly?

We also have software supply chain challenges, right? What's in my software? Where did it come from? What version is it? And a lot of other questions, I'm sure, related to software supply chain.

And then lastly, we have unit testing. So, moving into our first challenge, license compliance: customers are asking, again, am I complying with software licenses? And how do I track license changes?

Well, the Yocto Project actually has an automated compliance check built in, and when it's enabled, it will check against a list of licenses at the time of a build. The cool part is that you can actually customize that list, so you can ask for that check to include certain things or exclude certain things based on the needs of your organization.

And the build process generates warnings for license requirements that fall outside of that list. So it's actually flagging it for you and saying, hey, you need to go take a look at this. The licenses that are found and used are tracked as well.

And they're kept in the build directory, which is really nice. It can track changes to the license text of upstream projects as well.

So you have all of this nicely formatted in reporting for you, and again, it's automated if you enable it.
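
For reference, a minimal sketch of what enabling that check looks like, assuming a standard Poky build directory (the license list here is just an example deny list):

```sh
# Fail the build if anything would pull in a license on this deny list.
cat >> conf/local.conf <<'EOF'
INCOMPATIBLE_LICENSE = "GPL-3.0-only GPL-3.0-or-later"
EOF
# After a build, per-image license manifests land in the deploy area:
ls tmp/deploy/licenses/
```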

So, moving to another common challenge, which is risk management: again, customers like you want to know, how can I check if we are exposed to vulnerabilities, and how frequently are those checks occurring?

So we want to be able to cyclically report on common vulnerabilities and exposures, CVEs, right? And this graph here shows the Yocto Project's CVE monitoring. It is continuous, and you can go back in versions, you can pick particular periods and dates, and dig in as much as you want on this particular data.

So, continuing the automated theme from the previous challenge, you can enable these automated CVE security vulnerability checks, which is really great. They maintain a list of known vulnerabilities, and they track the evolution of the number of unpatched CVEs and the status of the patches.

With the CVE check enabled, BitBake will also try to map each compiled software component, recipe name, and version information to the CVE database, and generate recipe- and image-specific reports.

So these reports are going to contain all of that particular metadata for you to store, dig into, and mitigate as you'd like. And again, the Yocto Project is constantly validated by testing all releases with the reference distro, Poky, on each supported architecture.

So you're getting that reassurance that things are getting validated automatically.
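
A minimal sketch of turning that on, assuming a standard Poky build directory; report paths can vary by release:

```sh
# Enable the automated CVE scan for everything in the build.
cat >> conf/local.conf <<'EOF'
INHERIT += "cve-check"
EOF
bitbake core-image-minimal
# Recipe- and image-level CVE reports are written during the build,
# e.g. under tmp/deploy/cve/ and next to the image artifacts.
```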

So, moving on to our third challenge, which was software supply chain, right? What's in my software? Where did it come from? What version is it? The Yocto Project actually works directly with SPDX to create a standardized SBOM, a software bill of materials.

And it's baked into the Yocto Project. These reports are standardized across the industry, so anyone can pick one up and understand what's happening there, and it tracks that key metadata when the source code is actually built.

So the SBOM is stronger because it's being generated at the time of the build, rather than after things have been built. And lastly, these reports, like I said, are generated and compiled into the target image and archives so that you can go back and revisit them.
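
A minimal sketch of enabling that, assuming a reasonably recent release (the create-spdx class arrived around Yocto 3.4):

```sh
# Generate SPDX SBOM documents as part of the normal build.
cat >> conf/local.conf <<'EOF'
INHERIT += "create-spdx"
EOF
bitbake core-image-minimal
# SPDX documents are produced per recipe and archived alongside the
# image, e.g. as an *.spdx.tar.zst in tmp/deploy/images/<machine>/.
```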

So the last challenge we talked about was unit testing. How can I test my IoT product on the target architecture without it being accessible? How do I make that happen, right?

Again, they have this really cool automated unit test on target architecture already included for you. They natively provide this mechanism called ptest, a package test, and it delivers these unit test results automatically after the test harness has been invoked.

There's closed-loop verification; from a developer perspective, it's just a simple pass/fail. And there is, again, auto-generated unit test reporting so that you can understand what happened through the ptest.
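
A minimal sketch of wiring that up, assuming a standard Poky build; ptest support has to be enabled in the distro and pulled into the image:

```sh
cat >> conf/local.conf <<'EOF'
# Build each package's test suite...
DISTRO_FEATURES:append = " ptest"
# ...and install the suites plus the ptest-runner harness into the image.
EXTRA_IMAGE_FEATURES += "ptest-pkgs"
EOF
bitbake core-image-minimal
# Then, on the booted target, run the harness; each suite reports PASS/FAIL:
ptest-runner
```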

So, as Rich was talking about earlier, OEMs and ODMs primarily use the Yocto Project build framework to compile, package, image, and distribute fit-for-purpose embedded Linux system images. And building Yocto images is complex and can take a lot of computing power.

So customers have asked: how do I have a simple, frictionless way to get AWS software onto cloud-connected embedded Linux devices? How do I do that without writing my own recipes, right?

So let's talk through that. meta-aws is something that customers who want to use AWS services easily can use. It provides recipes, and it provides support, in the form of a Yocto Project-compatible layer that's directly maintained by AWS.

So again, meta-aws provides recipes for building AWS edge and cloud software capabilities into embedded Linux built with OpenEmbedded and the Yocto Project. So that answers that first question.
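
A minimal sketch of pulling the layer in, assuming a Poky build already set up; the recipe name at the end is just one example from meta-aws, and depending on what you install, the layer may also need the meta-openembedded layers:

```sh
# Fetch meta-aws and register it with the build.
git clone https://github.com/aws4embeddedlinux/meta-aws ../meta-aws
bitbake-layers add-layer ../meta-aws
# Install AWS device software from its recipes, for example:
cat >> conf/local.conf <<'EOF'
IMAGE_INSTALL:append = " aws-iot-device-client"
EOF
```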

But the second question involves this AWS for Embedded Linux CI. It's a CI/CD pipeline that helps customers build embedded Linux distributions faster using the cloud, right?

So it alleviates a lot of the problems that you would have with on-prem compute power and on-prem storage, by using the power of the cloud to make it go faster.

So Rich is going to talk about how you can make use of meta-aws and AWS for Embedded Linux CI to accelerate your product delivery while you're using the Yocto Project.

Richard: Oh, thanks, Megan.

All right. That was really awesome. So, really, what I like about that is it's not just a makefile. The Yocto Project and OpenEmbedded come with a whole bunch of other tools, and we knew that our customers needed that. But let's bring that up a level.

When Megan mentioned AWS for Embedded Linux CI: we knew, talking with customers more than five years ago, that this was a real problem, building and distributing these embedded Linux images from on-prem. So over the years, we've been improving technologies for our customers to do just that. A few years ago, I was doing trainings at Embedded World just for that purpose. But what we actually did this time was take it up a level. I mean, the way I did it before was like a prototype: you install the CloudFormation and you are basically just able to do a build. But this time around, we wanted to do something a lot more special.

So you can actually integrate your full end-to-end processes into the CI framework. If anyone in the room is doing CI/CD today, this slide will be very familiar to you. You write source code, you check it in, you trigger a build — that's the whole thing about continuous integration. If there's a failure, you get automated feedback, so developers can follow up on those failures and provide fixes. And when they provide fixes, hopefully everything goes great. And when it goes great, you automatically run the unit tests, and not only that, but you can run your functional tests as well.

And when you get a failure in that case, then you get that feedback loop and you go through it all over again. And this is how you actually get to release, right? Because you run tests, and your tests determine — if you're doing test-driven development — how much of the required functionality has actually been completed in order to release the project.

Now, if you're very, very comfortable with the competency of your teams to write all of these tests thoroughly, to determine how much has been completed in order to ship, well, you can go straight away right after that point, once everything has been validated. That's the holy grail of continuous integration and continuous delivery. But I'll be straight with you: in more than two decades of doing CI/CD work at various companies and in consulting, I can tell you very few companies actually make it to that point. It's very, very hard to do. When they don't do it yet, then you have to have manual gating approvals, right? When you do feel good about it, then you can just ship it straight away. And that's actually really great.

So we knew already that this is the kind of flow that we wanted to get to when building AWS for Embedded Linux CI. What we did is architect AWS for Embedded Linux CI to be not just a thing that does CI/CD in the cloud, but also something that is highly resilient. When you go and install a pipeline from the AWS for Embedded Linux CI examples, you'll see that things are all nicely arranged for multiple Availability Zones within a Region; you get all of that kind of stuff. But bring that up an architectural level: the CDK will actually create a pipeline definition for you, something that is well structured, something that's been thoroughly tested. And it will also create a build definition using CodeBuild, which uses a buildspec. The buildspec is a set of steps; it's a YAML file, so it's easy to read, easy to modify.

And then part of the setup for that is creating an image within ECR for your build machine. We're using CodeBuild, and I don't want to assume that folks in the room know a lot about CodeBuild: in CodeBuild, you run things in a container, and you use a build machine definition. You can take a default definition, but when you do that, you have to install everything into that container at build time, which can take a lot of time up front. So what AWS for Embedded Linux CI does is have a process to preinstall all the prerequisites for the build environment, so your build times become shorter.

So when the build system actually executes, the build instance gets hydrated from ECR into CodeBuild. Now, CodeBuild, when you're using systems such as EFS and so forth, goes into its own VPC. And when CodeBuild executes that build, the build system first takes the layer configurations from configuration code, and we actually do that in CodeCommit for you. So we set up a configuration git repo for you, just for configuration files. And that tells the system what layers you need to check out. Layers are part of the Yocto Project, and they define all those layers of the system that I described at the beginning of the presentation, such as: where are the drivers I need? That's part of the silicon manufacturer's board support package system. So you'll get all that information, you'll get all the Linux information, you'll get all the middleware information, and then finally you'll get your application information.

So it pulls those layers up, and then, as it builds, it uses source code repositories, right? Because these recipes within these layers say: oh, you need the Linux kernel, you need to download that from the kernel.org git repository. But here's the thing: we built the system so we don't have to rebuild the binaries from scratch every single time. There's a really cool feature that was developed within the Yocto Project called hash equivalence, which means it actually computes at build time whether or not a binary is outdated.

So we put all these binaries and download files out on an EFS file system for you, so that every time a build happens, it mounts those EFS drives and reuses those binaries. So your incremental builds after your first build are much faster, if you're building a leaf recipe like your application, for example.
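
In Yocto terms, the reuse Rich describes boils down to pointing the download and shared-state caches at the shared EFS mount and enabling hash equivalence; a minimal sketch, assuming EFS is mounted at /mnt/efs (the paths are illustrative):

```sh
cat >> conf/local.conf <<'EOF'
# Shared source downloads and prebuilt build output on EFS.
DL_DIR = "/mnt/efs/downloads"
SSTATE_DIR = "/mnt/efs/sstate-cache"
# Hash equivalence: reuse cached output when a rebuild would produce
# an identical binary anyway.
BB_SIGNATURE_HANDLER = "OEEquivHash"
BB_HASHSERVE = "auto"
EOF
```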

So once that's all built, well, it doesn't make any sense to just leave it on the build machine. Once the build is over, it's going to stash that work out on Amazon S3. Now, what I'm going to get into in a moment is all the different image types that you can have. But the first one, the primary one that people usually build, is the type of firmware that I'm going to use for my target processor. If I need to flash it to non-volatile storage for an NXP processor, or eMMC, or whatever, I need to have a firmware image.

But what we also did over the last year within AWS — well, actually within the meta-aws project — is make it so you can build an Amazon Machine Image. So you can take what you've built within the system, this embedded Linux system, and run it on the target architecture, which is most often Arm; so we just use Graviton. But that's not all. If you remember the previous slide, we have quite a flow to get through. We want to be able to launch that stuff, we want fast iteration, and we want to be able to verify, from a unit testing perspective, at least a smoke testing perspective, relatively quickly, and we can use the cloud for that. I'll get into those options in just a moment. But right here, I have Amazon EC2: if you want to run QEMU, or if you want to run directly on Graviton, you can do that. Ultimately, while you're doing all this work and iterating quickly, it's likely that your IoT product is still being designed; your PCB might not even be ready. But once it is, you'll be far enough along — this is about the acceleration — that you will be ready when you go to hardware-in-the-loop testing. So that is a lot of the acceleration: not just build time acceleration, but being able to get ahead of the game in delivering your code to hardware-in-the-loop testing.

So what does AWS for Embedded Linux CI give you the opportunity to do from a runtime perspective? Well, you can build for your target processor: if you have your hardware board ready, an EVB, an evaluation board that you purchased off Mouser or Digi-Key, you can just grab that and install it. Or you can run in QEMU; we have a runtime type for QEMU that you can build. Actually, that's the default type in the Yocto Project, for people who want to get started early. You can run on Graviton, like I just mentioned, with an AMI. You can use containerization: if those in the room are really savvy, have been doing the Yocto Project for a long time, and keep up to date with all the newfangled things that are happening, you might be aware of the SOAFEE project and EWAOL; that's about containerization targets. The Yocto Project can natively produce containers, and it can also natively produce virtualized images. So if you're running on a hardware target that's running a type 1 hypervisor and you want to run multiple virtualized guests on your machine, you can do that as well within the Yocto Project.
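
The QEMU path is the quickest of those to try locally; a minimal sketch, assuming a Poky build directory:

```sh
# Target the stock 64-bit Arm QEMU machine, build, and boot it emulated.
echo 'MACHINE = "qemuarm64"' >> conf/local.conf
bitbake core-image-minimal
runqemu qemuarm64 nographic
```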

So you can deliver those. But there are other kinds of image types from a single Yocto Project build that can be very interesting to you, not just the image you're going to run on your end product. You can build a software development kit. So on your local development machine, you can download a generated SDK for that target processor, unpack it, install it, and you have a local cross-compile environment. And you can make that part of your standard build process.
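
A minimal sketch of generating that SDK from the same build; the installer name varies with your image and release, hence the wildcard:

```sh
# Produce a self-extracting cross-toolchain installer for the target.
bitbake core-image-minimal -c populate_sdk
# Run the installer on your development machine to get the cross environment:
./tmp/deploy/sdk/poky-glibc-x86_64-core-image-minimal-*-toolchain-*.sh
```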

Now, when Megan talked about unit testing: when you do unit testing, you can probably conceptualize that on the target under test, you need additional testing tools on that target. So there's a special image type, a QA build type, that does exactly that: it takes all of the unit testing dependencies and also puts them on the image, so you can test with that.

So after all of that, we also decided that we needed a whole set of examples for customers to use, to get started with AWS for Embedded Linux CI quickly. The first one is the standard one Megan mentioned multiple times: Poky. Poky is, as she said, the Yocto Project's reference distribution. So this was basically a must-have; it was the entry point. If you're doing the Yocto Project and embedded Linux, you know what Poky is, and people would just expect that. But there's also a newer build environment, a local development environment for the Yocto Project, which is called kas. And we know that probably about a third to half of the people out there are using kas for their development, and we don't want people to have to move from kas to some other kind of build environment, because maybe you get unexpected results — it's a different development environment, right? So we have an example pipeline just for kas. Then, if you don't have your target hardware ready, and you don't want to run directly on Graviton, and you have a special processor type that's already been emulated with QEMU, you can build a QEMU image — QEMU being an emulator — and there's a pipeline especially made for that.
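
For those who haven't seen kas: it drives the same BitBake build from one declarative file. A minimal sketch, with illustrative repo URL, branch, and header version:

```sh
cat > kas-project.yml <<'EOF'
header:
  version: 8
machine: qemuarm64
distro: poky
target: core-image-minimal
repos:
  poky:
    url: https://git.yoctoproject.org/poky
    refspec: kirkstone
    layers:
      meta:
      meta-poky:
EOF
# kas clones the repos, assembles the configuration, and runs BitBake.
kas build kas-project.yml
```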

Next is Poky AMI, and I'm going to show this during the demonstration, actually, where you can take the Poky distribution — it's basically taking that build and making it cloud-ready. There are a couple of things you need to add: you need to add cloud-init, and you need to add systemd, plus the ability to get the keys onto the image during the AMI build process. That process is all built into the Poky AMI pipeline. So instead of just building a regular image type, there's an extension to that build process, which is the AMI build process, so we can launch it on Graviton.
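
In build terms, those two additions might look like the following sketch, assuming the cloud-init recipe is available (it lives in meta-openembedded); the key-injection piece is specific to the pipeline, so it's not shown:

```sh
cat >> conf/local.conf <<'EOF'
# Boot with systemd and bring in cloud-init so EC2 can configure
# the instance (including SSH keys) on first boot.
INIT_MANAGER = "systemd"
IMAGE_INSTALL:append = " cloud-init"
EOF
```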

Now, we have a couple of examples for hardware specifics. We actually do a lot of work with automotive customers, so we have done a lot of work, because of that, with NXP, Renesas, Qualcomm, and those sorts of folks. Renesas, if you don't know, is a very popular processor company headquartered in Japan, very popular in the Japanese automotive space. So we created a couple of pipelines for these hardware-specific targets, so you can get started pivoting from basic configurations to more advanced and extended circumstances — adding a BSP and all these particularities that you will eventually need when shipping an IoT product.

So how do you get started? Last Friday, we made this repository public on GitHub. So now you can just clone this repository from the AWS for Embedded Linux organization on GitHub — a simple clone, easy, one second, right? But the next step may be a little bit more challenging.

So what we did, really for continuity purposes — everyone's moving in the direction of going from CloudFormation to CDK, the Cloud Development Kit — is that we started working with the Cloud Development Kit. In order to do that, you need to install some dependencies, like Node.js and the CDK itself, and so forth. So before the steps on the right-hand side actually run, you need to have some dependencies installed. This is all documented within the repository. But after that point, basically, it's a dependency install: you run the CDK build and then you run the CDK deploy, and you'll see that during the demonstration. Oh, there we go — I forgot the animations. OK?
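
The flow ends up looking roughly like this sketch; the repository name is illustrative, and the exact npm/cdk steps are documented in the repo:

```sh
# Clone an example pipeline and deploy it with the CDK.
git clone https://github.com/aws4embeddedlinux/aws4embeddedlinux-ci-examples
cd aws4embeddedlinux-ci-examples
npm install        # pull the CDK app's dependencies
npm run build      # compile the CDK app
cdk bootstrap      # one-time per account/region
cdk deploy --all   # create the pipeline stacks
```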

So, a little bit more detail under the covers within the build system. I previously mentioned that we create a CodeCommit repository for you. Now you have two choices at that point for what you want to do.

You can either use that configuration repository as just configuration for the build system, or you can actually end up evolving it into your embedded Linux distribution repo as well. But the system that we decided to use for cloning from various repository types is the repo tool.

The repo tool has been around for quite some time; it originated within the Android project, so embedded engineers are very familiar with using it. And if you want to add something here: when you get started, we don't put the meta-aws layer in there. But if you need to install AWS device software, you would add the meta-aws repository and then clone it out at a particular commit ID, so it's not continuously moving on you.

In the meta-aws project, we keep that very up to date, with the latest recipes for the latest AWS device software across all of the Yocto Project release branches. So you will want to identify a commit ID that works for you — usually the latest one that's on there — and you want to stick with that, probably throughout your development cycle, unless you have a real reason to change it.
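
A minimal sketch of what that pinning looks like in a repo manifest; the remote URL and commit ID here are placeholders:

```sh
cat > default.xml <<'EOF'
<manifest>
  <remote name="aws" fetch="https://github.com/aws4embeddedlinux"/>
  <!-- Pin to a specific commit so the layer doesn't move underneath you -->
  <project name="meta-aws" remote="aws" path="layers/meta-aws"
           revision="0123456789abcdef0123456789abcdef01234567"/>
</manifest>
EOF
```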

All right. So after that, what you see on the right-hand side is a buildspec, and the buildspec is used within CodeBuild itself. To initialize the build environment, we need to make sure that BitBake is aware of the layers it needs to use to build the system.

So if you were adding the meta-aws layer to install AWS device software, you would add that to the repo XML on the left-hand side, and then you would register the additional layer within the buildspec. It's really a super quick job. Then you just commit that to CodeCommit, and then you would invoke the distribution pipeline.
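
The second half of that job, registering the freshly cloned layer with BitBake inside the buildspec, is a one-liner; a sketch with illustrative paths:

```sh
# From the buildspec, after repo sync: enter the build dir and add the layer.
source poky/oe-init-build-env build
bitbake-layers add-layer ../layers/meta-aws
```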

There are two ways to invoke the pipeline. Remember, we use CodePipeline to manage the whole end-to-end pipeline, but as the underlying invocation container, we use CodeBuild. So you can invoke the pipeline on the command line if you need to, if it's not being triggered by anything else, even EventBridge, and then you'll get the pipeline execution ID, and you can use that in your automation systems.
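
The command-line route is the standard CodePipeline CLI; a minimal sketch, with a placeholder pipeline name:

```sh
# Kick off a run and capture the execution ID for your automation.
aws codepipeline start-pipeline-execution --name my-embedded-linux-pipeline
# Poll its status later with the returned pipelineExecutionId:
aws codepipeline get-pipeline-execution \
    --pipeline-name my-embedded-linux-pipeline \
    --pipeline-execution-id <execution-id>
```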

But normally, what you would want to do is make sure that you have identified all the repositories that you want to watch that would trigger a rebuild in the system. So, for example, if I want to trigger a rebuild of the system based on my application code changing, which is usually what I would want to happen, I would add another event block there.

So with that, what I would like to show you is how that works in action. But before we do that — I showed a bunch of stuff about GitHub and that sort of thing, so I'm going to hand control over to Megan so she can show you all about those projects and such.

Megan: All right. So let's take a look at where the automation actually lives on GitHub, and the tie-ins to the Yocto Project. So, as Rich was talking about, this is the AWS for Embedded Linux organization. This is where the repositories we keep mentioning live, and we'll show you a little bit around this organization here.

So this is meta-aws; this is that BitBake layer that we've talked about that delivers the recipes for AWS IoT device software. And you can see on the OpenEmbedded layer index that the layer is actually registered with the layer index, and that's where customers go to find that stuff for their products.

This is the Embedded Linux CI repository, and this is the generalized library that you can use to build the infrastructure. And the Embedded Linux CI examples repository is the starting point to build those pipelines that Rich was talking about earlier.

So Rich is going to create a pipeline that builds the Poky distribution. He's going to create an Amazon Machine Image, that AMI he talked about, and then he's going to run it on an EC2 Graviton instance.

Richard: Awesome. Cool. Thanks, Megan.

All right. So, just like what we saw in the slides previously, I'm quickly going to go execute the CDK so we can get a fresh pipeline situated.

So what's happening here, actually, is that it skipped a part. Why did it skip a part? Because I had run a pipeline execution for a different type of pipeline, so all the networking under the covers that CodeBuild needs to run in was already there. So it takes a relatively short amount of time to hydrate.

Now, what you also see come up here — you're like, wait, Rich, I thought you were only using two EFS drives, and I see a whole bunch of targets there; what's going on? The reason for that is what I mentioned before about it being highly resilient across multiple Availability Zones: we need to create those mount targets for the different Availability Zones.

All right. So it has to create all those things. And then, once it gets to the point where it's finishing up the stack, which should come relatively soon — I have this sped up a little bit — then all those things are actually getting situated and registered in the AWS cloud.

Eventually, what you'll see in a moment is that the pipeline begins executing after all these things get hydrated. So we should almost be there. Yeah. OK. Perhaps I could have sped it up a little bit more, right?

OK. So now it's completed, and what we want to do now is take a look at the results. The first thing I'm going to show you is what the pipeline looks like right now: super simple, takes the event, does the build. And then what I'm showing here is actually the CodeBuild execution, right?

So you can see that it succeeded, what the duration of the build was, and so forth. And this is creating the actual AMI instance. Now, what I'm showing here is the actual configuration that is in the CodeCommit repository, and then, finally, all the files that are on the target S3 bucket.

Now, there's a lot of stuff in there; I'm pointing at the CVE reporting as well, which Megan mentioned previously. So now what is going to happen is I'm going to go launch that AMI for the embedded Linux image that I just created.

So that build, that pipeline, does everything for registration in EC2 for you. This is fully automated; all you have to do is go in and just launch the thing. And that's exactly what the next step for AWS for Embedded Linux CI is going to be. So we're almost there.

We're actually using the Graviton 4 instance type here, taking the key, and soon enough we're going to have our instance launched. All right. OK, great, launching the instance.

Now, normally, if anyone in the room has launched an instance, it's going to be, I don't know, Amazon Linux 2, or it's going to be Ubuntu. But in this case, what you're going to see from the uname -a output is going to look very, very different. And the reason for that is that it was built by a different system and it's using a different architecture.

So instead of Ubuntu or whatnot, when you see the uname -a output, look for Yocto in there, and that is what you're going to see. So, aarch64, that's ARM 64, and it's yocto-standard.

OK. And with that — Megan, I think I have one more thing that I'm going to talk about. So what's the next step that we're going to do?

What we made public on Friday — and it took quite some time in the making — was getting all the infrastructure correct and resilient, and making sure that customers could build the system. The next step in development, and we're also encouraging folks to contribute to this, is to work on the test system.

Now, what I can say for unit testing is that we have been doing unit testing in this exact way for meta-aws, the project that has all those recipes, for quite some time — for almost a year now, since we completed that full CI/CD system. So even in the meta-aws project, we have automated pull request mechanisms.

So if all the tests work out great for an upgraded version of a recipe, we automatically create pull requests and approve them, because we feel very, very confident about the system. The way that unit testing works — and this is exactly from our system today — is from that YAML file, which is from CodeBuild.

We run a script, and this could maybe be simplified. When we develop it for the future, we want to make it very terse, right? So it could be this way or it could not be this way, but functionally it should behave the same.

So what you see in the top section there is the actual script invocation; the two boxes below it are the script that's being invoked, just to make that clear. And then the next step — because in that CI/CD system for meta-aws, we're executing the build and test process for a specific recipe.

However, the underlying mechanisms for making a unit test system happen are no different for a single recipe than for a full distribution. It's just the scope that you're executing against to make your unit tests happen.

All right. And then the next step is actually making the test image. I made a statement earlier that we have three image types: the production type, the SDK type, and then I mentioned the quality assurance type. So you have this thing called a test image, which is a target within the Yocto Project, and it pulls together all the unit testing dependencies into the image build.

So even if we just build and test one recipe for meta-aws, we still need to make an image that runs the entire operating system, right? That's what's happening here. And then there's a cool tool that comes with the Yocto Project called resulttool, and that handles the output of the ptest execution.

It takes the ptest execution results and pulls them into a report. Then you can take that report and do what you want with it at that point, to create any type of report format. It's a very nice and open format.
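
A minimal sketch of that last step, assuming a test image run left results in the default location (paths vary by setup; resulttool ships in poky's scripts directory):

```sh
# Aggregate ptest/testimage results into a readable report.
./poky/scripts/resulttool report tmp/log/oeqa/testresults.json
```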

So with that, what I'm going to do is let Megan wrap us up.

Megan: All right. Well, we've given you lots of information today, and you might be asking yourself: what do I do now? How do I get involved in all of this?

So, if you're not currently the one making IoT devices in your organization but someone is, and you want to be involved in the process, it would be great to get with those people and learn about what they're currently doing and what tools they're using. And if they are using the Yocto Project — and we learned there's probably a good chance they are — check out meta-aws, and check out AWS for Embedded Linux CI.

See if those tools would be of use and help to you in speeding up your processes by using the cloud. We have the GitHub repositories here in the QR codes. I would suggest, if you're interested, go ahead and give them a scan and star them so that they're easy to find after the re:Invent craziness settles.

We're really active on these GitHub repos, so please feel free to ask questions and chat with the engineers; they will respond. And if you have any suggestions or would like to collaborate on the project, that's also available to you through GitHub as well.

We hope this was helpful for you today. We are so happy to be here talking about these topics. If you are interested in hearing more topics like this, please fill out the survey; we would love to get your feedback on whether this was helpful, and to come back and talk about it more.

So thank you all so much and have a great day.
