All righty. Thank you, everyone, for coming along. I know it's a little bit late, but I'm really excited to talk to you today about Economics 101. Amazon Omics. No, that's not right: AI219, Amazon Omics. With that terrible joke out of the way, I apologize.
My name is Aaron Friedman. I am the principal product manager for this new service that we announced yesterday in Adam's keynote. I want to start by talking a little bit about why we built this, then get into some of the use cases and the features within the service, and we'll talk a little bit about customers and close it out.
For probably many in the room, this comes as no surprise; I see some familiar faces, so you certainly know this: a deeper understanding of biology has profound potential to transform how we treat disease and how we develop new therapeutics. There's the whole field of precision medicine guided by these therapies, and overall it can accelerate scientific discovery. But to get there, there are a lot of challenges in the industry, scale being a huge one.
One of the predominant things we're seeing pop up across the globe is population sequencing initiatives. If you look at what's happened with the industry sequencing providers in the last couple of months, you've seen the cost of sequencing plummet even further and throughput more than double, and very clearly tens of millions of whole human genomes will be sequenced and stored in the next five years, if not sooner.
To get there, customers, healthcare and life sciences organizations across the globe, use specialized tools for generating a lot of this raw data, whether it's whole genome sequencing, single-cell RNA-seq, anything in between, or things that haven't even been developed yet. We know that customers need specialized tools, and customers like to use their own workflow languages to enable portability. Doing these analyses at scale requires a combination of multiple modalities: omics data, things related to biology; clinical data; imaging data. Being able to combine all these modalities together is really critical. It's definitely a better-together story.
I kind of just alluded to this, and we saw a lot of great announcements recently at ASHG, the American Society of Human Genetics conference, a month or two back: the cost of sequencing continues to plummet, from hundreds of millions of dollars for the first human genome ever sequenced to $200 today to go and get your genome sequenced with some of the new technologies. What that gets to is a fundamental shift in the scale we're having to deal with. As the cost of generating this data continues to plummet, the amount of data rapidly increases. So we need to work together as an industry to make this easier and easier and create a virtuous flywheel to accelerate discovery.
So yesterday Adam announced the new service that we launched, Amazon Omics. It's a purpose-built service focused on helping healthcare and life sciences organizations store, query, and analyze genomic, transcriptomic, and other omics or biological data, and use all of that collectively to generate insights, improve health, and accelerate scientific discovery. That's really what our focus is. There's the common Amazon statement that it's always day one; that's definitely how we feel here, and we look forward to continuing to work with you as we develop new capabilities in the future.
I wanted to put this slide up to show fundamentally how it works, a very high-level architecture, but there are several key components, which we'll get into in the next few slides. The first thing is that we need to be able to store all this data, and there's a concept in our service called a sequence store. Makes sense: you have data that comes off of DNA sequencers, whether it's DNA or RNA or what have you, in common industry formats like FASTQ, BAM, and CRAM, and we're going to store that cheaply and efficiently for you, so we can reduce cost as well as the data-management barriers to scale.
The second component is bioinformatics workflows. A lot of us here have probably built or run Nextflow pipelines, WDL pipelines, or other pipelines as well. What we found from talking to customers is that there's a lot of repetitive work that goes into managing these workflows: how do we spin up the engine, manage retries, provision the file system? How do we get the cost of what a workflow actually takes to run if we're doing bin packing or other approaches? Running these things over and over again, customers have told us, is really not what they want to spend their time on. So we abstract away all that undifferentiated heavy lifting so you can focus on the science and medicine, which is really just the tools you're bringing, the data you're bringing, and then the workflow, which manages the dependencies between the compute tasks, the tools, and the data.
And then the last piece: once you do a lot of these canonical analyses (I know it's shown linearly here, but a lot of us know it's not totally linear; there are a lot of ways you can analyze things back and forth), take for example one of the more canonical secondary analysis pipelines, variant calling. The output is variant data, often stored in variant call format files, VCFs, as well as genomic VCF files, gVCFs. And customers today have to manage an ETL pipeline; they have to deploy something to transform the data out of the VCF.
Out of curiosity, has anyone here looked at the VCF spec before? I see a couple of hands. It's great because it contains a lot of data, but it's really not optimized for query and analysis. What we find over and over again is customers converting this data into different formats, often Apache Parquet, so that they can bring Spark and compute on the genome. We take care of that for you with a single API call: all you have to do is call StartVariantImportJob, give us your VCF, and it'll show up; I'll talk a little more about what that looks like in a bit. But it's not just variant data, it's annotation data: the known information about positions in the genome, the variants in the genome, genes, transcripts, et cetera, that brings a lot of additional meaning to the analysis, and we need ways to store that as well, so we have annotation stores. What this unlocks at the end is the ability to do what we see in the industry called multimodal, multi-omics analysis, where you bring in different types of omics data, whether within our service or more broadly outside of it, combined with clinical and medical imaging data. That could be data that resides, for example, in Amazon HealthLake Analytics and HealthLake Imaging, which we announced two weeks ago at the health conference, or other types of phenotypic data that you bring in. Fundamentally, that end state of being able to bring all this data together is what a lot of us are striving towards, and that really gets to why we built this.
So what are the focuses? What are the outcomes and benefits we hope our customers will get out of using the service? The first, as I just alluded to, is this idea of multi-omic and multimodal. The second is that we're seeing massive-scale investments across the globe, so we need to be able to meet the scale of our customers and handle population-level scale for these different efforts, whether it's biobanking, population sequencing, or anything in between.
Our customers have told us time and again that they aren't big fans of managing their own bioinformatics workflows, so we want to take care of that for them. And security and compliance is critically important. We're a HIPAA eligible service; we launched with HIPAA eligibility, and security, privacy, and compliance are critically important. It's not just data privacy and security: it's quality management systems, it's being able to track and trace and start to look at data lineage. You'll see a lot of investments from us in that area as well.
I mentioned this briefly, but we are part of the Health AI organization, and I wanted to give some context on where this fits within the broader Health AI portfolio. Obviously, we now have Amazon Omics, which is really focused on biological and omics data: store, query, analyze, and bring all that data together. But the data is always better when you combine it with clinical data and clinical phenotypes, so we have HealthLake Imaging and HealthLake Analytics.
We also have NLP and ASR services. We talk a lot, and I'll talk a lot today, about how we bring these things together in the right schema. That's often half the problem, especially with clinical data: you need to be able to actually normalize the semantics of the underlying text. Comprehend Medical helps with that, and Transcribe Medical as well.
It really helps our customers with medical ASR, automatically converting medical text to speech, or speech to text, sorry. So, key service features. We've covered what the service is at a high level; let's get a little bit deeper.
What I'll do in the next few slides is highlight a couple of key talking points and then go into a reasonably busy architecture diagram, which we'll walk through in a guided way. I want to do it that way because there is a lot here, there's a lot that we built. I'm incredibly appreciative of the phenomenal engineering team that has been working on this since we started, but we'll walk through it and go on our way.
So let's start with storage. It's the very first thing I talked about earlier, and fundamentally it's the first thing you need to worry about if you're starting with data. There are a few things that, when I'm talking to customers, I try to get them to walk away with.
The first one is around FAIR data access and FAIR data use, FAIR in this case being findable, accessible, interoperable, and reusable. That means we need a way to discover data, which means our APIs need to make it easier for you to discover data, which means we need to make it easier for you to apply domain-specific metadata, things like subject ID, sample ID, and where this was generated, and provide really simple managed search capabilities.
We need to be able to share data. The reality is that as we generate thousands or tens of thousands or hundreds of thousands of these data sets, permissioning at the individual resource level gets quite cumbersome. It's really hard to write resource-based policies for each individual resource, and customers may want to find a cohort and permission based on that cohort. So treating attribute-based access control as a first-class citizen within our service is incredibly important, and we want to make it really easy: you can permission on a subject ID or sample ID, you can define your own custom cohort with tags, making it easier to share and permission that data, and do it securely.
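As a rough sketch of what tag-based permissioning can look like in practice (this assumes the boto3 `omics` client and standard AWS tagging and ABAC conventions; the tag keys and the read set ARN are illustrative, not prescriptive):

```python
import boto3

# Hypothetical example: tag a read set so IAM policies can grant access by cohort.
omics = boto3.client("omics")

# Illustrative ARN of a read set already imported into a sequence store.
read_set_arn = (
    "arn:aws:omics:us-east-1:111122223333:"
    "sequenceStore/ss-example/readSet/rs-example"
)

# Apply domain-specific metadata as resource tags (the keys are up to you).
omics.tag_resource(
    resourceArn=read_set_arn,
    tags={"SubjectId": "subj-001", "SampleId": "samp-042", "Cohort": "study-A"},
)

# An IAM policy could then use a condition such as
#   "Condition": {"StringEquals": {"aws:ResourceTag/Cohort": "study-A"}}
# to permission an entire cohort at once instead of listing individual resources.
```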
And the last thing is a low cost per gigabase across any sequencing platform. We support FASTQs, BAMs, and CRAMs in this portion of our service today. Now, if you compared a short-read sequencer against a long-read sequencer and looked at how many gigabytes are generated, you'd find it varies, and it varies for a lot of reasons. Fundamentally the variance is due to the entropy within the files. It could be due to short reads versus long reads. It could be due to some of the newer sequencers, which have been out for a while, having fewer Phred or quality scores and binning them together, versus the upwards of 60 to 90 Phred scores that can theoretically exist in a FASTQ, BAM, or CRAM file. There's a lot of variability there.
One thing we want to make sure of is that our customers can bring the analysis they want, with the sequencing provider that's most important to them or that they're already using, and not have to worry about what it costs: they should have very predictable costs.
If you look in the sequencer manuals, they're all rated in gigabases or terabases, so we're doing the same. When I say a low cost per gigabase, this means that if you go to the Amazon Omics pricing page, you will see a cost per gigabase. We do not bill you in gigabytes, which means you get predictable pricing across the board, no matter the sequencing technology, now and in the future.
We've also done a lot of optimizations under the hood. A good way to think about this: if you're using S3 today, maybe you're using S3 Intelligent-Tiering, which has multiple storage classes as tiers. With S3 Intelligent-Tiering, if you access an object, it goes into S3 Standard for 30 days and then gradually tiers back into the Glacier Instant Retrieval equivalent.
We have two storage classes, or tiers, within our service: active and archive. When you bring data into the service, it stays in the active tier for 30 days, and if it's not used for 30 days since last access, it's automatically archived and you get significant additional cost savings. What we found is that, with the technologies and innovations our team has built, this delivers an incredible TCO for our customers storing this type of data on AWS, no matter the sequencing technology.
So let's take a look under the hood at the key features, the things you might be interacting with within the service. As I alluded to, there's a lot here, and there are three service components we'll walk through.
The first thing in this field is that we often need references. Those references are stored in a reference store. As a file format they're called FASTA files, but fundamentally this is the string of characters that we align to, and for human genomes it's currently hg38, or GRCh38, or some variant of that. All you do is call a reference import API, we bring that data in, and you can permission off of it and apply your own keys to it.
That reference is really important because it helps map to CRAMs and BAMs, and it helps calculate some of the summary statistics that we do on your behalf. So that's often the first thing you do: you'll go in and load your reference. It's often a one-time thing, because most of your analyses are going to be against a similar reference.
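Here's a minimal sketch of what loading a reference might look like (assuming the boto3 `omics` client; the bucket name, role ARN, and exact request fields shown are illustrative):

```python
import boto3

omics = boto3.client("omics")

# One-time setup: a reference store to hold FASTA references such as GRCh38.
ref_store = omics.create_reference_store(name="my-reference-store")

# Import the reference from S3; the service reads it via the IAM role you provide.
omics.start_reference_import_job(
    referenceStoreId=ref_store["id"],
    roleArn="arn:aws:iam::111122223333:role/OmicsImportRole",  # illustrative role
    sources=[
        {
            "name": "GRCh38",
            "sourceFile": "s3://my-staging-bucket/references/GRCh38.fasta",
        }
    ],
)
```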
Then you'll say, okay, I have a bunch of data I want to bring in, so I'm going to create a sequence store. A great way to think about a sequence store, if we're drawing analogies, is an S3 bucket: the sequence store is a level of isolation that is yours, the customer's, and everything gets packaged up within it. You can then import read sets.
Today we support imports directly from S3 as a staging area, and an import job can bring in up to 1,000 different files at a time. Read sets are our abstraction over FASTQs, BAMs, and CRAMs; they're all just sets of reads, go figure. We finally got the naming at least a little bit clear, hopefully.
We bring the read sets in; it's a scale-out, high-throughput process, and that data lands in your sequence store. Once the data is in, if it's not touched, as I said, after 30 days it will automatically move to archive until you need it again, at which point you can activate it. You can also do a couple of other things with it; we recognize that data liquidity is very important to customers.
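And a similarly hedged sketch of bringing reads in as read sets (again using the boto3 `omics` client; the S3 paths, role, and metadata values are placeholders):

```python
import boto3

omics = boto3.client("omics")

# The sequence store is your unit of isolation, roughly analogous to an S3 bucket.
seq_store = omics.create_sequence_store(name="my-sequence-store")

# Illustrative ARN of a reference imported earlier (used for CRAMs, summary stats, etc.).
reference_arn = (
    "arn:aws:omics:us-east-1:111122223333:"
    "referenceStore/rs-example/reference/ref-example"
)

# Import paired FASTQs from S3 as one read set; an import job can take many sources.
omics.start_read_set_import_job(
    sequenceStoreId=seq_store["id"],
    roleArn="arn:aws:iam::111122223333:role/OmicsImportRole",  # illustrative role
    sources=[
        {
            "sourceFiles": {
                "source1": "s3://my-staging-bucket/fastq/sample42_R1.fastq.gz",
                "source2": "s3://my-staging-bucket/fastq/sample42_R2.fastq.gz",
            },
            "sourceFileType": "FASTQ",
            "subjectId": "subj-001",
            "sampleId": "samp-042",
            "referenceArn": reference_arn,
        }
    ],
)
```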
You can export your data back out to S3 and on to wherever you need it to go. But obviously, if you want to analyze the data, genomics analysis runs on file systems. It still does; it's been that way for a long time. Yes, there are streaming implementations for certain tools, but by and large they still expect POSIX file systems.
So we obviously enable the ability to copy to local file systems. We do this using a transfer manager we've developed, which you install from PyPI, and you can do a multipart download at basically S3-equivalent performance for downloading a single object.
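The transfer manager wraps a part-based download API under the hood. A minimal sketch of that underlying idea, assuming the boto3 `get_read_set_metadata` and `get_read_set` operations and the field names shown here (which are illustrative), looks roughly like this; in practice the transfer manager parallelizes the parts, which is where the S3-like throughput comes from.

```python
import boto3

omics = boto3.client("omics")

store_id, read_set_id = "1234567890", "0987654321"  # illustrative IDs

# Each read set file is stored in parts; look up the part count, then stream each part.
meta = omics.get_read_set_metadata(sequenceStoreId=store_id, id=read_set_id)
total_parts = meta["files"]["source1"]["totalParts"]

with open("sample42_R1.fastq.gz", "wb") as out:
    for part in range(1, total_parts + 1):
        resp = omics.get_read_set(
            sequenceStoreId=store_id, id=read_set_id, file="SOURCE1", partNumber=part
        )
        out.write(resp["payload"].read())
```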
So, you store your references, you store your genomics data, and when you want to process it you can pull it down and do your analysis. If the raw data sits at one side, the other side is analytics: we've done our processing, starting with variant data. How do we make it easier and easier for our customers to get started?
I love the slide Adam put up yesterday where he called ETL this, and I actually heard this from a customer: "a thankless, unsustainable black hole." Anyone who's done this (I've done it in my past life) knows it's not fun to manage these pipelines.
So we take care of all of this for you. We import the data, and it uses AWS best practices for data management and governance, meaning you can govern fine-grained access control through AWS Lake Formation. It scales with you: you import more data, it just scales, it automatically updates, and everything is versioned.
You can see which imports map to which samples were put in, et cetera. It's really designed to completely decouple the imports from your actual analysis and get you focused on those analyses.
What this looks like under the hood is two main constructs, and the process within each is relatively the same. You have variant stores, which, go figure, store variant data. We support VCFs, and we also support genomic VCFs today, so if, for example, reference calls are important to you, the whole "absence of evidence does not equal evidence of absence," we support that.
We also support annotation stores. Specifically, we support TSVs and CSVs, so delimited files; we support annotated VCFs, things that go in the INFO field that may not have corresponding sample information; and we support GFF files. And here's what we'll do on your behalf with these annotation stores, and variant stores for that matter.
So if you want to do vcf normalization or variant normalization, i should say more broadly, we'll do that for you. So doing it with the vcf pretty straightforward, it's done a lot of a lot of ways or by a lot of tools already out there, we'll do this for your delimited files too.
So say you give us a tab-delimited file. Let's take an example: ClinVar releases three different versions, the XML version, the TSV version, and the VCF version. Say you take the TSV version; you tell us which headers correspond to chromosome, position, ref, and alt (or chromosome, start, and end), and we normalize all of that for you to a reference.
The importance of this is that your data is then normalized and you have much better referential integrity, which makes it easier to query and analyze, do joins, compute on the genome, et cetera. Bringing this data in is really simple: it's an API call, StartVariantImportJob or StartAnnotationImportJob. We'll just bring the data in; there's nothing for you to do and no ETL for you to manage. You're not billed for any ETL, just a single low cost per gigabyte.
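A hedged sketch of what that looks like with the boto3 `omics` client (the store names, role, and S3 paths are placeholders, and a TSV annotation store would additionally take options mapping its columns to genomic coordinates, omitted here):

```python
import boto3

omics = boto3.client("omics")
role_arn = "arn:aws:iam::111122223333:role/OmicsImportRole"  # illustrative role
reference = {
    "referenceArn": "arn:aws:omics:us-east-1:111122223333:"
                    "referenceStore/rs-example/reference/ref-example"
}

# Variant store: holds VCF/gVCF data, normalized against the given reference.
omics.create_variant_store(name="myvariantstore", reference=reference)
omics.start_variant_import_job(
    destinationName="myvariantstore",
    roleArn=role_arn,
    items=[{"source": "s3://my-staging-bucket/vcf/sample42.g.vcf.gz"}],
)

# Annotation store: e.g. a ClinVar TSV whose headers map to chromosome/position/ref/alt.
omics.create_annotation_store(name="clinvarstore", storeFormat="TSV", reference=reference)
omics.start_annotation_import_job(
    destinationName="clinvarstore",
    roleArn=role_arn,
    items=[{"source": "s3://my-staging-bucket/annotations/clinvar.tsv.gz"}],
)
```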
The data gets stored, and as I said, there's nothing to manage in the fullness of time. Then, when you want to analyze it, things go into Lake Formation, you can create resource links within Lake Formation, and one of the things customers do a lot is run queries, say with Amazon Athena.
Athena obviously had a new announcement today, which I'm really excited to take a look at: Athena supporting Spark. We'll work with that too. And so when you have all that data together, the question becomes: well, what do we do with it?
I want to take a minute to bring this back up a level and talk through how we envision this service fitting in with other services. Imagine you have an entire healthcare and life sciences data lake with your clinical data, your omics data, and your imaging data, each being imported. One of the things that's really important to us is designing highly modular systems that are, in fact, better together.
So you can use HealthLake Analytics, HealthLake Imaging, and this together. You can also use them separately and bring in other types of data stores, which, for example, we can normalize for you. And then with third-party tools, or AWS services like EMR or Athena or SageMaker or QuickSight, which was in a demo in an earlier talk today, or third-party applications for that matter,
you can get started really quickly without having to manage any of these transformation pipelines yourself. I think that's really powerful, having been on the other side of it and having had to manage all these pipelines myself in my past life. It makes it much easier to do things like say: hey, give me all patients between ages X and Y that have this clinical phenotype, and show me how many have this variant, or a set of variants in this gene, that's been associated with this disease or other phenotype.
You can really quickly start to do these interactive queries and drill down. And the great thing is that because everything is partitioned and uses Apache Parquet and Iceberg on the back end, your queries get optimized. So when you're paying per query, paying per use, that cost decreases as well, because the data stores are already optimized.
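To make that concrete, here's an illustrative Athena-style query submitted through boto3 (the databases, tables, and column names are hypothetical; the real column layouts come from the variant and annotation store schemas and from your own phenotype tables):

```python
import boto3

athena = boto3.client("athena")

# Hypothetical tables: "myvariantstore" exposed via a Lake Formation resource link,
# plus a "patient_phenotypes" table you maintain; column names are illustrative only.
query = """
SELECT p.patient_id, v.contigname, v.start, v.referenceallele, v.alternatealleles
FROM omicsdb.myvariantstore AS v
JOIN clinicaldb.patient_phenotypes AS p
  ON v.sampleid = p.sample_id
WHERE p.age BETWEEN 40 AND 60
  AND p.phenotype = 'cardiomyopathy'
  AND v.contigname = 'chr1'
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "omicsdb"},              # illustrative database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```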
That sort of concludes the overall data store story: we have the raw storage, the raw data, and we have the analytics-ready data. But when I was finishing up my PhD and starting to look at potentially doing a postdoc, one thing that always stuck with me was my potential advisor talking about computing on the genome.
That's incredibly important, not just for secondary analysis but across the board: tertiary analysis, functional annotation, other types of things. So it's really important to us to focus not only on analytics and raw data storage, but on workflows. How do we make this easier for customers, and make it easier and easier for you to compute on the genome or any other omics modality?
When we talk to customers, what we found is that a lot of them, not all, use domain-specific languages. The majority seem to use Nextflow and WDL today, which is what we support today. And if they were doing this on AWS, you'd spin up a head node, using what was often called a "Batch-squared" architecture, where the head node then spins up and runs everything else.
It's really efficient and does a good job, but again, it's stuff to manage. You have to provision your file system, you have to pick all of that, and that's all time spent focusing on infrastructure rather than science.
So we wanted to flip that and provide a fully managed experience, so you just have to focus on what defines your science and nothing else: the science, your tools, your workflows, your data. As I was saying earlier, we want to make sure you don't have to keep anything up and running; if you're not doing analysis, you shouldn't be paying for it.
So it's purely pay-as-you-go pricing based on the tasks you need to execute, which you specify, usually in your workflow scripts, and you get a predictable and visible cost per workflow. This was something that actually took me pleasantly by surprise as we were talking to customers: one thing we heard over and over again is "I do this a lot, but I'm not totally sure how much it costs." With our service it's actually very easy. Everything is broken down at the task level, which gets bundled up to the workflow level, so you can very easily see: these tasks cost this much, and bundled up, this workflow costs this much.
If you're, say, in the clinical space, that makes it really easy to think about cost of goods sold. If you're in the academic medical center space, it makes it easier to budget for grants and do those types of things. Predictability and visibility are really important to us, and we wanted to make sure we could offer that.
Under the hood, this is what it looks like. There's a lot here; some of this is what you, as a user, would do, and some of it is what we take care of under the hood. But let's start at the beginning.
The first thing we do is define our analysis. We have our tools, our Docker images; we push those, say, to Amazon ECR and apply the appropriate repository policies. Then we have our workflow scripts that reference those tools and the dependencies between them, so we can do our analysis over and over again. These are WDL and Nextflow today: WDL is the Workflow Description Language, and Nextflow is both the engine and the language name. You zip those up, so you can have nested files as well, and you call the CreateWorkflow API. What happens under the hood is that you get back an ID and a resource ARN that will never exist again, so you implicitly get versioning. It's a resource: you can permission off it, you can tag it. What's nice about that is if you have a clinical workflow, or you just want to run a reproducible analysis on a given version, something that's also very important, you create a workflow, you record that version, and then all you have to do is call StartRun, StartRun, and StartRun. It's all going to be the same, right? It's the same analysis. It's basically a workflow registry that you're referencing.
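A hedged sketch of that registration step (boto3 `omics` client; the workflow name, zip path, and parameter names are illustrative):

```python
import boto3

omics = boto3.client("omics")

# The zip can contain a main workflow plus nested/imported files.
with open("main-workflow.zip", "rb") as f:
    definition_zip = f.read()

workflow = omics.create_workflow(
    name="germline-secondary-analysis",      # illustrative name
    engine="WDL",                            # or "NEXTFLOW"
    definitionZip=definition_zip,
    parameterTemplate={
        "sample_fastq_1": {"description": "R1 FASTQ"},
        "sample_fastq_2": {"description": "R2 FASTQ"},
    },
)

# The returned ID/ARN is immutable, so recording it pins an exact workflow version.
print(workflow["id"], workflow["arn"])
```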
Now, you have to give us a couple of things. You need to define an execution role: if we're processing data, we don't have unfettered access to customer data, so you grant access via IAM, the same standard best practices you'd see if you're using ECS or AWS Batch today. You define your inputs, we get all that set up, and then you call StartRun and StartRun and start doing your analysis. What happens under the hood is that all these tasks run; we manage the provisioning, the resources, the concurrency, the scaling, everything like that. You can define priorities, so if you have a job that needs to go quicker for some reason, you can submit it with a higher priority and its tasks will basically jump to the front of the line when they're ready. And we run all of this on a high-performance shared file system, which gets you performance and scalability.
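And the corresponding run, sketched with the same caveats (the role, output bucket, and parameter values are placeholders):

```python
import boto3

omics = boto3.client("omics")

run = omics.start_run(
    workflowId="1234567",                                        # from create_workflow
    roleArn="arn:aws:iam::111122223333:role/OmicsWorkflowRole",  # grants data access
    name="sample42-germline",
    priority=100,                       # higher-priority runs get scheduled first
    parameters={
        "sample_fastq_1": "s3://my-staging-bucket/fastq/sample42_R1.fastq.gz",
        "sample_fastq_2": "s3://my-staging-bucket/fastq/sample42_R2.fastq.gz",
    },
    outputUri="s3://my-results-bucket/runs/",
)

# Task-level detail (the basis for per-task cost attribution) is visible per run.
tasks = omics.list_run_tasks(id=run["id"])
```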
One other thing on this slide is the idea of run groups. We want concurrency, we want our customers to be able to do the science they need, but we also recognize that there are governance concerns in play. We want to be able to say something to the effect of: these users can only run in this run group, with this amount of capacity, to prevent runaway billing and for cost control. If you're an ISV, or you have some type of multi-tenancy within your system, even within one organization, it allows you to partition a lot of your AWS account limits so that you can satisfy your users rather than having to stamp out many different AWS accounts to do this. Then you run your workflows and the outputs get sent back out. We support reading directly from our storage service, from sequence stores, as well as from S3, of course, and your outputs can of course go back out.
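Run groups themselves are just another resource; a minimal sketch, with illustrative limit values:

```python
import boto3

omics = boto3.client("omics")

# A run group caps concurrent usage for a team or tenant, e.g. to prevent runaway cost.
run_group = omics.create_run_group(
    name="research-team-a",
    maxRuns=10,          # at most 10 concurrent runs in this group
    maxCpus=500,         # cap on total vCPUs in flight
    maxDuration=2880,    # illustrative cap on run duration
)

# Runs submitted with runGroupId=run_group["id"] are then governed by these limits,
# and IAM can scope users to starting runs only within this group.
```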
So with that, let's think about how this works. We had that high-level "how it works" diagram at the beginning; let's dive a level deeper. We start by storing raw sequencing data: that can be references in FASTA format that go into reference stores, or these large genomic reads, FASTQs, BAMs, and CRAMs, that can be anywhere from 50 to several hundred gigabytes, or gigabases, in size. You don't have to worry about the compression ratios or anything like that; we take care of all of that under the hood, so you can just focus on the gigabases that define your experiment. You can run analysis workflows off of it: you create your workflow and run it again and again, which gives you some data lineage niceness, because these workflows are not overwritten. So if you're referencing a workflow's ARN, it's always going to be the same thing.
Once that's done, you can store the output, you can transform that data, bring it together with other types of clinical or phenotypic data, and do a lot of downstream analysis. That, fundamentally, is what we're trying to get at today. It's really just making it easier and easier, ease of use is really important to us, for you to just get started and go, so you can focus more time on science, precision medicine, or whatever is important to your organization.
So I'm going to pivot now and talk a little bit about use cases and customers. We started with benefits and talked a little bit about the service, but ultimately (I've been here six and a half years) the idea that we build our roadmap and our services based on customer feedback is incredibly true. So I want to focus on some of the great work we've seen from customers today and some of the use cases we think this service is applicable for.
Before we go to specific customers, let's talk about the top use cases that we see here.
The first one is obviously population-scale sequencing. We see these initiatives across the board, and we want to handle that amount of storage and reduce cost for these customers as they generate petabytes upon petabytes of raw genomics data. The second is simplifying and scaling clinical genomics. Obviously there's a lot that goes into that under the hood; there are quality-management concerns that a lot of those customers care about, and we want to make it really easy to build reproducible and traceable workflows, because that makes it much easier to say "this thing ran this way at this time, I know it, I versioned it, and I can reference all of that." The third one, where we're seeing a lot of growth from talking to customers, is clinical trials and R&D from a pharma standpoint. Clinical trials are a really interesting one as we look at things like patient stratification in later-stage trials, almost treating genomics or other types of omics as companion diagnostics as we develop new therapeutics. And then research and innovation, whether it's in an academic medical center, in pharma, in startups, agtech, et cetera. We really want to make it easy, whether it's a startup whose first couple of hires are often not engineers getting started with the service, or the largest enterprises that just want to focus on speeding up target identification and so on; we want to make it really, really easy.
Some of our customers are already starting to focus on just that. Jeff Pennington at Children's Hospital of Philadelphia: multimodal analytics is incredibly important to them. CHOP is an incredible organization across the board, and they're really thinking about how to bring together the omics data from patients in their health system along with their clinical data, to identify novel treatments for diseases or diagnose new diseases. It really starts all the way down at the DNA and goes all the way up to the clinical indication.
G42 Healthcare is another organization that is using our service to accelerate growth. One of the things we talk to customers a lot about is digital transformation: how do we make sure they can go faster and focus on the things they care about? That's really the focus here: abstracting away a lot of the fundamental challenges of building cloud-based omics pipelines and analysis, and allowing them to focus more and more on delivering value to their customers.
C2i Genomics is a phenomenal startup; I had the pleasure of coming from the startup org before this role. What they found is that it makes it really easy for their researchers and data scientists to just get up and running. In our beta, the instructions were simply handed off, and all of a sudden there were a bunch of people doing really cool downstream analysis with the service. The big thing here, as Uri was alluding to, is that our hope is it reduces the amount of time it takes you, whether that's to get to market or to run that experiment, so you can do more experiments in a shorter amount of time. That's really what we're trying to focus on. And it's not only about our customers.
One of the things that's also really important to us across AWS (Ruba had our partner keynote this afternoon) is our partners. We're really pleased to be working with a wide set of AWS partners who have already been using the service: BioTeam, Cloud303, Diamond Age, Lifebit, Loka, Ovation, PTP, Sentieon, and 10x are some of the partners we've been working with today.
And with that, I'll be around on the side; we can obviously chat a little bit after. To get started, you can go to our developer guide and our website. Feel free to take a picture of that, or if you don't want to, it's aws.amazon.com/omics, pretty easy to type out. I'm very excited that every time you use our CLI you get to type "omics." With that, I just want to say thank you. Everything we do here is really driven by our customers, and you and others are really what motivates us. I've gotten emotional a lot this week, having worked on building this service, but it's really about all of us collectively trying to improve health, for us and for the people who aren't here. I'm personally incredibly appreciative of all of you and all the great work you're doing, and I'll be on the side, happy to chat. If you need to reach me, my email is here; with the exception of this week, I'm very responsive on email, so please shoot me an email and I'm happy to personally address any questions you might have. Thank you very much and have a great evening.