Innovating for health equity to drive social impact with generative AI

Good afternoon, everyone. My name is Danielle Morris and I am the Global Health Equity Lead for AWS's Social Responsibility and Impact team. I am really excited about what will surely be a dynamic conversation. We're going to have a bit of a mix of presentation and panel discussion, and we will make sure to leave time for Q&A.

But one of the reasons that I'm really excited about this conversation is that we're able to talk about how innovative, cutting-edge technology such as generative AI is being applied to real-world situations to improve the health, well-being, and quality of life of populations around the world.

So I am being joined by two panelists who will introduce themselves with their presentations. But please, as you're listening to the conversation today, I have one ask of you and that is to really think about how you can take learnings from today's conversation back into your everyday lives and your organizations to drive social impact with generative AI and technology.

So without further ado, I'll pass it over.

Hi everyone, good afternoon, and thanks for being here. My name is Kingsley Ndoh. I'm the founder and CEO of Hurone AI, a medtech startup that uses generative AI and other AI applications to make precision care accessible and available for everyone around the world.

We are mainly focused on underrepresented populations, and we are also focused on improving drug safety in these populations. Today, we have only one oncologist to 3,000 cancer patients in Africa. Think about that for a second, compared to the United States, where we have one oncologist to 300 cancer patients. This means that patients have to travel long distances to see an oncologist; when they have side effects, those side effects are very difficult to manage, and doctors are overwhelmed with the patient load.

So as a startup, we think about how we can use culturally sensitive artificial intelligence to ensure that physicians can do their work more efficiently and that patients have access to remote care when they need it. The other problem we are solving is the gap in clinical trials. Across all the FDA-approved cancer medications of the past 20 years, less than 3% of trial participants were of African descent and less than 6% of Hispanic descent. This means that a lot of the efficacy and safety data we have today is not truly representative of the population.

And so at our company, we're thinking about how we can innovate: how we can produce clinical trial readiness reports at our partner sites, and how we can leverage our software and generative AI tools to do things like cohort discovery and ensure more recruitment of underrepresented populations around the world.

This is a visual of a cancer patient's journey, and it is by no means exhaustive. You can see that there are many pain points and many things to think about when designing AI solutions to make this whole process efficient and ultimately improve patient outcomes. At this point, our company is focused on treatment, survivorship planning, and side effect management.

And so we designed a solution called Gua. Our initial focus is Africa and Latin America, and the first place we tested our technology was at the Rwandan cancer center. Rwanda is a country of 13 million people with only 13 oncologists, so it was a very good place to launch; it is where other Silicon Valley companies, like Zipline, launched. We're based in Seattle, but I'm originally from Nigeria and have worked in global oncology in Seattle for over a decade, and I have leveraged partnerships at sites around the world to build this company. Our solution is a two-way communications platform.

We have the physician-facing side and the patient-facing side. When patients are diagnosed with cancer, we have an API that can pull data from EMRs, or a very easy-to-use system where you can record things like demographic and clinical data. Then, through predictive analytics, patients get questions about their side effects; they can answer yes or no and then score the side effects.

We have an intuitive dashboard where the oncology care team can see how each of their patients is doing in real time. With a click of a button, they can generate personalized messages to these patients that incorporate their demographic data, their level of literacy, and the kind of chemotherapy they are taking to address the side effect.

They can also, with a click of a button, check whether their treatment plans are in line with the approved guidelines, to ensure they are providing the right treatment. Patients, on the other hand, when they report side effects that are potential emergencies, are flagged by the system immediately to call the emergency number or go to the hospital.

And so patients can get remote care this way. Our system is designed so that even if patients do not understand English, the messages they get will be in their local language, in this case Kinyarwanda. Patients were able to interact with the platform and manage their side effects and concerns as the need arose.
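As a rough sketch of the kind of triage logic described above: patients score their side effects, and anything crossing a threshold is flagged as a potential emergency. The side-effect names and thresholds below are illustrative assumptions, not Hurone AI's actual rules:

```python
# Illustrative triage sketch: score patient-reported side effects and flag
# potential emergencies. Side effects and thresholds are assumptions.
EMERGENCY_RULES = {
    "fever": 3,     # severity score at or above which we flag an emergency
    "vomiting": 4,
    "bleeding": 1,  # any reported bleeding is treated as urgent
}

def triage(reports):
    """reports: dict of side_effect -> severity score (0-5).
    Returns (emergencies, routine) lists of side-effect names."""
    emergencies, routine = [], []
    for effect, score in reports.items():
        threshold = EMERGENCY_RULES.get(effect)
        if threshold is not None and score >= threshold:
            emergencies.append(effect)   # prompt patient to call / go in
        elif score > 0:
            routine.append(effect)       # surface on the care-team dashboard
    return emergencies, routine

emergencies, routine = triage({"fever": 4, "nausea": 2, "bleeding": 0})
```

Here the fever report crosses its threshold and is flagged, while the mild nausea is routed to the dashboard for routine follow-up.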

This is one of the major things we're looking at as a startup: the concept of algorithmic bias. I often say that if a patient, let's say a 28-year-old young man, presents to a family physician here in Las Vegas with fever, night sweats, and some weight loss, the first thing the physician will be thinking about is Hodgkin's lymphoma, which is a kind of cancer. But if you go to a place like Nigeria and present with those same symptoms, the first thing the doctor is going to be thinking about is malaria.

So if you design algorithms and train them on data from here, you can't just export that technology. We have to build culturally sensitive technology that is responsive to the algorithms, the data, the epidemiology, and the clinical context of those regions. And I like to show this diagram. This is Kaposi sarcoma; on your right is Caucasian or lighter skin.

It's a kind of skin cancer. If you train an algorithm to identify this cancer using mostly Caucasian skin, it's going to misidentify Kaposi sarcoma in black skin. That's why data representation and diversity are very important to us.

And because this is a relatively new technology in the regions that we serve, we have to ensure that our design is multidisciplinary. We ask a lot of questions and co-build with oncologists to ensure that we are really addressing pain points in the healthcare system and for the patients.

We also try to explain to physicians how the algorithms reach their decisions, so that they, and their patients, can have more trust in this technology. This is just the beginning, and we have a long road ahead of us.

Other things we do on our platform: a doctor or a nurse can call a patient directly from our software with the click of a button. Those conversations are recorded and saved, and we generate clinical summaries. We've been using the foundation models on Amazon Bedrock, which really makes generative AI very transparent; it's not just a black box.

We can fine-tune to ensure that we are responsive to the regions we serve. This is just three months of data from our first pilot. It took about seven months for us to get IRB approval, because we were the first organization to have submitted any research involving artificial intelligence to the Rwandan government.

So it took a lot of back and forth for them to really grasp the concept, but we had very good engagement. As you can see, patient engagement was at least 92%, we had about 480 patient interventions, and nine of those were automatically detected emergencies.

There has been a lot of excitement about our use of Bedrock and of this technology in patient engagement, symptom management, and side effect management. We recently expanded to Nigeria and Kenya, we're almost in Brazil at the third-largest cancer center there, and we're really excited for our next phase of growth.

And I don't do this alone. There's an excellent team behind us, a lean team as you can see, and we're growing; we work every day to ensure that we can make cancer care equity a tangible reality in our time.

Thank you so much.

Thank you, Dr. Kingsley, for sharing that inspiring story of how generative AI is transforming access to cancer care.

I'm a principal solutions architect with AWS for nonprofits. Thank you for joining us. In my part of the presentation, I'm going to go through some of the use cases you can be thinking about. What Dr. Kingsley described is basically a clinical virtual assistant, but there are really a lot of different use cases you can build using generative AI technology.

So I'm going to take you through some of those examples and then give you some key considerations for building your own generative AI applications. Then I'll leave you with something actionable where you can go deploy an end-to-end generative AI application in an AWS account in just a few minutes and then get hands on with it.

So let's get into it.

Really, your generative AI journey starts with identifying your use case. Then, once you identify that use case, you work backwards from it. So think about what kind of application you want to build - is it something that's external facing where you have your end users accessing it? Or is it something that's maybe facing your internal employees - something you want to use to boost your employee productivity or even make your organizational processes more efficient? So think about that.

Just to get you started, here are some examples. This is not a comprehensive list, just a really small subset of what you can do with it:

  • For end user experience, you can pretty much build any virtual assistant that you can think of, given the capabilities and choices of foundation models today. If you can imagine it, you can build it - a legal assistant, research assistant, even your own personal concierge.

  • You can use the capabilities of foundation models to create your own personalized content.

  • If you're looking to improve programmer productivity, you could use foundation models to generate code or give code suggestions - think of that as a coding companion.

  • When it comes to improving organizational processes, you could look at intelligent document processing or having conversations with your documents.

It's really unlimited at this point - if you can imagine it, you can build it.

So no matter your use case, there are some key considerations you have to take into account before you even start building. These are really centered around selecting a model that serves your use case, because there's no one size fits all. There are hundreds of foundation models, proprietary or publicly available, different sizes and context windows. So model selection is critical.

I'll give you some pointers on how to do this:

  • First, do your research - look at model cards. Model cards are transparency resources that tell you the use cases a specific model supports, limitations, and sometimes information on training data for public models.

  • Second, there is publicly available benchmarking information, like Stanford's HELM (Holistic Evaluation of Language Models), which provides benchmarks across scenarios (tasks like text generation, QA, and information extraction) using specific datasets and metrics beyond just accuracy: things like fairness, bias, toxicity, calibration, robustness, and efficiency. So HELM is one resource, but on its own it is not sufficient.

  • If you really want to pick a model and put it into production, you need a rigorous evaluation process - coming up with datasets specific to your use case/industry/domain, defining metrics, benchmarking models, and automating the process since you'll use it many times.

So that's the crux of model selection - research, benchmarking, and rigorous evaluation.
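A minimal sketch of what an automated evaluation harness like this might look like. The candidate "models" below are stand-in callables and the dataset is a toy; a real harness would call actual model endpoints and use task-specific metrics beyond exact-match accuracy:

```python
# Hypothetical evaluation harness: run each candidate model over a
# use-case-specific dataset and rank by a simple accuracy metric.
def evaluate(model_fn, dataset):
    """model_fn: callable prompt -> answer. dataset: list of (prompt, expected)."""
    correct = sum(1 for prompt, expected in dataset if model_fn(prompt) == expected)
    return correct / len(dataset)

def benchmark(models, dataset):
    """models: dict of name -> callable. Returns (name, score) pairs, best first."""
    scores = {name: evaluate(fn, dataset) for name, fn in models.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Stub "models" standing in for real foundation-model endpoints:
dataset = [("2+2?", "4"), ("capital of Rwanda?", "Kigali")]
models = {
    "model-a": lambda p: {"2+2?": "4", "capital of Rwanda?": "Kigali"}.get(p, ""),
    "model-b": lambda p: "4",  # always answers "4", so it only gets one right
}
ranking = benchmark(models, dataset)
```

Because you will rerun this every time a new model or model version appears, wiring it into an automated pipeline pays off quickly.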

Then, depending on your use case and model, some customization is almost always needed. I'll go into that shortly.

With generative AI, data is your differentiator, so keeping it private and secure is paramount.

The last consideration is infrastructure - self-managed or fully managed options, which drives costs.

Diving deeper into model selection, there are three key dimensions:

  • Output quality - not just accuracy, but fairness, bias, toxicity, robustness, and appropriateness.

  • Cost - know your users and traffic patterns to estimate costs of inferences.

  • Latency - low latency for customer-facing apps, more tolerance internally.

You need to strike a balance across these dimensions for a holistic evaluation.
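One simple way to strike that balance is a weighted score across the three dimensions. The weights and candidate numbers below are purely illustrative assumptions; a customer-facing app, for instance, might weight latency much more heavily:

```python
# Illustrative weighted scoring across quality, cost, and latency.
# Cost and latency are "lower is better", so they are inverted after
# normalizing each dimension to the 0-1 range across candidates.
def balanced_score(candidates, weights):
    """candidates: dict of name -> {"quality": .., "cost": .., "latency": ..}
    weights: dict of dimension -> weight (should sum to 1.0)."""
    def norm(dim):
        vals = [c[dim] for c in candidates.values()]
        lo, hi = min(vals), max(vals)
        return {n: (c[dim] - lo) / ((hi - lo) or 1) for n, c in candidates.items()}
    q, cost, lat = norm("quality"), norm("cost"), norm("latency")
    return {
        n: weights["quality"] * q[n]
           + weights["cost"] * (1 - cost[n])
           + weights["latency"] * (1 - lat[n])
        for n in candidates
    }

scores = balanced_score(
    {"big-model": {"quality": 0.95, "cost": 10.0, "latency": 900},
     "small-model": {"quality": 0.80, "cost": 1.0, "latency": 150}},
    weights={"quality": 0.6, "cost": 0.2, "latency": 0.2},
)
```

With quality weighted at 0.6, the larger model wins here; shift the weights toward cost and latency and the smaller model comes out ahead.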

On customization, there are different levels based on your model:

  • Prompt engineering - being specific in prompts, providing context and examples via few-shot learning or retrieval augmented generation.

  • Fine-tuning - bringing in labeled data to customize to a specific task.

  • Continued pre-training - adapting to an industry with unlabeled data.

The approach depends on your use case and model quality.
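At the lightest level, prompt engineering with few-shot examples and retrieved context can be as simple as string assembly. The template below is a generic sketch, not a format required by any particular foundation model:

```python
# Generic sketch of a few-shot, retrieval-augmented prompt template.
def build_prompt(question, examples, retrieved_docs):
    """examples: list of (question, answer) few-shot pairs.
    retrieved_docs: passages pulled from your own trusted corpus."""
    parts = ["Answer using only the context below.\n"]
    parts += [f"Context: {doc}" for doc in retrieved_docs]      # RAG context
    parts += [f"Q: {q}\nA: {a}" for q, a in examples]           # few-shot pairs
    parts.append(f"Q: {question}\nA:")                          # the real query
    return "\n\n".join(parts)

prompt = build_prompt(
    "What dose adjustment applies for grade 2 nausea?",
    examples=[("What is a common side effect of chemotherapy?", "Nausea.")],
    retrieved_docs=["Local protocol: reduce dose 25% for persistent grade 2 nausea."],
)
```

The retrieved passage grounds the answer in your own data, while the few-shot pair shows the model the expected answer style; fine-tuning and continued pre-training only come into play when this is not enough.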

So with that, I promised something actionable - an end-to-end generative AI app you can deploy in AWS with one click. It's an AWS Solutions Implementation that gives you a web portal to interact with models from Amazon Bedrock or your own. It has retrieval augmented generation implemented.

You can experiment with different models and select which to use as you converse with the chatbot. So it brings together model selection and customization.

So, something for you to get hands-on with and start building. That's the QR code where you can find it. With that, happy building, and over to you, Danielle.

Thank you so much for sharing your insights.

I'd now love to have a conversation layering the real-world application with the back end of the technology. So Kingsley, I'm going to start with you. Can you explain the power and potential of generative AI to address social impact, particularly in low- and middle-income countries? And what inspired Hurone AI to leverage generative AI as a key solution?

Yeah. Many years ago, a very close aunt of mine passed on from colorectal cancer, and there were just a lot of gaps in her care. At that point, I never envisioned that we would have such powerful acceleration in AI research to where we are today with generative AI. That's what has really inspired my career in global oncology: to improve systems, especially in low- and middle-income countries.

Now comes the advent of generative AI, and I'm thinking about all the inefficiencies, especially given the numbers I just presented: one oncologist to 3,000 cancer patients. Think about how a physician can keep all their cognitive abilities in place with that patient overload. When we were doing focus groups to figure out the biggest pain points, oncologists said that it's just so difficult to follow up with their patients; patients inevitably have side effects and problems during treatment, and they need to be followed and have those concerns addressed adequately.

And so this was when the spark of generative AI came about. But obviously there were concerns: how would this technology be fine-tuned to the use case, and how would the oncologists accept it as a technology to augment their care? When we built and started beta testing in Rwanda, we were getting a lot of feedback and iterating the platform to ensure we incorporated both the patient perspective and the oncologist perspective. Overall, we wanted a system that helps deliver efficient care and helps patients address their concerns even remotely. With generative AI and all the fine-tuning you can do, especially incorporating local treatment protocols, language, and clinical context, they feel that they are not alone. There's still a lot of work to be done, because every day we keep gathering feedback on how and where we can improve, to make sure our system is powerful. I strongly believe that generative AI is not a magic bullet, but it's definitely pivotal for amplifying the work of healthcare workers and relieving some of the cognitive burden they carry in taking care of their patients.

Thank you. I want to build on Kingsley's response. Could you provide some specific examples of how generative AI is helping nonprofits and other mission-driven organizations achieve their social impact goals, and what unique advantages generative AI offers in doing that?

Yeah, that's a great question, Danielle. I work with nonprofits across different verticals, and they all have different use cases, but the common thread is that they're all resource constrained, budget constrained, things like that. So our nonprofits love easy buttons, and that's exactly what Amazon Bedrock is. Let me explain the reasons behind that.

If you look at Amazon Bedrock, it serves as a hub for high-performing foundation models from leading AI companies, as well as Amazon's own models and now publicly available foundation models. Just that is democratizing generative AI: nonprofits can very quickly build applications by consuming these high-performing models through a simple API, and then they can bring their specific data.

I'll give you an example of a conversation I had with a nonprofit a couple of months ago. A lot of things change (this is a very dynamic environment with a lot of innovation, so even two months is a long time), but this was a nonprofit committed to improving social policy for education and work.

They work on education and employment for low-income populations, and they're definitely resource constrained. What they were trying to do was create a kind of social policy assistant to respond to RFPs and RFIs, because they didn't really have in-house staff who understood all the nuances of these policies. But they had a lot of documents and data in different places, and they wanted to build an application that could access this corpus of trusted data: an assistant they could quickly provide as a tool for their staff to ask questions and respond to those RFPs.

Thank you. Kingsley, I want to dive a bit deeper into something you mentioned in your presentation: the importance of being culturally responsive, or culturally sensitive. Can you give some examples of how you're leveraging generative AI, or how you're thinking about it, to make your solution more culturally responsive, particularly as you expand into diverse regions with different languages and cultural contexts?

Yeah. In building, we first want to have a conversation with all the key stakeholders, because one can read a peer-reviewed journal to see what the problems are at a global scale, but when you talk to people on the ground, when you talk to patients, the government, and oncologists, it helps you drill down into the biggest pain points and how they want the solutions designed. That's how we approached our product development.

When we first launched in Rwanda, I assumed that, like in Nigeria, around 80% of the population had smartphones. But when I went there, the number was just 30%: only 30% of the population had smartphones, and 70% used regular mobile phones. So for the patient side we had to innovate around USSD and SMS text messaging, because we didn't want to build a sleek app and leave a lot of people behind. We partnered with a big telecom company for bulk SMS, so patients could interact with our platform just over SMS, and we had the web platform for the doctors. That's one way.

Secondly, we didn't just assume that because Rwanda's official languages include English and French, patients speak them; lots of patients speak neither. They speak another language, Kinyarwanda, which is actually spoken and understood by 98% of the population. So we had to ensure that patients had the option to receive care in that language, and everything gets automatically translated to English when it comes back to the web platform the doctors see.
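A simplified sketch of that two-way language routing follows. The `translate` function here is a placeholder stub (it just tags text with the target language code); a real system would call a translation model or service, and the language codes are illustrative:

```python
# Hypothetical two-way message routing: patients receive messages in their
# preferred language; replies reach the dashboard in English.
def translate(text, src, dst):
    """Stub standing in for a real translation model/service call."""
    if src == dst:
        return text
    return f"[{dst}] {text}"  # placeholder for the translated text

def to_patient(message_en, patient_lang):
    """Outbound: English care-team message rendered in the patient's language."""
    return translate(message_en, "en", patient_lang)

def to_dashboard(reply, patient_lang):
    """Inbound: patient reply translated to English for the clinician view."""
    return translate(reply, patient_lang, "en")

out = to_patient("How severe is your nausea?", "rw")  # sent in Kinyarwanda
back = to_dashboard("patient reply", "rw")            # shown in English
```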

So we engage, and the engagement is continuous. It's not just prior to building; it's: now that you have this product in your hands and it's providing value, how can we provide more value? What is working best, and what is not? And we keep iterating our platform to do that.

Another very important aspect I'd like to point out is treatment protocols. In cancer care, for example, a major reference point that oncologists use to deliver care is the National Comprehensive Cancer Network (NCCN) protocols. These are developed here in the US by a group of experts that meets every year to review all the evidence and recommend the best care for particular cancers.

For Rwanda, NCCN designed what they call stratified guidelines. A guideline for, say, breast cancer might not really apply in another country, because some of those drugs or some imaging might not be available there. The stratified guidelines drill down to a country's resources, so clinicians can follow the next-best evidence when the first option is not available.

We have to incorporate that too, so that oncologists aren't getting treatment guidance that is highfalutin and not actionable in their own clinical context. These are all the things we think about in building these technologies. The goal is not a feel-good goal; it's to ensure you're really providing value to the healthcare system as a whole, but more importantly to the patients and the oncologists. And the only way to know that, and the only way to build responsively, is by always engaging the users and stakeholders in that ecosystem.
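The stratified-guideline idea can be sketched as a resource-aware fallback: walk an evidence-ranked list of options and pick the first one the local site can actually deliver. The option names and availability data below are purely illustrative assumptions, not real NCCN content:

```python
# Illustrative resource-aware guideline fallback (option names are made up).
GUIDELINE = ["option-a", "option-b", "option-c"]  # best evidence first

def stratified_choice(guideline, available):
    """Return the highest-evidence option the local site can deliver."""
    for option in guideline:
        if option in available:
            return option
    return None  # no listed option is available; escalate to the care team

choice = stratified_choice(GUIDELINE, available={"option-b", "option-c"})
```

Here "option-a" is unavailable at the local site, so the next-best "option-b" is suggested, keeping the guidance actionable in that clinical context.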

Thank you. On the other side of ensuring that your solution is culturally responsive, you touched a bit on eliminating bias. Can you talk about some best practices, or the recommendations you have, for nonprofits and social-impact-driven organizations to eliminate bias so they can get to that culturally responsive solution?

Yeah. That's a multidimensional question, I would say, because eliminating bias is a challenge in and of itself. Just by definition, a foundation model, especially the first suite of foundation models, has been trained on internet-scale data, and these are all unlabeled datasets; they come with all the inherent biases that are out there.

Unless you take additional measures to detect and filter out inappropriate content, toxic content, and biases in the data before you use it to train a foundation model, it is there. That's why I was talking earlier about using a model card, which is a transparency resource.

For a publicly available model, you will typically find the model card on Hugging Face. It will tell you what data the model was trained on, some of its limitations, and some of its inherent biases.

But the real question is: how do you select a model that doesn't have that bias? This is where you have to choose responsibly, because if you want your end users to trust the applications you build, it's your responsibility to go through the rigorous model selection and evaluation process I was talking about, so that you choose not just a high-performing model but a safe model to deploy as part of your applications.

So the most important thing is model selection: something that doesn't come with that inherent bias.

And even if it does pick up bias during training, there are guardrails to put in place: guardrails for detecting inappropriate content or inappropriate requests on the input, which is the prompt, and also guardrails on the output, so that for responses coming back from the model you can detect and filter out toxic content.

So there's eliminating bias in the data that goes into training, but also having guardrails in place on the input and the output to make sure it doesn't affect your end users.
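A toy sketch of those input and output guardrails is a pair of filters wrapped around the model call. The deny list and the `model` callable here are illustrative stand-ins; managed offerings such as Guardrails for Amazon Bedrock provide far richer, configurable policies than a keyword list:

```python
# Toy guardrail wrapper: screen the prompt before the model sees it and
# screen the response before the user sees it. The deny list is illustrative.
DENY_TERMS = {"toxic-term", "disallowed-request"}

def violates(text):
    return any(term in text.lower() for term in DENY_TERMS)

def guarded_invoke(model, prompt, refusal="Sorry, I can't help with that."):
    if violates(prompt):       # input guardrail on the prompt
        return refusal
    response = model(prompt)   # a real call would hit a model endpoint
    if violates(response):     # output guardrail on the response
        return refusal
    return response

reply = guarded_invoke(lambda p: "a helpful answer", "summarize my care plan")
```

The same two checkpoints (before and after the model call) are where production guardrail services attach their content filters.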

I'd like to pivot slightly, Kingsley. You've talked about working in Rwanda and about scaling to other countries, but solutions don't work in a vacuum. Rolling out a solution really requires collaboration with governments and healthcare workers. How do partnerships and collaborations play a role in rolling out AI-driven solutions?

Yeah. Our company was launched in November 2021, so we're just at the two-year mark, and at this stage, where we're about to go to market, collaborations play a very key role. One of the things we are doing is helping pharma companies diversify their data, especially for phase three trials. The FDA mandated early this year that pharma companies will have to meet set levels of diversity or else face drug rejections, and we see these populations as one way they can diversify that data. So we're working actively to collaborate with pharma to fund some of our trials and pilots, so that we can keep our operations sustainable initially.

We partnered with a large kidney center in Nigeria (there are a lot of urological cancers there), and it's a very well-funded hospital. Because we're working with them to solve a very big pain point, they're going to be injecting funding into our company so they can pilot our services and help us scale beyond Rwanda.

For Kenya, we signed an agreement with the government of Kisumu, the third-largest county in Kenya. We saw an opportunity: the governor of that county is a cancer survivor, and we saw that he was very interested in technology. I met him at a conference at Harvard (he happens to be the father of Lupita Nyong'o, the famous actor), and he invited us, saying this technology would be very interesting for their county. That was how we made market entry. But more importantly, we really want to be successful in these initial places so that we can use that success to scale.

We also look at what a government's overall national cancer control strategy is. For example, in Nigeria, even though we've not officially partnered with the government, the minister of communications and digital economy launched an AI strategy for the country, and the new minister of health has major strategies to digitize healthcare, because they see the value of really understanding the healthcare data of 200 million Nigerians, the biggest black nation in the world.

In digitizing healthcare, they've already indicated interest in partnering with private companies from anywhere around the world to achieve their goal of making healthcare more efficient. We don't want to just go into a place that doesn't even have data or technology as a strategy and expect to build it in ourselves; we want to ensure that we fit into what the government sees as its roadmap for using technology to improve healthcare in that region.

Building on that same question: organizations like Hurone are working in resource-constrained countries and contexts, where governments, for example, may not be as tech savvy or as comfortable using technology, and specifically generative AI. What recommendations do you have for innovators in this room who are thinking about working in these kinds of contexts, and how can they effectively engage governments to adopt new technology?

Yeah, my biggest advice there is: go up the stack. What I mean by this is, if you heard Adam's keynote yesterday, he was talking about the generative AI stack.

Think of it as three layers. At the very top, you have applications powered by AI: things like Amazon Q, the business assistant. You also have Amazon Q, and generative AI capabilities in general, incorporated inside contact center and business intelligence tools, and as part of other AI services as well, such as Rekognition, where you can do custom content moderation.

So these AI services, which already have generative AI infused in them, are really the top of the stack. One level below is Amazon Bedrock, with not just its foundation models but also its managed capabilities, like the knowledge bases and agents that are now available. These can come together to implement the workflow I was talking about, where you use that approach for customization to improve the accuracy, relevance, and context specificity of the responses coming back.

That's the middle layer, and Guardrails for Bedrock has now been announced as part of it as well. Below that is the infrastructure layer, for when you want to train your own foundation models and run inference.

So when I say go up the stack, what I'm saying is pick the easy buttons. If your use cases fit those services and their capabilities, start with those. If they don't, that's when you start coming down the stack and doing some of your own building.
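To make the middle of that stack concrete, here is a minimal sketch of calling a foundation model through Amazon Bedrock from Python with boto3. The model ID, prompt shape, and parameters below are illustrative assumptions (the Anthropic Claude text-completion format on Bedrock); the exact request body varies by model family and version.

```python
import json

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an illustrative JSON request body for a Claude model on Bedrock."""
    return {
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
        "temperature": 0.2,  # low temperature for more consistent answers
    }

def invoke_claude(prompt: str, model_id: str = "anthropic.claude-v2") -> str:
    """Send the prompt to Bedrock and return the model's completion text."""
    import boto3  # deferred import so build_request stays usable offline
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps(build_request(prompt)),
    )
    return json.loads(response["body"].read())["completion"]

# Example (requires AWS credentials and Bedrock model access in your region):
# print(invoke_claude("Summarize common chemotherapy side effects for a patient."))
```

With knowledge bases and agents layered on top, the same kind of call can be grounded in your own documents rather than the model's training data alone.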

Thank you. Before we get into the Q&A, I have one last question for both of you. As technology like generative AI evolves, what are your recommendations for how innovators in this room, and out in the world, can continue to track how the technology is evolving, and also how it fits into the evolving needs of health and health equity?

Yes. What I'll say is that we've made a lot of progress, but we're still early, and the potential is enormous. For us to really track where we're going and ensure we're still being impactful, it's about engaging the users. There's a plethora of problems to solve in healthcare and climate change. Keep looking at the data that's published out there, but also talk to the people on the ground to really understand what you can build to be impactful. And even if you already have a product out there, keep figuring out what you or your team can do to ensure you continue to remain impactful in this space.

And the only way to know that is to keep talking to your users and monitoring the data you get, both from their usage and from those conversations. That would be my take on it.

I would say just get hands on. There's really no substitute for experience; the best way to learn technology is to get your hands dirty.

And that's why I was sharing that with you earlier. There are really easy ways to experiment and interact with models. For example, there's a new tool called PartyRock, which is complementary to Amazon Bedrock. PartyRock is a foundation model playground: you don't need an AWS account, you just log in with a social media account, and you have access to an environment where you can build generative AI applications with no code whatsoever.

You can select the foundation model you want to use, and once you build an application, you can share it with the community, things like that. So I would say just get hands on and find all the resources that are out there. And I think with PartyRock you also get a certain number of free credits when you get started.

So get your hands dirty. It's a very dynamic landscape: new models and new versions of models are coming out all the time. One of the things with these model playgrounds is that you also get really savvy with prompt engineering, which is a very important skill set these days. English is your new programming language, if you will.

So get really good at prompt engineering, so that you can get the best possible response from a model.
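As a small illustration of that advice, one common prompt-engineering pattern is to give the model a role, context, a task, and explicit constraints rather than a bare question. The template and field names below are my own sketch, not part of any AWS tool.

```python
# Sketch of a structured prompt: role + context + task + explicit constraints
# usually produces more reliable answers than a one-line question.

def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble a structured prompt string from its parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n\n"
        f"Context:\n{context}\n\n"
        f"Task: {task}\n\n"
        f"Constraints:\n{constraint_lines}"
    )

prompt = build_prompt(
    role="a patient educator at an oncology clinic",
    context="The patient has limited internet access and reads at a basic level.",
    task="Explain what to do if nausea occurs after chemotherapy.",
    constraints=["Use plain language", "Keep it under 100 words", "Do not give dosages"],
)
print(prompt)
```

The same string can then be pasted into a playground like PartyRock, or sent to a model programmatically, and refined iteratively based on the responses you get back.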

Well, thank you. As you all go back into your organizations and drive your own social impact digital solutions, AWS is committed to supporting your growth and your innovation. We have a resource called the AWS Health Equity Initiative, which is a $40 million, three-year commitment to supporting organizations using the cloud to address health disparities as a means of advancing health equity, and you can apply to become a recipient.

Organizations accepted into the program can get up to $250,000 in promotional credits as well as technical expertise. So we want to encourage you to continue to innovate to drive social impact. Thank you all so very much. Now I'd like to open up the conversation to the audience. Does anybody have any questions?

Thank you everybody so much for participating in this conversation. If you have any questions, please feel free to reach out. I'd also encourage you to take the survey to evaluate the presentation. Thank you so much.

Thank you. Thank you.