Unlocking the power of AWS AI and data with Salesforce (sponsored by Salesforce)

Welcome, everyone. My name is Narinder Singh. I'm VP of Product for Salesforce Data Cloud, and I'm joined by my colleague here.

Hi, everyone. My name is Darryl Martis. I'm a Director of Product for AI at Salesforce.

Thank you for being here. We start with what we call our Safe Harbor statement. As always, we're going to talk about some exciting upcoming innovation, which we recently announced with respect to our deep integration with AWS, thanks to close collaboration with the AWS team. Please make your purchasing decisions based on the features that are currently available in our products.

And thank you again. If you're already a Salesforce customer, an extra thank you, and hopefully you will become a Salesforce customer after seeing our presentation. This is who we are; we already gave a quick intro. And here's today's agenda.

First, we're going to give you a quick overview of our new product offering, Salesforce Data Cloud. It's an evolution of our CDP offering, which we launched back in 2020, and it's how we want to apply this deeply across the entire Customer 360 lifecycle as part of the Salesforce platform. I'll give you a quick overview of that.

We'll talk about the Salesforce Data Cloud plus Amazon Redshift better-together story, which is part of our vision for seamless, bidirectional access to data between AWS and the Salesforce cloud side. That will translate into me talking in more detail about the zero-ETL, bidirectional, bring-your-own-lake demo, which will give you an overview of that feature and functionality, how it's going to work, and what the experience is going to look like.

Then my colleague Darryl will talk about the other innovations we're doing on the AI layer side: the bring-your-own-AI model builder capability in integration with AWS. He'll walk you through the details and also give you a demo of that.

After that, we'll go into Q&A. OK. We do regular surveys with leaders, typically CX organization leaders, marketing leaders, and customer engagement leaders, and they all agree on one single thing: companies cannot deliver growth without a single source of truth, meaning a holistic understanding of their customers. And as important as it is, it is also a big challenge, right?

While they recognize it, the reason it is a challenge is that at the end of the day, they have to deal with customer data spread across multiple systems: sales, service, commerce, marketing, and analytics systems. Bringing all of that together is not a simple feat; it's very complex once you imagine all the complexities underneath. At the same time, privacy regulations keep getting stronger, requiring that customer data is never used in ways the customer doesn't want.

So all of that feeds into this challenge and this opportunity: our customers want to offer personalized, one-on-one, intent-driven experiences for their customers, right? But connecting all of that data isn't easy. In our surveys with customers, we've asked how many application systems they see in the mix, and for large enterprise customers, it's hundreds to thousands of systems across which the customer data is spread.

They're talking about web engagement systems, email engagement systems, commerce systems, analytics systems, point-of-sale systems, loyalty systems, and order management systems. So just imagine the amount of time and effort that goes into integrating all of that to bring together a customer single source of truth.

Sometimes they do point-to-point integration, sometimes they take a hub approach, but all of that requires significant investment from IT teams in terms of time, budget, and resourcing, plus maintaining those integrations and keeping them running, which becomes a big pain over time.

At the same time, there's inconsistency with respect to privacy and other regulations. And the traditional ETL approach has lots of challenges: it requires multiple costly components, ETL tools and pipelines, plus keeping those pipelines running, monitoring them, and evolving them as schema changes happen. Data freshness is another challenge, because these pipelines are typically batch pipelines that bring data in at a certain latency. And as data volume grows, it becomes more and more complex to manage.

And often, after they've done that, they have to think of a system into which to bring the data and turn it into value. Because at the end of the day, it's the value that matters: with that value, you want to deliver personalized experiences driven by the customer's intent, preferences, and desires in terms of their engagement with you.

Often, that value needs to be integrated back into the customer engagement systems, what we conceptually call the reverse-ETL approach, and that is itself yet another integration. Underneath all of this is what we call a data platform, which requires these capabilities: connect to the data, batch and streaming; transform the data; turn it into a uniform model to understand the customer; drive insights, whether predictive, traditional BI, or generative; and bring them back into the system of engagement. All of that is complex, complex work. Typically, we've seen that one third of the time goes into doing that alone.

That's why, from the Salesforce point of view, we built what we call Salesforce Data Cloud. We want to give our customers the ability to easily connect all of their data across the various sources: first and foremost all the Salesforce sources, as well as external sources, because data is spread across external platforms and applications as well as Salesforce systems.

We are the number one CRM in the market. We have a lot of customer engagement data and customer relationship data. But we totally recognize that our customers have invested in other tools, platforms, and applications that you want to connect from. So our first goal was: how can we make it easy to connect the data at hyperscale, in real time? You can connect, you can even federate, you can ingest, you can harmonize the data, and then you can unify it. We say unify because, as you get data from these systems, it comes in with different identities, and these identities are not exact matches. You have to dedupe based on personal attributes, stitch all of that data together, and then build what we call a unified profile with a customer data graph around it, which you can use to get a granular understanding of that customer and drive insights on top of it.

And Data Cloud is built for scale, what we call petabyte scale, for the billions of engagement events you want to capture about your customers, transactional and behavioral events all coming together, deeply integrated into the Salesforce platform. It has become the basis of how we are imagining the full AI layer, which Darryl is going to talk about as well: Einstein predictive AI and generative AI, all used to power engagements in the flow of work.

That is the most important thing: we give you the full last mile, taking this all the way to the end in the flow of work, with Data Cloud as the substrate through which you do open integration with data sitting in other data platforms. That's what we have done primarily with AWS, with the Redshift integration, and what we have already announced with other data platforms like Snowflake and BigQuery as well. And it's bidirectional, so you can connect to data sitting in those data lakes and warehouses with ease, without needing to move or copy the data.

This is how we're now imagining the evolution of the Salesforce platform and how we build the entire Customer 360. Data Cloud is a substrate that sits at the data layer, through which all the layers above, with the consistent power of our metadata platform, get smarter and better: a unified profile and granular data at hyperscale built into the platform at the heart of it, with the Einstein layer, our automation layer for process orchestration, and our UI layer then powering features across our whole suite of applications, horizontal as well as vertical: Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud. All of these benefit from it.

And we're going to light up tons of features as part of our roadmap, which we've already started on, leveraging the power of unified data, insights, AI, and automation in the next generation of features we're going to ship.

Now, how does it work? At the heart of Data Cloud is a modern data platform, and that platform provides the ability to connect to all of the data, which can come from our own systems. First and foremost, everything comes with a point-and-click, configure-and-customize experience, which is at the heart of how we think about our platform.

So we're bringing big-data scale with the same mindset with which we built the Salesforce platform, so power users can manage it. We allow you to bring data from hyperscaler cloud storage: Amazon S3, Google Cloud Storage, and Azure. And with bring-your-own-lake, you can connect to data lakes and warehouses and access the data in place, without doing any extract.

Of course, we have a web SDK, we have ingestion APIs, and we also allow you to connect to legacy systems through MuleSoft connectors, through which we can bring in data from SAP or Oracle systems. Once you've brought in all of the data, whether in batch fashion, streaming fashion, or bring-your-own-lake fashion where you connect to the data in place, you can then transform, prep, and cleanse the data with the transformation tools built into Data Cloud.

Next comes what we call harmonization of the data, through which you take the data and map it into a data model, so that you have a canonical way to understand the semantics of the data around the customer, whether it's the individual, their preferences, their consents, their contact points, their orders, or their web engagements. You manage the data by mapping it into that canonical form, harmonizing it.

Then you can pull the full Customer 360-degree view as a data graph. You stitch it together with identity resolution capabilities into one single unified profile, based on exact and fuzzy match rules. And that provides the basis on which you can analyze and act on the data across all of our surface areas with the power of analytics, whether you want to build multidimensional insights, build segments for targeting and marketing use cases, or advertise on third-party premium platforms. The same data is available for you to analyze with the uniform connectivity of Tableau, so you can connect to the data live to build your BI reports and dashboards, as well as open up the data for access with bring-your-own-lake sharing into AWS and other platforms.
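To make the exact-and-fuzzy matching idea concrete, here is a minimal, illustrative Python sketch of the kind of rules involved. This is not Salesforce's identity resolution engine; the field names and the 0.85 threshold are assumptions for illustration only.

```python
from difflib import SequenceMatcher

# Illustrative identity resolution: exact match on a normalized email,
# fuzzy match on the person's name. Fields and thresholds are assumptions.
def normalize(s: str) -> str:
    return s.strip().lower()

def same_individual(rec_a: dict, rec_b: dict, name_threshold: float = 0.85) -> bool:
    # Exact-match rule: identical normalized email addresses.
    if normalize(rec_a["email"]) == normalize(rec_b["email"]):
        return True
    # Fuzzy-match rule: highly similar full names plus matching postal codes.
    name_sim = SequenceMatcher(
        None, normalize(rec_a["name"]), normalize(rec_b["name"])
    ).ratio()
    return name_sim >= name_threshold and rec_a.get("zip") == rec_b.get("zip")

crm = {"name": "Judith Lopez", "email": "jlopez@example.com", "zip": "10001"}
pos = {"name": "Judith A. Lopez", "email": "judith@example.net", "zip": "10001"}
print(same_individual(crm, pos))  # True, via the fuzzy name + zip rule
```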

The same data, with zero ETL, is available for you to access, analyze, and build your own custom models on, in an AI or analytics tool of your choice. In this case, for example, with Amazon Redshift, you can run Redshift ML or Redshift queries on top of it. And the same can be done through the AppExchange ecosystem for activation, a broader set through which you can activate the data out to ad tech platforms or through our own premium integrations, like direct integration with Amazon Ads for marketing, with Google for marketing directly to Google Ads, or with Facebook and Meta.

In terms of other innovations, with Einstein you get the same consistent layer with which you can build your machine learning models and your generative models, as well as bring your own model in integration with other platforms, for example SageMaker, which we have now announced, as well as Google Vertex AI or Databricks. You can build those models and use them to make inferences and enrich the data as well.

At the heart of it, we designed Data Cloud to be open and extensible, and that permeates across our data layer, our AI layer, and our broader ecosystem through our AppExchange partnerships, and specifically with AWS as part of our open data access.

Now you get bidirectional zero-ETL integration between Salesforce, through Data Cloud, and Amazon Redshift. We do the same for bring-your-own-AI with Amazon SageMaker: you can train models with data in Data Cloud, deploy those models, and use them to make inferences with Data Cloud data to enrich profiles, and use that for engagement and activation in your business processes. You can also directly activate your segments into Amazon for advertising purposes, as a first-class premium integration in Data Cloud.

And then we also give you myriad partners through which you can do the same with respect to activating the data or enriching your customer profile data, whether on the identity provider side or with data providers like Acxiom, or activating back into third-party activation platforms like The Trade Desk. Right now we have around 40 to 50 partners with which you can extend the power of the platform.

Together, as part of the open and extensible approach of Data Cloud, we are fostering a new way of modern data integration. We're moving from the traditional approach, where you build pipelines that you have to maintain and run, to a more modern approach, what we call the zero-ETL approach, which is secure, gives you near-real-time access to the data, doesn't break because of schema changes, is trusted, and is cost-efficient. And that's the most important reason: you no longer have to invest in dedicated data integration engineers who go in and manage, run, and continuously evolve these pipelines. It becomes a very simple point-click-share experience, with seamless evolution of your schema.

At the heart of it, the architecture we have built is what we call a bidirectional architecture, which gives you a seamless combination of what we call data sharing and data federation. Conceptually, whenever you want to bring data into Data Cloud, you are federating the data into Salesforce Data Cloud.

And when we take data from Data Cloud into AWS, that's what we call data sharing. This seamless loop helps you realize more value from the totality of the data sitting across the Salesforce ecosystem and the Amazon ecosystem.

On our side, we have all the CRM data that comes from our own first-party data sources, as well as all the streaming data that you push with respect to web and mobile engagements and other ingestion API purposes from other systems. And you can have your loyalty data and your point-of-sale data sitting in your AWS systems.

With that, you can easily build a unified 360-degree view, which then powers end-to-end processes within our own platform and applications, as well as your own custom applications, custom BI and analytics, or even custom AI/ML that you want to build in Amazon on top of it.

Now, it's fit for purpose. People typically come with a specific business goal and a use case in mind. For example, from a marketer persona's point of view, imagine they want to create a segment by combining point-of-sale data sitting in Amazon Redshift with their customer data. With the power of this integration, they can easily bring that data in, connect it to the customer data model, build the customer data graph, and then use it to define segments and activate them seamlessly anywhere they want. From a data persona or data engineer's point of view, someone who wants to use Salesforce data to train a custom model or build custom BI with AWS tools, they can do the same thing by securely sharing the Salesforce data into Redshift, and then using that data to define their own model, BI, or any custom analytics jobs on the Amazon side.

Now, how does that work? On what we call the left side, the data federation side from Redshift to Data Cloud, you get what we call a federated connector. On our side, you go into Data Cloud, choose the connector, and once you connect, you can seamlessly reach all the tables sitting within Redshift and mount them as what we call external data lake objects within our system. From that point on, you use them just like normal objects in our system that had been ingested and maintained as native data lake objects.

That helps you enrich your customer profiles in real time, without needing to move or copy any data, and then activate them consistently end to end within the Salesforce platform. You can use them to show a unified Customer 360-degree view, build the customer behavioral insights you want to use for personalization or segmentation purposes, and then understand customers and activate to third-party platforms with the totality of your first-party data.

This feature is going to go to pilot in December, which is pretty soon; by mid-December, we're going to start launching what we call a pilot. In Salesforce nomenclature, that's like a closed beta; we will be working with a handful of customers in the first batch, and then we're going to go GA in the first quarter of next year.

The other side of the feature is called data sharing, where we allow seamless sharing of data sets from the Salesforce ecosystem, via Data Cloud, to Amazon Redshift, using the concept of data shares. Again, it's a very simple approach. You go into Data Cloud, create a data share, add the Salesforce objects to it, and share it with a target that points to the target Redshift account: the AWS account and the Redshift server there. All the objects in that share are seamlessly made available as a database with secure views within Redshift, and you can use them just like regular objects and run your SQL queries on top. You can combine Salesforce data with data sitting in Redshift for combined analytics. This feature will also go to pilot next month and will go GA around the middle of next year, which will give you the full bidirectional set of capabilities, from data federation to data sharing.
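To give a flavor of what that looks like for the analyst, here is a minimal sketch using the boto3 Redshift Data API to query the shared Salesforce objects alongside a native Redshift table. The cluster, database, schema, and table names are all hypothetical.

```python
import boto3

# Query a Salesforce Data Cloud share that has been mounted in Redshift
# as a database of secure views, joined with a native Redshift table.
client = boto3.client("redshift-data", region_name="us-east-1")

resp = client.execute_statement(
    ClusterIdentifier="analytics-cluster",       # hypothetical cluster
    Database="salesforce_share_db",              # database created from the share
    DbUser="analyst",
    Sql="""
        SELECT p.unified_profile_id, COUNT(o.order_id) AS orders
        FROM salesforce_share_db.unified.unified_profile p
        JOIN native_db.sales.pos_orders o
          ON o.customer_id = p.unified_profile_id
        GROUP BY p.unified_profile_id
        ORDER BY orders DESC
        LIMIT 10;
    """,
)
print(resp["Id"])  # statement id; fetch rows later with get_statement_result
```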

Now, how does this work underneath? For data federation, we are starting with a SQL-based access approach, where we create what we call external data lake objects, which are mounted onto the tables sitting within your Redshift cluster, and we take care of running all the Data Cloud capabilities seamlessly on top. In the future, we can extend the same thing to file-based federation, for data sitting in your S3 data lake in the form of table files, Parquet or Iceberg tables, so you can seamlessly federate over those as well. That is expected to launch around the middle of next year.
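As a rough illustration of what file-based access over Parquet in S3 looks like at the storage layer, here is a generic pyarrow sketch. This is not the Data Cloud connector itself; the bucket, prefix, and columns are assumptions.

```python
import pyarrow.dataset as ds

# Read Parquet table files directly from an S3 data lake in place,
# the kind of file-based access that file federation builds on.
dataset = ds.dataset(
    "s3://my-data-lake/pos_orders/",  # hypothetical bucket and prefix
    format="parquet",
)

# Project and filter without copying the whole table out of S3.
table = dataset.to_table(
    columns=["customer_id", "order_total"],
    filter=ds.field("order_total") > 100,
)
print(table.num_rows)
```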

Overall, it makes things very seamless, with both approaches supported: native storage on the Redshift side, or more open storage in terms of how you maintain your data.

With respect to data out, on the Data Cloud side of the house we are built on open standards, with Iceberg as the storage format in which we store our data. We have shared that at the storage level with Amazon, which opens up the possibility of sharing our tables as external tables within Redshift. As you create your data shares and share them with the target Redshift server in your own AWS account, they are mounted as tables backed by these standard Iceberg tables underneath, and you can run seamless queries on top of them.

This is specifically how we make it happen with Amazon Redshift. Internally, we manage the creation and sharing of all the Redshift views underneath seamlessly for you: the Lake Formation share gets shared into the catalog in the target AWS account, which then gets mounted as views created within your own Redshift cluster, and you can run your queries seamlessly on top.

On the reverse side, we have created a federated connector through which we run our query engine, our analytics, and all of our service capabilities seamlessly on top. That's how you get the combined power of bidirectional integration.

We're going to extend these capabilities pretty soon. What we have done with Redshift as a first step will extend to a broader set of AWS analytics services, with the power of Lake Formation sharing and some upcoming innovations we're working on closely with Amazon, with Iceberg as the standard format for how we share the data. Once we share it into the Glue catalog, it will open up seamless capability for you to use all of our data across AWS services, whether it's Amazon Athena, running an EMR job, or running it as part of Redshift.

The same thing extends across other AI services as well, with SageMaker, where we currently offer a SQL-based integration that will transition to this file-based integration underneath. And on the reverse side, our SQL-based connector is going to be launched next month, and from there we're going to evolve by mid next year into a full file-based federation approach as well.

Now let me walk you through a quick overview of how this feature and functionality will work end to end, so that you can imagine the experience and relate to it.

There are two parts to the demo. Part one is how you federate and bring the data sitting in Amazon Redshift over into Salesforce Data Cloud. Part two is how you take the Salesforce data sitting in Data Cloud and seamlessly share it with Amazon Redshift.

For the data federation demo, in step one, imagine you are the data architect persona in Salesforce Data Cloud, whose job is to connect, unify, and harmonize the data and create the Customer 360-degree view, which you can then offer to data analysts, data scientists, or any of the business users on top, so they can work on that unified customer data.

He comes in and creates a normal Redshift connection. You go ahead and provide the credentials for connecting to the Redshift server. Once that's done, they can test the connection, and the connection goes into the active state.

Next, they go into Data Cloud and create what are called data streams. A data stream is how you conceptually connect to any data sitting in a source. When they create a new data stream, there's a new connector available for Amazon Redshift, under the data federation category, which can connect to these tables live. They select the connector, click next, and then they can see all the databases and schemas available within Redshift.

They can select specific tables. In this case, imagine the customer has their product purchase data, their catalog data, and their web engagement data sitting in the Redshift server. They connect to those tables and mount them via these data streams, as the specific objects they want to bring in.

They can categorize the data on the Salesforce Data Cloud side as profile data, engagement data, or other data. They provide the usual metadata on top, and once they've done that, they select the specific fields they want to bring over.

Once they do that, all of the data gets connected and becomes available as part of Salesforce Data Cloud. Internally, all these objects get mounted, from a metadata perspective, as what we call external data lake objects, and once mounted, they are available just like native data lake objects within Salesforce Data Cloud.

Now you can map and model them. Mapping and modeling is the process within our product for harmonizing the data into canonical shapes. For example, purchase history contains data about sales orders, and sales order is a canonical object within Data Cloud. By mapping it, you bring the data into the consistent semantic concept of a sales order created within our system.

Once you do that, all of the data becomes part of your full customer data graph. The customer data graph is how you relate data tied to an individual or an account at multiple levels: sales orders, sales order items, products, and the product catalog. With that, you have the full 360-degree view, and you can use it in any of our services above.

One of those services is for marketing purposes: creating segments, the cohorts of customers you want to target with your messaging or offers. In this case, we switch over to the marketer persona, who will create a new segment because they want to target customers who have shown a high propensity to buy, who have been purchasing on a frequent basis, but who haven't made a purchase in, say, the last 90 days, so it makes sense to follow up with them.

So we are looking for customers with a propensity to buy in the jackets category. Based on the data from Amazon and the unified data available in Data Cloud, we can now build this propensity model and target those customers.

Once we build this segment, we can seamlessly activate it back into Salesforce Marketing Cloud to put those customers in a journey, sending them a promotional offer for the jackets they are interested in buying. Again, it's a seamless experience, giving you full-loop connectivity through deep integration with the Salesforce applications, in this case Marketing Cloud, where they can activate the segment and then send specific personalized email messages, or target customers with SMS or mobile messaging as well.

The other aspect is that all of this data is consistently available within the Salesforce applications, with a full 360-degree view of the customer. You can see all the insights tied to them: the purchases they made, their propensity to buy, the timeline and the activities around their purchases. And all of this power permeates seamlessly across our entire set of applications, with this profile, its insights, and the AI-specific predictions integrated into our whole suite of applications.

That is one part of the demo. Now let's look at the reverse part: taking data that sits within the Salesforce ecosystem. Think of customers' email engagements (clicks, opens, sends, because you put them on a campaign and they viewed certain emails), plus the support tickets and cases they have logged, all within the Salesforce systems.

How do you share that out seamlessly to Redshift for combined analytics? In this case, Bob, the Data Cloud architect, has the job of sharing this data out to the data persona on the AWS side.

What they do is create what we call a data share target. A data share target is how you point to the target AWS account and the Redshift server with which you want to share the data.

Here, you get the option to select Amazon Redshift. You create that target, tied to the credentials of that account, to authenticate that you are sharing with the right account.

Once you create the data share target, the next thing you do is create what's called a data share. This is the new, modern way of integration, in terms of how easily you can share the data. You give it a logical name reflecting the purpose of the data share you're creating.

Once you create this data share, all you do is point and click to select the objects you want to share. Say you want to take the unified profile and the associated cases and share them with your Redshift account in the AWS account.

With the Redshift server in there, you go ahead and associate this share with the target; in this case, you select the data share target you created earlier, and that's all you need to do. The data share does all the magic behind the scenes to make this data seamlessly available in the target account, within your Redshift server.

Now let's look at the analyst on the AWS side, who wants to build, say, a Redshift ML model and a BI report on top of it. Once the data is shared, it becomes available, after mounting through Lake Formation, as a database in the Redshift server, with the specific set of shared tables available there.

They can write normal SQL queries on top of it. In this case, they are trying to build a customer churn model to predict customer churn scores based on the case data shared from Salesforce Data Cloud.

They make the prediction, and the prediction is written out to an output table in Redshift. Using that, they can run any kind of analytics on top, with BI. In this case, they can now predict which customers are going to churn, with counts broken down by category.
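For flavor, here is a minimal sketch of that Redshift ML step. CREATE MODEL is Redshift ML's actual SQL interface, but the schema, table, and column names and the IAM role below are assumptions.

```python
import boto3

# Train a churn model with Redshift ML on the shared Salesforce case
# data, then score it into an output table. Names are hypothetical.
client = boto3.client("redshift-data", region_name="us-east-1")

create_model_sql = """
CREATE MODEL demo_ml.customer_churn
FROM (
    SELECT customer_id, case_count, avg_resolution_days, churned
    FROM salesforce_share_db.service.case_features
)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'redshift-ml-artifacts');
"""

score_sql = """
CREATE TABLE demo_ml.churn_scores AS
SELECT customer_id,
       predict_churn(case_count, avg_resolution_days) AS will_churn
FROM salesforce_share_db.service.case_features;
"""

for sql in (create_model_sql, score_sql):
    client.execute_statement(
        ClusterIdentifier="analytics-cluster",
        Database="dev",
        DbUser="analyst",
        Sql=sql,
    )
```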

With this, I'll now hand over to Darryl for part two of the presentation, where he's going to talk specifically about the AI side of the house.

Great. Thank you very much.

Over the last few years, a lot of companies have been rethinking their AI strategy. There have been several challenges to getting value from AI. One is disconnected data: for example, the average company has about 1,000 different applications, many of which provide a siloed, fragmented view of their customers, leading to poor experiences.

This also leads to integration challenges. There's limited talent among AI/ML specialists out there, and there's a whole host of security questions that come into play: Where is my data going? Is it going to be used to train a large language model externally? There's also a fear of hallucinations: is my AI model giving me the right output? And finally, will the AI be customized to my company and my use case?

During Dreamforce, we introduced Salesforce Einstein 1, our trusted AI platform built for CRM. What we've learned is that AI is just the tip of the spear: customers need to be able to use all of their data with AI, no matter where it's stored.

Our Data Cloud provides that harmonized, real-time data to feed into AI for personalization and analytics. You also need to be able to trust the AI with your most sensitive customer and business data, and we've architected the Einstein Trust Layer, which I'll talk about in a bit, to keep your data safe.

And last but not least, you need access: access to many different kinds of AI models outside of Salesforce as well. We've designed our open ecosystem to let you do just that.

We also introduced the Einstein Copilot Studio model builder at our Dreamforce conference in September. What this lets you do is bring in AI models from leading AI platforms such as SageMaker into Salesforce, and you can also create your own models in a no-code fashion, which I'll talk about in a bit.

You can train your models on your Data Cloud data on SageMaker with no ETL required. Then you can bring those inferences (predictions, recommendations, anything the model gives you) back into Salesforce and use Flow, Apex, and any of our other AI-to-action tools.

We are GA with SageMaker today, so you can bring in your predictive models from SageMaker today. We will integrate with Bedrock, so you can bring your large language models from Bedrock into Salesforce and Data Cloud, in June of next year. And our no-code model builder will be available in February.

This is how it looks architecturally. You have Data Cloud underlying everything, with the data lake and the lakehouse. On top of that, you have the open and extensible models we talked about: our own models, your own models that you bring in, and models from the many different companies we've partnered with.

On top of that, we have our Trust Layer, which handles secure data retrieval, dynamic grounding, prompt masking, toxicity detection, and more. We have our builders above it: Prompt Builder, Prediction Builder, Copilot Studio, and all of our GPT products, with our apps living on top of those. And underlying all of this, at the end of the day, is Hyperforce, our cloud with data residency and compliance.

These are some of the use cases you can address once you have your data in Data Cloud. For example, in customer service, you can do next best offer; that's one kind of model you can build. You can do contextual engagement for sales, cross-sell or personalized loyalty offers, and, for employee success, personalized benefits for your employees.

You can optimize costs for onboarding, predict likelihood of product adoption, and many other such use cases; these are just a select few.

This is how Einstein Copilot Studio model builder works. Imagine you have your data in Data Cloud, and you start a data share in order to share your data, zero-copy, to the AWS Glue catalog. You then configure the Salesforce connector for that Glue catalog, and use the data to train a model on SageMaker.

You then create that predictive AI model, set up an API Gateway endpoint once you've deployed the model, and bring those inferences back into model builder, which lives within Data Cloud. Once you've registered the model, you can use it within Salesforce, through Apex, Flow, the Query API, and many other ways, and use those predictions in many different kinds of applications.
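As a rough sketch of the inference hop in that flow, here is how a deployed SageMaker real-time endpoint (the kind an API Gateway would front for model builder) could be invoked with boto3. The endpoint name and JSON payload contract are assumptions.

```python
import json
import boto3

# Invoke a deployed SageMaker real-time endpoint with profile features
# from Data Cloud and read back the prediction.
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

payload = {"case_count": 4, "avg_resolution_days": 2.5, "lifetime_value": 1200.0}

resp = runtime.invoke_endpoint(
    EndpointName="product-interest-endpoint",  # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(resp["Body"].read())
print(prediction)  # e.g. {"product_interest": "jackets", "score": 0.87}
```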

This is just an overview: you've connected your data, you can build and train with SageMaker, and then you use the existing Flows and other AI-to-action products within Salesforce. So this is just an overview slide of that.

Some of the main use cases we've seen are product recommendations, propensity to buy, cross-sell and upsell like I mentioned before, propensity to churn, next best action, and customer sentiment. We've had this available for the last few months, and these are some of the main use cases we've seen from our customers using model builder.

Now let me talk a little bit about large language models. As I mentioned, starting in June of 2024 you'll be able to bring your large language models from AWS into Salesforce. These include SageMaker JumpStart models, which cover many open source models, as well as any Bedrock models.

The difference between the two, of course, is that Bedrock is more of an API, with a VPC spun up, while SageMaker JumpStart sits within the managed service of SageMaker. You'll be able to bring in either kind of model; we're prioritizing Bedrock first, and then you'll also be able to bring your JumpStart models into Salesforce through model builder.

Now, let's look at an end-to-end workflow of how you can bring in an Amazon Bedrock model and leverage it in Salesforce. Let's say there is a specific prompt, created by one of our apps or from Prompt Builder. You can ground that prompt with information about the customer: their past purchases, past cases, purchase information, all of that can be grounded into it.

Then we provide access controls and masking of PII. That means we mark some of these fields as sensitive personal information, and we also give you, the customer, the ability to say which of your fields are sensitive. Names, addresses, and the like are masked by default, but you can mask additional fields as well.

"And by masking an example instead of my name going out to a large language model, it will be replaced by, by John Smith or you know something along those lines. Now these mass prompts then then get sent to bedrock models and then once they come back, they are we check for completion, of course, but then once they come back, we de mask it so that it's the original information that's available and they, we will also check for toxicity and biases. So toxicity detection, bias detection will automatically be a part of our trust layer here and we will also have a human in the loop. So anything that is available, we will also uh be able to s you will be able to say if it's accepted or rejected and you can learn from that. Uh and any of these uh feedbacks that are available will also be kept in data cloud.

Let's say it's accepted. You can then use it with any of the hyper-personalized outputs here: Flow, Apex, all the other tools you have within Salesforce, like I mentioned earlier. There will also be an audit trail, so you know what was sent, when, and who sent it, what the initial prompt was, and, when the response comes back, what was received: what was generated by that external Bedrock model, along with any other metadata. All of that lives within our Trust Layer and our audit trails.

OK. So let's look at an example of a generative AI case with Bedrock. Imagine you're a hotel with Salesforce Data Cloud and you have your customer comments and feedback, so you can run sentiment analysis. In this first example, there's a comment that says, "We've stayed here several times and the staff is always helpful. On our most recent trip, we were moved to a quiet room, even upgrading us to a room with a terrace. Thank you." That gets classified as an extremely positive review.

The next one is a little more negative: "The staff was not helpful at all. The food at the bistro was also a letdown. We picked this hotel for its wonderful reviews, but were surprised to find the staff unpleasant and the food subpar." That gets classified as an extremely negative review. Once you have that, you can send a personalized email to the customer, maybe offering them some sort of discount for their experience. That's just one example.

A second example could be email generation, or subject-line generation. Imagine you are a luxury goods provider such as Chanel and you want to generate a subject line for an email offering discounts to customers. You give a prompt: generate an email subject line recommendation for a customer, Eva, who frequently purchases handbags from Chanel, lives in New York, and has provided a negative review in the past. And on the right is the generated result: "An exclusive offer for Eva: experience the best of Chanel handbags with a 25% discount." We're also providing contextual information about the customer, Eva, along with the prompt. When this is sent out, all of that information, such as the name and any address, is masked, and when the response comes back, it is de-masked again. So that's one way you can do subject-line generation with model builder and Bedrock.
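For illustration, here is a minimal sketch of calling a Bedrock-hosted model for that kind of generation with boto3. The request body follows Anthropic Claude 2's Bedrock format; the model choice and the grounded, pre-masked context are assumptions.

```python
import json
import boto3

# Ask a Bedrock-hosted model for a subject line, grounded with
# already-masked customer context. Model and prompt are illustrative.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

context = "Customer <PII_a1b2c3d4> frequently buys handbags and left a negative review."
prompt = (
    f"\n\nHuman: {context}\n"
    "Generate one email subject line offering a 25% discount on handbags."
    "\n\nAssistant:"
)

resp = bedrock.invoke_model(
    modelId="anthropic.claude-v2",
    contentType="application/json",
    body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 100}),
)
print(json.loads(resp["body"].read())["completion"])
```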

So let me give you a quick demo of bringing a predictive model into Salesforce using model builder. Let's say you have Data Cloud with information about your customer: account information, case information, purchase information, any engagement data. You can bring that into SageMaker. You go to SageMaker Studio and open a Data Wrangler flow. Once the Data Wrangler flow comes up, you choose Import Data and bring data into SageMaker by clicking on the Salesforce Data Cloud icon and providing the Salesforce credentials; you can see it asks for the name as well as the org information, that is, the Salesforce org. Once that's done, the data from Data Cloud is made available in SageMaker. These are the different data objects we have, which we call data model objects, and you can also run a query on them; for example, here is a sample query that was run. You can then create a dataset, and once that dataset is created, you can create a model from the data using a Python notebook.

I'm not going to run through the entire Python notebook here, but this is just an example of how you can use a scikit-learn model to create a product recommendations or product interest model. From all of the data you have from Data Cloud, now available in SageMaker, including purchase information, you want to find out what to recommend for each customer. You create this model, register it in your model registry, and then you open an API Gateway with a URL that points to the model.
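Here is a minimal, self-contained sketch of the kind of scikit-learn product interest model being described. The feature and label columns are toy stand-ins for the Data Cloud dataset, not the actual notebook from the demo.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for the Data Cloud dataset pulled in via Data Wrangler.
# Columns are hypothetical: engagement and purchase features, plus the
# product category the customer went on to buy.
df = pd.DataFrame({
    "email_clicks":  [5, 0, 8, 2, 7, 1, 9, 3],
    "purchases_90d": [2, 0, 3, 1, 2, 0, 4, 1],
    "avg_order":     [120, 0, 240, 60, 180, 30, 300, 90],
    "product_interest": ["jackets", "none", "handbags", "jackets",
                         "handbags", "none", "handbags", "jackets"],
})

X = df[["email_clicks", "purchases_90d", "avg_order"]]
y = df["product_interest"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # hold-out accuracy on the toy data

# Predict the product interest for a new customer profile.
new_customer = pd.DataFrame(
    [{"email_clicks": 6, "purchases_90d": 2, "avg_order": 150}]
)
print(model.predict(new_customer)[0])
```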

So here you have the model, and you take that URL from the model registry. Then you come back into Data Cloud and click on Einstein Studio at the top right. Einstein Studio allows you to bring in the inferences from that model. You give the model a name, and the endpoint is the actual URL from the API Gateway. You can then provide the input features that go into the model, choosing which data objects from Data Cloud went into it. You can also say where the outputs should go: within Data Cloud, you can save them in a certain object, in a certain field, here one called product interest. Once you've activated the model, you can see the refresh history as well: how many rows came in, how many inferences came in. And here is the actual output: the object where it was written, where you can see the product interest along with all the other information about the model.

From here, you can also see how to use it in Flow. Say you have all of these inferences coming in; in Flow, you loop through all of your customers and find out if they are a club or rewards member. If they are a rewards member, you find out what products to recommend to that particular customer and surface it to the sales rep. That's what this Flow is doing: it's creating a task for the sales rep, showing who the customers are and what to recommend. In the middle of the screen, you see the view for the sales rep within Sales Cloud: all the customers they have, along with the recommended product, whether it's a backpack for Judith Lopez or something else. He or she can also send a personalized email directly to the customer; you can see here that he is about to send one. And you can also segment your customers and send them different emails using Data Cloud's strong segmentation tools. This is just one example of how you can use this.

OK, great. I'm going to quickly talk about the no-code model builder, which will be available in February. With it, you'll be able to create predictive AI models within Salesforce directly, without needing an external AI platform. It's an intuitive, no-code experience. There are other similar tools we've created in the past, and we're consolidating those into this no-code builder within model builder. Again, you can use it with our existing AI-to-action tools. Here are some examples of what you can do with the no-code model builder directly within Salesforce: increase conversion rates in Sales Cloud, improve win-rate probability in Service Cloud, likelihood of extension with the industry clouds, revenue forecasting, late payments with Field Service, improved network utilization, and, with Marketing Cloud, purchase conversion propensity and email campaign targeting. You can create models for all of this directly within Salesforce starting in February with model builder.

OK, that's it from us. Thank you.
