[EN] re:Invent 2023 - Adam Selipsky Keynote Speech

Keyword: Amazon Web Services re:Invent 2023, Amazon Q, Generative AI, AWS, Cloud Computing, Data, Technology
Words: 4100, Reading time: 20 minutes

Introduction

Adam Selipsky, CEO of Amazon Web Services, shares his perspective on cloud transformation. He highlights innovations in data, infrastructure, and artificial intelligence and machine learning that are helping AWS customers achieve their goals faster, mine untapped potential, and create a better future.

Highlights of the Speech

The following is a summary of the highlights from this speech, around 3,800 words, estimated reading time is 19 minutes. If you would like to further understand the speech content or watch the full speech, please view the complete speech video or refer to the original speech text below.

The 12th Annual AWS re:Invent Conference Commences, Adam Selipsky, CEO of AWS, warmly welcomed the over 50,000 attendees and 300,000 virtual attendees to the 12th annual AWS re:Invent conference in Las Vegas. He expressed his excitement to be there with so many people, both in-person and online. Selipsky noted that the re:Invent conference has always been focused on learning, with opportunities everywhere to expand one's knowledge - over 2,200 sessions, an incredible array of partners in the expo halls, and many chances to connect with other members of the AWS community.

Selipsky highlighted the great slate of keynotes lined up for the event. The previous evening, Peter DeSantis had given a keynote diving deep into AWS's innovations in serverless technology. Still to come were keynotes by Swami Sivasubramanian on AI and machine learning, Ruba Borno's partner keynote, and Werner Vogels capping off the conference on Thursday. But first, Selipsky was thrilled to share what AWS had in store for the attendees right then and there.

The Sheer Variety of AWS Customers

One of the aspects Selipsky loves about re:Invent is the immense variety of customers in attendance. People from every industry, every region, and every possible use case come together, all relying on AWS to power their businesses and organizations. He cited some examples:

In financial services, AWS works with industry leaders like JPMorgan Chase, Goldman Sachs, Nasdaq, Fidelity, Allianz, Itaú, UBS, Visa, HSBC, Capital One, S&P Global, NuBank, and more.

In healthcare, AWS partners with companies including Pfizer, Gilead, Medtronic, Novo Nordisk, Roche, Moderna, Illumina, 3M, and various health information systems.

In the automotive space, AWS collaborates with BMW, Toyota, Volkswagen Group, Nissan, Hyundai, Ford Motor Company, Mercedes-Benz, and more.

These are just a sample of the amazing customers and partners using AWS to shape the future of their industries. From global enterprises to the most promising startups, organizations of every kind are choosing AWS for its security, reliability, rapid innovation, and ability to enable growth.

Expanding the AWS and Salesforce Partnership

One customer and partner AWS has been working closely with for years is Salesforce. In 2016, Salesforce selected AWS as its primary cloud provider. Today, Salesforce uses AWS's compute, storage, data services, and more to power key offerings like Salesforce Data Cloud.

Just yesterday, AWS announced a significant expansion of its partnership with Salesforce. Salesforce will dramatically increase its already sizable use of AWS services, and AWS will utilize more Salesforce technologies. By integrating Amazon Bedrock with Salesforce's Einstein platform, the companies aim to make it easier for developers to quickly build and deploy generative AI applications.

The two companies are also working to unify data management by enabling zero-ETL integrations between Salesforce Data Cloud and AWS storage, compute, and data services. This will empower customers to gain insights faster.

Additionally, AWS and Salesforce are building tighter integrations between Salesforce's Service Cloud and Amazon Connect. And for the first time, Salesforce will offer its products on the AWS Marketplace, simplifying procurement for AWS customers.

With this expanded partnership, AWS and Salesforce users will have exciting new ways to innovate and build applications. But the benefits aren't limited to major corporations - fast-growing startups continue to choose AWS as well.

AWS: Home to Over 80% of Unicorns

According to Pitchbook data, there are over 1,000 "unicorn" startups valued at over $1 billion globally. The vast majority - over 80% - are AWS customers. Selipsky cited some examples:
 

  • Wiz, a cloud security company and AWS partner, is the world's largest cybersecurity unicorn and the fastest SaaS company ever to reach a $10 billion valuation.
  • Genomics One is one of Fast Company's Most Innovative Companies in Biotech. It's revolutionizing DNA sequencing and reducing costs dramatically.
  • Plaid provides the fintech infrastructure powering popular apps like Venmo, Acorns, and Robinhood. Millions use it to manage their finances.

AWS powers organizations of every size, industry, and region - even companies you'd never expect to be on the cloud. Selipsky recently learned of Lowden Guitars, a boutique guitar maker in rural Northern Ireland using AWS to create electronic passports that ease musicians' international travels. The possibilities are endless on AWS.

What Makes AWS the Cloud Leader?

Selipsky credited AWS's leadership to its relentless focus on customers' needs and pain points, dating back to its founding. AWS was first to market by 5-7 years with the broadest, deepest set of cloud capabilities. It remains the most secure, reliable, and innovative provider.

AWS thinks differently about infrastructure, leading it to reinvent continuously and push past perceived limitations. Its global network was built from the ground up to be fundamentally different from other cloud providers - and still is today.

For instance, AWS operates 32 geographic regions worldwide, with plans for more. Every region consists of at least three isolated Availability Zones (AZs). No other provider offers this level of redundancy in every region, which is critical for capacity and uptime.

The AZs are physically separated by a meaningful distance, up to 100 km apart, and connected by redundant fiber networking for single-digit millisecond latency. Each AZ comprises one or more discrete data centers with their own power, water, and connectivity. So if one AZ goes down due to utility failure, traffic spikes, human error, or even natural disaster, applications remain operational in the region.

Some providers may claim all clouds are equal, but the facts say otherwise. Imagine a region with only one data center, or AZs in a single location. One incident could knock out the entire region for days. AWS's infrastructure provides unparalleled resilience.

Reinventing for Customers: Amazon S3 Innovations

AWS continuously reinvents to help customers reinvent themselves. A prime example is storage. Amazon S3, AWS's first service, provided highly durable, scalable object storage on demand - reinventing how organizations consume storage.

Rather than limited disk arrays taking weeks to provision, S3 enabled near-infinite capacity instantly at dramatically lower cost. Since launch, AWS has added multiple storage tiers to address different needs, like S3 Glacier Deep Archive at roughly $1 per terabyte per month.

But why should customers manage tiers? S3 Intelligent Tiering optimizes by automatically transitioning data between access tiers, already saving customers over $2 billion.

Clearly, storage innovation is far from over. Many workloads require millisecond latency and extreme performance, like financial trading, analytics, machine learning, and real-time advertising. Currently, customers cache data in custom solutions and run analytics separately. The complexity of managing multiple infrastructures and APIs is less than ideal.

Introducing Amazon S3 Express One Zone

To solve this, AWS is excited to announce Amazon S3 Express One Zone - a new S3 class purpose-built for the highest performance, lowest latency object storage. S3 Express One Zone utilizes custom hardware and software to accelerate data processing.

For the first time, customers can choose to locate data next to compute resources in a specific AZ to minimize latency. S3 Express One Zone supports millions of requests per minute with consistent single-digit millisecond latency - up to 10x faster than S3 Standard.

The dramatically faster data processing cuts compute costs up to 60% for workloads like machine learning. Data access costs are 50% lower than S3 Standard too.

For instance, Pinterest observed 10x faster read speeds while reducing total costs by 40% using S3 Express One Zone for its machine learning visual discovery engine. This powers faster iteration and personalization, improving user experiences.

Seventeen years after reinventing cloud storage with S3, AWS continues innovating to transform how developers use it - same simple interface, ultra-low cost, now lightning fast.
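S3 Express One Zone introduces a new "directory bucket" type whose name embeds the Availability Zone ID where the data lives. As a rough sketch of what this looks like in practice, assuming the boto3 SDK and treating the bucket name, AZ ID, and region below as placeholders rather than real resources:

```python
# Sketch: an S3 Express One Zone "directory bucket" with boto3.
# The base name, AZ ID, and region are hypothetical placeholders.


def express_bucket_name(base: str, az_id: str) -> str:
    """Directory-bucket names embed the AZ ID: <base>--<az-id>--x-s3."""
    return f"{base}--{az_id}--x-s3"


def create_express_bucket(base: str, az_id: str, region: str = "us-west-2"):
    """Would create the bucket; requires boto3 and AWS credentials."""
    import boto3  # imported here so the naming helper stays dependency-free

    s3 = boto3.client("s3", region_name=region)
    return s3.create_bucket(
        Bucket=express_bucket_name(base, az_id),
        CreateBucketConfiguration={
            "Location": {"Type": "AvailabilityZone", "Name": az_id},
            "Bucket": {
                "Type": "Directory",
                "DataRedundancy": "SingleAvailabilityZone",
            },
        },
    )


print(express_bucket_name("ml-training-cache", "usw2-az1"))
# → ml-training-cache--usw2-az1--x-s3
```

Pinning the bucket to a single named AZ is what lets applications co-locate compute in that zone for the single-digit millisecond latencies described above.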

Reinventing Compute with AWS Graviton

Let's look at another example of reinventing a core capability - general purpose computing. To push price-performance further for all workloads, AWS realized it had to reinvent computing from the silicon up, just like it did with storage.

In 2018, AWS became the first major cloud provider developing its own general purpose server processors. The Graviton chips deliver superior price-performance for scale-out workloads like microservices and web apps.

AWS subsequently introduced Graviton2, boosting performance 7x for a wide range of applications. It didn't stop there - the latest Graviton3 provides up to 25% higher compute performance over Graviton2, with the best price-performance in EC2. Importantly, Graviton3 also uses 60% less energy for the same performance.

AWS was the first to develop its own server processors, now in the 4th generation within 5 years. Competitors have yet to deliver their first. Over 150 Graviton-based instances are available across EC2, used by all top 100 customers, with over 50,000 total customers realizing major price-performance gains.

For example, by moving HANA to Graviton, SAP has seen up to 35% better price-performance for analytics workloads and aims to reduce carbon impact by 45%.

Introducing AWS Graviton4

AWS relentlessly innovates, as exemplified by its latest processor. Today, Selipsky is thrilled to announce AWS Graviton4 - the most powerful and energy efficient chip yet, with 50% more cores and 75% more memory bandwidth than Graviton3.

On average, Graviton4 provides 30% higher performance over Graviton3, with even greater gains on some workloads like 40% for databases and 45% for Java applications.

AWS also announced the preview of R8g instances, the first powered by Graviton4. R8g offers the best price-performance and energy efficiency for memory-intensive workloads like databases and real-time big data analytics.

Many more Graviton4-based instances are coming soon, enabling customers to continue driving tremendous gains.

Introducing AWS Trainium2

In addition to leading in general purpose computing, AWS recognized the need to reinvent silicon for machine learning workloads. This led it to develop Trainium for training and Inferentia for cost-optimized inference.

Earlier this year, AWS announced Inferentia2, providing the most cost-efficient EC2 inference with up to 4x higher throughput and 10x lower latency than the first-generation Inferentia. Adopters like Adobe, Deutsche Telekom, and Leonardo AI are seeing great early results deploying generative AI models at scale.

With growing interest in training as customers create larger models with more data, AWS knew it must keep pushing price-performance boundaries. Today, Selipsky is excited to announce AWS Trainium2 - the second-generation purpose-built training chip.

Trainium2 delivers up to 4x faster performance over the first generation, ideal for training massive foundation models with hundreds of billions or trillions of parameters. It will power the next generation of EC2 Ultra Clusters, delivering up to 65 exaflops of aggregate compute. The first Trainium2-based instances will be available next year, enabling customers to achieve unprecedented breakthroughs.

Meanwhile, competitors are still just talking about their own machine learning chips. AWS customers are actively training models and transforming their businesses with industry-leading custom silicon today.

Democratizing Generative AI with Amazon Bedrock

Selipsky next discussed Amazon's long history of innovating in AI, using it to enhance supply chain optimization, retail search, and products like Alexa. AI has reached an inflection point with generative AI, which will reinvent every application we use.

AWS is ready to help customers harness generative AI because it thinks differently about meeting their needs. It continues reinventing across all layers of the technology stack - infrastructure, services, and applications. And its intimate understanding of constant reinvention positions it well to help customers with theirs.

Organizations of all sizes are eager to move from early experimentation to real-world productivity gains with generative AI. To do so, they need the broadest, deepest capabilities, best price-performance, and enterprise-grade security - all of which require continuous AWS innovation.

For example, generative AI has three key layers:

1. Infrastructure for training and running foundation models

2. Services providing access to models and tools to build applications

3. Applications leveraging models for specific use cases

Let's explore the first layer - infrastructure. Two main workloads are training, which teaches models using large datasets, and inference, which runs models to generate outputs. These workloads demand massive compute resources, so performing them economically requires purpose-built ML infrastructure.

GPUs excel at the calculations required for demanding workloads like generative AI. AWS has collaborated with Nvidia for over 13 years to bring GPUs to the cloud, starting with the first cloud GPU instances in 2010. AWS was again the first major cloud to offer Nvidia's latest H100 GPUs in EC2 this year.

But chips alone aren't enough - customers need high-performance clusters to achieve the next level of generative AI performance. AWS's GPU instances run in Ultra Clusters, letting customers scale up to 20,000 GPUs interconnected by high-speed EFA networking. This provides up to 20 exaflops of capacity - rivaling a supercomputer.

All GPU instances launched in the past six years leverage Nitro, AWS's breakthrough system that dedicates the server's entire compute capacity to run customers' workloads indistinguishably from bare metal, but at lower cost. Nitro's security also continuously monitors and protects hardware and firmware.

The combination of EFA networking, Nitro virtualization, and tight collaboration with Nvidia on integrating AWS innovations enables AWS to deliver the most advanced generative AI infrastructure.

Expanding the AWS-Nvidia Partnership

To share more on the expanded Nvidia collaboration, Selipsky welcomed Nvidia founder and CEO Jensen Huang to the stage.

Huang expressed excitement to celebrate the two companies' long partnership. AWS was the first cloud to recognize the importance of GPU-accelerated computing, deploying the first cloud GPUs years ahead of competitors. In just the last few years, AWS and Nvidia have deployed 2 million GPUs - over 3000 exascale supercomputers' worth of capacity.

Huang and Selipsky announced several major new initiatives:
 

  • AWS will be the first cloud provider to offer Nvidia's advanced GH200 Grace Hopper Superchips, each pairing an Arm-based Grace CPU with a Hopper GPU and linked to one another via NVLink. Using Nitro, AWS can present 32 NVLink-connected Grace Hopper Superchips as a single giant virtual GPU instance, then scale them out with AWS Ultra Clusters and EFA networking.
  • The companies are collaborating to bring Nvidia's most popular AI software stacks to AWS, like robotics simulation and large language model frameworks, enabling customers to build custom AI solutions.
  • Nvidia DGX Cloud will launch on AWS, providing access to Nvidia's own AI supercomputer for developing custom enterprise AI solutions in collaboration with Nvidia's AI experts.
  • Together, AWS and Nvidia are deploying over 1 exabyte of new capacity every quarter - an incredible pace of growth and innovation.


This expanded partnership will give developers access to amazing technology to innovate with generative AI.

Enabling On-Demand Access to GPUs

To enable customers to build and train models, AWS realized they need flexible access to clustered capacity. But requirements fluctuate - you may pause training to evaluate, add data, or tweak the model.

AWS recently addressed this by launching EC2 Capacity Blocks - the first on-demand, consumption-based model for reserving GPU capacity. This eliminates having to permanently over-provision capacity.

Capacity Blocks provide guaranteed access to EFA-connected Ultra Clusters when you need them, charged by the hour. Customers can efficiently run short ML workloads without long-term commitments.

Democratizing Access to Leading Models

The infrastructure layer enables training and running models cost-effectively. The next layer provides access to the most powerful models and tools to build applications with them.

Customers want to easily experiment with different models and switch as needed. But some cloud providers restrict model choice by partnering with only certain providers.

AWS knew an open, multi-provider approach was needed. That's why it built Amazon Bedrock - the easiest way to develop and scale generative AI applications using any model.

Bedrock offers the broadest model selection, including Cohere, Anthropic, AI21, and Amazon's own Titan models. It makes customizing models with your own data easy with techniques like fine-tuning. And enterprise-grade security and privacy are built-in by design.

The excitement has been overwhelming, with over 10,000 customers adopting Bedrock across virtually every industry since its September launch.

For instance, Adidas is using Bedrock to help developers get answers to technical questions. Carrier is generating recommendations to help customers cut energy consumption and carbon emissions. Nasdaq is automating workflows to strengthen anti-financial crime efforts.

Many more like Bridgewater, Clarion, Cox Automotive, GoDaddy, and LexisNexis are already using Bedrock to transform experiences and processes by infusing AI into their businesses.
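To make the "build with any model" idea concrete, here is a minimal sketch of what invoking a foundation model through Bedrock's runtime API looks like, assuming boto3 and using the text-completion request format that Anthropic's Claude v2 used on Bedrock around the time of the keynote; the model ID and question are illustrative:

```python
# Sketch of invoking a foundation model via Amazon Bedrock's runtime API.
# The payload shape follows Claude v2's text format on Bedrock; treat the
# model ID and prompt as illustrative, not a definitive integration.
import json


def claude_body(question: str, max_tokens: int = 300) -> str:
    """Build the JSON request body Claude v2 expects on Bedrock."""
    return json.dumps({
        "prompt": f"\n\nHuman: {question}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })


def ask_claude(question: str):
    """Would send the request; needs boto3 and AWS credentials."""
    import boto3

    runtime = boto3.client("bedrock-runtime")
    response = runtime.invoke_model(
        modelId="anthropic.claude-v2",
        body=claude_body(question),
    )
    return json.loads(response["body"].read())["completion"]


payload = json.loads(claude_body("Summarize our Q3 sales."))
print(sorted(payload))  # → ['max_tokens_to_sample', 'prompt']
```

Because every Bedrock model sits behind the same `invoke_model` call, switching providers is largely a matter of changing the model ID and request body, which is the flexibility the multi-provider approach is meant to deliver.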

Expanding Collaboration with Anthropic

Anthropic, a long-time AWS customer and leading AI safety company, builds powerful yet controllable models like Claude. The companies recently announced an enhanced collaboration.

Anthropic will use AWS Trainium and Inferentia to train future generations of models. Bedrock customers gain exclusive early access to Anthropic's model customization and fine-tuning capabilities.

To discuss the partnership, Selipsky welcomed Anthropic CEO and co-founder Dario Amodei to the stage.

Amodei explained Anthropic's mission is making AI that is safe, reliable, and steerable while still highly capable. The founders were OpenAI researchers who developed key ideas behind the generative AI boom.

Anthropic's Claude excels at knowledge work like content generation, reasoning, and complex Q&A. Half of the Fortune 500 are using or testing it.

The expanded AWS-Anthropic collaboration has three components:

1. Compute infrastructure - Anthropic has used AWS for training since 2021 and is now making it its primary cloud provider.

2. Deployment to customers - Bedrock provides the perfect avenue to deliver capabilities together that couldn't be provided otherwise, like customization with proprietary data.

3. Hardware optimization - Anthropic is working with AWS to optimize Trainium and Inferentia for current and future model generations.

Some examples of the partnership in action:

  • Biomedical: Working with Pfizer to hopefully accelerate medical research and life-saving drug development
  • Legal: Created a custom model deployed via Bedrock to improve legal search for LexisNexis
  • Finance: Developing an investment analyst assistant for Bridgewater

Collaborating with AWS will help Anthropic and AWS provide powerful new experiences to customers across industries.

Taking Action with Amazon Bedrock Agents

Bedrock makes customizing generative models easy, but ultimately customers want to use them to take action - book travel, file claims, deploy software. This normally requires connecting models across multiple internal systems.

Bedrock agents solve this by executing multi-step tasks across company systems and data sources. For instance, an agent could take an order and automatically update inventory levels, invoice the customer, and notify shipping, with no engineering required.
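The order-handling flow above boils down to a plan-then-execute loop: the model decides which company systems to call and in what order, and a runtime carries the steps out. A minimal sketch of that pattern, with all function and system names hypothetical (this is not the Bedrock Agents API):

```python
# Minimal sketch of the multi-step pattern an agent automates: a planner
# (normally the model) emits a sequence of tool calls, and the runtime
# executes them in order. All names here are hypothetical illustrations.
def update_inventory(order):
    order["inventory_updated"] = True  # would call the inventory system
    return order


def invoice_customer(order):
    order["invoiced"] = True  # would call the billing system
    return order


def notify_shipping(order):
    order["shipping_notified"] = True  # would call the shipping system
    return order


# In a real agent the model plans this sequence; here it is hard-coded.
PLAN = [update_inventory, invoice_customer, notify_shipping]


def run_agent(order):
    for step in PLAN:
        order = step(order)  # each step hits a separate company system
    return order


result = run_agent({"sku": "B-1138", "qty": 2})
print(result["shipping_notified"])  # → True
```

The value of the managed agent is that the planning, the API wiring, and the orchestration loop are handled for you rather than hand-built as above.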

Today, Selipsky is pleased to announce that this game-changing capability is generally available, allowing customers to rapidly automate processes using generative AI.

Delivering Enterprise-Grade Security

For organizations to adopt generative AI, Bedrock must offer robust security. Customers' data is never used for training or exposed publicly. Communication is encrypted and secured within Amazon's network. Bedrock enforces strict access controls and compliance with standards like HIPAA and GDPR.

And today, Bedrock has achieved SOC compliance, validating its controls for security, availability, and confidentiality.

Additionally, AWS announced Bedrock Guardrails, which lets customers define content policies to safeguard their AI applications - for example, restricting certain topics. This provides consistent protection across all generative activities.

Reinventing with Amazon Q - Your Enterprise AI Assistant

Bedrock allows organizations to customize generative AI with their own data. But ultimately, you want AI to help people get work done.

That's why AWS announced Amazon Q - a new AI assistant designed for business. Q understands company data and systems, helping employees get quick, accurate, secure answers to questions.

For instance, Q has deep AWS knowledge to help developers build cloud solutions. It can pull data from services like Salesforce and Slack to answer business questions. It can even take actions through plugins to company systems, while respecting user permissions.

Q delivers this assistance while ensuring rigorous data privacy and security - unlike consumer chat tools recently banned by many CIOs. AWS engineered Q from the ground up for the enterprise.

Industry-Specific Amazon Q Integrations

AWS also unveiled vertical capabilities showcasing how Amazon Q can integrate natively into solutions for lines of business:
 

  • Amazon Q in Connect enhances customer service with automated call summarization and contact center management.
  • Amazon Q in QuickSight makes business intelligence faster by generating visualizations and summaries with natural language.

Data: The Key to AI Success

Selipsky noted that while capabilities like infrastructure and services are crucial, ultimately, data is what will differentiate organizations' AI. Customization with a company's own data is imperative.

To effectively use data with AI, it must be up-to-date, accurate, discoverable, and accessible on-demand. This requires a comprehensive cloud data strategy - storing, processing, analyzing, and governing data in the cloud.

AWS offers the broadest, deepest portfolio of data services purpose-built for any use case, like data lakes with S3, relational databases like Amazon Aurora, and analytics services like Amazon Redshift.

Breaking Down Data Silos with AWS

But accessing data across services remains challenging. Most organizations have data fragmented across databases, data warehouses, and other systems. They painstakingly move and transform data between them using ETL processes.

To solve this, AWS introduced a vision of a zero-ETL future. Rather than moving data, zero-ETL seamlessly connects it between services and applications.

Last year, AWS launched the first zero-ETL integration between Amazon Aurora and Amazon Redshift, enabling near real-time analytics on live data.

Today, Selipsky was thrilled to announce three more zero-ETL integrations with Redshift for Amazon Aurora PostgreSQL, Amazon RDS for MySQL, and Amazon DynamoDB - now in preview.

And expanding beyond databases, today AWS launched a new zero-ETL integration between DynamoDB and Amazon OpenSearch Service, now generally available.

These innovations help customers unite data for greater insights, faster than ever before.

Using Generative AI to Enhance Data Discovery

To enable innovation, organizations need to make data readily discoverable and understandable. Amazon DataZone provides a data catalog and governance to find, understand, and control data.

Today, Selipsky announced a new DataZone capability that uses generative AI to automatically suggest business metadata. This saves significant time by adding contextual descriptions to data sets with just a few clicks.

DataZone recommendations will help employees discover the right data faster for quicker insights. AWS will keep rapidly innovating data management to unleash the potential of data.

Amazon Project Kuiper - Global Satellite Broadband

Selipsky noted that behind all of AWS's innovation is a willingness to make bold long-term bets to reinvent what's possible.

One example is Project Kuiper - a network of LEO satellites to provide broadband globally, including rural and remote areas. Last month, Kuiper successfully launched its first two prototype satellites.

In addition to public internet connectivity, Kuiper will provide private secure connections, enabling customers to harness Kuiper from within the AWS cloud. Early adopter testing will begin in 2024.

Kuiper is the latest bold bet to empower customer innovation on AWS.

Reinventing Together

In closing, Selipsky reiterated AWS's mission to reinvent continuously in service of customers' reinventions. He invited attendees to reimagine what's possible and make it real.

With game-changing announcements across infrastructure, services like Amazon Bedrock and Amazon Q, data management, and space connectivity, AWS is providing the essential foundation.

Selipsky expressed excitement for the journey ahead. By working backwards from customers, AWS will keep thinking differently to transform what's possible.

Here are some exciting moments of the speech:

Adam Selipsky, CEO of AWS, warmly welcomes over 50,000 attendees and 300,000 virtual participants to the 12th annual re:Invent conference.

Bedrock allows you to easily create conversational AI agents by selecting a model, providing basic instructions, connecting to APIs and data sources - no coding required.

Lydia Fonseca, the Chief Digital and Technical Officer of Pfizer, joins the stage to discuss how her company is reinventing with generative AI built on AWS.

Selipsky emphasizes the importance of building security into the fundamental design of chatbot technology, unlike other providers who launched without essential data privacy capabilities.

Selipsky emphasizes the tedious nature of maintenance and upgrades that developers face, which often traps them in an endless loop.

Amazon Q learns everything about your business while keeping your data completely private and secure.

Summary

AWS CEO Adam Selipsky delivered the opening keynote at re:Invent 2023, announcing major new AI services and partnerships. Selipsky emphasized AWS's customer-obsessed approach and breadth of enterprise customers. He revealed an expanded partnership with Salesforce, then welcomed Nvidia's Jensen Huang to announce new AI infrastructure collaborations utilizing Nvidia's latest GPUs and Grace Hopper Superchips.

Next, Selipsky introduced Amazon Q, a new AI assistant tailored for business that connects to company data and systems. Other launches include S3 Express One Zone for ultra-fast storage, Graviton4 chips, and Trainium2 for training AI models. Selipsky stressed the importance of data, unveiling new zero-ETL integrations and automated metadata tagging in DataZone. He previewed Kuiper's secure private satellite connectivity for global access.

The keynote highlighted AWS's relentless focus on reinventing cloud infrastructure and services to meet evolving customer needs. Major themes included accelerating development with CodeWhisperer, integrating AI throughout operations with Amazon Q, and removing data silos for easier analytics. AWS is providing the tools enterprises need to build with generative AI at scale.
 
