English Subtitles for NVIDIA CEO Jensen Huang's COMPUTEX 2024 Keynote

1
00:00:00,000 --> 00:00:07,000
>> All right.

2
00:00:07,000 --> 00:00:09,000
Let's get started.

3
00:00:09,000 --> 00:00:10,000
>> Good take.

4
00:00:10,000 --> 00:00:11,000
Good take.

5
00:00:11,000 --> 00:00:12,000
>> Oh, geez.

6
00:00:12,000 --> 00:00:13,000
>> And cut.

7
00:00:13,000 --> 00:00:14,000
>> Okay.

8
00:00:14,000 --> 00:00:15,000
You guys ready?

9
00:00:15,000 --> 00:00:16,000
>> Yeah.

10
00:00:16,000 --> 00:00:17,000
>> Yeah.

11
00:00:17,000 --> 00:00:19,000
>> Everybody thinks we make GPUs.

12
00:00:19,000 --> 00:00:24,000
And there's so much more than that.

13
00:00:24,000 --> 00:00:34,000
>> This whole keynote is going to be about that.

14
00:00:34,000 --> 00:00:35,000
>> Okay.

15
00:00:35,000 --> 00:00:37,000
So we'll start at the top.

16
00:00:37,000 --> 00:00:39,000
>> Examples of the use cases.

17
00:00:39,000 --> 00:00:41,000
>> And then seeing it in action.

18
00:00:41,000 --> 00:00:43,000
That's kind of the flow.

19
00:00:43,000 --> 00:00:45,000
>> It's such a compelling story.

20
00:00:45,000 --> 00:00:47,000
>> I'm super nervous about this.

21
00:00:47,000 --> 00:00:49,000
>> Just got to get in the rhythm.

22
00:00:49,000 --> 00:00:51,000
>> We're two weeks away from it.

23
00:00:51,000 --> 00:00:53,000
You guys can go really, really major.

24
00:00:53,000 --> 00:00:55,000
>> Daily status on that animation.

25
00:00:55,000 --> 00:00:56,000
>> Can you mute?

26
00:00:56,000 --> 00:00:57,000
Because I hear myself.

27
00:00:57,000 --> 00:00:58,000
>> Sorry.

28
00:00:58,000 --> 00:01:00,000
>> What's the drop date for all the videos?

29
00:01:00,000 --> 00:01:02,000
>> It needs to be done on the 28th.

30
00:01:02,000 --> 00:01:09,000
>> Did you get all that?

31
00:01:09,000 --> 00:01:11,000
>> Safe travels, everybody.

32
00:01:11,000 --> 00:01:13,000
>> Super excited to see everyone.

33
00:01:13,000 --> 00:01:15,000
>> See you guys soon.

34
00:01:15,000 --> 00:01:16,000
>> Okay.

35
00:01:16,000 --> 00:01:17,000
>> Bye.

36
00:01:17,000 --> 00:01:18,000
>> Bye.

37
00:01:18,000 --> 00:01:22,000
>> We're basically moving as fast as the world can absorb technology.

38
00:01:22,000 --> 00:01:24,000
So we've got to leapfrog ourselves.

39
00:01:24,000 --> 00:01:33,000
>> The sponges have to figure out a way to make it pop.

40
00:01:33,000 --> 00:01:37,000
You know what I'm saying?

41
00:01:37,000 --> 00:01:38,000
You know what I'm saying?

42
00:01:38,000 --> 00:01:41,000
You want to -- yeah.

43
00:01:41,000 --> 00:01:43,000
That kind of thing.

44
00:01:43,000 --> 00:02:12,000
[ Music ]

45
00:02:12,000 --> 00:02:17,000
[ Speaking Chinese ]

46
00:02:17,000 --> 00:02:18,000
>> Please welcome to the stage,

47
00:02:18,000 --> 00:02:21,000
NVIDIA founder and CEO, Jensen Huang.

48
00:02:21,000 --> 00:02:35,000
[ Music ]

49
00:02:35,000 --> 00:02:40,000
[ Speaking Chinese ]

50
00:02:40,000 --> 00:02:43,000
>> I am very happy to be back.

51
00:02:43,000 --> 00:02:48,000
Thank you, NTU, for letting us use your stadium.

52
00:02:48,000 --> 00:02:56,000
The last time I was here, I received a degree from NTU.

53
00:02:56,000 --> 00:03:04,000
[ Applause ]

54
00:03:04,000 --> 00:03:09,000
And I gave the "run, don't walk" speech.

55
00:03:09,000 --> 00:03:12,000
And today we have a lot to cover.

56
00:03:12,000 --> 00:03:13,000
So I cannot walk.

57
00:03:13,000 --> 00:03:15,000
I must run.

58
00:03:15,000 --> 00:03:17,000
We have a lot to cover.

59
00:03:17,000 --> 00:03:19,000
I have many things to tell you.

60
00:03:19,000 --> 00:03:22,000
I'm very happy to be here in Taiwan.

61
00:03:22,000 --> 00:03:28,000
Taiwan is the home of our treasured partners.

62
00:03:28,000 --> 00:03:34,000
This is, in fact, where everything NVIDIA does begins.

63
00:03:34,000 --> 00:03:39,000
Our partners and ourselves take it to the world.

64
00:03:39,000 --> 00:03:49,000
Taiwan and our partnership has created the world's AI infrastructure.

65
00:03:49,000 --> 00:03:55,000
Today I want to talk to you about several things.

66
00:03:55,000 --> 00:04:04,000
One, what is happening and the meaning of the work that we do together?

67
00:04:04,000 --> 00:04:06,000
What is generative AI?

68
00:04:06,000 --> 00:04:14,000
What is its impact on our industry and on every industry?

69
00:04:14,000 --> 00:04:23,000
A blueprint for how we will go forward and engage this incredible opportunity.

70
00:04:23,000 --> 00:04:27,000
And what's coming next?

71
00:04:27,000 --> 00:04:34,000
Generative AI and its impact, our blueprint, and what comes next?

72
00:04:34,000 --> 00:04:38,000
These are really, really exciting times.

73
00:04:38,000 --> 00:04:47,000
A restart of our computer industry, an industry that you have forged,

74
00:04:47,000 --> 00:04:55,000
an industry that you have created, and now you're prepared for the next major journey.

75
00:04:55,000 --> 00:05:05,000
But before we start, NVIDIA lives at the intersection of computer graphics,

76
00:05:05,000 --> 00:05:10,000
simulations, and artificial intelligence.

77
00:05:10,000 --> 00:05:13,000
This is our soul.

78
00:05:13,000 --> 00:05:18,000
Everything that I show you today is simulation.

79
00:05:18,000 --> 00:05:26,000
It's math, it's science, it's computer science, it's amazing computer architecture,

80
00:05:26,000 --> 00:05:31,000
none of it's animated, and it's all homemade.

81
00:05:31,000 --> 00:05:39,000
This is NVIDIA Soul, and we put it all into this virtual world we call Omniverse.

82
00:05:39,000 --> 00:05:42,000
Please enjoy.

83
00:05:42,000 --> 00:05:47,000
[Music]

84
00:05:49,000 --> 00:05:53,000
[Music]

85
00:05:53,000 --> 00:05:56,000
[Music]

86
00:05:56,000 --> 00:05:59,000
[Music]

87
00:05:59,000 --> 00:06:02,000
[Music]

88
00:06:02,000 --> 00:06:05,000
[Music]

89
00:06:05,000 --> 00:06:08,000
[Music]

90
00:06:08,000 --> 00:06:11,000
[Music]

91
00:06:11,000 --> 00:06:14,000
[Music]

92
00:06:14,000 --> 00:06:17,000
[Music]

93
00:06:17,000 --> 00:06:20,000
[Music]

94
00:06:20,000 --> 00:06:23,000
[Music]

95
00:06:23,000 --> 00:06:26,000
[Music]

96
00:06:26,000 --> 00:06:29,000
[Music]

97
00:06:29,000 --> 00:06:32,000
[Music]

98
00:06:32,000 --> 00:06:35,000
[Music]

99
00:06:35,000 --> 00:06:38,000
[Music]

100
00:06:38,000 --> 00:06:41,000
[Music]

101
00:06:41,000 --> 00:06:44,000
[Music]

102
00:06:44,000 --> 00:06:47,000
[Music]

103
00:06:47,000 --> 00:06:50,000
[Music]

104
00:06:50,000 --> 00:06:53,000
[Music]

105
00:06:53,000 --> 00:06:56,000
[Music]

106
00:06:56,000 --> 00:06:59,000
[Music]

107
00:06:59,000 --> 00:07:02,000
[Music]

108
00:07:02,000 --> 00:07:05,000
[Music]

109
00:07:05,000 --> 00:07:08,000
[Music]

110
00:07:08,000 --> 00:07:11,000
[Music]

111
00:07:11,000 --> 00:07:14,000
[Music]

112
00:07:14,000 --> 00:07:17,000
[Music]

113
00:07:17,000 --> 00:07:20,000
[Music]

114
00:07:20,000 --> 00:07:23,000
[Music]

115
00:07:23,000 --> 00:07:26,000
[Music]

116
00:07:26,000 --> 00:07:29,000
[Music]

117
00:07:29,000 --> 00:07:32,000
[Music]

118
00:07:32,000 --> 00:07:35,000
[Music]

119
00:07:35,000 --> 00:07:38,000
[Music]

120
00:07:38,000 --> 00:07:41,000
[Applause]

121
00:07:41,000 --> 00:07:44,000
[Applause]

122
00:07:44,000 --> 00:07:47,000
[Applause]

123
00:07:47,000 --> 00:07:50,000
[Music]

124
00:07:50,000 --> 00:07:53,000
[Applause]

125
00:07:53,000 --> 00:07:56,000
[Applause]

126
00:07:56,000 --> 00:07:59,000
[Music]

127
00:07:59,000 --> 00:08:03,000
I want to speak to you in Chinese, but I have so much to tell you,

128
00:08:03,000 --> 00:08:07,000
I have to think too hard to speak Chinese.

129
00:08:07,000 --> 00:08:11,000
So I have to speak to you in English.

130
00:08:11,000 --> 00:08:17,000
At the foundation of everything that you saw were two fundamental technologies,

131
00:08:17,000 --> 00:08:24,000
accelerated computing and artificial intelligence, running inside the omniverse.

132
00:08:24,000 --> 00:08:32,000
Those two technologies, those two fundamental forces of computing,

133
00:08:32,000 --> 00:08:36,000
are going to reshape the computer industry.

134
00:08:36,000 --> 00:08:42,000
The computer industry is now some 60 years old.

135
00:08:42,000 --> 00:08:51,000
In a lot of ways, everything that we do today was invented the year after my birth, in 1964.

136
00:08:51,000 --> 00:08:58,000
The IBM System/360 introduced central processing units, general purpose computing,

137
00:08:58,000 --> 00:09:04,000
the separation of hardware and software through an operating system,

138
00:09:04,000 --> 00:09:13,000
multitasking, I/O subsystems, DMA, all kinds of technologies that we use today,

139
00:09:13,000 --> 00:09:19,000
architectural compatibility, backwards compatibility, family compatibility,

140
00:09:19,000 --> 00:09:25,000
all of the things that we know today about computing largely described in 1964.

141
00:09:26,000 --> 00:09:30,000
Of course, the PC revolution democratized computing

142
00:09:30,000 --> 00:09:33,000
and put it in the hands and the houses of everybody.

143
00:09:33,000 --> 00:09:40,000
And then in 2007, the iPhone introduced mobile computing and put the computer in our pocket.

144
00:09:40,000 --> 00:09:47,000
Ever since, everything is connected and running all the time through the mobile cloud.

145
00:09:47,000 --> 00:09:55,000
This last 60 years, we saw several, just several, not that many actually,

146
00:09:55,000 --> 00:10:05,000
two or three major technology shifts, two or three tectonic shifts in computing,

147
00:10:05,000 --> 00:10:10,000
where everything changed and we're about to see that happen again.

148
00:10:10,000 --> 00:10:13,000
There are two fundamental things that are happening.

149
00:10:13,000 --> 00:10:20,000
The first is that the processor, the engine by which the computer industry runs on,

150
00:10:20,000 --> 00:10:26,000
the central processing unit, the performance scaling has slowed tremendously.

151
00:10:26,000 --> 00:10:35,000
And yet, the amount of computation we have to do is still doubling very quickly, exponentially.

152
00:10:35,000 --> 00:10:43,000
If processing requirement, if the data that we need to process continues to scale exponentially,

153
00:10:43,000 --> 00:10:49,000
but performance does not, we will experience computation inflation.

154
00:10:49,000 --> 00:10:52,000
And in fact, we're seeing that right now as we speak.

155
00:10:52,000 --> 00:10:57,000
The amount of data center power that's used all over the world is growing quite substantially.

156
00:10:57,000 --> 00:11:00,000
The cost of computing is growing.

157
00:11:00,000 --> 00:11:03,000
We are seeing computation inflation.

158
00:11:03,000 --> 00:11:07,000
This, of course, cannot continue.

159
00:11:07,000 --> 00:11:16,000
The data is going to continue to increase exponentially and CPU performance scaling will never return.

160
00:11:16,000 --> 00:11:22,000
There is a better way. For almost two decades now, we've been working on accelerated computing.

161
00:11:22,000 --> 00:11:32,000
CUDA augments a CPU, offloads and accelerates the work that a specialized processor can do much, much better.

162
00:11:32,000 --> 00:11:37,000
In fact, the performance is so extraordinary that it is very clear now,

163
00:11:37,000 --> 00:11:45,000
as CPU scaling has slowed and substantially stopped, we should accelerate everything.

164
00:11:45,000 --> 00:11:51,000
I predict that every application that is processing intensive will be accelerated

165
00:11:51,000 --> 00:11:56,000
and surely every data center will be accelerated in the near future.

166
00:11:56,000 --> 00:12:00,000
Now, accelerated computing is very sensible. It's very common sense.

167
00:12:00,000 --> 00:12:07,000
If you take a look at an application, and here the 100T means 100 units of time.

168
00:12:07,000 --> 00:12:12,000
It could be 100 seconds, it could be 100 hours, and in many cases, as you know,

169
00:12:12,000 --> 00:12:17,000
we're now working on artificial intelligence applications that run for 100 days.

170
00:12:17,000 --> 00:12:28,000
The 1T is code that requires sequential processing, where single-threaded CPUs are really quite essential.

171
00:12:28,000 --> 00:12:36,000
Operating systems, control logic, really essential to have one instruction executed after another instruction.

172
00:12:36,000 --> 00:12:40,000
However, there are many algorithms, computer graphics is one,

173
00:12:40,000 --> 00:12:43,000
that you can operate completely in parallel.

174
00:12:43,000 --> 00:12:50,000
Computer graphics, image processing, physics simulations, combinatorial optimizations,

175
00:12:50,000 --> 00:12:58,000
graph processing, database processing, and of course, the very famous linear algebra of deep learning.

176
00:12:58,000 --> 00:13:05,000
There are many types of algorithms that are very conducive to acceleration through parallel processing.

177
00:13:05,000 --> 00:13:13,000
So we invented an architecture to do that. By adding the GPU to the CPU,

178
00:13:13,000 --> 00:13:19,000
the specialized processor can take something that takes a great deal of time

179
00:13:19,000 --> 00:13:24,000
and accelerate it down to something that is incredibly fast.

180
00:13:24,000 --> 00:13:29,000
And because the two processors can work side by side, they're both autonomous and they're both separate,

181
00:13:29,000 --> 00:13:37,000
independent that is, we can accelerate what used to take 100 units of time down to one unit of time.
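
The arithmetic behind that claim is Amdahl-style: leave the 1 unit of sequential work on the CPU and accelerate the other 99 units on the GPU. A minimal sketch, with illustrative numbers rather than a benchmark:

```python
# Amdahl-style estimate of the 100T -> 1T offload described above.
# Illustrative numbers only: 1 unit of inherently sequential CPU work,
# 99 parallel units offloaded to a GPU that runs them 100x faster.

def effective_time(total=100.0, sequential=1.0, gpu_speedup=100.0):
    parallel = total - sequential
    return sequential + parallel / gpu_speedup

t = effective_time()
print(f"{t:.2f} units, speedup {100.0 / t:.0f}x")
# -> 1.99 units, ~50x; as gpu_speedup grows, total time approaches
#    the 1 sequential unit, i.e. the ~100x figure the talk describes.
```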

182
00:13:37,000 --> 00:13:43,000
Well, the speedup is incredible. It almost sounds unbelievable.

183
00:13:43,000 --> 00:13:49,000
It almost sounds unbelievable, but today I'll demonstrate many examples for you.

184
00:13:49,000 --> 00:13:58,000
The benefit is quite extraordinary. A hundred times speedup, but you only increase the power by about a factor of three.

185
00:13:58,000 --> 00:14:05,000
And you increase the cost by only about 50%. We do this all the time in the PC industry.

186
00:14:05,000 --> 00:14:14,000
We add a GPU, a $500 GPU, GeForce GPU, to a $1,000 PC, and the performance increases tremendously.

187
00:14:14,000 --> 00:14:19,000
We do this in a data center, a billion dollar data center.

188
00:14:19,000 --> 00:14:26,000
We add $500 million worth of GPUs, and all of a sudden it becomes an AI factory.

189
00:14:26,000 --> 00:14:34,000
This is happening all over the world today. Well, the savings are quite extraordinary.

190
00:14:34,000 --> 00:14:44,000
You're getting 60 times performance per dollar, a hundred times speedup, you only increase your power by 3x,

191
00:14:44,000 --> 00:14:52,000
a hundred times speedup, you only increase your cost by 1.5x. The savings are incredible.
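
Those ratios are easy to check. A two-line sanity check of the talk's own numbers (100x speedup, about 3x power, about 1.5x cost):

```python
# Sanity check of the talk's numbers: 100x speedup, ~3x power, ~1.5x cost.
speedup, power_x, cost_x = 100.0, 3.0, 1.5

print(f"performance per dollar: {speedup / cost_x:.0f}x")   # ~67x, quoted as 60x
print(f"performance per watt:   {speedup / power_x:.0f}x")  # ~33x
print(f"cost for the same work: {cost_x / speedup:.1%}")    # 1.5%, i.e. ~98% savings
```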

192
00:14:52,000 --> 00:15:04,000
The savings are measured in dollars. It is very clear that many, many companies spend hundreds of millions of dollars processing data in the cloud.

193
00:15:04,000 --> 00:15:12,000
If it was accelerated, it is not unexpected that you could save hundreds of millions of dollars.

194
00:15:12,000 --> 00:15:16,000
Now, why is that? Well, the reason for that is very clear.

195
00:15:16,000 --> 00:15:22,000
We've been experiencing inflation for so long in general purpose computing.

196
00:15:22,000 --> 00:15:33,000
Now that we finally came to--we're finally determined to accelerate, there's an enormous amount of captured loss that we can now regain.

197
00:15:33,000 --> 00:15:42,000
A great deal of captured, retained waste that we can now relieve out of the system, and that will translate into savings.

198
00:15:42,000 --> 00:15:59,000
Savings in money, savings in energy. And that's the reason why you've heard me say, "The more you buy, the more you save."

199
00:15:59,000 --> 00:16:07,000
And now I've shown you the mathematics. It is not accurate, but it is correct.

200
00:16:07,000 --> 00:16:15,000
Okay, that's called CEO math. CEO math is not accurate, but it is correct. The more you buy, the more you save.

201
00:16:15,000 --> 00:16:20,000
Well, accelerated computing does deliver extraordinary results, but it is not easy.

202
00:16:20,000 --> 00:16:25,000
Why is it that it saves so much money, but people haven't done it for so long?

203
00:16:25,000 --> 00:16:29,000
The reason for that is because it's incredibly hard.

204
00:16:29,000 --> 00:16:39,000
There is no such thing as software that you can just run through a C compiler and all of a sudden that application runs 100 times faster.

205
00:16:39,000 --> 00:16:46,000
That is not even logical. If it was possible to do that, they would have just changed the CPU to do that.

206
00:16:46,000 --> 00:16:50,000
You, in fact, have to rewrite the software. That's the hard part.

207
00:16:50,000 --> 00:17:01,000
The software has to be completely rewritten so that you could refactor, re-express the algorithms that were written for a CPU

208
00:17:01,000 --> 00:17:06,000
so that it could be accelerated, offloaded, accelerated, and run in parallel.

209
00:17:06,000 --> 00:17:15,000
That computer science exercise is insanely hard. Well, we've made it easy for the world over the last 20 years.

210
00:17:15,000 --> 00:17:21,000
Of course, the very famous cuDNN, the deep learning library that processes neural networks.

211
00:17:21,000 --> 00:17:27,000
We have a library for AI physics that you could use for fluid dynamics and many other applications

212
00:17:27,000 --> 00:17:31,000
where the neural network has to obey the laws of physics.

213
00:17:31,000 --> 00:17:37,000
We have a great new library called Aerial that is a CUDA-accelerated 5G radio

214
00:17:37,000 --> 00:17:46,000
so that we can software define and accelerate the telecommunications networks the way that we've software defined

215
00:17:46,000 --> 00:17:50,000
the world's networking internet.

216
00:17:50,000 --> 00:17:58,000
The ability for us to accelerate that allows us to turn all of telecom into essentially the same type of platform,

217
00:17:58,000 --> 00:18:01,000
a computing platform, just like we have in the cloud.

218
00:18:01,000 --> 00:18:12,000
cuLitho is a computational lithography platform that allows us to process the most computationally intensive parts of chip manufacturing,

219
00:18:12,000 --> 00:18:14,000
making the mask.

220
00:18:14,000 --> 00:18:21,000
TSMC is in the process of going to production with cuLitho, saving enormous amounts of energy and enormous amounts of money,

221
00:18:21,000 --> 00:18:30,000
but the goal for TSMC is to accelerate their stack so that they're prepared for even further advances in algorithms

222
00:18:30,000 --> 00:18:36,000
and more computation for deeper and deeper, narrower and narrower transistors.

223
00:18:36,000 --> 00:18:38,000
Parabricks is our gene sequencing library.

224
00:18:38,000 --> 00:18:42,000
It is the highest throughput library in the world for gene sequencing.

225
00:18:42,000 --> 00:18:51,000
cuOpt is an incredible library for combinatorial optimization, route planning optimization, the traveling salesman problem.

226
00:18:51,000 --> 00:18:53,000
Incredibly complicated.

227
00:18:53,000 --> 00:19:00,000
People have -- scientists have largely concluded that you needed a quantum computer to do that.

228
00:19:00,000 --> 00:19:04,000
We created an algorithm that runs on accelerated computing that runs lightning fast.

229
00:19:04,000 --> 00:19:06,000
23 world records.

230
00:19:06,000 --> 00:19:10,000
We hold every single major world record today.
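
To see why this problem class is so hard, here is a toy brute-force traveling salesman in plain Python; this illustrates the combinatorial blow-up, not the cuOpt API:

```python
# Toy brute-force traveling salesman: the search space grows factorially,
# which is why accelerated (or heuristic) solvers like cuOpt matter.
# Pure-Python illustration, not the cuOpt API.
from itertools import permutations
import math

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 3)}

def dist(p, q):
    return math.dist(cities[p], cities[q])

def tour_length(order):
    return sum(dist(order[i], order[(i + 1) % len(order)]) for i in range(len(order)))

start, rest = "A", [c for c in cities if c != "A"]
best = min(((start,) + p for p in permutations(rest)), key=tour_length)
print(best, round(tour_length(best), 2))
# 5 cities -> 4! = 24 tours from a fixed start; 20 cities -> 19! ~ 1.2e17,
# hopeless to enumerate, hence specialized solvers.
```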

231
00:19:10,000 --> 00:19:15,000
cuQuantum is an emulation system for a quantum computer.

232
00:19:15,000 --> 00:19:18,000
If you want to design a quantum computer, you need a simulator to do so.

233
00:19:18,000 --> 00:19:22,000
If you want to design quantum algorithms, you need a quantum emulator to do so.

234
00:19:22,000 --> 00:19:23,000
How would you do that?

235
00:19:23,000 --> 00:19:30,000
How would you design these quantum computers, create these quantum algorithms, if the quantum computer doesn't exist?

236
00:19:30,000 --> 00:19:36,000
Well, you use the fastest computer in the world that exists today, and we call it, of course, NVIDIA CUDA.

237
00:19:36,000 --> 00:19:41,000
And on that, we have an emulator that simulates quantum computers.

238
00:19:41,000 --> 00:19:47,000
It is used by several hundred thousand researchers around the world.

239
00:19:47,000 --> 00:19:51,000
It is integrated into all the leading frameworks for quantum computing,

240
00:19:51,000 --> 00:19:55,000
and it is used in scientific supercomputing centers all over the world.
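
What such an emulator does underneath is large-scale linear algebra on a state vector. A one-qubit NumPy toy (not the cuQuantum API) makes the idea concrete:

```python
# Minimal statevector emulation of one qubit: the kind of linear algebra a
# quantum simulator such as cuQuantum accelerates at vastly larger scale.
# NumPy toy only, not the cuQuantum API.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

state = np.array([1.0, 0.0])   # |0>
state = H @ state              # put the qubit in superposition

probs = np.abs(state) ** 2
print(probs)  # [0.5 0.5]: equal chance of measuring 0 or 1
# An n-qubit state needs a 2**n-element vector, which is why emulation
# demands the fastest classical computers available.
```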

241
00:19:55,000 --> 00:20:01,000
cuDF is an unbelievable library for data processing.

242
00:20:01,000 --> 00:20:06,000
Data processing consumes the vast majority of cloud spend today.

243
00:20:06,000 --> 00:20:08,000
All of it should be accelerated.

244
00:20:08,000 --> 00:20:13,000
cuDF accelerates the major libraries used in the world.

245
00:20:13,000 --> 00:20:22,000
Spark, many of you probably use Spark in your companies, Pandas, a new one called Polars,

246
00:20:22,000 --> 00:20:29,000
and of course, NetworkX, which is a graph processing database library.

247
00:20:29,000 --> 00:20:31,000
And so these are just some examples.

248
00:20:31,000 --> 00:20:32,000
There are so many more.

249
00:20:32,000 --> 00:20:40,000
Each one of them had to be created so that we could enable the ecosystem to take advantage of accelerated computing.

250
00:20:40,000 --> 00:20:46,000
If we hadn't created cuDNN, CUDA alone wouldn't have been possible

251
00:20:46,000 --> 00:20:49,000
for all of the deep learning scientists around the world to use,

252
00:20:49,000 --> 00:20:57,000
because CUDA and the algorithms that are used in TensorFlow and PyTorch, the deep learning algorithms,

253
00:20:57,000 --> 00:21:00,000
the separation is too far apart.

254
00:21:00,000 --> 00:21:03,000
It's almost like trying to do computer graphics without OpenGL.

255
00:21:03,000 --> 00:21:08,000
It's almost like doing data processing without SQL.

256
00:21:08,000 --> 00:21:12,000
These domain-specific libraries are really the treasure of our company.

257
00:21:12,000 --> 00:21:16,000
We have 350 of them.

258
00:21:16,000 --> 00:21:23,000
These libraries are what it takes, and what has made it possible for us to open up so many markets.

259
00:21:23,000 --> 00:21:26,000
I'll show you some other examples today.

260
00:21:26,000 --> 00:21:33,000
Well, just last week, Google announced that they put cuDF in the cloud and accelerate Pandas.

261
00:21:33,000 --> 00:21:37,000
Pandas is the most popular data science library in the world.

262
00:21:37,000 --> 00:21:40,000
Many of you in here probably already use Pandas.

263
00:21:40,000 --> 00:21:47,000
It's used by 10 million data scientists in the world, downloaded 170 million times each month.

264
00:21:47,000 --> 00:21:52,000
It is the Excel, that is, the spreadsheet of data scientists.

265
00:21:52,000 --> 00:21:58,000
Well, with just one click, you can now use Pandas in Colab,

266
00:21:58,000 --> 00:22:03,000
which is Google's cloud data science platform, accelerated by cuDF.

267
00:22:03,000 --> 00:22:06,000
The speed up is really incredible. Let's take a look.

268
00:22:06,000 --> 00:22:18,000
[Music]

269
00:22:18,000 --> 00:22:21,000
That was a great demo, right? It didn't take long.

270
00:22:21,000 --> 00:22:23,000
[Laughter]

271
00:22:23,000 --> 00:22:28,000
[Applause]

272
00:22:28,000 --> 00:22:34,000
When you accelerate data processing that fast, demos don't take long.
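
For reference, the "one click" maps to a couple of lines of code: cuDF's pandas accelerator mode. This sketch assumes a CUDA GPU and the cudf package are available, as on a GPU-backed Colab runtime:

```python
# Sketch of cuDF's pandas accelerator mode. Assumes a CUDA GPU and the
# cudf package are installed (as on a GPU-backed Colab runtime).
# In a notebook you would run the magic instead:  %load_ext cudf.pandas
import cudf.pandas
cudf.pandas.install()   # from here on, pandas calls run on the GPU when possible

import pandas as pd

df = pd.DataFrame({"key": ["a", "b", "a", "c"] * 1_000_000,
                   "val": range(4_000_000)})
print(df.groupby("key")["val"].mean())  # unchanged pandas code, GPU-accelerated
```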

273
00:22:34,000 --> 00:22:41,000
Okay, well, CUDA has now achieved what people call a tipping point.

274
00:22:41,000 --> 00:22:45,000
But it's even better than that. CUDA has now achieved a virtuous cycle.

275
00:22:45,000 --> 00:22:47,000
This rarely happens.

276
00:22:47,000 --> 00:22:52,000
If you look at history and all the computing architecture, computing platforms,

277
00:22:52,000 --> 00:22:59,000
in the case of microprocessor CPUs, it has been here for 60 years.

278
00:22:59,000 --> 00:23:03,000
It has not been changed for 60 years.

279
00:23:03,000 --> 00:23:08,000
At this level, this way of doing computing, accelerated computing,

280
00:23:08,000 --> 00:23:13,000
has been around, has -- creating a new platform is extremely hard

281
00:23:13,000 --> 00:23:15,000
because it's a chicken and egg problem.

282
00:23:15,000 --> 00:23:20,000
If there are no developers that use your platform,

283
00:23:20,000 --> 00:23:22,000
then of course there will be no users.

284
00:23:22,000 --> 00:23:25,000
But if there are no users, there is no install base.

285
00:23:25,000 --> 00:23:28,000
If there is no install base, developers aren't interested in it.

286
00:23:28,000 --> 00:23:32,000
Developers want to write software for a large install base.

287
00:23:32,000 --> 00:23:35,000
A large install base requires a lot of applications

288
00:23:35,000 --> 00:23:38,000
so that users would create that install base.

289
00:23:38,000 --> 00:23:43,000
This chicken or the egg problem has rarely been broken

290
00:23:43,000 --> 00:23:47,000
and has taken us now 20 years, one domain library after another,

291
00:23:47,000 --> 00:23:49,000
one acceleration library after another,

292
00:23:49,000 --> 00:23:53,000
and now we have five million developers around the world.

293
00:23:53,000 --> 00:23:58,000
We serve every single industry from healthcare, financial services,

294
00:23:58,000 --> 00:24:00,000
of course the computer industry, automotive industry,

295
00:24:00,000 --> 00:24:02,000
just about every major industry in the world,

296
00:24:02,000 --> 00:24:06,000
just about every field of science.

297
00:24:06,000 --> 00:24:10,000
Because there are so many customers for our architecture,

298
00:24:10,000 --> 00:24:15,000
OEMs and cloud service providers are interested in building our systems.

299
00:24:15,000 --> 00:24:18,000
System makers, amazing system makers like the ones here in Taiwan

300
00:24:18,000 --> 00:24:20,000
are interested in building our systems,

301
00:24:20,000 --> 00:24:24,000
which then takes and offers more systems to the market,

302
00:24:24,000 --> 00:24:28,000
which of course creates greater opportunity for us,

303
00:24:28,000 --> 00:24:31,000
which allows us to increase our scale, R&D scale,

304
00:24:31,000 --> 00:24:34,000
which speeds up the application even more.

305
00:24:34,000 --> 00:24:38,000
Well, every single time we speed up the application,

306
00:24:38,000 --> 00:24:41,000
the cost of computing goes down.

307
00:24:41,000 --> 00:24:44,000
This is that slide I was showing you earlier.

308
00:24:44,000 --> 00:24:50,000
A 100x speedup translates to 96%, 97%, 98% savings.

309
00:24:50,000 --> 00:24:55,000
And so when we go from 100x speedup to 200x speedup to 1,000x speedup,

310
00:24:55,000 --> 00:25:00,000
the savings, the marginal cost of computing continues to fall.

311
00:25:00,000 --> 00:25:09,000
Well, of course, we believe that by reducing the cost of computing incredibly,

312
00:25:09,000 --> 00:25:15,000
the market developers, scientists, inventors will continue to discover new algorithms

313
00:25:15,000 --> 00:25:19,000
that consume more and more and more computing

314
00:25:19,000 --> 00:25:27,000
so that one day something happens, that a phase shift happens,

315
00:25:27,000 --> 00:25:30,000
that the marginal cost of computing is so low

316
00:25:30,000 --> 00:25:34,000
that a new way of using computers emerge.

317
00:25:34,000 --> 00:25:37,000
In fact, that's what we're seeing now.

318
00:25:37,000 --> 00:25:40,000
Over the years, we have driven down the marginal cost of computing

319
00:25:40,000 --> 00:25:45,000
in the last 10 years in one particular algorithm by a million times.

320
00:25:45,000 --> 00:25:54,000
Well, as a result, it is now very logical and very common sense

321
00:25:54,000 --> 00:25:59,000
to train large language models with all of the data on the Internet.

322
00:25:59,000 --> 00:26:02,000
Nobody thinks twice.

323
00:26:02,000 --> 00:26:08,000
This idea that you could create a computer that could process so much data

324
00:26:08,000 --> 00:26:13,000
to write its own software, the emergence of artificial intelligence,

325
00:26:13,000 --> 00:26:16,000
was made possible because of this complete belief

326
00:26:16,000 --> 00:26:19,000
that if we made computing cheaper and cheaper and cheaper,

327
00:26:19,000 --> 00:26:21,000
somebody's going to find a great use.

328
00:26:21,000 --> 00:26:25,000
Well, today, CUDA has achieved a virtuous cycle,

329
00:26:25,000 --> 00:26:30,000
install base is growing, computing cost is coming down,

330
00:26:30,000 --> 00:26:34,000
which causes more developers to come up with more ideas,

331
00:26:34,000 --> 00:26:37,000
which drives more demand,

332
00:26:37,000 --> 00:26:40,000
and now we're in the beginning of something very, very important.

333
00:26:40,000 --> 00:26:45,000
But before I show you that, I'm going to show you what is not possible

334
00:26:45,000 --> 00:26:49,000
if not for the fact that we created CUDA,

335
00:26:49,000 --> 00:26:52,000
that we created the modern version of generative--

336
00:26:52,000 --> 00:26:56,000
the modern big bang of AI, generative AI.

337
00:26:56,000 --> 00:26:58,000
What I'm about to show you would not be possible.

338
00:26:58,000 --> 00:27:01,000
This is Earth-2.

339
00:27:01,000 --> 00:27:07,000
The idea that we would create a digital twin of the Earth,

340
00:27:07,000 --> 00:27:11,000
that we would go and simulate the Earth

341
00:27:11,000 --> 00:27:15,000
so that we could predict the future of our planet

342
00:27:15,000 --> 00:27:19,000
to better avert disasters

343
00:27:19,000 --> 00:27:21,000
or better understand the impact of climate change

344
00:27:21,000 --> 00:27:23,000
so that we can adapt better,

345
00:27:23,000 --> 00:27:25,000
so that we could change our habits now.

346
00:27:25,000 --> 00:27:28,000
This digital twin of Earth

347
00:27:28,000 --> 00:27:31,000
is probably one of the most ambitious projects

348
00:27:31,000 --> 00:27:33,000
that the world's ever undertaken,

349
00:27:33,000 --> 00:27:36,000
and we're taking large steps every single year,

350
00:27:36,000 --> 00:27:38,000
and I'll show you results every single year,

351
00:27:38,000 --> 00:27:40,000
but this year we made some great breakthroughs.

352
00:27:40,000 --> 00:27:42,000
Let's take a look.

353
00:27:42,000 --> 00:27:55,000
On Monday, the storm will veer north again and approach Taiwan.

354
00:27:55,000 --> 00:27:58,000
There are big uncertainties regarding its path.

355
00:27:58,000 --> 00:28:02,000
Different paths will have different levels of impact on Taiwan.

356
00:28:02,000 --> 00:31:38,000
[ Speaking Chinese ]

437
00:31:38,000 --> 00:31:44,000
That is a miracle. That is a miracle indeed.

438
00:31:44,000 --> 00:31:48,000
However, in 2012, something very important happened.

439
00:31:48,000 --> 00:31:51,000
Because of our dedication to advancing CUDA,

440
00:31:51,000 --> 00:31:56,000
because of our dedication to continuously improving the performance and driving the cost down,

441
00:31:56,000 --> 00:32:01,000
researchers discovered, AI researchers discovered CUDA in 2012.

442
00:32:01,000 --> 00:32:07,000
That was NVIDIA's first contact with AI.

443
00:32:07,000 --> 00:32:09,000
This was a very important day.

444
00:32:09,000 --> 00:32:14,000
We had the good wisdom to work with the scientists

445
00:32:14,000 --> 00:32:17,000
to make it possible for deep learning to happen.

446
00:32:17,000 --> 00:32:21,000
And AlexNet achieved, of course, a tremendous computer vision breakthrough.

447
00:32:21,000 --> 00:32:26,000
But the great wisdom was to take a step back and understand

448
00:32:26,000 --> 00:32:31,000
what was the background, what is the foundation of deep learning,

449
00:32:31,000 --> 00:32:36,000
what is its long-term impact, what is its potential.

450
00:32:36,000 --> 00:32:40,000
And we realized that this technology has great potential to scale.

451
00:32:40,000 --> 00:32:45,000
An algorithm that was invented and discovered decades ago,

452
00:32:45,000 --> 00:32:50,000
all of a sudden, because of more data, larger networks,

453
00:32:50,000 --> 00:32:54,000
and very importantly, a lot more compute,

454
00:32:54,000 --> 00:33:01,000
all of a sudden, deep learning was able to achieve what no human algorithm was able to.

455
00:33:01,000 --> 00:33:05,000
Now imagine if we were to scale up the architecture even more.

456
00:33:05,000 --> 00:33:08,000
Larger networks, more data, and more compute.

457
00:33:08,000 --> 00:33:10,000
What could be possible?

458
00:33:10,000 --> 00:33:14,000
So we dedicated ourselves to reinvent everything.

459
00:33:14,000 --> 00:33:19,000
After 2012, we changed the architecture of our GPU to add Tensor Cores.

460
00:33:19,000 --> 00:33:21,000
We invented NVLink.

461
00:33:21,000 --> 00:33:24,000
That was 10 years ago now.

462
00:33:24,000 --> 00:33:31,000
cuDNN, TensorRT, NCCL.

463
00:33:31,000 --> 00:33:37,000
We bought Mellanox, TensorRT-LLM, the Triton Inference Server.

464
00:33:37,000 --> 00:33:43,000
And all of it came together on a brand new computer nobody understood.

465
00:33:43,000 --> 00:33:47,000
Nobody asked for it, nobody understood it,

466
00:33:47,000 --> 00:33:50,000
and in fact, I was certain nobody wanted to buy it.

467
00:33:50,000 --> 00:33:57,000
And so we announced it at GTC, and OpenAI, a small company in San Francisco, saw it.

468
00:33:57,000 --> 00:34:01,000
And they asked me to deliver one to them.

469
00:34:01,000 --> 00:34:10,000
I delivered the first DGX, the world's first AI supercomputer, to OpenAI in 2016.

470
00:34:10,000 --> 00:34:15,000
Well, after that, we continued to scale.

471
00:34:15,000 --> 00:34:19,000
From one AI supercomputer, one AI appliance,

472
00:34:19,000 --> 00:34:23,000
we scaled it up to large supercomputers, even larger.

473
00:34:23,000 --> 00:34:28,000
By 2017, the world discovered transformers

474
00:34:28,000 --> 00:34:32,000
so that we could train enormous amounts of data

475
00:34:32,000 --> 00:34:38,000
and recognize and learn patterns that are sequential over large spans of time.

476
00:34:38,000 --> 00:34:42,000
It is now possible for us to train these large language models to understand

477
00:34:42,000 --> 00:34:47,000
and achieve a breakthrough in natural language understanding.

478
00:34:47,000 --> 00:34:50,000
And we kept going after that.

479
00:34:50,000 --> 00:34:52,000
We built even larger ones.

480
00:34:52,000 --> 00:34:57,000
And then in November 2022, trained on thousands,

481
00:34:57,000 --> 00:35:03,000
tens of thousands of NVIDIA GPUs in a very large AI supercomputer,

482
00:35:03,000 --> 00:35:06,000
OpenAI announced ChatGPT.

483
00:35:06,000 --> 00:35:11,000
One million users after five days.

484
00:35:11,000 --> 00:35:15,000
One million after five days, a hundred million after two months.

485
00:35:15,000 --> 00:35:18,000
The fastest growing application in history.

486
00:35:18,000 --> 00:35:21,000
And the reason for that is very simple.

487
00:35:21,000 --> 00:35:26,000
It is just so easy to use and it was so magical to use.

488
00:35:26,000 --> 00:35:30,000
To be able to interact with a computer like it's human

489
00:35:30,000 --> 00:35:33,000
instead of being clear about what you want.

490
00:35:33,000 --> 00:35:36,000
It's like the computer understands your meaning.

491
00:35:36,000 --> 00:35:39,000
It understands your intention.

492
00:35:39,000 --> 00:35:44,000
Oh, I think here it was asked about the closest night market.

493
00:35:44,000 --> 00:35:48,000
As you know, the night market is very important to me.

494
00:35:48,000 --> 00:35:50,000
[laughter]

495
00:35:50,000 --> 00:35:55,000
So when I was young, I think I was four and a half years old,

496
00:35:55,000 --> 00:35:59,000
I used to love going to the night market because I just love watching people.

497
00:35:59,000 --> 00:36:04,000
And so my parents used to take us to the night market.

498
00:36:04,000 --> 00:36:12,000
[speaking Chinese]

499
00:36:12,000 --> 00:36:17,000
And I love going.

500
00:36:17,000 --> 00:36:23,000
And one day, you guys might see that I have a large scar on my face.

501
00:36:23,000 --> 00:36:27,000
My face was cut because somebody was washing their knife and I was a little kid.

502
00:36:27,000 --> 00:36:33,000
But my memories of the night market are so deep because of that.

503
00:36:33,000 --> 00:36:36,000
And I still love going to the night market.

504
00:36:36,000 --> 00:36:38,000
And I just need to tell you guys this.

505
00:36:38,000 --> 00:36:44,000
The Tong Hua night market is really good because there's a lady,

506
00:36:44,000 --> 00:36:48,000
she's been working there for 43 years.

507
00:36:48,000 --> 00:36:53,000
She's the fruit lady and it's in the middle between the two.

508
00:36:53,000 --> 00:36:55,000
Go find her.

509
00:36:55,000 --> 00:36:59,000
[speaking Chinese]

510
00:36:59,000 --> 00:37:04,000
[applause]

511
00:37:04,000 --> 00:37:07,000
She's really terrific.

512
00:37:07,000 --> 00:37:11,000
I think it would be funny after this all of you go to see her.

513
00:37:11,000 --> 00:37:14,000
Every year she's doing better and better.

514
00:37:14,000 --> 00:37:19,000
Her cart has improved and I just love watching her succeed.

515
00:37:19,000 --> 00:37:27,000
Anyways, ChatGPT came along and something is very important in this slide.

516
00:37:27,000 --> 00:37:33,000
Here, let me show you something.

517
00:37:33,000 --> 00:37:39,000
This slide and this slide.

518
00:37:39,000 --> 00:37:43,000
The fundamental difference is this.

519
00:37:43,000 --> 00:37:54,000
Until ChatGPT revealed it to the world, AI was all about perception.

520
00:37:54,000 --> 00:38:01,000
Natural language understanding, computer vision, speech recognition.

521
00:38:01,000 --> 00:38:05,000
It's all about perception and detection.

522
00:38:05,000 --> 00:38:11,000
This was the first time the world saw a generative AI.

523
00:38:11,000 --> 00:38:18,000
It produced tokens, one token at a time, and those tokens were words.

524
00:38:18,000 --> 00:38:25,000
Some of the tokens, of course, could now be images or charts or tables,

525
00:38:25,000 --> 00:38:30,000
songs, words, speech, videos.

526
00:38:30,000 --> 00:38:35,000
Those tokens could be anything, anything that you can learn the meaning of.

527
00:38:35,000 --> 00:38:42,000
It could be tokens of chemicals, tokens of proteins, genes.

528
00:38:42,000 --> 00:38:49,000
You saw earlier in Earth-2, we were generating tokens of the weather.

529
00:38:49,000 --> 00:38:52,000
We can learn physics.

530
00:38:52,000 --> 00:38:55,000
If you can learn physics, you could teach an AI model physics.

531
00:38:55,000 --> 00:39:00,000
The AI model could learn the meaning of physics and it can generate physics.

532
00:39:00,000 --> 00:39:08,000
We were scaling down to one kilometer not by using filtering; it was generating.

533
00:39:08,000 --> 00:39:17,000
And so we can use this method to generate tokens for almost anything, almost anything of value.

534
00:39:17,000 --> 00:39:22,000
We can generate steering wheel control for a car.

535
00:39:22,000 --> 00:39:27,000
We can generate articulation for a robotic arm.

536
00:39:27,000 --> 00:39:32,000
Everything that we can learn, we can now generate.
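
The generation loop itself is simple to state. A schematic sketch of producing one token at a time, where next_token is a toy stand-in for a trained model:

```python
# Schematic autoregressive generation: produce one token at a time, feeding
# each token back in as context. `next_token` is a toy stand-in; a real LLM
# predicts a probability distribution over its vocabulary and samples from it.

def next_token(context):
    vocab = ["the", "night", "market", "<eos>"]  # toy vocabulary
    return vocab[len(context) % len(vocab)]      # deterministic toy rule

def generate(prompt, max_new_tokens=32):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)   # predict from everything generated so far
        if tok == "<eos>":         # the model signals completion
            break
        tokens.append(tok)         # feed the token back in as context
    return tokens

print(generate(["tell", "me"]))
```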

537
00:39:32,000 --> 00:39:38,000
We have now arrived not at the AI era, but a generative AI era.

538
00:39:38,000 --> 00:39:43,000
But what's really important is this.

539
00:39:43,000 --> 00:39:51,000
This computer that started out as a supercomputer has now evolved into a data center

540
00:39:51,000 --> 00:39:55,000
and it produces one thing.

541
00:39:55,000 --> 00:39:58,000
It produces tokens.

542
00:39:58,000 --> 00:40:01,000
It's an AI factory.

543
00:40:01,000 --> 00:40:09,000
This AI factory is generating, creating, producing something of great value, a new commodity.

544
00:40:09,000 --> 00:40:17,000
In the late 1890s, Nikola Tesla invented an AC generator.

545
00:40:17,000 --> 00:40:20,000
We invented an AI generator.

546
00:40:20,000 --> 00:40:24,000
The AC generator generated electrons.

547
00:40:24,000 --> 00:40:28,000
NVIDIA's AI generator generates tokens.

548
00:40:28,000 --> 00:40:32,000
Both of these things have large market opportunities.

549
00:40:32,000 --> 00:40:36,000
It's completely fungible in almost every industry.

550
00:40:36,000 --> 00:40:41,000
And that's why it's a new industrial revolution.

551
00:40:41,000 --> 00:40:49,000
We have now a new factory producing a new commodity for every industry that is of extraordinary value.

552
00:40:49,000 --> 00:40:53,000
And the methodology for doing this is quite scalable.

553
00:40:53,000 --> 00:40:57,000
And the methodology of doing this is quite repeatable.

554
00:40:57,000 --> 00:41:04,000
Notice how quickly so many different AI models, generative AI models, are being invented, literally daily.

555
00:41:04,000 --> 00:41:09,000
Every single industry is now piling on.

556
00:41:09,000 --> 00:41:16,000
For the very first time, the IT industry, which is $3 trillion,

557
00:41:16,000 --> 00:41:26,000
$3 trillion IT industry is about to create something that can directly serve $100 trillion of industry.

558
00:41:26,000 --> 00:41:34,000
No longer just an instrument for information storage or data processing,

559
00:41:34,000 --> 00:41:39,000
but a factory for generating intelligence for every industry.

560
00:41:39,000 --> 00:41:43,000
This is going to be a manufacturing industry.

561
00:41:43,000 --> 00:41:50,000
Not a manufacturing industry of computers, but using the computers in manufacturing.

562
00:41:50,000 --> 00:41:52,000
This has never happened before.

563
00:41:52,000 --> 00:41:54,000
Quite an extraordinary thing.

564
00:41:54,000 --> 00:42:04,000
What started with accelerated computing led to AI, led to generative AI, and now an industrial revolution.

565
00:42:04,000 --> 00:42:11,000
Now the impact to our industry is also quite significant.

566
00:42:11,000 --> 00:42:18,000
Of course we could create a new commodity, a new product we call tokens for many industries,

567
00:42:18,000 --> 00:42:21,000
but the impact to ours is also quite profound.

568
00:42:21,000 --> 00:42:29,000
For the very first time, as I was saying earlier, in 60 years, every single layer of computing has been changed.

569
00:42:29,000 --> 00:42:34,000
From CPUs, general purpose computing, to accelerated GPU computing,

570
00:42:34,000 --> 00:42:43,000
where the computer needs instructions, now computers process LLMs, large language models, AI models.

571
00:42:43,000 --> 00:42:49,000
And whereas the computing model of the past is retrieval-based,

572
00:42:49,000 --> 00:42:56,000
almost every time you touch your phone, some pre-recorded text or pre-recorded image or pre-recorded video

573
00:42:56,000 --> 00:43:02,000
is retrieved for you and recomposed based on a recommender system

574
00:43:02,000 --> 00:43:06,000
to present it to you based on your habits.

575
00:43:06,000 --> 00:43:14,000
But in the future, your computer will generate as much as possible, retrieve only what's necessary.

576
00:43:14,000 --> 00:43:21,000
And the reason for that is because generated data requires less energy to go fetch information.

577
00:43:21,000 --> 00:43:25,000
Generated data also is more contextually relevant.

578
00:43:25,000 --> 00:43:29,000
It will encode knowledge, it will encode its understanding of you,

579
00:43:29,000 --> 00:43:39,000
and instead of "get that information for me" or "get that file for me," you just ask it for an answer.

580
00:43:39,000 --> 00:43:50,000
And instead of a tool, instead of your computer being a tool that we use, the computer will now generate skills.

581
00:43:50,000 --> 00:43:52,000
It performs tasks.

582
00:43:52,000 --> 00:43:59,000
And instead of an industry that is producing software, which was a revolutionary idea in the early '90s,

583
00:43:59,000 --> 00:44:07,000
remember the idea that Microsoft created for packaging software revolutionized the PC industry.

584
00:44:07,000 --> 00:44:13,000
Without packaged software, what would we use the PC to do?

585
00:44:13,000 --> 00:44:15,000
It drove this industry.

586
00:44:15,000 --> 00:44:20,000
And now we have a new factory, a new computer.

587
00:44:20,000 --> 00:44:24,000
And what we will run on top of this is a new type of software.

588
00:44:24,000 --> 00:44:30,000
And we call it NIMS, NVIDIA Inference Microservices.

589
00:44:30,000 --> 00:44:34,000
Now what happens is the NIM runs inside this factory.

590
00:44:34,000 --> 00:44:38,000
And this NIM is a pre-trained model.

591
00:44:38,000 --> 00:44:40,000
It's an AI.

592
00:44:40,000 --> 00:44:45,000
Well, this AI is, of course, quite complex in itself.

593
00:44:45,000 --> 00:44:50,000
The computing stack that runs AIs are insanely complex.

594
00:44:50,000 --> 00:44:56,000
When you go and use ChatGPT, underneath their stack is a whole bunch of software.

595
00:44:56,000 --> 00:44:59,000
Underneath that prompt is a ton of software.

596
00:44:59,000 --> 00:45:04,000
And it's incredibly complex because the models are large, billions to trillions of parameters.

597
00:45:04,000 --> 00:45:06,000
It doesn't run on just one computer.

598
00:45:06,000 --> 00:45:08,000
It runs on multiple computers.

599
00:45:08,000 --> 00:45:13,000
It has to distribute the workload across multiple GPUs, tensor parallelism, pipeline parallelism,

600
00:45:13,000 --> 00:45:20,000
data parallelism, all kinds of parallelism, expert parallelism, all kinds of parallelism,

601
00:45:20,000 --> 00:45:25,000
distributing the workload across multiple GPUs, processing it as fast as possible.
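
Data parallelism, the simplest of these, just shards the batch. A NumPy toy in which array shards stand in for GPUs (tensor and pipeline parallelism instead split the model itself across devices):

```python
# Toy illustration of data parallelism: split a batch across workers,
# compute each shard independently, then combine the partial results.
# NumPy shards stand in for GPUs.
import numpy as np

batch = np.random.rand(8, 4)          # 8 samples, 4 features
weights = np.random.rand(4)

shards = np.array_split(batch, 4)     # one shard per "GPU"
partial = [shard @ weights for shard in shards]   # runs independently
result = np.concatenate(partial)      # gather the partial outputs

assert np.allclose(result, batch @ weights)  # identical to the unsharded run
```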

602
00:45:25,000 --> 00:45:34,000
Because if you are in a factory, if you run a factory, your throughput directly correlates to your revenues.

603
00:45:34,000 --> 00:45:37,000
Your throughput directly correlates to quality of service.

604
00:45:37,000 --> 00:45:42,000
And your throughput directly correlates to the number of people who can use your service.

605
00:45:42,000 --> 00:45:48,000
We are now in a world where data center throughput utilization is vitally important.

606
00:45:48,000 --> 00:45:51,000
It was important in the past, but not vitally important.

607
00:45:51,000 --> 00:45:54,000
It was important in the past, but people don't measure it.

608
00:45:54,000 --> 00:45:57,000
Today, every parameter is measured.

609
00:45:57,000 --> 00:46:03,000
Start time, uptime, utilization, throughput, idle time, you name it.

610
00:46:03,000 --> 00:46:06,000
Because it's a factory.

611
00:46:06,000 --> 00:46:13,000
When something is a factory, its operations directly correlate to the financial performance of the company.

612
00:46:13,000 --> 00:46:19,000
And so we realized that this is incredibly complex for most companies to do.

613
00:46:19,000 --> 00:46:27,000
So what we did was we created this AI in a box, in a container, an incredible amount of software.

614
00:46:27,000 --> 00:46:36,000
Inside this container is CUDA, cuDNN, TensorRT, Triton for inference services.

615
00:46:36,000 --> 00:46:41,000
It is cloud native so that you could auto scale in a Kubernetes environment.

616
00:46:41,000 --> 00:46:45,000
It has management services and hooks so that you can monitor your AIs.

617
00:46:45,000 --> 00:46:53,000
It has common APIs, standard APIs, so that you could literally chat with this box.

618
00:46:53,000 --> 00:46:56,000
You download this NIM and you can talk to it.

619
00:46:56,000 --> 00:47:02,000
So long as you have CUDA on your computer, which is now, of course, everywhere,

620
00:47:02,000 --> 00:47:08,000
it's in every cloud, available from every computer maker, it is available in hundreds of millions of PCs.

621
00:47:08,000 --> 00:47:14,000
When you download this, you have an AI and you can chat with it like ChatGPT.
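
Concretely, chatting with a downloaded NIM can look like the sketch below. It assumes the container is serving an OpenAI-compatible API on localhost port 8000, the pattern NVIDIA's NIM documentation describes; the model name and port here are illustrative:

```python
# Sketch of chatting with a locally hosted NIM. Assumes an OpenAI-compatible
# endpoint on localhost:8000; model name and port are illustrative.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama3-8b-instruct",
        "messages": [{"role": "user", "content": "What is a NIM?"}],
        "max_tokens": 128,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```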

622
00:47:14,000 --> 00:47:16,000
All of the software is now integrated.

623
00:47:16,000 --> 00:47:20,000
Four hundred dependencies all integrated into one.

624
00:47:20,000 --> 00:47:29,000
We tested this NIM, each one of these pre-trained models, against our entire install base that's in the cloud,

625
00:47:29,000 --> 00:47:38,000
all the different versions of Pascal and Amperes and Hoppers and all kinds of different versions.

626
00:47:38,000 --> 00:47:40,000
I even forget some.

627
00:47:40,000 --> 00:47:47,000
NIMS, incredible invention. This is one of my favorites.

628
00:47:47,000 --> 00:47:55,000
And of course, as you know, we now have the ability to create large language models and pre-trained models of all kinds.

629
00:47:55,000 --> 00:48:03,000
And we have all of these various versions, whether it's language-based or vision-based or imaging-based.

630
00:48:03,000 --> 00:48:07,000
We have versions that are available for healthcare, digital biology.

631
00:48:07,000 --> 00:48:12,000
We have versions that are digital humans that I'll talk to you about.

632
00:48:12,000 --> 00:48:16,000
And the way you use this, just come to ai.nvidia.com.

633
00:48:16,000 --> 00:48:25,000
And today we just posted the Llama 3 NIM on Hugging Face, fully optimized.

634
00:48:25,000 --> 00:48:27,000
It's available there for you to try.

635
00:48:27,000 --> 00:48:29,000
And you can even take it with you.

636
00:48:29,000 --> 00:48:31,000
It's available to you for free.

637
00:48:31,000 --> 00:48:35,000
And so you could run it in the cloud, run it in any cloud.

638
00:48:35,000 --> 00:48:42,000
You could download this container, put it into your own data center, and you could host it, make it available for your customers.

639
00:48:42,000 --> 00:48:46,000
We have, as I mentioned, all kinds of different domains.

640
00:48:46,000 --> 00:48:55,000
Physics, some of them for semantic retrieval, called RAGs, vision language, all kinds of different languages.

641
00:48:55,000 --> 00:49:03,000
And the way that you use it is connecting these microservices into large applications.

642
00:49:03,000 --> 00:49:09,000
One of the most important applications in the coming future, of course, is customer service agents.

643
00:49:09,000 --> 00:49:13,000
Customer service agents are necessary in just about every single industry.

644
00:49:13,000 --> 00:49:19,000
It represents trillions of dollars of customer service around the world.

645
00:49:19,000 --> 00:49:23,000
Nurses are customer service agents in some ways.

646
00:49:23,000 --> 00:49:32,000
Some of them, the non-prescription, non-diagnostic nurses, are essentially customer service.

647
00:49:32,000 --> 00:49:37,000
Customer service for retail, for quick service foods, financial services, insurance.

648
00:49:37,000 --> 00:49:47,000
Just tens and tens of millions of customer service agents can now be augmented by language models and by AI.

649
00:49:47,000 --> 00:49:50,000
And so these boxes that you see are basically NIMs.

650
00:49:50,000 --> 00:49:53,000
Some of the NIMs are reasoning agents.

651
00:49:53,000 --> 00:49:58,000
Given a task, figure out what the mission is, break it down into a plan.

652
00:49:58,000 --> 00:50:01,000
Some of the NIMs retrieve information.

653
00:50:01,000 --> 00:50:05,000
Some of the NIMs might go and do search.

654
00:50:05,000 --> 00:50:10,000
Some of the NIMs might use a tool like cuOpt that I was talking about earlier.

655
00:50:10,000 --> 00:50:14,000
It could use a tool that could be running on SAP.

656
00:50:14,000 --> 00:50:19,000
And so it has to learn a particular language called ABAP.

657
00:50:19,000 --> 00:50:22,000
Maybe some NIMs have to do SQL queries.

658
00:50:22,000 --> 00:50:29,000
And so all of these NIMs are experts that are now assembled as a team.

659
00:50:29,000 --> 00:50:31,000
So what's happening?

660
00:50:31,000 --> 00:50:36,000
The application layer has been changed.

661
00:50:36,000 --> 00:50:45,000
What used to be applications written with instructions are now applications that are assembling teams.

662
00:50:45,000 --> 00:50:47,000
Assembling teams of AIs.

663
00:50:47,000 --> 00:50:50,000
Very few people know how to write programs.

664
00:50:50,000 --> 00:50:54,000
Almost everybody knows how to break down a problem and assemble teams.

665
00:50:54,000 --> 00:51:00,000
Every company, I believe, in the future will have a large collection of NIMs.

666
00:51:00,000 --> 00:51:04,000
And you would bring down the experts that you want.

667
00:51:04,000 --> 00:51:06,000
You connect them into a team.

668
00:51:06,000 --> 00:51:13,000
And you don't even have to figure out exactly how to connect them.

669
00:51:13,000 --> 00:51:24,000
You just give the mission to an agent, to a NIM, and it figures out how to break the task down and who to give it to.

670
00:51:24,000 --> 00:51:35,000
And that central leader of the application, if you will, the leader of the team, would break down the task and give it to the various team members.

671
00:51:35,000 --> 00:51:39,000
The team members would perform their task, bring it back to the team leader.

672
00:51:39,000 --> 00:51:44,000
The team leader would reason about that and present the information back to you.

673
00:51:44,000 --> 00:51:47,000
Just like humans.
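(A schematic sketch of this leader-and-team pattern in code. Each "expert" stands in for a call to a separate NIM microservice; the planner, the expert registry, and all names here are hypothetical, for illustration only.)

```python
# Leader breaks a mission into subtasks, routes each to an expert,
# then gathers the results -- the "team of NIMs" flow described above.

def leader_plan(mission: str) -> list[tuple[str, str]]:
    # In a real system this plan would come from a reasoning LLM NIM;
    # here it's a fixed decomposition so the sketch stays runnable.
    return [
        ("retriever", f"find context for: {mission}"),
        ("sql_expert", f"pull order history for: {mission}"),
        ("writer", f"draft a reply about: {mission}"),
    ]

EXPERTS = {
    "retriever": lambda task: f"[retrieved notes for '{task}']",
    "sql_expert": lambda task: f"[query results for '{task}']",
    "writer": lambda task: f"[drafted reply for '{task}']",
}

def run_mission(mission: str) -> str:
    # Leader fans tasks out to team members, then reasons over results.
    results = [EXPERTS[member](task) for member, task in leader_plan(mission)]
    return "\n".join(results)

print(run_mission("late delivery complaint, order 1234"))
```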

674
00:51:47,000 --> 00:51:49,000
This is in our near future.

675
00:51:49,000 --> 00:51:51,000
This is the way applications are going to look.

676
00:51:51,000 --> 00:52:01,000
Now, of course, we could interact with these AI services with text prompts and speech prompts.

677
00:52:01,000 --> 00:52:09,000
However, there are many applications where we would like to interact with what is otherwise a human-like form.

678
00:52:09,000 --> 00:52:11,000
We call them digital humans.

679
00:52:11,000 --> 00:52:15,000
NVIDIA has been working on digital human technology for some time.

680
00:52:15,000 --> 00:52:16,000
Let me show it to you.

681
00:52:16,000 --> 00:52:21,000
Before I do that, hang on a second.

682
00:52:21,000 --> 00:52:27,000
Digital humans have the potential to be great interactive agents for you.

683
00:52:27,000 --> 00:52:29,000
They're much more engaging.

684
00:52:29,000 --> 00:52:32,000
They can be much more empathetic.

685
00:52:32,000 --> 00:52:43,000
And, of course, we have to cross this incredible chasm, this uncanny chasm of realism,

686
00:52:43,000 --> 00:52:47,000
so that the digital humans would appear much more natural.

687
00:52:47,000 --> 00:52:50,000
This is, of course, our vision.

688
00:52:50,000 --> 00:52:52,000
This is a vision of where we love to go.

689
00:52:52,000 --> 00:52:57,000
But let me show you where we are.

690
00:52:57,000 --> 00:52:59,000
Great to be in Taiwan.

691
00:52:59,000 --> 00:53:04,000
Before I head out to the night market, let's dive into some exciting frontiers of digital humans.

692
00:53:04,000 --> 00:53:09,000
Imagine a future where computers interact with us just like humans can.

693
00:53:09,000 --> 00:53:14,000
Hi, my name is Sophie, and I am a digital human brand ambassador for UneeQ.

694
00:53:14,000 --> 00:53:18,000
This is the incredible reality of digital humans.

695
00:53:18,000 --> 00:53:27,000
Digital humans will revolutionize industries, from customer service to advertising and gaming.

696
00:53:27,000 --> 00:53:30,000
The possibilities for digital humans are endless.

697
00:53:30,000 --> 00:53:34,000
Using the scans you took of your current kitchen with your phone,

698
00:53:34,000 --> 00:53:41,000
they will be AI interior designers, helping generate beautiful photorealistic suggestions and sourcing the materials and furniture.

699
00:53:41,000 --> 00:53:45,000
We have generated several design options for you to choose from.

700
00:53:45,000 --> 00:53:51,000
They'll also be AI customer service agents, making the interaction more engaging and personalized.

701
00:53:51,000 --> 00:53:57,000
Or digital healthcare workers who will check on patients, providing timely, personalized care.

702
00:53:57,000 --> 00:54:01,000
I did forget to mention to the doctor that I am allergic to penicillin.

703
00:54:01,000 --> 00:54:03,000
Is it still okay to take the medications?

704
00:54:03,000 --> 00:54:10,000
The antibiotics you've been prescribed, ciprofloxacin and metronidazole, don't contain penicillin.

705
00:54:10,000 --> 00:54:12,000
So it's perfectly safe for you to take them.

706
00:54:12,000 --> 00:54:18,000
And they'll even be AI brand ambassadors, setting the next marketing and advertising trends.

707
00:54:18,000 --> 00:54:23,000
Hi, I'm Imma, Japan's first virtual model.

708
00:54:23,000 --> 00:54:34,000
New breakthroughs in generative AI and computer graphics let digital humans see, understand and interact with us in human-like ways.

709
00:54:34,000 --> 00:54:41,000
Hmmm, from what I can see, it looks like you're in some kind of recording or production setup.

710
00:54:41,000 --> 00:54:48,000
The foundation of digital humans are AI models, built on multilingual speech recognition and synthesis.

711
00:54:48,000 --> 00:54:52,000
And LLMs that understand and generate conversation.

712
00:54:52,000 --> 00:54:55,000
I'm a real-life person, I'm a real-life person.

713
00:54:55,000 --> 00:54:58,000
I'm a real-life person, I'm a real-life person.

714
00:54:58,000 --> 00:55:01,000
I'm a real-life person, I'm a real-life person.

715
00:55:01,000 --> 00:55:08,000
The AIs connect to another generative AI to dynamically animate a lifelike 3D mesh of a face.

716
00:55:08,000 --> 00:55:17,000
And finally, AI models that reproduce lifelike appearances, enabling real-time path traced subsurface scattering

717
00:55:17,000 --> 00:55:24,000
to simulate the way light penetrates the skin, scatters and exits at various points.

718
00:55:24,000 --> 00:55:27,000
Giving skin its soft and translucent appearance.
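(For reference, this is the standard BSSRDF formulation from the graphics literature that subsurface scattering renderers evaluate: outgoing radiance at one skin point integrates light entering at all nearby points.)

$$
L_o(x_o, \omega_o) = \int_A \int_{H^2} S(x_i, \omega_i; x_o, \omega_o)\, L_i(x_i, \omega_i)\, (n \cdot \omega_i)\, \mathrm{d}\omega_i\, \mathrm{d}A(x_i)
$$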

719
00:55:27,000 --> 00:55:37,000
NVIDIA ACE is a suite of digital human technologies packaged as easy-to-deploy, fully optimized microservices, or NIMs.

720
00:55:37,000 --> 00:55:44,000
Developers can integrate ACE NIMs into their existing frameworks, engines and digital human experiences.

721
00:55:44,000 --> 00:55:51,000
Nemotron SLM and LLM NIMs to understand our intent and orchestrate other models.

722
00:55:51,000 --> 00:55:55,000
Riva speech NIMs for interactive speech and translation.

723
00:55:55,000 --> 00:56:00,000
Audio-to-face and gesture NIMs for facial and body animation.

724
00:56:00,000 --> 00:56:04,000
And Omniverse RTX with DLSS for neural rendering of skin and hair.
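(Conceptually, those stages chain into one conversational turn, roughly as sketched below. Every function name is a hypothetical placeholder for a microservice call; this is not the actual ACE NIM API.)

```python
# One digital-human turn: ASR -> LLM -> TTS -> facial animation,
# then off to the renderer. All stages are illustrative stubs.

def speech_to_text(audio: bytes) -> str:      # Riva-style speech recognition
    return "transcribed user question"

def llm_reply(text: str) -> str:              # Nemotron-style LLM/SLM stage
    return f"reply to: {text}"

def text_to_speech(text: str) -> bytes:       # Riva-style speech synthesis
    return text.encode()

def animate_face(voice: bytes) -> dict:       # audio-to-face animation stage
    return {"blendshapes": [0.1, 0.3], "audio": voice}

def digital_human_turn(mic_audio: bytes) -> dict:
    return animate_face(text_to_speech(llm_reply(speech_to_text(mic_audio))))

print(digital_human_turn(b"...")["blendshapes"])
```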

725
00:56:04,000 --> 00:56:11,000
ACE NIMs run on NVIDIA GDN, a global network of NVIDIA accelerated infrastructure

726
00:56:11,000 --> 00:56:17,000
that delivers low-latency digital human processing to over 100 regions.

727
00:56:17,000 --> 00:56:36,000
Pretty incredible. Well, ACE runs in the cloud, but it also runs on PCs.

728
00:56:36,000 --> 00:56:42,000
We had the good wisdom of including Tensor Core GPUs in all of RTX.

729
00:56:42,000 --> 00:56:49,000
So we've been shipping AI GPUs for some time, preparing ourselves for this day.

730
00:56:49,000 --> 00:56:51,000
The reason for that is very simple.

731
00:56:51,000 --> 00:56:56,000
We always knew that in order to create a new computing platform, you need an installed base first.

732
00:56:56,000 --> 00:56:58,000
Eventually the application will come.

733
00:56:58,000 --> 00:57:03,000
If you don't create the installed base, how could the application come?

734
00:57:03,000 --> 00:57:06,000
And so if you build it, they might not come.

735
00:57:06,000 --> 00:57:10,000
But if you don't build it, they cannot come.

736
00:57:10,000 --> 00:57:16,000
And so we installed every single RTX GPU with Tensor Core processing,

737
00:57:16,000 --> 00:57:24,000
and now we have 100 million GeForce RTX AI PCs in the world, and we're shipping 200.

738
00:57:24,000 --> 00:57:29,000
And this Computex, we're featuring four new amazing laptops.

739
00:57:29,000 --> 00:57:32,000
All of them are able to run AI.

740
00:57:32,000 --> 00:57:36,000
Your future laptop, your future PC will become an AI.

741
00:57:36,000 --> 00:57:40,000
It will be constantly helping you, assisting you in the background.

742
00:57:40,000 --> 00:57:46,000
The PC will also run applications that are enhanced by AI.

743
00:57:46,000 --> 00:57:52,000
Of course, all your photo editing, your writing, your tools, all the things that you use will all be enhanced by AI.

744
00:57:52,000 --> 00:58:01,000
And your PC will also host applications with digital humans that are AIs.

745
00:58:01,000 --> 00:58:07,000
And so there are different ways that AIs will manifest themselves and become used in PCs,

746
00:58:07,000 --> 00:58:10,000
but the PC will become a very important AI platform.

747
00:58:10,000 --> 00:58:13,000
And so where do we go from here?

748
00:58:13,000 --> 00:58:24,000
I spoke earlier about the scaling of our data centers, and every single time we scaled, we found a new phase change.

749
00:58:24,000 --> 00:58:34,000
When we scaled from DGX into large AI supercomputers, we enabled transformers to be able to train on enormously large data sets.

750
00:58:34,000 --> 00:58:41,000
Well, what happened was, in the beginning, the data was human supervised.

751
00:58:41,000 --> 00:58:46,000
It required human labeling to train AIs.

752
00:58:46,000 --> 00:58:49,000
Unfortunately, there is only so much you can human label.

753
00:58:49,000 --> 00:58:55,000
Transformers made it possible for unsupervised learning to happen.

754
00:58:55,000 --> 00:59:01,000
Now, transformers just look at an enormous amount of data, or look at an enormous amount of video,

755
00:59:01,000 --> 00:59:07,000
or look at an enormous amount of images, and they can learn from studying an enormous amount of data,

756
00:59:07,000 --> 00:59:09,000
and find the patterns and relationships themselves.

757
00:59:09,000 --> 00:59:15,000
Well, the next generation of AI needs to be physically based.

758
00:59:15,000 --> 00:59:20,000
Most of the AIs today don't understand the laws of physics.

759
00:59:20,000 --> 00:59:23,000
They're not grounded in the physical world.

760
00:59:23,000 --> 00:59:33,000
In order for us to generate images and videos and 3D graphics and many physical phenomena,

761
00:59:33,000 --> 00:59:39,000
we need AIs that are physically based and understand the laws of physics.

762
00:59:39,000 --> 00:59:42,000
Well, one way to do that, of course, is learning from video.

763
00:59:42,000 --> 00:59:46,000
Another way is synthetic data, simulation data.

764
00:59:46,000 --> 00:59:50,000
And another way is using computers to learn with each other.

765
00:59:50,000 --> 00:59:55,000
This is really no different than using AlphaGo, having AlphaGo play itself.

766
00:59:55,000 --> 01:00:04,000
Through self-play, with two agents of the same capability playing each other for a very long period of time,

767
01:00:04,000 --> 01:00:06,000
they emerge even smarter.

768
01:00:06,000 --> 01:00:10,000
And so you're going to start to see this type of AI emerging.

769
01:00:10,000 --> 01:00:17,000
Well, if the AI data is synthetically generated and using reinforcement learning,

770
01:00:17,000 --> 01:00:22,000
it stands to reason that the rate of data generation will continue to advance.

771
01:00:22,000 --> 01:00:25,000
And every single time data generation grows,

772
01:00:25,000 --> 01:00:29,000
the amount of computation that we have to offer needs to grow with it.

773
01:00:29,000 --> 01:00:33,000
We are about to enter a phase where AIs can learn the laws of physics

774
01:00:33,000 --> 01:00:36,000
and understand and be grounded in physical world data.

775
01:00:36,000 --> 01:00:39,000
And so we expect that models will continue to grow.

776
01:00:39,000 --> 01:00:41,000
And we need larger GPUs.

777
01:00:41,000 --> 01:00:45,000
Well, Blackwell was designed for this generation.

778
01:00:45,000 --> 01:00:47,000
This is Blackwell.

779
01:00:47,000 --> 01:00:49,000
And it has several very important technologies.

780
01:00:49,000 --> 01:00:52,000
One, of course, is just the size of the chip.

781
01:00:52,000 --> 01:00:55,000
We took two of the largest--

782
01:00:55,000 --> 01:00:58,000
a chip that is as large as you can make it at TSMC,

783
01:00:58,000 --> 01:01:04,000
and we connected two of them together with a 10 terabytes per second link between them.

784
01:01:04,000 --> 01:01:08,000
The world's most advanced SerDes connecting these two together.

785
01:01:08,000 --> 01:01:12,000
We then put two of them on a computer node,

786
01:01:12,000 --> 01:01:14,000
connected with a Grace CPU.

787
01:01:14,000 --> 01:01:16,000
The Grace CPU could be used for several things.

788
01:01:16,000 --> 01:01:22,000
In the training situation, it could be used for fast checkpoint and restart.

789
01:01:22,000 --> 01:01:25,000
In the case of inference and generation,

790
01:01:25,000 --> 01:01:28,000
it could be used for storing context memory

791
01:01:28,000 --> 01:01:35,000
so that the AI has memory and understands the context of the conversation you would like to have.

792
01:01:35,000 --> 01:01:38,000
It's our second generation transformer engine.

793
01:01:38,000 --> 01:01:44,000
Transformer engine allows us to adapt dynamically to a lower precision

794
01:01:44,000 --> 01:01:49,000
based on the precision and the range necessary for that layer of computation.
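(A toy sketch of the idea, not NVIDIA's actual Transformer Engine logic: inspect a layer's dynamic range and drop to FP8 only when the values fit after scaling.)

```python
# Pick a per-layer numeric format from the observed dynamic range,
# keeping higher precision when FP8 scaling won't fit. Illustrative only.
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def choose_precision(activations: np.ndarray) -> str:
    amax = float(np.abs(activations).max())
    # Scale factor that would map this tensor into FP8 range.
    scale = FP8_E4M3_MAX / max(amax, 1e-12)
    # Heuristic: use FP8 when a reasonable scale exists, else BF16.
    return "fp8_e4m3" if 2**-16 < scale < 2**16 else "bf16"

layer_out = np.random.randn(1024).astype(np.float32) * 3.0
print(choose_precision(layer_out))  # well-scaled tensors -> "fp8_e4m3"
```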

795
01:01:49,000 --> 01:01:53,000
This is our second generation GPU that has secure AI

796
01:01:53,000 --> 01:01:58,000
so that you could ask your service providers to protect your AI

797
01:01:58,000 --> 01:02:02,000
from theft or tampering.

798
01:02:02,000 --> 01:02:05,000
This is our fifth generation NVLink.

799
01:02:05,000 --> 01:02:08,000
NVLink allows us to connect multiple GPUs together,

800
01:02:08,000 --> 01:02:10,000
and I'll show you more of that in a second.

801
01:02:10,000 --> 01:02:16,000
And this is also our first generation with a reliability and availability engine.

802
01:02:16,000 --> 01:02:21,000
This system, this RAS system, allows us to test every single transistor,

803
01:02:21,000 --> 01:02:25,000
flip-flop, memory on chip, memory off chip,

804
01:02:25,000 --> 01:02:31,000
so that we can, in the field, determine whether a particular chip

805
01:02:31,000 --> 01:02:33,000
is failing.

806
01:02:33,000 --> 01:02:36,000
The MTBF, the mean time between failure,

807
01:02:36,000 --> 01:02:40,000
of a supercomputer with 10,000 GPUs

808
01:02:40,000 --> 01:02:43,000
is measured in hours.

809
01:02:43,000 --> 01:02:48,000
The mean time between failure of a supercomputer with 100,000 GPUs

810
01:02:48,000 --> 01:02:51,000
is measured in minutes.
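(The arithmetic behind those numbers: with independent failures, system MTBF divides by part count. Assuming, say, a five-year per-GPU MTBF, which is an illustrative figure:)

$$
\mathrm{MTBF}_{\mathrm{system}} \approx \frac{\mathrm{MTBF}_{\mathrm{GPU}}}{N}, \qquad
\frac{43{,}800\ \mathrm{h}}{10{,}000} \approx 4.4\ \mathrm{h}, \qquad
\frac{43{,}800\ \mathrm{h}}{100{,}000} \approx 26\ \mathrm{min}
$$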

811
01:02:51,000 --> 01:02:56,000
And so the ability for a supercomputer to run for a long period of time

812
01:02:56,000 --> 01:03:01,000
and train a model over a run that lasts several months is practically impossible

813
01:03:01,000 --> 01:03:05,000
if we don't invent technologies to enhance its reliability.

814
01:03:05,000 --> 01:03:08,000
Reliability would, of course, enhance its uptime,

815
01:03:08,000 --> 01:03:10,000
which directly affects the cost.

816
01:03:10,000 --> 01:03:12,000
And then lastly, decompression engine.

817
01:03:12,000 --> 01:03:15,000
Data processing is one of the most important things we have to do.

818
01:03:15,000 --> 01:03:18,000
We added a data compression engine, decompression engine,

819
01:03:18,000 --> 01:03:22,000
so that we can pull data out of storage 20 times faster

820
01:03:22,000 --> 01:03:24,000
than what's possible today.

821
01:03:24,000 --> 01:03:26,000
Well, all of this represents Blackwell,

822
01:03:26,000 --> 01:03:28,000
and I think we have one here that's in production.

823
01:03:28,000 --> 01:03:33,000
During GTC, I showed you Blackwell in the prototype state.

824
01:03:33,000 --> 01:03:35,000
Um...

825
01:03:35,000 --> 01:03:37,000
The other side?

826
01:03:37,000 --> 01:03:39,000
[laughter]

827
01:03:39,000 --> 01:03:41,000
This is why we practice.

828
01:03:41,000 --> 01:03:43,000
[laughter]

829
01:03:43,000 --> 01:03:48,000
[speaking Chinese]

830
01:03:48,000 --> 01:03:50,000
[laughter]

831
01:03:53,000 --> 01:03:56,000
[chuckles]

832
01:03:56,000 --> 01:03:59,000
Ladies and gentlemen, this is Blackwell.

833
01:03:59,000 --> 01:04:02,000
[cheers and applause]

834
01:04:02,000 --> 01:04:08,000
Blackwell is in production.

835
01:04:08,000 --> 01:04:11,000
Incredible amounts of technology.

836
01:04:11,000 --> 01:04:17,000
This is our production board.

837
01:04:17,000 --> 01:04:20,000
This is the most complex, highest performance computer

838
01:04:20,000 --> 01:04:22,000
the world's ever made.

839
01:04:24,000 --> 01:04:26,000
This is the Grace CPU.

840
01:04:26,000 --> 01:04:28,000
And these are--

841
01:04:28,000 --> 01:04:30,000
You could see each one of these Blackwell dies,

842
01:04:30,000 --> 01:04:32,000
two of them connected together.

843
01:04:32,000 --> 01:04:34,000
You see that?

844
01:04:34,000 --> 01:04:37,000
It is the largest die-- the largest chip the world makes.

845
01:04:37,000 --> 01:04:39,000
And then we connect two of them together

846
01:04:39,000 --> 01:04:42,000
with a 10 terabyte per second link.

847
01:04:42,000 --> 01:04:48,000
Okay, and that makes the Blackwell computer.

848
01:04:48,000 --> 01:04:50,000
And the performance is incredible.

849
01:04:50,000 --> 01:04:53,000
Take a look at this.

850
01:04:53,000 --> 01:04:55,000
So, um...

851
01:04:55,000 --> 01:05:02,000
You see that

852
01:05:02,000 --> 01:05:06,000
the computational flops, the AI flops,

853
01:05:06,000 --> 01:05:08,000
for each generation

854
01:05:08,000 --> 01:05:12,000
has increased by a thousand times in eight years.

855
01:05:12,000 --> 01:05:16,000
Moore's Law in eight years

856
01:05:16,000 --> 01:05:21,000
is something along the lines of, oh, I don't know,

857
01:05:21,000 --> 01:05:24,000
maybe 40, 60 times?

858
01:05:24,000 --> 01:05:27,000
And in the last eight years,

859
01:05:27,000 --> 01:05:30,000
Moore's Law has delivered a lot, lot less.

860
01:05:30,000 --> 01:05:33,000
And so just to compare,

861
01:05:33,000 --> 01:05:36,000
even Moore's Law at its best of times

862
01:05:36,000 --> 01:05:39,000
pales compared to what Blackwell could do.

863
01:05:39,000 --> 01:05:42,000
So the amount of computations is incredible.

864
01:05:42,000 --> 01:05:45,000
And whenever we drive the computation up,

865
01:05:45,000 --> 01:05:48,000
the thing that happens is the cost goes down.

866
01:05:48,000 --> 01:05:50,000
And I'll show you.

867
01:05:50,000 --> 01:05:52,000
What we've done is, through increased

868
01:05:52,000 --> 01:05:54,000
computational capability,

869
01:05:54,000 --> 01:06:00,000
we've cut the energy needed to train a GPT-4:

870
01:06:00,000 --> 01:06:02,000
two trillion parameters,

871
01:06:02,000 --> 01:06:05,000
eight trillion tokens.

872
01:06:05,000 --> 01:06:07,000
The amount of energy that is used

873
01:06:07,000 --> 01:06:10,000
has gone down by 350 times.

874
01:06:10,000 --> 01:06:17,000
Well, Pascal would have taken 1,000 gigawatt hours.

875
01:06:17,000 --> 01:06:19,000
1,000 gigawatt hours means

876
01:06:19,000 --> 01:06:22,000
that it would take a gigawatt data center--

877
01:06:22,000 --> 01:06:24,000
the world doesn't have a gigawatt data center--

878
01:06:24,000 --> 01:06:26,000
but if you had a gigawatt data center,

879
01:06:26,000 --> 01:06:28,000
it would take a month.

880
01:06:28,000 --> 01:06:31,000
If you had a hundred-megawatt data center,

881
01:06:31,000 --> 01:06:33,000
it would take about a year.
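(Checking that arithmetic: energy divided by power gives time.)

$$
\frac{1{,}000\ \mathrm{GWh}}{1\ \mathrm{GW}} = 1{,}000\ \mathrm{h} \approx 42\ \text{days}, \qquad
\frac{1{,}000\ \mathrm{GWh}}{100\ \mathrm{MW}} = 10{,}000\ \mathrm{h} \approx 14\ \text{months}
$$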

882
01:06:33,000 --> 01:06:36,000
And so nobody would, of course,

883
01:06:36,000 --> 01:06:41,000
create such a thing, and that's the reason why

884
01:06:41,000 --> 01:06:43,000
these large language models, ChatGPT,

885
01:06:43,000 --> 01:06:45,000
were impossible only eight years ago.

886
01:06:45,000 --> 01:06:47,000
By driving up the performance

887
01:06:47,000 --> 01:06:49,000
while improving energy efficiency

888
01:06:49,000 --> 01:06:52,000
along the way,

889
01:06:52,000 --> 01:06:56,000
we've now taken, with Blackwell,

890
01:06:56,000 --> 01:06:59,000
what used to be 1,000 gigawatt hours to three,

891
01:06:59,000 --> 01:07:02,000
an incredible advance.

892
01:07:02,000 --> 01:07:04,000
Three gigawatt hours,

893
01:07:04,000 --> 01:07:10,000
with 10,000 GPUs, for example,

894
01:07:10,000 --> 01:07:12,000
it would only take

895
01:07:12,000 --> 01:07:15,000
a few days.

896
01:07:15,000 --> 01:07:17,000
Ten days or so.

897
01:07:17,000 --> 01:07:21,000
So the amount of advance in just eight years is incredible.

898
01:07:21,000 --> 01:07:24,000
Well, this is for inference.

899
01:07:24,000 --> 01:07:27,000
This is for token generation.

900
01:07:27,000 --> 01:07:30,000
Our token generation performance

901
01:07:30,000 --> 01:07:32,000
has made it possible for us to drive the energy down

902
01:07:32,000 --> 01:07:36,000
by 45,000 times.

903
01:07:36,000 --> 01:07:41,000
17,000 joules per token, that was Pascal.

904
01:07:41,000 --> 01:07:45,000
17,000 joules is kind of like two light bulbs

905
01:07:45,000 --> 01:07:48,000
running for two days.

906
01:07:48,000 --> 01:07:50,000
It would take two light bulbs

907
01:07:50,000 --> 01:07:52,000
running for two days amounts of energy,

908
01:07:52,000 --> 01:07:54,000
200 watts running for two days,

909
01:07:54,000 --> 01:07:59,000
to generate one token of GPT-4.

910
01:07:59,000 --> 01:08:03,000
It takes about three tokens to generate one word.

911
01:08:03,000 --> 01:08:06,000
And so the amount of energy necessary for Pascal

912
01:08:06,000 --> 01:08:08,000
to generate GPT-4 tokens

913
01:08:08,000 --> 01:08:10,000
and have a ChatGPT experience with you

914
01:08:10,000 --> 01:08:12,000
was practically impossible.

915
01:08:12,000 --> 01:08:17,000
But now we only use 0.4 joules per token,

916
01:08:17,000 --> 01:08:20,000
and we can generate tokens at incredible rates

917
01:08:20,000 --> 01:08:22,000
with very little energy.
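(Those two figures line up with the headline ratio:)

$$
\frac{17{,}000\ \mathrm{J/token}\ (\text{Pascal})}{0.4\ \mathrm{J/token}\ (\text{Blackwell})} = 42{,}500 \approx 45{,}000\times
$$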

918
01:08:22,000 --> 01:08:26,000
Okay, so Blackwell is just an enormous leap.

919
01:08:26,000 --> 01:08:29,000
Well, even so, it's not big enough.

920
01:08:29,000 --> 01:08:32,000
And so we have to build even larger machines.

921
01:08:32,000 --> 01:08:35,000
And so the way that we build it is called DGX.

922
01:08:35,000 --> 01:08:39,000
So this is our Blackwell chips,

923
01:08:39,000 --> 01:08:42,000
and it goes into DGX systems.

924
01:08:42,000 --> 01:08:52,000
That's why we should practice.

925
01:08:54,000 --> 01:08:58,000
So this is a DGX Blackwell.

926
01:08:58,000 --> 01:09:00,000
This is air-cooled,

927
01:09:00,000 --> 01:09:03,000
has eight of these GPUs inside.

928
01:09:03,000 --> 01:09:09,000
Look at the size of the heat sinks on these GPUs.

929
01:09:09,000 --> 01:09:14,000
About 15 kilowatts, 15,000 watts,

930
01:09:14,000 --> 01:09:16,000
and completely air-cooled.

931
01:09:16,000 --> 01:09:19,000
This version supports x86,

932
01:09:19,000 --> 01:09:21,000
and it goes into the infrastructure

933
01:09:21,000 --> 01:09:23,000
that we've been shipping Hoppers into.

934
01:09:23,000 --> 01:09:26,000
However, if you would like to have liquid cooling,

935
01:09:26,000 --> 01:09:28,000
we have a new system,

936
01:09:28,000 --> 01:09:34,000
and this new system is based on this board,

937
01:09:34,000 --> 01:09:37,000
and we call it MGX for modular.

938
01:09:37,000 --> 01:09:40,000
And this modular system,

939
01:09:40,000 --> 01:09:42,000
you won't be able to see this.

940
01:09:42,000 --> 01:09:45,000
Can they see this?

941
01:09:45,000 --> 01:09:47,000
Can you see this?

942
01:09:47,000 --> 01:09:49,000
You can?

943
01:09:49,000 --> 01:09:51,000
Are you...

944
01:09:51,000 --> 01:09:53,000
Okay.

945
01:09:53,000 --> 01:09:57,000
I see.

946
01:09:57,000 --> 01:10:01,000
And so this is the MGX system,

947
01:10:01,000 --> 01:10:04,000
and here's the two Blackwell boards.

948
01:10:04,000 --> 01:10:07,000
So this one node has four Blackwell chips.

949
01:10:07,000 --> 01:10:09,000
These four Blackwell chips,

950
01:10:09,000 --> 01:10:12,000
this is liquid-cooled.

951
01:10:12,000 --> 01:10:17,000
Nine of them,

952
01:10:17,000 --> 01:10:24,000
well, 72 of these GPUs,

953
01:10:24,000 --> 01:10:28,000
are then connected together with a new NVLink.

954
01:10:28,000 --> 01:10:33,000
This is the fifth-generation NVLink switch.

955
01:10:33,000 --> 01:10:36,000
And the NVLink switch is a technology miracle.

956
01:10:36,000 --> 01:10:38,000
This is the most advanced switch the world's ever made.

957
01:10:38,000 --> 01:10:41,000
The data rate is insane.

958
01:10:41,000 --> 01:10:44,000
And these switches connect every single one of these Blackwells

959
01:10:44,000 --> 01:10:46,000
to each other

960
01:10:46,000 --> 01:10:52,000
so that we have one giant 72 GPU Blackwell.

961
01:10:52,000 --> 01:10:55,000
Well, the benefit...

962
01:10:55,000 --> 01:10:59,000
the benefit of this is that in one domain,

963
01:10:59,000 --> 01:11:01,000
one GPU domain,

964
01:11:01,000 --> 01:11:03,000
this now looks like one GPU.

965
01:11:03,000 --> 01:11:07,000
This one GPU has 72 versus the last generation's eight,

966
01:11:07,000 --> 01:11:09,000
so we increased it by nine times.

967
01:11:09,000 --> 01:11:12,000
The amount of bandwidth we've increased by 18 times.

968
01:11:12,000 --> 01:11:15,000
The AI flops, we've increased by 45 times,

969
01:11:15,000 --> 01:11:18,000
and yet the amount of power is only ten times.

970
01:11:18,000 --> 01:11:23,000
This is 100 kilowatts, and that is 10 kilowatts.

971
01:11:23,000 --> 01:11:25,000
And that's for one.
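(Putting those ratios together: nine times the GPUs per domain, forty-five times the FLOPs, at only ten times the power.)

$$
\frac{72}{8} = 9\times\ \text{GPUs}, \qquad
\frac{45\times\ \text{FLOPs}}{10\times\ \text{power}} = 4.5\times\ \text{performance per watt}
$$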

972
01:11:25,000 --> 01:11:27,000
Now, of course, well,

973
01:11:27,000 --> 01:11:29,000
you could always connect more of these together,

974
01:11:29,000 --> 01:11:31,000
and I'll show you how to do that in a second.

975
01:11:31,000 --> 01:11:34,000
But what's the miracle of this chip, this NVLink chip?

976
01:11:34,000 --> 01:11:36,000
People are starting to awaken to the importance

977
01:11:36,000 --> 01:11:38,000
of this NVLink chip

978
01:11:38,000 --> 01:11:41,000
as it connects all these different GPUs together

979
01:11:41,000 --> 01:11:43,000
because the large language models are so large,

980
01:11:43,000 --> 01:11:46,000
they don't fit on just one GPU.

981
01:11:46,000 --> 01:11:48,000
They don't fit on just one node.

982
01:11:48,000 --> 01:11:51,000
It's gonna take the entire rack of GPUs,

983
01:11:51,000 --> 01:11:54,000
like this new DGX that I was just standing next to,

984
01:11:54,000 --> 01:11:57,000
to hold a large language model

985
01:11:57,000 --> 01:12:00,000
that is tens of trillions of parameters large.

986
01:12:00,000 --> 01:12:02,000
NVLink switch in itself is a technology miracle.

987
01:12:02,000 --> 01:12:04,000
It's 50 billion transistors,

988
01:12:04,000 --> 01:12:07,000
74 ports at 400 gigabits each,

989
01:12:07,000 --> 01:12:10,000
four links, cross-sectional bandwidth

990
01:12:10,000 --> 01:12:12,000
of 7.2 terabytes per second.

991
01:12:12,000 --> 01:12:14,000
But one of the important things

992
01:12:14,000 --> 01:12:16,000
is that it has mathematics inside the switch

993
01:12:16,000 --> 01:12:18,000
so that we can do reductions,

994
01:12:18,000 --> 01:12:20,000
which is really important in deep learning,

995
01:12:20,000 --> 01:12:22,000
right on the chip.

996
01:12:22,000 --> 01:12:27,000
And so this is what a DGX looks like now.

997
01:12:27,000 --> 01:12:31,000
And a lot of people ask us...

998
01:12:31,000 --> 01:12:33,000
You know, they say...

999
01:12:33,000 --> 01:12:37,000
And there's this confusion about what NVIDIA does,

1000
01:12:37,000 --> 01:12:41,000
and how is it possible

1001
01:12:41,000 --> 01:12:46,000
that NVIDIA became so big building GPUs?

1002
01:12:46,000 --> 01:12:48,000
And so there's an impression

1003
01:12:48,000 --> 01:12:50,000
that this is what a GPU looks like.

1004
01:12:50,000 --> 01:12:52,000
Now, this is a GPU.

1005
01:12:52,000 --> 01:12:54,000
This is one of the most advanced GPUs in the world,

1006
01:12:54,000 --> 01:12:56,000
but this is a gamer GPU.

1007
01:12:56,000 --> 01:12:59,000
But you and I know that this is what a GPU looks like.

1008
01:12:59,000 --> 01:13:02,000
This is one GPU.

1009
01:13:02,000 --> 01:13:05,000
Ladies and gentlemen, DGX GPU.

1010
01:13:05,000 --> 01:13:07,000
[applause]

1011
01:13:07,000 --> 01:13:16,000
The back of this GPU is the NVLink spine.

1012
01:13:16,000 --> 01:13:21,000
The NVLink spine is 5,000 wires,

1013
01:13:21,000 --> 01:13:24,000
two miles...

1014
01:13:24,000 --> 01:13:28,000
And it's right here.

1015
01:13:28,000 --> 01:13:34,000
This is an NVLink spine.

1016
01:13:34,000 --> 01:13:40,000
And it connects 72 GPUs to each other.

1017
01:13:40,000 --> 01:13:45,000
This is an electrical, mechanical miracle.

1018
01:13:45,000 --> 01:13:48,000
The transceivers make it possible for us

1019
01:13:48,000 --> 01:13:51,000
to drive the entire length in copper.

1020
01:13:51,000 --> 01:13:55,000
And as a result, this switch, the NVLink switch,

1021
01:13:55,000 --> 01:13:58,000
driving the NVLink spine in copper

1022
01:13:58,000 --> 01:14:03,000
makes it possible for us to save 20 kilowatts in one rack.

1023
01:14:03,000 --> 01:14:07,000
20 kilowatts can now be used for processing.

1024
01:14:07,000 --> 01:14:09,000
Just an incredible achievement.

1025
01:14:09,000 --> 01:14:13,000
So this is the NVLink spine.

1026
01:14:13,000 --> 01:14:21,000
[applause]

1027
01:14:21,000 --> 01:14:23,000
Wow.

1028
01:14:23,000 --> 01:14:25,000
[laughter]

1029
01:14:25,000 --> 01:14:28,000
[chuckles]

1030
01:14:28,000 --> 01:14:32,000
That went down today.

1031
01:14:32,000 --> 01:14:36,000
And even this is not big enough.

1032
01:14:36,000 --> 01:14:38,000
Even this is not big enough for AI factories,

1033
01:14:38,000 --> 01:14:40,000
so we have to connect it all together

1034
01:14:40,000 --> 01:14:43,000
with very high-speed networking.

1035
01:14:43,000 --> 01:14:45,000
Well, we have two types of networking.

1036
01:14:45,000 --> 01:14:47,000
We have InfiniBand, which has been used

1037
01:14:47,000 --> 01:14:51,000
in supercomputing and AI factories all over the world.

1038
01:14:51,000 --> 01:14:54,000
And it is growing incredibly fast for us.

1039
01:14:54,000 --> 01:14:57,000
However, not every data center can handle InfiniBand

1040
01:14:57,000 --> 01:15:00,000
because they've already invested their ecosystem

1041
01:15:00,000 --> 01:15:02,000
in Ethernet for so long.

1042
01:15:02,000 --> 01:15:05,000
And it does take some specialty and some expertise

1043
01:15:05,000 --> 01:15:08,000
to manage InfiniBand switches and InfiniBand networks.

1044
01:15:08,000 --> 01:15:12,000
And so what we've done is we've brought the capabilities

1045
01:15:12,000 --> 01:15:15,000
of InfiniBand to the Ethernet architecture,

1046
01:15:15,000 --> 01:15:17,000
which is incredibly hard.

1047
01:15:17,000 --> 01:15:19,000
And the reason for that is this.

1048
01:15:19,000 --> 01:15:25,000
Ethernet was designed for high average throughput

1049
01:15:25,000 --> 01:15:29,000
because every single node, every single computer

1050
01:15:29,000 --> 01:15:31,000
is connected to a different person on the Internet,

1051
01:15:31,000 --> 01:15:34,000
and most of the communication is between the data center

1052
01:15:34,000 --> 01:15:36,000
and somebody on the other side of the Internet.

1053
01:15:36,000 --> 01:15:41,000
However, in deep learning and AI factories,

1054
01:15:41,000 --> 01:15:44,000
the GPUs are not communicating with people

1055
01:15:44,000 --> 01:15:46,000
on the Internet mostly.

1056
01:15:46,000 --> 01:15:48,000
They're communicating with each other.

1057
01:15:48,000 --> 01:15:50,000
They're communicating with each other

1058
01:15:50,000 --> 01:15:54,000
because they're all-- they're collecting partial products,

1059
01:15:54,000 --> 01:15:57,000
and they have to reduce it and then redistribute it.

1060
01:15:57,000 --> 01:16:01,000
Chunks of partial products, reduction, redistribution.

1061
01:16:01,000 --> 01:16:05,000
That traffic is incredibly bursty.

1062
01:16:05,000 --> 01:16:09,000
And it is not the average throughput that matters.

1063
01:16:09,000 --> 01:16:12,000
It's the last arrival that matters.

1064
01:16:12,000 --> 01:16:14,000
Because if you're reducing,

1065
01:16:14,000 --> 01:16:17,000
collecting partial products from everybody,

1066
01:16:17,000 --> 01:16:19,000
if I'm trying to take all of your...

1067
01:16:19,000 --> 01:16:23,000
[speaking Chinese]

1068
01:16:23,000 --> 01:16:25,000
[laughter]

1069
01:16:25,000 --> 01:16:28,000
[speaking Chinese]

1070
01:16:28,000 --> 01:16:30,000
So it's not the average throughput.

1071
01:16:30,000 --> 01:16:34,000
It's whoever gives me the answer last.

1072
01:16:34,000 --> 01:16:35,000
Okay?

1073
01:16:35,000 --> 01:16:37,000
Ethernet has no provision for that.
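(A toy model of why the last arrival gates a synchronous reduction: the step finishes only when the slowest transfer lands, so tail latency, not average throughput, sets the pace. All numbers here are made up for illustration.)

```python
# Everyone waits for the slowest partial-product transfer each step.
import random

def step_time(n_gpus: int, mean_ms: float = 1.0, jitter_ms: float = 0.2) -> float:
    arrivals = [random.gauss(mean_ms, jitter_ms) for _ in range(n_gpus)]
    return max(arrivals)  # the step completes at the last arrival

random.seed(0)
for n in (8, 72, 10_000):
    avg = sum(step_time(n) for _ in range(100)) / 100
    print(f"{n:>6} GPUs: step gated at ~{avg:.2f} ms on average")
```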

1074
01:16:37,000 --> 01:16:40,000
And so there are several things that we had to create.

1075
01:16:40,000 --> 01:16:43,000
We created an end-to-end architecture

1076
01:16:43,000 --> 01:16:46,000
so that the NIC and the switch can communicate.

1077
01:16:46,000 --> 01:16:49,000
And we applied four different technologies

1078
01:16:49,000 --> 01:16:50,000
to make this possible.

1079
01:16:50,000 --> 01:16:53,000
Number one, NVIDIA has the world's most advanced RDMA.

1080
01:16:53,000 --> 01:16:57,000
And so now we have the ability to have a network-level RDMA

1081
01:16:57,000 --> 01:17:00,000
for Ethernet that is incredibly great.

1082
01:17:00,000 --> 01:17:03,000
Number two, we have congestion control.

1083
01:17:03,000 --> 01:17:06,000
The switch does telemetry at all times incredibly fast.

1084
01:17:06,000 --> 01:17:12,000
And whenever the GPUs or the NICs

1085
01:17:12,000 --> 01:17:13,000
are sending too much information,

1086
01:17:13,000 --> 01:17:14,000
we can tell them to back off

1087
01:17:14,000 --> 01:17:17,000
so that it doesn't create hot spots.

1088
01:17:17,000 --> 01:17:20,000
Number three, adaptive routing.

1089
01:17:20,000 --> 01:17:25,000
Ethernet needs to transmit and receive in order.

1090
01:17:25,000 --> 01:17:28,000
We see congestion,

1091
01:17:28,000 --> 01:17:32,000
or we see ports that are not currently being used.

1092
01:17:32,000 --> 01:17:34,000
Irrespective of the ordering,

1093
01:17:34,000 --> 01:17:36,000
we will send it to the available ports,

1094
01:17:36,000 --> 01:17:39,000
and BlueField, on the other end,

1095
01:17:39,000 --> 01:17:42,000
reorders it so that it comes back in order.

1096
01:17:42,000 --> 01:17:44,000
That adaptive routing, incredibly powerful.

1097
01:17:44,000 --> 01:17:47,000
And then lastly, noise isolation.

1098
01:17:47,000 --> 01:17:50,000
There's more than one model being trained

1099
01:17:50,000 --> 01:17:53,000
or something happening in the data center at all times,

1100
01:17:53,000 --> 01:17:55,000
and their noise and their traffic

1101
01:17:55,000 --> 01:17:58,000
could get into each other and cause jitter.

1102
01:17:58,000 --> 01:18:01,000
And so when the noise of one training model,

1103
01:18:01,000 --> 01:18:03,000
one model training,

1104
01:18:03,000 --> 01:18:07,000
causes the last arrival to end up too late,

1105
01:18:07,000 --> 01:18:09,000
it really slows down the training.

1106
01:18:09,000 --> 01:18:11,000
Well, overall, remember,

1107
01:18:11,000 --> 01:18:16,000
you have built a $5 billion or $3 billion data center,

1108
01:18:16,000 --> 01:18:18,000
and you're using this for training.

1109
01:18:18,000 --> 01:18:23,000
If the utilization, network utilization,

1110
01:18:23,000 --> 01:18:28,000
was 40% lower, and as a result,

1111
01:18:28,000 --> 01:18:31,000
the training time was 20% longer,

1112
01:18:31,000 --> 01:18:34,000
the $5 billion data center

1113
01:18:34,000 --> 01:18:37,000
is effectively like a $6 billion data center.

1114
01:18:37,000 --> 01:18:39,000
So the cost is incredible--

1115
01:18:39,000 --> 01:18:43,000
the cost impact is quite high.
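(The arithmetic: if stragglers stretch training time by 20%, the effective capital cost per trained model scales the same way.)

$$
\$5\,\mathrm{B} \times 1.2 = \$6\,\mathrm{B}
$$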

1116
01:18:43,000 --> 01:18:45,000
Ethernet with Spectrum-X

1117
01:18:45,000 --> 01:18:49,000
basically allows us to improve the performance so much

1118
01:18:49,000 --> 01:18:51,000
that the network is basically free.

1119
01:18:51,000 --> 01:18:53,000
And so this is really quite an achievement.

1120
01:18:53,000 --> 01:18:56,000
We have a whole pipeline

1121
01:18:56,000 --> 01:18:57,000
of Ethernet products behind us.

1122
01:18:57,000 --> 01:18:59,000
This is Spectrum-X800.

1123
01:18:59,000 --> 01:19:03,000
It is 51.2 terabits per second

1124
01:19:03,000 --> 01:19:06,000
and 256 radix.

1125
01:19:06,000 --> 01:19:10,000
The next one coming, at 512 radix, is one year from now.

1126
01:19:10,000 --> 01:19:14,000
512 radix, and that's called Spectrum-X800 Ultra,

1127
01:19:14,000 --> 01:19:17,000
and the one after that is X1600.

1128
01:19:17,000 --> 01:19:19,000
But the important idea is this.

1129
01:19:19,000 --> 01:19:24,000
X800 is designed for

1130
01:19:24,000 --> 01:19:27,000
tens of thousands of GPUs.

1131
01:19:27,000 --> 01:19:32,000
X800 Ultra is designed for hundreds of thousands of GPUs,

1132
01:19:32,000 --> 01:19:36,000
and X1600 is designed for millions of GPUs.
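(One way to see how switch radix maps to cluster size, using standard fat-tree math as an illustration rather than NVIDIA's stated topology: a two-tier network with radix r reaches about r²/2 endpoints, and a three-tier network about r³/4.)

$$
\frac{256^2}{2} = 32{,}768, \qquad \frac{512^2}{2} = 131{,}072, \qquad \frac{512^3}{4} \approx 33.6\,\mathrm{M}
$$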

1133
01:19:36,000 --> 01:19:40,000
The days of millions of GPU data centers are coming.

1134
01:19:40,000 --> 01:19:43,000
And the reason for that is very simple.

1135
01:19:43,000 --> 01:19:45,000
Of course, we want to train much larger models,

1136
01:19:45,000 --> 01:19:48,000
but very importantly, in the future,

1137
01:19:48,000 --> 01:19:51,000
almost every interaction you have with the Internet

1138
01:19:51,000 --> 01:19:54,000
or with a computer will likely have a generative AI

1139
01:19:54,000 --> 01:19:57,000
running in the cloud somewhere.

1140
01:19:57,000 --> 01:19:59,000
And that generative AI is working with you,

1141
01:19:59,000 --> 01:20:01,000
interacting with you,

1142
01:20:01,000 --> 01:20:04,000
generating videos or images or text

1143
01:20:04,000 --> 01:20:06,000
or maybe a digital human.

1144
01:20:06,000 --> 01:20:08,000
And so you're interacting with your computer

1145
01:20:08,000 --> 01:20:10,000
almost all the time,

1146
01:20:10,000 --> 01:20:13,000
and there's always a generative AI connected to that.

1147
01:20:13,000 --> 01:20:16,000
Some of it is on-prem, some of it is on your device,

1148
01:20:16,000 --> 01:20:18,000
and a lot of it could be in the cloud.

1149
01:20:18,000 --> 01:20:22,000
These generative AIs will also have a lot of reasoning capability.

1150
01:20:22,000 --> 01:20:24,000
Instead of just one-shot answers,

1151
01:20:24,000 --> 01:20:26,000
they might iterate on answers

1152
01:20:26,000 --> 01:20:28,000
so that it improves the quality of the answer

1153
01:20:28,000 --> 01:20:30,000
before they give it to you.

1154
01:20:30,000 --> 01:20:33,000
And so the amount of generation we're gonna do in the future

1155
01:20:33,000 --> 01:20:35,000
is going to be extraordinary.

1156
01:20:35,000 --> 01:20:37,000
Let's take a look at all of this put together.

1157
01:20:37,000 --> 01:20:41,000
Now, tonight, this is our first nighttime keynote.

1158
01:20:41,000 --> 01:20:45,000
I want to thank...

1159
01:20:45,000 --> 01:20:48,000
[cheers and applause]

1160
01:20:48,000 --> 01:20:54,000
I want to thank all of you for coming out tonight

1161
01:20:54,000 --> 01:20:56,000
at 7 o'clock,

1162
01:20:56,000 --> 01:20:59,000
and so what I'm about to show you

1163
01:20:59,000 --> 01:21:01,000
has a new vibe, okay?

1164
01:21:01,000 --> 01:21:03,000
There's a new vibe.

1165
01:21:03,000 --> 01:21:06,000
This is kind of the nighttime keynote vibe.

1166
01:21:06,000 --> 01:21:08,000
So enjoy this.

1167
01:21:08,000 --> 01:21:11,000
[cheers and applause]

1168
01:21:11,000 --> 01:21:14,000
[upbeat music]

1169
01:21:14,000 --> 01:21:16,000
- ♪ Black flow ♪

1170
01:21:16,000 --> 01:21:20,000
♪ Ha ♪

1171
01:21:20,000 --> 01:21:27,000
♪ Let's go ♪

1172
01:21:27,000 --> 01:21:29,000
♪ Go, go, go, go ♪

1173
01:21:29,000 --> 01:21:34,000
♪ Okay ♪

1174
01:21:34,000 --> 01:21:35,000
♪ Uh ♪

1175
01:21:35,000 --> 01:21:41,000
♪ Uh ♪

1176
01:21:41,000 --> 01:21:45,000
♪ Black flow ♪

1177
01:21:45,000 --> 01:21:46,000
♪ Ha ♪

1178
01:21:46,000 --> 01:21:50,000
♪ Ha ♪

1179
01:21:50,000 --> 01:21:54,000
♪ Ha ♪

1180
01:21:54,000 --> 01:21:57,000
♪ Just something sharp ♪

1181
01:21:57,000 --> 01:21:59,000
♪ Uh, don't go away ♪

1182
01:21:59,000 --> 01:22:01,000
♪ Uh, push it on me ♪

1183
01:22:01,000 --> 01:22:03,000
♪ Uh, just the way ♪

1184
01:22:03,000 --> 01:22:04,000
♪ Uh ♪

1185
01:22:04,000 --> 01:22:07,000
♪ Uh ♪

1186
01:22:07,000 --> 01:22:09,000
♪ Come on ♪

1187
01:22:09,000 --> 01:22:12,000
♪ Yeah, yeah, yeah, yeah ♪

1188
01:22:12,000 --> 01:22:16,000
♪ Get it, y'all ♪

1189
01:22:16,000 --> 01:22:17,000
♪ Get it, y'all ♪

1190
01:22:17,000 --> 01:22:23,000
♪ Let's go ♪

1191
01:22:23,000 --> 01:22:30,000
♪ Uh, uh, uh ♪

1192
01:22:30,000 --> 01:22:31,000
♪ Uh, uh ♪

1193
01:22:31,000 --> 01:22:32,000
♪ Uh, uh ♪

1194
01:22:32,000 --> 01:22:36,000
♪ The more you back, the more you safe ♪

1195
01:22:36,000 --> 01:22:38,000
♪ With top AI, tailor-made ♪

1196
01:22:38,000 --> 01:22:40,000
♪ That's nothing to speed or light ♪

1197
01:22:40,000 --> 01:22:42,000
♪ Efficient, that's to date ♪

1198
01:22:42,000 --> 01:22:43,000
♪ Just something sharp ♪

1199
01:22:43,000 --> 01:22:45,000
♪ Uh, don't go away ♪

1200
01:22:45,000 --> 01:22:47,000
♪ Uh, push it on me ♪

1201
01:22:47,000 --> 01:22:51,000
♪ Let's go ♪

1202
01:22:51,000 --> 01:22:54,000
[cheers and applause]

1203
01:22:58,000 --> 01:23:01,000
Now, you can't do that on a morning keynote.

1204
01:23:01,000 --> 01:23:09,000
I think that style of keynote has never been done in Computex ever.

1205
01:23:09,000 --> 01:23:13,000
Might be the last.

1206
01:23:13,000 --> 01:23:16,000
[applause]

1207
01:23:16,000 --> 01:23:21,000
Only NVIDIA can pull that off.

1208
01:23:21,000 --> 01:23:24,000
Only I can do that.

1209
01:23:24,000 --> 01:23:27,000
[cheers and applause]

1210
01:23:29,000 --> 01:23:31,000
Blackwell, of course,

1211
01:23:31,000 --> 01:23:35,000
is the first generation of NVIDIA platforms

1212
01:23:35,000 --> 01:23:37,000
that was launched

1213
01:23:37,000 --> 01:23:39,000
right as the world knows

1214
01:23:39,000 --> 01:23:42,000
the generative AI era is here,

1215
01:23:42,000 --> 01:23:46,000
just as the world realized the importance of AI factories,

1216
01:23:46,000 --> 01:23:49,000
just at the beginning of this new industrial revolution.

1217
01:23:49,000 --> 01:23:51,000
We have so much support.

1218
01:23:51,000 --> 01:23:54,000
Nearly every OEM, every computer maker,

1219
01:23:54,000 --> 01:23:59,000
every CSP, every GPU cloud, sovereign clouds,

1220
01:23:59,000 --> 01:24:02,000
even telecommunication companies,

1221
01:24:02,000 --> 01:24:05,000
enterprises all over the world,

1222
01:24:05,000 --> 01:24:08,000
the amount of success, the amount of adoption,

1223
01:24:08,000 --> 01:24:11,000
the amount of enthusiasm for Blackwell

1224
01:24:11,000 --> 01:24:12,000
is just really off the charts,

1225
01:24:12,000 --> 01:24:14,000
and I want to thank everybody for that.

1226
01:24:14,000 --> 01:24:18,000
[applause]

1227
01:24:18,000 --> 01:24:22,000
We're not stopping there.

1228
01:24:22,000 --> 01:24:26,000
During this time of incredible growth,

1229
01:24:26,000 --> 01:24:30,000
we want to make sure that we continue to enhance performance,

1230
01:24:30,000 --> 01:24:32,000
continue to drive down cost,

1231
01:24:32,000 --> 01:24:35,000
cost of training, cost of inference,

1232
01:24:35,000 --> 01:24:38,000
and continue to scale out AI capabilities

1233
01:24:38,000 --> 01:24:40,000
for every company to embrace.

1234
01:24:40,000 --> 01:24:43,000
The further we drive performance up,

1235
01:24:43,000 --> 01:24:45,000
the greater the cost decline.

1236
01:24:45,000 --> 01:24:47,000
Hopper Platform, of course,

1237
01:24:47,000 --> 01:24:50,000
was the most successful data center processor

1238
01:24:50,000 --> 01:24:52,000
probably in history.

1239
01:24:52,000 --> 01:24:56,000
And this is just an incredible, incredible success story.

1240
01:24:56,000 --> 01:24:58,000
However, Blackwell is here,

1241
01:24:58,000 --> 01:25:01,000
and every single platform, as you'll notice,

1242
01:25:01,000 --> 01:25:02,000
comprises several things.

1243
01:25:02,000 --> 01:25:04,000
You've got the CPU, you have the GPU,

1244
01:25:04,000 --> 01:25:05,000
you have NVLink, you have the NIC,

1245
01:25:05,000 --> 01:25:07,000
and you have the switch.

1246
01:25:07,000 --> 01:25:10,000
The NVLink switch connects

1247
01:25:10,000 --> 01:25:14,000
all of the GPUs together into as large a domain as we can,

1248
01:25:14,000 --> 01:25:16,000
and wherever we can, we connect it with

1249
01:25:16,000 --> 01:25:19,000
very large and very high-speed switches.

1250
01:25:19,000 --> 01:25:21,000
Every single generation, as you'll see,

1251
01:25:21,000 --> 01:25:24,000
is not just the GPU, but an entire platform.

1252
01:25:24,000 --> 01:25:27,000
We build the entire platform,

1253
01:25:27,000 --> 01:25:28,000
we integrate the entire platform

1254
01:25:28,000 --> 01:25:30,000
into an AI factory supercomputer.

1255
01:25:30,000 --> 01:25:34,000
However, then we disaggregate it and offer it to the world.

1256
01:25:34,000 --> 01:25:37,000
And the reason for that is because all of you

1257
01:25:37,000 --> 01:25:41,000
could create interesting and innovative configurations

1258
01:25:41,000 --> 01:25:45,000
and all kinds of different styles

1259
01:25:45,000 --> 01:25:47,000
to fit different data centers

1260
01:25:47,000 --> 01:25:49,000
and different customers and different places,

1261
01:25:49,000 --> 01:25:51,000
some of it for Edge, some of it for Telco.

1262
01:25:51,000 --> 01:25:54,000
All of this different innovation is possible

1263
01:25:54,000 --> 01:25:56,000
because we made the systems open

1264
01:25:56,000 --> 01:25:58,000
and make it possible for you to innovate.

1265
01:25:58,000 --> 01:26:01,000
And so we designed it, integrated,

1266
01:26:01,000 --> 01:26:04,000
but we offer it to you disaggregated

1267
01:26:04,000 --> 01:26:06,000
so that you could create modular systems.

1268
01:26:06,000 --> 01:26:09,000
The Blackwell platform is here.

1269
01:26:09,000 --> 01:26:12,000
Our company is on a one-year rhythm.

1270
01:26:12,000 --> 01:26:15,000
Our basic philosophy is very simple.

1271
01:26:15,000 --> 01:26:18,000
One, build the entire data center scale,

1272
01:26:18,000 --> 01:26:21,000
disaggregate it and sell it to you in parts

1273
01:26:21,000 --> 01:26:23,000
on a one-year rhythm,

1274
01:26:23,000 --> 01:26:25,000
and we push everything to technology limits.

1275
01:26:25,000 --> 01:26:29,000
Whatever TSMC process technology there is,

1276
01:26:29,000 --> 01:26:31,000
we'll push it to the absolute limits.

1277
01:26:31,000 --> 01:26:33,000
Whatever packaging technology, push it to the absolute limits.

1278
01:26:33,000 --> 01:26:36,000
Whatever memory technology, push it to absolute limits.

1279
01:26:36,000 --> 01:26:38,000
SerDes technology, optics technology,

1280
01:26:38,000 --> 01:26:41,000
everything is pushed to the limit.

1281
01:26:41,000 --> 01:26:44,000
Well, and then after that, do everything in such a way

1282
01:26:44,000 --> 01:26:49,000
so that all of our software runs on this entire install base.

1283
01:26:49,000 --> 01:26:53,000
Software inertia is the single most important thing in computers.

1284
01:26:53,000 --> 01:26:56,000
When a computer is backwards compatible

1285
01:26:56,000 --> 01:26:58,000
and architecturally compatible

1286
01:26:58,000 --> 01:27:00,000
with all the software that has already been created,

1287
01:27:00,000 --> 01:27:03,000
your ability to go to market is so much faster.

1288
01:27:03,000 --> 01:27:07,000
And so the velocity is incredible

1289
01:27:07,000 --> 01:27:10,000
when we can take advantage of the entire install base

1290
01:27:10,000 --> 01:27:12,000
of software that has already been created.

1291
01:27:12,000 --> 01:27:14,000
So Blackwell is here.

1292
01:27:14,000 --> 01:27:17,000
Next year is Blackwell Ultra.

1293
01:27:17,000 --> 01:27:20,000
Just as we had H100 and H200,

1294
01:27:20,000 --> 01:27:24,000
you'll probably see some pretty exciting new generation from us

1295
01:27:24,000 --> 01:27:26,000
for Blackwell Ultra,

1296
01:27:26,000 --> 01:27:28,000
again pushed to the limits,

1297
01:27:28,000 --> 01:27:32,000
and the next generation Spectrum switches I mentioned.

1298
01:27:32,000 --> 01:27:35,000
Well, this is the very first time

1299
01:27:35,000 --> 01:27:40,000
that this next click has been made.

1300
01:27:40,000 --> 01:27:44,000
And I'm not sure yet whether I'm gonna regret this or not.

1301
01:27:44,000 --> 01:27:47,000
[cheers and applause]

1302
01:27:47,000 --> 01:27:53,000
We have code names in our company,

1303
01:27:53,000 --> 01:27:56,000
and we try to keep them very secret.

1304
01:27:56,000 --> 01:27:59,000
Oftentimes, most of the employees don't even know.

1305
01:27:59,000 --> 01:28:02,000
But our next generation platform is called Rubin.

1306
01:28:02,000 --> 01:28:05,000
The Rubin platform--the Rubin platform--

1307
01:28:05,000 --> 01:28:07,000
I'm not gonna spend much time on it.

1308
01:28:07,000 --> 01:28:09,000
I know what's gonna happen.

1309
01:28:09,000 --> 01:28:10,000
You're gonna take pictures of it,

1310
01:28:10,000 --> 01:28:12,000
and you're gonna go look at the fine prints

1311
01:28:12,000 --> 01:28:13,000
and feel free to do that.

1312
01:28:13,000 --> 01:28:15,000
So we have the Rubin platform,

1313
01:28:15,000 --> 01:28:18,000
and one year later, we have the Rubin Ultra platform.

1314
01:28:18,000 --> 01:28:20,000
All of these chips that I'm showing you here

1315
01:28:20,000 --> 01:28:23,000
are all in full development, 100% of them.

1316
01:28:23,000 --> 01:28:27,000
And the rhythm is one year at the limits of technology,

1317
01:28:27,000 --> 01:28:30,000
all 100% architecturally compatible.

1318
01:28:30,000 --> 01:28:33,000
So this is basically what NVIDIA's building

1319
01:28:33,000 --> 01:28:35,000
and all of the richness of software on top of it.

1320
01:28:35,000 --> 01:28:38,000
So in a lot of ways, the last 12 years,

1321
01:28:38,000 --> 01:28:42,000
from that moment of ImageNet

1322
01:28:42,000 --> 01:28:45,000
and us realizing that the future of computing

1323
01:28:45,000 --> 01:28:48,000
was gonna radically change to today

1324
01:28:48,000 --> 01:28:51,000
is really exactly as I was holding up earlier--

1325
01:28:51,000 --> 01:28:56,000
GeForce, pre-2012, and NVIDIA today.

1326
01:28:56,000 --> 01:28:59,000
The company has really transformed tremendously,

1327
01:28:59,000 --> 01:29:01,000
and I want to thank all of our partners here

1328
01:29:01,000 --> 01:29:04,000
for supporting us every step along the way.

1329
01:29:04,000 --> 01:29:07,000
This is the NVIDIA Blackwell platform.

1330
01:29:07,000 --> 01:29:10,000
[cheers and applause]

1331
01:29:10,000 --> 01:29:19,000
Let me talk about what's next.

1332
01:29:19,000 --> 01:29:23,000
The next wave of AI is physical AI,

1333
01:29:23,000 --> 01:29:26,000
AI that understands the laws of physics,

1334
01:29:26,000 --> 01:29:30,000
AI that can work among us.

1335
01:29:30,000 --> 01:29:34,000
And so they have to understand the world model

1336
01:29:34,000 --> 01:29:37,000
so that they understand how to interpret the world,

1337
01:29:37,000 --> 01:29:39,000
how to perceive the world.

1338
01:29:39,000 --> 01:29:42,000
They have to, of course, have excellent cognitive capabilities

1339
01:29:42,000 --> 01:29:43,000
so they can understand us,

1340
01:29:43,000 --> 01:29:48,000
understand what we asked, and perform the tasks.

1341
01:29:48,000 --> 01:29:55,000
In the future, robotics is a much more pervasive idea.

1342
01:29:55,000 --> 01:29:57,000
Of course, when I say robotics,

1343
01:29:57,000 --> 01:29:59,000
there's humanoid robotics, which is usually

1344
01:29:59,000 --> 01:30:01,000
the representation of that.

1345
01:30:01,000 --> 01:30:04,000
But that's not at all true.

1346
01:30:04,000 --> 01:30:06,000
Everything is gonna be robotic.

1347
01:30:06,000 --> 01:30:08,000
All of the factories will be robotic.

1348
01:30:08,000 --> 01:30:11,000
The factories will orchestrate robots,

1349
01:30:11,000 --> 01:30:14,000
and those robots will be building products

1350
01:30:14,000 --> 01:30:16,000
that are robotic.

1351
01:30:16,000 --> 01:30:19,000
Robots interacting with robots,

1352
01:30:19,000 --> 01:30:22,000
building products that are robotic.

1353
01:30:22,000 --> 01:30:24,000
Well, in order for us to do that,

1354
01:30:24,000 --> 01:30:26,000
we need to make some breakthroughs,

1355
01:30:26,000 --> 01:30:28,000
and let me show you the video.

1356
01:30:30,000 --> 01:30:32,000
[video playing]

1357
01:30:32,000 --> 01:30:35,000
[music playing]

1358
01:30:35,000 --> 01:30:38,000
The era of robotics has arrived.

1359
01:30:38,000 --> 01:30:44,000
One day, everything that moves will be autonomous.

1360
01:30:44,000 --> 01:30:46,000
Researchers and companies around the world

1361
01:30:46,000 --> 01:30:51,000
are developing robots powered by physical AI.

1362
01:30:51,000 --> 01:30:55,000
Physical AIs are models that can understand instructions

1363
01:30:55,000 --> 01:31:01,000
and autonomously perform complex tasks in the real world.

1364
01:31:01,000 --> 01:31:04,000
Multimodal LLMs are breakthroughs

1365
01:31:04,000 --> 01:31:07,000
that enable robots to learn, perceive,

1366
01:31:07,000 --> 01:31:09,000
and understand the world around them

1367
01:31:09,000 --> 01:31:11,000
and plan how they'll act.

1368
01:31:11,000 --> 01:31:13,000
And from human demonstrations,

1369
01:31:13,000 --> 01:31:16,000
robots can now learn the skills required

1370
01:31:16,000 --> 01:31:21,000
to interact with the world using gross and fine motor skills.

1371
01:31:21,000 --> 01:31:24,000
One of the integral technologies for advancing robotics

1372
01:31:24,000 --> 01:31:26,000
is reinforcement learning.

1373
01:31:26,000 --> 01:31:29,000
Just as LLMs need RLHF,

1374
01:31:29,000 --> 01:31:32,000
or reinforcement learning from human feedback,

1375
01:31:32,000 --> 01:31:35,000
to learn particular skills, generative physical AI

1376
01:31:35,000 --> 01:31:38,000
can learn skills using reinforcement learning

1377
01:31:38,000 --> 01:31:41,000
from physics feedback in a simulated world.

1378
01:31:41,000 --> 01:31:43,000
These simulation environments

1379
01:31:43,000 --> 01:31:46,000
are where robots learn to make decisions

1380
01:31:46,000 --> 01:31:49,000
by performing actions in a virtual world

1381
01:31:49,000 --> 01:31:52,000
that obeys the laws of physics.

1382
01:31:52,000 --> 01:31:54,000
In these robot gyms,

1383
01:31:54,000 --> 01:31:58,000
a robot can learn to perform complex and dynamic tasks

1384
01:31:58,000 --> 01:32:00,000
safely and quickly,

1385
01:32:00,000 --> 01:32:04,000
refining its skills through millions of acts of trial and error.
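
To make the "robot gym" idea concrete, here is a minimal sketch in Python of learning from physics feedback: a toy one-dimensional cart whose controller improves purely by trial and error against a simulated physics model. All names here are illustrative stand-ins, not NVIDIA APIs; real systems use full physics engines and far more capable learning algorithms.

    import random

    # Toy "robot gym": a 1-D cart must reach position 10.0 under
    # simple Euler-integrated physics. The "policy" is one gain
    # parameter; the reward comes from the simulated physics itself.
    def rollout(gain, steps=200, dt=0.05):
        pos, vel = 0.0, 0.0
        for _ in range(steps):
            force = gain * (10.0 - pos)   # proportional controller
            vel += force * dt             # F = ma with m = 1
            vel *= 0.98                   # friction
            pos += vel * dt
        return -abs(10.0 - pos)           # physics feedback as reward

    # Random-search "training": perturb the policy, keep improvements.
    best_gain, best_reward = 1.0, rollout(1.0)
    for _ in range(1000):                 # real training runs millions of trials
        candidate = best_gain + random.gauss(0.0, 0.1)
        reward = rollout(candidate)
        if reward > best_reward:
            best_gain, best_reward = candidate, reward

    print(f"learned gain={best_gain:.3f}, final error={-best_reward:.4f}")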

1386
01:32:04,000 --> 01:32:08,000
We built NVIDIA Omniverse as the operating system

1387
01:32:08,000 --> 01:32:11,000
where physical AIs can be created.

1388
01:32:11,000 --> 01:32:14,000
Omniverse is a development platform

1389
01:32:14,000 --> 01:32:16,000
for virtual world simulation,

1390
01:32:16,000 --> 01:32:20,000
combining real-time, physically-based rendering,

1391
01:32:20,000 --> 01:32:22,000
physics simulation,

1392
01:32:22,000 --> 01:32:26,000
and generative AI technologies.

1393
01:32:26,000 --> 01:32:30,000
In Omniverse, robots can learn how to be robots.

1394
01:32:30,000 --> 01:32:34,000
They learn how to autonomously manipulate objects with precision,

1395
01:32:34,000 --> 01:32:38,000
such as grasping and handling objects,

1396
01:32:38,000 --> 01:32:41,000
or navigate environments autonomously,

1397
01:32:41,000 --> 01:32:46,000
finding optimal paths while avoiding obstacles and hazards.
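
As a rough illustration of the navigation half of this, the sketch below plans a shortest path on a small occupancy grid with breadth-first search, the simplest version of finding a route around obstacles. It is a self-contained toy, not the planner Omniverse actually uses.

    from collections import deque

    # Occupancy grid: 1 = obstacle, 0 = free space.
    GRID = [
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
    ]

    def plan(start, goal):
        # BFS over free cells gives a shortest path in an unweighted grid.
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:              # walk parents back to the start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                        and GRID[nr][nc] == 0 and (nr, nc) not in came_from):
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None                        # goal unreachable

    print(plan((0, 0), (4, 4)))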

1398
01:32:46,000 --> 01:32:51,000
Learning in Omniverse minimizes the sim-to-real gap

1399
01:32:51,000 --> 01:32:55,000
and maximizes the transfer of learned behavior.

1400
01:32:55,000 --> 01:32:58,000
Building robots with generative physical AI

1401
01:32:58,000 --> 01:33:00,000
requires three computers,

1402
01:33:00,000 --> 01:33:04,000
NVIDIA AI supercomputers to train the models,

1403
01:33:04,000 --> 01:33:06,000
NVIDIA Jetson Orin

1404
01:33:06,000 --> 01:33:09,000
and next-generation Jetson Thor robotic supercomputer

1405
01:33:09,000 --> 01:33:11,000
to run the models,

1406
01:33:11,000 --> 01:33:13,000
and NVIDIA Omniverse,

1407
01:33:13,000 --> 01:33:18,000
where robots can learn and refine their skills in simulated worlds.
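
The three-computer pattern can be read as a simple pipeline: train at scale, validate in simulation, then package the result for the robot's onboard computer. The sketch below is purely illustrative; every function is a hypothetical stand-in for the corresponding stage, not an NVIDIA API.

    import json

    def train_on_supercomputer():
        # Stand-in for large-scale training on an NVIDIA AI supercomputer.
        return {"policy": "reach_target", "gain": 2.5}

    def validate_in_simulation(model):
        # Stand-in for testing the policy in an Omniverse-style virtual world.
        return model["gain"] > 0

    def export_for_robot(model, path="policy.json"):
        # Stand-in for packaging a runtime artifact for a Jetson-class device.
        with open(path, "w") as f:
            json.dump(model, f)
        return path

    model = train_on_supercomputer()
    if validate_in_simulation(model):
        print("deployed", export_for_robot(model))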

1408
01:33:18,000 --> 01:33:20,000
We build the platforms,

1409
01:33:20,000 --> 01:33:22,000
acceleration libraries,

1410
01:33:22,000 --> 01:33:26,000
and AI models needed by developers and companies,

1411
01:33:26,000 --> 01:33:32,000
and allow them to use any or all of the stacks that suit them best.

1412
01:33:32,000 --> 01:33:35,000
The next wave of AI is here.

1413
01:33:35,000 --> 01:33:38,000
Robotics, powered by physical AI,

1414
01:33:38,000 --> 01:33:41,000
will revolutionize industries.

1415
01:33:43,000 --> 01:33:50,000
[Applause]

1416
01:33:50,000 --> 01:33:52,000
This isn't the future.

1417
01:33:52,000 --> 01:33:55,000
This is happening now.

1418
01:33:55,000 --> 01:33:58,000
There are several ways that we're going to serve the market.

1419
01:33:58,000 --> 01:34:00,000
First, we're going to create platforms

1420
01:34:00,000 --> 01:34:03,000
for each type of robotic system:

1421
01:34:03,000 --> 01:34:05,000
one for robotic factories and warehouses,

1422
01:34:05,000 --> 01:34:10,000
one for robots that manipulate things,

1423
01:34:10,000 --> 01:34:12,000
one for robots that move,

1424
01:34:12,000 --> 01:34:15,000
and one for robots that are humanoid.

1425
01:34:15,000 --> 01:34:19,000
And so each one of these robotics platforms

1426
01:34:19,000 --> 01:34:22,000
is like almost everything else we do--

1427
01:34:22,000 --> 01:34:25,000
a computer, acceleration libraries, and pre-trained models.

1428
01:34:25,000 --> 01:34:28,000
Computers, acceleration libraries, pre-trained models.

1429
01:34:28,000 --> 01:34:31,000
And we test everything, we train everything,

1430
01:34:31,000 --> 01:34:34,000
and integrate everything inside Omniverse,

1431
01:34:34,000 --> 01:34:37,000
where Omniverse is, as the video was saying,

1432
01:34:37,000 --> 01:34:39,000
where robots learn how to be robots.

1433
01:34:39,000 --> 01:34:43,000
Now, of course, the ecosystem of robotic warehouses

1434
01:34:43,000 --> 01:34:45,000
is really, really complex.

1435
01:34:45,000 --> 01:34:48,000
It takes a lot of companies, a lot of tools,

1436
01:34:48,000 --> 01:34:51,000
a lot of technology to build a modern warehouse.

1437
01:34:51,000 --> 01:34:53,000
And warehouses are increasingly robotic.

1438
01:34:53,000 --> 01:34:55,000
One of these days, they will be fully robotic.

1439
01:34:55,000 --> 01:34:58,000
And so in each one of these ecosystems,

1440
01:34:58,000 --> 01:35:01,000
we have SDKs and APIs

1441
01:35:01,000 --> 01:35:04,000
that are connected into the software industry,

1442
01:35:04,000 --> 01:35:08,000
SDKs and APIs connected into edge AI industry,

1443
01:35:08,000 --> 01:35:11,000
and companies, and then also, of course,

1444
01:35:11,000 --> 01:35:13,000
systems that are designed for PLCs

1445
01:35:13,000 --> 01:35:16,000
and robotic systems for the ODMs.

1446
01:35:16,000 --> 01:35:18,000
It's then integrated by integrators

1447
01:35:18,000 --> 01:35:23,000
who ultimately build warehouses for customers.

1448
01:35:23,000 --> 01:35:26,000
Here we have an example of Kenmec

1449
01:35:26,000 --> 01:35:30,000
building a robotic warehouse for Giant Group.

1450
01:35:37,000 --> 01:35:38,000
Okay.

1451
01:35:38,000 --> 01:35:41,000
And then here, now let's talk about factories.

1452
01:35:41,000 --> 01:35:44,000
Factories have a completely different ecosystem.

1453
01:35:44,000 --> 01:35:45,000
And Foxconn is building

1454
01:35:45,000 --> 01:35:47,000
some of the world's most advanced factories.

1455
01:35:47,000 --> 01:35:51,000
Their ecosystem, again, edge computers and robotics,

1456
01:35:51,000 --> 01:35:55,000
software for designing the factories,

1457
01:35:55,000 --> 01:35:58,000
the workflows, programming the robots,

1458
01:35:58,000 --> 01:36:00,000
and of course, PLC computers

1459
01:36:00,000 --> 01:36:04,000
that orchestrate the digital factories and the AI factories.

1460
01:36:04,000 --> 01:36:07,000
We have SDKs that are connected into each one

1461
01:36:07,000 --> 01:36:09,000
of these ecosystems as well.

1462
01:36:09,000 --> 01:36:12,000
This is happening all over Taiwan.

1463
01:36:12,000 --> 01:36:17,000
Foxconn is building digital twins of their factories.

1464
01:36:17,000 --> 01:36:21,000
Delta is building digital twins of their factories.

1465
01:36:21,000 --> 01:36:24,000
By the way, half is real, half is digital,

1466
01:36:24,000 --> 01:36:26,000
half is Omniverse.

1467
01:36:26,000 --> 01:36:29,000
Pegatron is building digital twins

1468
01:36:29,000 --> 01:36:32,000
of their robotic factories.

1469
01:36:32,000 --> 01:36:35,000
Wistron is building digital twins

1470
01:36:35,000 --> 01:36:38,000
of their robotic factories.

1471
01:36:38,000 --> 01:36:40,000
And this is really cool.

1472
01:36:40,000 --> 01:36:43,000
This is a video of Foxconn's new factory.

1473
01:36:43,000 --> 01:36:45,000
Let's take a look.

1474
01:36:45,000 --> 01:36:52,000
Demand for NVIDIA-accelerated computing is skyrocketing

1475
01:36:52,000 --> 01:36:55,000
as the world modernizes traditional data centers

1476
01:36:55,000 --> 01:36:58,000
into generative AI factories.

1477
01:36:59,000 --> 01:37:03,000
Foxconn, the world's largest electronics manufacturer,

1478
01:37:03,000 --> 01:37:05,000
is gearing up to meet this demand

1479
01:37:05,000 --> 01:37:09,000
by building robotic factories with NVIDIA Omniverse and AI.

1480
01:37:09,000 --> 01:37:11,000
Factory planners use Omniverse

1481
01:37:11,000 --> 01:37:13,000
to integrate facility and equipment data

1482
01:37:13,000 --> 01:37:15,000
from leading industry applications

1483
01:37:15,000 --> 01:37:20,000
like Siemens Teamcenter X and Autodesk Revit.

1484
01:37:20,000 --> 01:37:23,000
In the digital twin, they optimize floor layout

1485
01:37:23,000 --> 01:37:25,000
and line configurations

1486
01:37:25,000 --> 01:37:27,000
and locate optimal camera placements

1487
01:37:27,000 --> 01:37:29,000
to monitor future operations

1488
01:37:29,000 --> 01:37:33,000
with NVIDIA Metropolis-powered Vision AI.

1489
01:37:33,000 --> 01:37:36,000
Virtual integration saves planners

1490
01:37:36,000 --> 01:37:40,000
from the enormous cost of physical change orders.

1491
01:37:40,000 --> 01:37:44,000
During construction, the Foxconn teams

1492
01:37:44,000 --> 01:37:47,000
use the digital twin as the source of truth

1493
01:37:47,000 --> 01:37:51,000
to communicate and validate accurate equipment layout.

1494
01:37:51,000 --> 01:37:56,000
The Omniverse digital twin is also the robot gym,

1495
01:37:56,000 --> 01:37:59,000
where Foxconn developers train and test

1496
01:37:59,000 --> 01:38:01,000
NVIDIA Isaac AI applications

1497
01:38:01,000 --> 01:38:04,000
for robotic perception and manipulation

1498
01:38:04,000 --> 01:38:08,000
and Metropolis AI applications for sensor fusion.

1499
01:38:08,000 --> 01:38:12,000
In Omniverse, Foxconn simulates two robot AIs

1500
01:38:12,000 --> 01:38:16,000
before deploying runtimes to Jetson computers.

1501
01:38:16,000 --> 01:38:18,000
On the assembly line,

1502
01:38:18,000 --> 01:38:21,000
they simulate Isaac Manipulator libraries and AI models

1503
01:38:21,000 --> 01:38:25,000
for automated optical inspection, object identification,

1504
01:38:25,000 --> 01:38:29,000
defect detection, and trajectory planning.
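
The optical-inspection step can be pictured, in its simplest form, as comparing a captured image against a known-good reference and flagging pixels that differ too much. The sketch below is a hypothetical toy using NumPy with synthetic data, not Foxconn's or NVIDIA's actual inspection models.

    import numpy as np

    rng = np.random.default_rng(0)
    golden = rng.random((64, 64))          # stand-in "known good" image
    captured = golden.copy()
    captured[20:24, 30:34] += 0.5          # inject a synthetic defect

    diff = np.abs(captured - golden)       # per-pixel deviation
    ys, xs = np.nonzero(diff > 0.25)       # threshold decision

    if ys.size:
        print(f"defect near row {ys.mean():.0f}, col {xs.mean():.0f}; "
              f"{ys.size} pixels flagged")
    else:
        print("unit passed inspection")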

1505
01:38:29,000 --> 01:38:32,000
To transfer HGX systems to the test pods,

1506
01:38:32,000 --> 01:38:36,000
they simulate Isaac Perceptor-powered Ferrobot AMRs

1507
01:38:36,000 --> 01:38:39,000
as they perceive and move about their environment

1508
01:38:39,000 --> 01:38:42,000
with 3D mapping and reconstruction.

1509
01:38:42,000 --> 01:38:46,000
With Omniverse, Foxconn builds their robotic factories

1510
01:38:46,000 --> 01:38:50,000
that orchestrate robots running on NVIDIA Isaac

1511
01:38:50,000 --> 01:38:52,000
to build NVIDIA AI supercomputers,

1512
01:38:52,000 --> 01:38:56,000
which in turn train Foxconn's robots.

1513
01:38:56,000 --> 01:39:14,000
So a robotic factory is designed with three computers.

1514
01:39:14,000 --> 01:39:17,000
Train the AI on NVIDIA AI.

1515
01:39:17,000 --> 01:39:21,000
You have the robot running on the PLC systems

1516
01:39:21,000 --> 01:39:24,000
for orchestrating the factories,

1517
01:39:24,000 --> 01:39:27,000
and then you, of course, simulate everything inside Omniverse.

1518
01:39:27,000 --> 01:39:30,000
Well, the robotic arm and the robotic AMRs

1519
01:39:30,000 --> 01:39:33,000
are also the same way-- three computer systems.

1520
01:39:33,000 --> 01:39:37,000
The difference is the two Omniverses will come together,

1521
01:39:37,000 --> 01:39:39,000
so they'll share one virtual space.

1522
01:39:39,000 --> 01:39:41,000
When they share one virtual space,

1523
01:39:41,000 --> 01:39:47,000
that robotic arm will be inside the robotic factory.
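
Omniverse scenes are built on OpenUSD, so "sharing one virtual space" can be pictured as composing two stages into one by reference. A minimal sketch, assuming the pxr Python bindings (the usd-core package) and two hypothetical input files, factory.usd and arm.usd:

    from pxr import Usd, UsdGeom

    # Compose a factory scene and a robot-arm scene into one shared
    # stage, so both simulations reference the same virtual world.
    stage = Usd.Stage.CreateNew("shared_world.usda")
    UsdGeom.Xform.Define(stage, "/World")

    factory = UsdGeom.Xform.Define(stage, "/World/Factory").GetPrim()
    factory.GetReferences().AddReference("factory.usd")   # hypothetical file

    arm = UsdGeom.Xform.Define(stage, "/World/Arm").GetPrim()
    arm.GetReferences().AddReference("arm.usd")           # hypothetical file

    # Place the arm at its station in the factory's coordinate frame.
    UsdGeom.XformCommonAPI(arm).SetTranslate((12.0, 0.0, 3.0))

    stage.GetRootLayer().Save()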

1524
01:39:47,000 --> 01:39:51,000
And again, three computers,

1525
01:39:51,000 --> 01:39:55,000
and we provide the computer, the acceleration layers,

1526
01:39:55,000 --> 01:39:58,000
and pre-trained AI models.

1527
01:39:58,000 --> 01:40:02,000
We've connected NVIDIA Manipulator and NVIDIA Omniverse

1528
01:40:02,000 --> 01:40:04,000
with Siemens, the world's leading

1529
01:40:04,000 --> 01:40:07,000
industrial automation software and systems company.

1530
01:40:07,000 --> 01:40:09,000
This is really a fantastic partnership,

1531
01:40:09,000 --> 01:40:12,000
and they're working on factories all over the world.

1532
01:40:12,000 --> 01:40:16,000
SIMATIC Pick AI now integrates Isaac Manipulator

1533
01:40:16,000 --> 01:40:21,000
and SIMATIC Pick AI runs and operates ABB, KUKA,

1534
01:40:21,000 --> 01:40:27,000
Yaskawa, FANUC, Universal Robots, and Techman.

1535
01:40:27,000 --> 01:40:30,000
And so Siemens is a fantastic integration.

1536
01:40:30,000 --> 01:40:32,000
We have all kinds of other integrations.

1537
01:40:32,000 --> 01:40:34,000
Let's take a look.

1538
01:40:34,000 --> 01:40:39,000
ArcBest is integrating Isaac Perceptor

1539
01:40:39,000 --> 01:40:42,000
into Vaux Smart Autonomy robots

1540
01:40:42,000 --> 01:40:44,000
for enhanced object recognition

1541
01:40:44,000 --> 01:40:48,000
and human motion tracking in material handling.

1542
01:40:48,000 --> 01:40:52,000
BYD Electronics is integrating Isaac Manipulator

1543
01:40:52,000 --> 01:40:55,000
and Perceptor into their AI robots

1544
01:40:55,000 --> 01:41:00,000
to enhance manufacturing efficiencies for global customers.

1545
01:41:00,000 --> 01:41:02,000
Idealworks is building Isaac Perceptor

1546
01:41:02,000 --> 01:41:08,000
into their iw.os software for AI robots in factory logistics.

1547
01:41:08,000 --> 01:41:10,000
Intrinsic, an Alphabet company,

1548
01:41:10,000 --> 01:41:14,000
is adopting Isaac Manipulator into their Flowstate platform

1549
01:41:14,000 --> 01:41:16,000
to advance robot grasping.

1550
01:41:16,000 --> 01:41:19,000
Gideon is integrating Isaac Perceptor

1551
01:41:19,000 --> 01:41:21,000
into Trey AI-powered forklifts

1552
01:41:21,000 --> 01:41:24,000
to advance AI-enabled logistics.

1553
01:41:24,000 --> 01:41:27,000
RGo Robotics is adopting Isaac Perceptor

1554
01:41:27,000 --> 01:41:32,000
into Perception Engine for advanced vision-based AMRs.

1555
01:41:32,000 --> 01:41:35,000
Solomon is using Isaac Manipulator AI models

1556
01:41:35,000 --> 01:41:39,000
in their AccuPick 3D software for industrial manipulation.

1557
01:41:39,000 --> 01:41:42,000
Techman Robot is adopting Isaac Sim and Manipulator

1558
01:41:42,000 --> 01:41:47,000
into TMflow, accelerating automated optical inspection.

1559
01:41:47,000 --> 01:41:51,000
Teradyne Robotics is integrating Isaac Manipulator

1560
01:41:51,000 --> 01:41:54,000
into PolyScope X for cobots

1561
01:41:54,000 --> 01:41:58,000
and Isaac Perceptor into MiR AMRs.

1562
01:41:58,000 --> 01:42:01,000
Vention is integrating Isaac Manipulator

1563
01:42:01,000 --> 01:42:05,000
into MachineLogic for AI manipulation robots.

1564
01:42:08,000 --> 01:42:10,000
Robotics is here.

1565
01:42:10,000 --> 01:42:12,000
Physical AI is here.

1566
01:42:12,000 --> 01:42:14,000
This is not science fiction,

1567
01:42:14,000 --> 01:42:17,000
and it's being used all over Taiwan.

1568
01:42:17,000 --> 01:42:19,000
It's just really, really exciting.

1569
01:42:19,000 --> 01:42:22,000
And that's the factory, the robots inside,

1570
01:42:22,000 --> 01:42:24,000
and of course all the products are going to be robotic.

1571
01:42:24,000 --> 01:42:27,000
There are two very high-volume robotics products.

1572
01:42:27,000 --> 01:42:30,000
One, of course, is the self-driving car,

1573
01:42:30,000 --> 01:42:33,000
or cars that have a great deal of autonomous capability.

1574
01:42:33,000 --> 01:42:35,000
NVIDIA, again, builds the entire stack.

1575
01:42:35,000 --> 01:42:38,000
Next year, we're gonna go to production

1576
01:42:38,000 --> 01:42:40,000
with the Mercedes fleet, and after that,

1577
01:42:40,000 --> 01:42:43,000
in 2026, the JLR fleet.

1578
01:42:43,000 --> 01:42:45,000
We offer the full stack to the world.

1579
01:42:45,000 --> 01:42:48,000
However, you're welcome to take whichever parts,

1580
01:42:48,000 --> 01:42:51,000
whichever layer of our stack,

1581
01:42:51,000 --> 01:42:55,000
just as the entire DRIVE stack is open.

1582
01:42:55,000 --> 01:42:58,000
The next high-volume robotics product

1583
01:42:58,000 --> 01:43:01,000
that's gonna be manufactured by robotic factories

1584
01:43:01,000 --> 01:43:05,000
with robots inside will likely be humanoid robots.

1585
01:43:05,000 --> 01:43:09,000
And there has been great progress in recent years

1586
01:43:09,000 --> 01:43:12,000
in both the cognitive capability

1587
01:43:12,000 --> 01:43:13,000
because of foundation models

1588
01:43:13,000 --> 01:43:16,000
and also the world-understanding capability

1589
01:43:16,000 --> 01:43:18,000
that we're in the process of developing.

1590
01:43:18,000 --> 01:43:20,000
I'm really excited about this area

1591
01:43:20,000 --> 01:43:23,000
because obviously the easiest robots to adapt into the world

1592
01:43:23,000 --> 01:43:26,000
are humanoid robots because we built the world for us.

1593
01:43:26,000 --> 01:43:28,000
We also have the vast--

1594
01:43:28,000 --> 01:43:30,000
the most data to train these robots

1595
01:43:30,000 --> 01:43:32,000
compared with other types of robots

1596
01:43:32,000 --> 01:43:35,000
because we have the same physique.

1597
01:43:35,000 --> 01:43:37,000
And so the amount of training data we can provide

1598
01:43:37,000 --> 01:43:39,000
through demonstration capabilities

1599
01:43:39,000 --> 01:43:41,000
and video capabilities is gonna be really great.

1600
01:43:41,000 --> 01:43:44,000
And so we're gonna see a lot of progress in this area.

1601
01:43:44,000 --> 01:43:47,000
Well, I think we have some robots

1602
01:43:47,000 --> 01:43:50,000
that we'd like to welcome.

1603
01:43:50,000 --> 01:43:54,000
Here we go.

1604
01:43:54,000 --> 01:43:56,000
About my size.

1605
01:43:56,000 --> 01:43:58,000
[laughter]

1606
01:43:58,000 --> 01:44:01,000
[applause]

1607
01:44:01,000 --> 01:44:07,000
And we have some friends to join us.

1608
01:44:07,000 --> 01:44:10,000
So the future of robotics is here,

1609
01:44:10,000 --> 01:44:12,000
the next wave of AI.

1610
01:44:12,000 --> 01:44:15,000
And of course, you know,

1611
01:44:15,000 --> 01:44:20,000
Taiwan builds computers with keyboards.

1612
01:44:20,000 --> 01:44:23,000
You build computers for your pocket.

1613
01:44:23,000 --> 01:44:26,000
You build computers for data centers in the cloud.

1614
01:44:26,000 --> 01:44:30,000
In the future, you're gonna build computers that walk

1615
01:44:30,000 --> 01:44:34,000
and computers that roll, you know, around.

1616
01:44:34,000 --> 01:44:37,000
And so these are all just computers.

1617
01:44:37,000 --> 01:44:39,000
And as it turns out,

1618
01:44:39,000 --> 01:44:43,000
the technology is very similar to the technology of building

1619
01:44:43,000 --> 01:44:45,000
all of the other computers that you already build today.

1620
01:44:45,000 --> 01:44:50,000
So this is gonna be a really extraordinary journey for us.

1621
01:44:50,000 --> 01:44:53,000
Well, I want to thank--

1622
01:44:53,000 --> 01:44:56,000
[laughter]

1623
01:44:56,000 --> 01:45:01,000
I want to thank--

1624
01:45:01,000 --> 01:45:04,000
I've made one last video, if you don't mind.

1625
01:45:04,000 --> 01:45:09,000
Something that we really enjoyed making.

1626
01:45:09,000 --> 01:45:12,000
And if you--let's run it.

1627
01:45:12,000 --> 01:45:17,000
[dramatic music]

1628
01:45:17,000 --> 01:45:20,000
[dramatic music]

1629
01:45:20,000 --> 01:45:24,000
[speaking Chinese]

1630
01:45:26,000 --> 01:45:30,000
[speaking Chinese]

1631
01:45:30,000 --> 01:45:34,000
[speaking Chinese]

1632
01:45:36,000 --> 01:45:40,000
[speaking Chinese]

1633
01:45:40,000 --> 01:45:52,000
[speaking Chinese]

1634
01:45:52,000 --> 01:45:56,000
[speaking Chinese]

1635
01:46:20,000 --> 01:46:24,000
[speaking Chinese]

1636
01:46:24,000 --> 01:46:28,000
[speaking Chinese]

1637
01:46:28,000 --> 01:46:32,000
[speaking Chinese]

1638
01:46:32,000 --> 01:46:36,000
[speaking Chinese]

1639
01:46:36,000 --> 01:46:40,000
[speaking Chinese]

1640
01:46:40,000 --> 01:46:44,000
[speaking Chinese]

1641
01:46:44,000 --> 01:46:48,000
[speaking Chinese]

1642
01:46:48,000 --> 01:46:52,000
[speaking Chinese]

1643
01:46:52,000 --> 01:46:56,000
[speaking Chinese]

1644
01:46:56,000 --> 01:47:00,000
[dramatic music]

1645
01:47:00,000 --> 01:47:04,000
[speaking Chinese]

1646
01:47:04,000 --> 01:47:18,000
[cheers and applause]

1647
01:47:18,000 --> 01:47:22,000
[speaking Chinese] Thank you!

1648
01:47:22,000 --> 01:47:26,000
[cheers and applause]

1649
01:47:26,000 --> 01:47:30,000
I love you guys. Thank you.

1650
01:47:30,000 --> 01:47:34,000
[speaking Chinese]

1651
01:47:34,000 --> 01:47:36,000
[applause]

1652
01:47:36,000 --> 01:47:40,000
Thank you all for coming. Have a great Computex.

1653
01:47:40,000 --> 01:47:44,000
[cheers and applause]

1654
01:47:44,000 --> 01:47:46,000
Thank you.

1655
01:47:46,000 --> 01:47:50,000
[cheers and applause]

1656
01:47:50,000 --> 01:47:54,000
[dramatic music]

1657
01:47:56,000 --> 01:48:00,000
[dramatic music]
