GPT-3 Will Replace Coders

Nowadays, everybody is talking about GPT-3, the new language model from San Francisco-based OpenAI. GPT stands for "generative pre-trained transformer," and GPT-3 is the third iteration of the model, with 175 billion parameters, a big jump from its predecessor GPT-2, which has 1.5 billion. Beta users are exploring use-cases to understand GPT-3's capabilities, and many of those use-cases have already gone viral on social media.
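
To give a flavor of what those beta experiments look like in practice, here is a minimal sketch of a single GPT-3 completion request, assuming the openai Python client from the beta program; the API key placeholder, engine name, prompt, and sampling parameters are all illustrative, not part of the interview.

```python
# A minimal sketch of a beta user's GPT-3 experiment, assuming the
# openai Python client from the beta period. Engine name, prompt,
# and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keys were issued to beta users

response = openai.Completion.create(
    engine="davinci",      # the largest GPT-3 engine exposed in the beta
    prompt="Translate this English sentence to French: Hello, world.",
    max_tokens=32,         # cap the length of the generated completion
    temperature=0.7,       # > 0 allows some variety in the output
)

print(response.choices[0].text)
```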

Duygu Oktem Clark, our Venture Partner and founder of DO Venture Partners, spoke with Yigit Ihlamur, cofounder and General Partner of Vela Partners, about GPT-3, including its effects on our lives and worries about harmful biases.

Duygu Oktem Clark: I know that you specialized in machine learning during your graduate studies in 2009. How do you see the evolution of AI and machine learning since then?

Yigit Ihlamur: When I started my studies in 2009, academia was mainly concentrated on math-focused algorithms. Towards the end of 2010, better access to storage and compute power improved the performance of data-intensive algorithms. Academia started experimenting with those algorithms, specifically neural networks, and testing them against benchmarks such as object-recognition accuracy in images. In 2012, neural networks outperformed every other algorithm in object recognition. This success motivated more people to experiment with neural network algorithms that were already more than 30 years old and to iterate on them. Machine learning experts realized that more data and compute power improve accuracy without changing the fundamentals of neural networks. Hence, machine learning expertise moved towards data science, statistics, and custom chip design, making data consumption and computation faster and easier.

Over time, neural networks became more sophisticated. Easy-to-use developer libraries emerged. More models were trained on large datasets with cheap compute power. Thanks to the concept of "transfer learning," experts started to build on top of other experts' models. This sparked exponential growth and close collaboration within the community.
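
To make the idea of transfer learning concrete, here is a minimal sketch, assuming PyTorch and torchvision: it reuses an ImageNet-pretrained ResNet as a frozen feature extractor and trains only a small new head for a hypothetical 10-class task. The class count and dummy batch are placeholders, not anything from the interview.

```python
# A minimal transfer-learning sketch: build on another expert's pretrained
# model by freezing its weights and training only a new classification head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)   # download pretrained ImageNet weights

for param in model.parameters():           # freeze the pretrained layers
    param.requires_grad = False

num_classes = 10                           # hypothetical downstream task
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (stand-in for a data loader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Because only the small head is trained, this kind of fine-tuning runs in minutes rather than the weeks a full from-scratch training would take, which is what made building on other experts' models so attractive.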

Around 2016, neural networks became able to detect objects in images better than humans, a moment similar to when computers became able to do math better than humans. Since this revolution, computers can see as well as humans, if not better in some use-cases. Unlike people, computers can scale and never get tired.

Around that time, I got my hands on a computer vision algorithm. I was blown away: I was able to train it to detect objects in less than two hours. This innovation caused large industries such as automotive, robotics, security, and manufacturing to evolve rapidly.
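
As an illustration of how accessible that tooling had become, the sketch below runs a COCO-pretrained object detector shipped with torchvision against a single image; the image path and score threshold are placeholders, and the specific model is an assumption, not necessarily the one used in the anecdote.

```python
# A minimal sketch of off-the-shelf object detection with torchvision:
# a Faster R-CNN pretrained on COCO, applied to one image.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()                                  # inference mode

image = Image.open("street_scene.jpg").convert("RGB")   # placeholder path
tensor = transforms.ToTensor()(image)         # PIL image -> CHW float in [0, 1]

with torch.no_grad():
    (prediction,) = model([tensor])           # the model takes a list of images

# Each detection has a bounding box, a COCO class label, and a confidence score.
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if score > 0.8:                           # keep only confident detections
        print(label.item(), round(score.item(), 3), box.tolist())
```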

While computer vision reached an important milestone, other popular applications of
