Did GPT3 write this story? *


Earlier this month, AI think tank OpenAI released a closed beta version of their language modelling platform known as GPT3.


GPT3 is the world’s largest language model by an order of magnitude. It has essentially been trained on the world’s websites using CommonCrawl.org, Wikipedia, and other well-known text corpora. Its predecessor GPT2 was one of the world’s most advanced language models, with 1.5 billion parameters; GPT3 is roughly two orders of magnitude larger, with 175 billion parameters, and was pre-trained on nearly half a trillion words on a supercomputer.


The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server.

Microsoft Blog

So what does a language model do? Essentially, given a sample sentence or question, a language model will give you the most likely sentence, paragraph, or even full story that corresponds with your initial sample sentence.


We’re releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose “text in, text out” interface, allowing users to try it on virtually any English language task. You can now request access in order to integrate the API into your product, develop an entirely new application, or help us explore the strengths and limits of this technology.


Given any text prompt, the API will return a text completion, attempting to match the pattern you gave it. You can “program” it by showing it just a few examples of what you’d like it to do; its success generally varies depending on how complex the task is. The API also allows you to hone performance on specific tasks by training on a dataset (small or large) of examples you provide, or by learning from human feedback provided by users or labelers.

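For context, the beta exposes this as a simple completions call. Below is a minimal sketch of the “text in, text out” flow, assuming the beta-era `openai` Python client, an API key from the beta programme, and the `davinci` engine; the prompt and parameter values here are placeholders, not a recommended configuration.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued when you are accepted into the beta

# Text in, text out: send a prompt, get back the most likely continuation.
response = openai.Completion.create(
    engine="davinci",                 # the largest GPT3 engine in the beta
    prompt="Q: What does a language model do?\nA:",
    max_tokens=64,                    # cap the length of the completion
    temperature=0.7,                  # higher values give more varied output
    stop=["\n"],                      # stop at the end of the answer line
)

print(response["choices"][0]["text"].strip())
```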

Size aside, what is special about this one? Previous models were trained and then fine-tuned for specific tasks. GPT3 reduces the need to provide task-specific training data and fine-tuning, and can operate in what is known as few-shot or one-shot mode: it only needs a sentence or an example to create meaningful output.

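The difference between zero-, one-, and few-shot use is purely in what you put in the prompt; no retraining is involved. A hypothetical illustration, using the translation task from the GPT3 paper:

```python
# Zero-shot: describe the task only.
zero_shot = "Translate English to French:\ncheese =>"

# One-shot: one worked example before the new input.
one_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)

# Few-shot: a handful of worked examples, still no gradient updates.
few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "plush giraffe => girafe en peluche\n"
    "cheese =>"
)
```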

Training the model on such a huge dataset has essentially created a machine that can write human-sounding sentences, ones that often make just as much sense as some you read on various news sites, but the applications are much wider.


Because it’s trained on the largest body of text in history, it has learned how to do really useful things. Early beta testers have managed to demonstrate the following:


  • Generating working software components and elementary websites in a number of programming languages
  • A working search engine that can answer questions
  • Performing basic math on 3-digit numbers (see the prompt sketch after this list)
  • Writing investment memos for VC firms
  • Diagnosing symptoms or answering complex medical questions
  • Pattern matching from sample data in spreadsheets and filling out answers for the next input
  • Grading and correcting papers from students
  • Writing fiction, songs, and interviews
  • Answering support questions
  • Language translation
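As a concrete example of the arithmetic item above, these demos generally amount to a few worked sums in the prompt and a new one left for the model to finish. A hypothetical sketch, reusing the same beta-era client call shown earlier:

```python
import openai

openai.api_key = "YOUR_API_KEY"

# A few worked 3-digit additions, then a new sum left for the model to complete.
math_prompt = (
    "248 + 613 = 861\n"
    "522 + 309 = 831\n"
    "417 + 256 ="
)

response = openai.Completion.create(
    engine="davinci",
    prompt=math_prompt,
    max_tokens=4,
    temperature=0.0,   # keep the output as deterministic as possible
    stop=["\n"],
)
print(response["choices"][0]["text"].strip())  # the correct answer is 673
```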

Observations

For seven years or so I have been an occasional side-project experimenter with AI/ML; I like to run experiments to learn. I am a business guy who asks talented AI freelancers to build experiments to solve my problems. I'm certainly not capable of coding an AI model (although I have spent a lot of time experimenting with parameters), nor do I do much coding myself. I am technical from a high-level perspective: I can’t write production code, but I can tell you what I need you to build to achieve what I want.


Some of my experiments have included a model that can detect Malaria pathogens in blood cell images, models that can spot defects on roads, a news aggregation and summarisation system, a mobile object detection app that can read out what is in front of you (for the vision impaired), and an algo trading app that can run equity trading strategies and backtesting.


My initial observations of GPT3 are as seen through the lens of an investor and technical business guy.


The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server. (Microsoft Blog)


  • Individual developers and startups, and even most large corporations, just can’t compete with the moat that this sort of raw compute power provides. The days of developers running these models on their own GPU at home, or even on their own instances in the cloud, are gone; the model is too large (can you imagine paying the monthly AWS/Azure bill for the V100-GPU-enabled servers?).
  • The model is huge. It was trained on 45 TB of compressed data from Commoncrawl.org, which is housed in special compressed formats on Amazon Web Services and needs to be processed with specialised data ingestion tools; it’s a non-trivial technical task to do properly.
  • Interestingly, a lot of the code to run Commoncrawl was built by our friend and AI expert Smerity, originally from Sydney University.
  • IMO this raw compute power combined with massive AI models probably implies that OpenAI is on its way to being one of the next major tech giants. They have a number of AI-based platforms that can perform many of the AI tasks that Google, AWS, and Microsoft offer today, without developers having to build against those vendors’ APIs, and they probably have the resources and scale to build a viable AI competitor.
  • It’s also possible that a variant of GPT3 could emerge as a search competitor to Google.
  • Because of the sheer scale, these platform tools will be owned and operated by OpenAI and exposed as an API for developers to use as a service (most other AI software tools can be installed and run as standalone services).
  • GPT3 won’t work for all use cases, but the street will work out how it wants to use these capabilities; businesses and developers will experiment, discard the failures, and enhance the successful experiments.
  • It doesn’t appear to update or learn in real time. It was reportedly trained in October 2019, so it might be able to write about COVID, but in its current form it is not going to create real-time updated coverage with fresh facts.
  • As these tools become commonly available, it is going to become increasingly difficult to separate fact from opinion from fiction. GPT3 has the potential to automate the production of news; however, there is a real risk that it creates readable, interesting stories that are fundamentally fake news. (This is not really much different from humans writing fake news, just probably better written and in far greater volumes.)
  • It makes the role of editor and fact-checker far more important, as there is the potential for creating disinformation on a massive scale (given that so much news is copy-pasted from other news sites, news with interesting but false assertions spreads very quickly).
  • Who/what can you trust? I think it probably also highlights the need for journalists to become celebrities and become known for the quality and authenticity of their work, so that their personal brand can act as a means of authentication for the story.
  • GPT3 doesn’t have its own opinion or character, but it can probably mimic someone else’s.
  • Models like GPT3 probably also drive the need for “Explainable AI” to validate the output from AI models.
  • If GPT3 can assemble the code to run basic neural networks, it could potentially improve itself in the future (especially if paired with a reinforcement learning algorithm).
  • While the first few paragraphs of GPT3-generated text may be very readable, some of the examples I have seen start to wander off topic in extended runs.

Summary

There are many news stories and plenty of hype claiming that the release of GPT3 heralds the arrival of Artificial General Intelligence. I don’t believe this is true. It’s really not general AI: it has learned to ape humans very well for a lot of tasks, but it isn’t reasoning, and it performs badly on many tasks that humans do easily (see the comments below from Sam Altman, one of the key OpenAI founders).


[Screenshot: Sam Altman’s comments]

Is GPT3 going to replace humans?

Probably some, but not universally. It can probably be tuned to do tasks that don’t require reasoning, design, or creativity: repetitive work where the worker adds very little value, and low-wage white-collar jobs like writing repetitive legal documents, collecting, collating, transcribing, researching, formatting, and assessing; tasks that take significant time but perhaps follow rules or guidelines.


It is difficult to imagine it taking over truly creative work, where the author, their character, and the way they write or perform are part of the enjoyment.


Experiments & Examples

Screenshots from experiments found on Twitter and the web.


This is brilliant


Generating a basic neural network with PyTorch

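The screenshots themselves are not reproduced here. For context, the kind of code these demos produce from a one-line English description is roughly a minimal PyTorch model like the sketch below (my own illustrative example, not actual GPT3 output):

```python
import torch
import torch.nn as nn

# A small feed-forward network, the sort of thing the GPT3 demos generate
# from a plain-English description such as "a two-layer classifier".
class BasicNet(nn.Module):
    def __init__(self, in_features=10, hidden=32, out_features=2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, out_features),
        )

    def forward(self, x):
        return self.layers(x)

model = BasicNet()
print(model(torch.randn(1, 10)))  # one forward pass on random input
```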

Generating Graphs from data using plain English commands


This is brilliant; I hope that it's right



SQL queries from plain English statements

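Again, the screenshots are not reproduced here. The prompts behind these demos typically pair a schema with one or two worked English-to-SQL examples; a hypothetical sketch (the table and column names are made up for illustration):

```python
# English-to-SQL as a few-shot prompt; the model completes the last line.
sql_prompt = (
    "Table: orders(id, customer_id, total, created_at)\n"
    "English: total revenue in 2019\n"
    "SQL: SELECT SUM(total) FROM orders "
    "WHERE created_at >= '2019-01-01' AND created_at < '2020-01-01';\n"
    "English: number of orders per customer\n"
    "SQL:"
)
```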


Interesting Posts on GPT3

The original OpenAI paper


GPT3 an AI that's eerily good at writing almost anything


https://medium.com/@julienlauret


Giving GPT3 a Turing test


How do you know a human wrote this?


3-minute explainer on why GPT3 is overhyped


Airtable list of GPT3 Experiments


You can sign up for more deep tech news at Main Sequence Ventures


*Answer to my headline


No, GPT3 did not write this article. See Betteridge's Law of Headlines.


I managed to get access to AI Dungeon, one of the apps that has been given early access to GPT3. AI Dungeon is set up to generate stories like an early adventure game, so it wants to play a game to give context; it really isn't set up for article writing, and some of the commentary that comes back is a bit random.


Translated from: https://medium.com/swlh/did-gpt3-write-this-story-88865a843531
