Artificial Intelligence Is a Major Philosophical Event, Heralding the Coming of a New Axial Age

Source: 科技世代千高原

Why AI Is a Philosophical Breakthrough

The symbiosis of humans and technology heralds the coming of a new Axial Age

Tobias Rees


February 4, 2025



Tobias Rees is the founder of limn, an R&D studio located at the intersection of philosophy, art and technology. He is also a senior fellow of Schmidt Sciences’ AI2050 initiative and a senior visiting fellow at Google.

Tobias Rees, founder of an AI studio located at the intersection of philosophy, art and technology, sat down with Noema Editor-in-Chief Nathan Gardels to discuss the philosophical significance of generative AI.

Nathan Gardels: What remains unclear to us humans is the nature of machine intelligence we have created through AI and how it changes our own understanding of ourselves. What is your perspective as a philosopher who has contemplated this issue not from within the Ivory Tower, but “in the wild,” in the engineering labs at Google and elsewhere?

Tobias Rees: AI profoundly challenges how we have understood ourselves.

Why do I think so?

We humans live by a large number of conceptual presuppositions. We may not always be aware of them — and yet they are there and shape how we think and understand ourselves and the world around us. Collectively, they are the logical grid or architecture that underlies our lives.

What makes AI such a profound philosophical event is that it defies many of the most fundamental, most taken-for-granted concepts — or philosophies — that have defined the modern period and that most humans still mostly live by. It literally renders them insufficient, thereby marking a deep caesura.

Let me give a concrete example. One of the most fundamental assumptions of the modern period has been that there is a clear-cut distinction between us humans and machines.

Here humans, living organisms; open and evolving; beings that are equipped with intelligence and, thus, with interiority.

There machines, lifeless, mechanical things; closed, determined and deterministic systems devoid of intelligence and interiority.

This distinction, which first surfaced in the 1630s, was constitutive of the modern notion of what it is to be human. For example, almost the entire vocabulary that was invented between the 17th and 19th centuries to capture what it truly is to be human was grounded in the human/intelligence-machine/mechanism distinction.

Agency, art, creativity, consciousness, culture, existence, freedom, history, knowledge, language, morals, play, politics, society, subjectivity, truth, understanding. All of these concepts were introduced with the explicit purpose of providing us with an understanding of what is truly unique human potential, a uniqueness that was grounded in the belief that intelligence is what lifts us above everything else — and that everything else ultimately can be sufficiently described as a closed, determined mechanical system.

The human-machine distinction provided modern humans with a scaffold for how to understand themselves and the world around them. The philosophical significance of AIs — of built, technical systems that are intelligent — is that they break this scaffold.

What that means is that an epoch that was stable for almost 400 years comes — or appears to come — to an end.

Poetically put, it is a bit as if AI releases ourselves and the world from the understanding of ourselves and the world we had. It leaves us in the open.

I am adamant that those who build AI understand the philosophical stakes of AI. That is why I became, as you put it, a philosopher in the wild.

Gardels: You say that AI is intelligent. But many people doubt that AI is “really” intelligent. They view it as just another tool like all previous human-invented technologies.

Rees: In my experience, this question is almost always grounded in a defensive impulse. A sometimes angry, sometimes anxious effort to hold on to or to re-inscribe the old distinctions. I think of it as a nostalgia for human exceptionalism, that is, a longing for a time when we humans thought there was only one form of intelligence, us.

AI teaches us that this is not so. And not just AI, of course. Over the last two decades or so the concept of intelligence has multiplied. We now know that there are lots of other kinds of intelligence: from bacteria to octopi, from Earth systems to the spiral arms of galaxies. We are an entry in a series. And so is AI.

To argue that these other things are not “really” intelligent because their intelligence differs from ours is a bit silly. That would be like one species of bird, say pelicans, insisting that only pelicans “really” know how to fly.

It is best if we get rid of the “really” and simply acknowledge that AI is intelligent, if in ways slightly different from us.

Gardels: What is intelligence?

Rees: Today, we appear to know that there are some baseline qualities to intelligence such as learning from experience, logical understanding and the capability to abstract from what one has learned to solve novel situations.

AI systems have all these qualities. They learn, they logically understand and they form abstractions that allow them to navigate new situations.

However, what experience or learning or understanding or abstraction means for an AI system and for us humans is not quite the same. That is why I suggested that AI is intelligent in ways slightly different from us.

Gardels: AI may be another kind of intelligence, but can we say it is, or can be, smarter than us?

Rees: For me, the question is not necessarily whether or not AI is smarter than us, but whether or not our different intelligences can be complementary. Can we be smarter together?

Let me sketch some of the differences I am seeing.

AI can operate on scales — both micro and macro — that are beyond human logical comprehension and capability.

For example, AI has much more information available than we do and it can access and work through this information faster than we can. It also can discover logical structures in data — patterns — where we see nothing.

Perhaps one must pause for a moment to recognize how extraordinary this is.

AI can literally give us access to spaces that we, on our own, qua human, cannot discover and cannot access. How amazing is this? There are already many examples of this. They range from discovering new moves in games like Go or chess to discovering how proteins fold to understanding whole Earth systems.

Given these more-than-human qualities, one could say that AI is smarter than us.

However, human smartness is not reducible to the kind of intelligence or smartness AI has. It has additional dimensions, ones that AI seems to not have.

The perhaps most important of these additional dimensions is our individual need to live a human life.

What does that mean? At the very least it means that we humans navigate the outside world in terms of our inside worlds. We must orient ourselves by way of thinking, in terms of a thinking self. These thinking selves must understand, make sense of, and be struck by, insights.

No matter how smart AI is, it cannot be smart for me. It can provide me with information, it can even engage me in a thought process, but I still need to orient myself in terms of my thinking. I still need to have my own experiences and my own insights, insights that enable me to live my life.

That said, AI, the specific non-human smartness it has, can be incredibly helpful when it comes to leading a human life.

The most powerful example I can think of is that it can make the self visible to itself in ways we humans cannot.

Imagine an on-device AI system — an AI model that exists only on your devices and is not connected to the internet — that has access to all your data. Your emails, your messages, your documents, your voice memos, your photos, your songs, etc.

I stress on-device because it matters that no third parties have access to your data.

Such an AI system can make me visible to myself in ways neither I nor any other human can. It literally can lift me above me. It can show me myself from outside of myself, show me the patterns of thoughts and behaviors that have come to define me. It can help me understand these patterns and it can discuss with me whether they are constraining me, and if so, then how. What is more, it can help me work on those patterns and, where appropriate, enable me to break from them and be set free.

Philosophically put, AI can help me transform myself into an “object of thought” to which I can relate and on which I can work.

The work of the self on the self has formed the core of what Greek philosophers called meletē and Roman philosophers meditatio. And the kind of AI system I evoke here would be a philosopher’s dream. It could make us humans visible to ourselves in ways no human interlocutor can, from outside of us, free from conversational narcissism.

You see, there can be incredible beauty in the overlap and the difference between our intelligence and that of AI.

Ultimately, I do not think of AI as a self-enclosed, autonomous entity that is in competition with us. Rather, I think of it as a relation.

Gardels: What is specifically new that distinguishes deep learning-based AI systems from the old human/machine dichotomy?

Rees: The kind of AI that ruled from the 1950s to the early 2000s was an attempt to think about the human from within the vocabulary provided by machines. It was an explicit, self-conscious attempt by engineers to explain all things human from within the conceptual space of the possibility of machines.

It was called “symbolic AI” because the basic idea behind these systems was that we could store knowledge in mathematical symbols and then equip computers with rules for how to derive relevant answers from those symbolic representations.

Some philosophers, most famously Herbert Dreyfus and John Searle, were very much provoked by this. They set out to defend the idea that humans are more than machines, more than rule-based algorithms.

But the kind of AI that has risen to prominence since the early 2010s, so-called deep learning systems or deep neural networks, is of an altogether different kind.

Symbolic AI systems, like all prior machines, were closed, determined systems. That means, first, that they were limited in what they could do by the rules we gave them. When they encountered a situation that was not covered by the rules, they failed. Let’s say they had no adaptive, no learning behavior. And it means as well that what they could do was entirely reducible to the engineers who built them. They could, ultimately, only do things we had explicitly instructed them to do. That is, they had no agency, no agentive capabilities of their own. In short, they were tools.

With deep learning systems, this is different. We do not give them their knowledge. We do not program them. Rather, they learn on their own, for themselves, and, based on what they have learned, they can navigate situations or answer questions they have never seen before. That is, they are no longer closed, deterministic systems.

Instead they have a sort of openness and a sort of agentive behavior, a deliberation or decision-making space, that no technical system before them ever had. Some people say AI has “only” pattern recognition. But I think pattern recognition is actually a form of discovering the logical structure of things. Roughly, when you have a student who identifies the logical principles that underlie data and who can answer questions based on these logical principles, wouldn’t you call that understanding?

In fact, one can push that a step further and say that AI systems appear to be capable of distinguishing truths from falsehoods. That’s because truth is positively correlated with a consistent logical structure. Errors, so to speak, are all unique or different. While the truth is not. And what we see in AI models is that they can distinguish between statements that conform to the patterns that they discover and statements that don’t.

So in that sense, AI systems have a nascent sense of truth.
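To make this concrete, here is a minimal sketch of how one might probe that “nascent sense of truth”: a trained language model assigns a higher average log-likelihood to statements that conform to the patterns it has learned. Using GPT-2 through the Hugging Face transformers library, and treating negative loss as a “conformity score,” are my illustrative assumptions, not anything Rees specifies.

```python
# Sketch: score how well a statement conforms to the patterns a trained
# language model has discovered, via its average log-likelihood.
# Assumes the Hugging Face `transformers` library and public GPT-2 weights.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def conformity_score(text: str) -> float:
    """Higher (less negative) means the statement fits the learned patterns better."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item()

# A pattern-conforming statement will typically outscore a deviant one.
print(conformity_score("Water freezes at zero degrees Celsius."))
print(conformity_score("Water freezes at ninety degrees Celsius."))
```

This is a crude proxy rather than a truth detector, which is exactly the hedge carried by the word “nascent.”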

Simply put, deep learning systems have qualities that, up until recently, were considered possible only for living organisms in general and for humans in particular.

Today’s AI systems have qualities of both — and, thereby, are reducible to neither. They exist in between the old distinctions and show that the either-or logic that organized our understanding of reality — either human or machine, either alive or not, either natural or artificial, either being or thing — is profoundly insufficient.

Insofar as AI escapes these binary distinctions, it leads us into a terrain for which we have no words.

We could say, it opens up the world for us. It makes reality visible to us in ways we have never seen before. It shows us that we can understand and experience reality and ourselves in ways that lie outside of the logical distinctions that organized the modern period.

In some sense, we can see as if for the first time.

Gardels: So, deep-learning systems are not just tools, but agents with a degree of autonomy?

Rees: This question is a good example to showcase that AI is indeed philosophically new.

We used to think that agency has two prerequisites, being alive and having interiority, that is, a sense of self or consciousness. Now, what we can learn from AI systems is that this is apparently not the case. There are things that have agency but that are not alive and that do not have consciousness or a mind, at least not in the way we have previously understood these terms.

This insight, this decoupling of agency from life and from interiority, is a powerful invitation to see the world — and ourselves — differently.

For example, is what is true for agency — that it doesn’t need life and interiority — also true for things like intelligence, creativity or language? And how would we classify or categorize things in the world differently if this were the case?

In her essay in Noema, the astrophysicist Sarah Walker said that “we need to get past our binary categorization of all things as either life or not.”

What interests me most is rethinking the concepts we have inherited from the modern period, from the perspective of the in-betweenness made visible to us by AI.

What is creativity from the perspective of the in-betweenness of AI? What language? What mind?

II. A New AIxial Age?

Gardels: Karl Jaspers was best known for his study of the so-called Axial Age when all the great religions and philosophies were born in relative simultaneity over two millennia ago — Confucianism in China, the Upanishads and Buddhism in India, Homer’s Greece and the Hebrew prophets. Jaspers saw these civilizations arising in the long wake of what he called “the first Promethean Age” of man’s appropriation of fire and earliest inventions.

For Charles Taylor, the first Axial Age resulted from the “great dis-embedding” of the person from isolated communities and their natural environment, where circumscribed awareness had been limited to the sustenance and survival of the tribe guided by oral narrative myth. The lifting out from a closed-off world, according to Taylor, was enabled by the arrival of written language. This attainment of symbolic competency capacitated an “interiority of reflection” based on abiding texts that created a platform for shared meanings beyond one’s immediate circumstances and local narratives.

Long story very short, this “transcendence” in turn led to the possibility of general philosophies, monotheistic religions and broad-based ethical systems. The critical self-distancing element of dis-embedded reflection further evolved into what the sociologist Robert Bellah called “theoretic culture,” to scientific discovery and the Enlightenment that spawned modernity. For Bellah, “Plato completed the transition to the Axial Age,” with the idea of theoria that “enables the mind to ‘view’ the great and the small in themselves abstracted from their concrete manifestations.”

The big question is whether the new level of symbolic competence reached by AI will play a similar role in fostering a “New AIxial Age” as written language did the first time around, when it gave rise to new philosophies, ethical systems and religions.

Rees: I am not sure today’s AI systems have what the modern period came to call symbolic competence.

That is related to what we’ve already discussed.

There was, ever since John Locke, the idea that we humans have a mind in which we store experiences in the form of symbols or symbolic representations and then we derive answers from these symbols.

Let’s say this conceptualization was understood throughout the modern period to be the basic infrastructure of intelligence.

In the late 19th century, philosophers like Ernst Cassirer gave this a twist. He suggested that the key to understanding what it is to be human is to see that we humans invent symbols or meaning and that symbol-making or meaning-making is what sets us apart as a species from everything else.

Deep learning, in general, and generative AI in particular, have broken with this human-centric concept of intelligence and replaced it with something else: The idea that intelligence is pretty much two things: learning and reasoning.

Essentially, learning means the capacity to discover abstract logical principles that organize the things we want to learn. Whether this is an actual data set or learning experiences that we humans make, there is no difference. Call it logical understanding.

The second defining feature of intelligence is the capacity to continuously and steadily refine and update these abstract logical principles, these understandings, and to apply them — by way of reasoning — to situations we live in and that we must navigate or solve.

Deep learning systems are most excellent at the first part — but not so much the second. Basically, once they are trained, they cannot revise the things they have learned. They can only infer.
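A small illustration of that asymmetry, sketched in PyTorch under assumptions of my own (a toy network, random data): the weights change only while training runs; afterward the frozen model can still infer about inputs it has never seen, but it can no longer revise what it has learned.

```python
# Toy sketch: training revises weights; a deployed model is frozen and can only infer.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(64, 2), torch.randn(64, 1)
for _ in range(100):                       # learning phase: weights are updated
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

model.eval()                               # deployment phase: learning is over
for p in model.parameters():
    p.requires_grad_(False)                # the learned weights are now fixed

prediction = model(torch.randn(1, 2))      # inference on a never-seen input still works
```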

Be that as it may, there is nothing much symbolic here. At least not in the classical sense of the term.

I am emphasizing this absence of the symbolic because it is a beautiful way to show that deep learning has led to a pretty powerful philosophical rupture: Implicit in the new concept of intelligence is a radically different ontological understanding of what it is to be human, indeed, of what reality is or of how it is structured and organized.

Understanding this rupture with the older concept of intelligence and ontology of the human/the world is key, I think, to understanding your actual question: Are we entering what you call a new AIxial age, where AI will amount to something similar to what writing amounted to roughly 3,000 to 2,000 years ago?

If we are lucky, the answer is yes. The potential is absolutely there.

But let me try to articulate what I think the challenge is so we truly can make this possible.

Let’s take the correlation between the emergence of writing, the birth of a vocabulary of interiority, and the rise of abstract or theoretical thought as our starting point.

I will do what I tried to do in my prior responses: Reflect on the historicity of the concepts we live by, point out how recent they are, that there is nothing timeless or universal about them, and then ask if AI challenges and changes them.

There is a beautiful book by Bruno Snell called “Die Entdeckung des Geistes” or, in an excellent English translation, “The Discovery of the Mind.”

The work’s central thesis is that what we today call “mind,” “consciousness” and “inner life” is not a given. It is nothing that has always existed or was always experienced. Instead, it is a concept that only gradually emerged.

In beautiful, captivating prose Snell traces the earliest instances of the birth of what I think of as “a vocabulary of interiority.”

For example, he shows that in Homer’s works, there is no general, abstract concept of “mind” or “soul.” Instead, there is a whole flurry of terms that are very difficult to translate. For example, thymos, which is perhaps best articulated as a passion that overcomes and consumes one; or noos, which originally meant sensory awareness; and psyche, a term by which Homer and his contemporaries most often meant “breath” or that which animates, but not what we would call psyche today.

Simply put, there is absolutely no vocabulary of interiority in Homer. Or in Hesiod.

This changes at the turn from Archaic to Classical Greek. We begin to see the birth of a vocabulary of interiority and increasingly sophisticated ways of describing inner experience. The most important reference here is probably Sappho. Her poetry is among the very first explorations of what we today would call subjective experience and individual emotion.

I do not want to derail us by retelling the whole of Snell’s book. Rather, what interests me is to convey a sense of the possibility that we discussed earlier: We humans have not always experienced ourselves the way we do today. Every form of experience and thinking or understanding is conceptually mediated. This is also true, perhaps particularly so, for the idea of interiority and inner life.

Snell’s book is so wonderful because he shows the discontinuous, gradual emergence of new concepts that amount to the idea that there is something like an interiority and that this interiority — a kind of inner landscape — is where a single, self-identical “I” is located.

Now, what is crucial, is that the introduction of writing, which probably began right at the time of Homer, was key for the emergence of a conceptual vocabulary of interiority.

Snell touches on this only in passing, but later works, especially by Jack Goody, Eric Havelock and Walter Ong, have attended to this explicitly and all have more or less come to the same conclusion: The practice of writing created new possibilities for analytical thinking that led to increasingly abstract, classificatory nouns and to a form of systematic search and production of knowledge that was not seen anywhere in human history before.

These authors also made clear that the only unfortunate thing about Snell’s work is his use of the term “discovery” in his title. The mind was not discovered. It was constituted, invented, if you will. That is, it could have been constituted differently. And that is what Goody, Ong and others have amply shown. What mind is, what interiority is, is different in other places.

Let me summarize this simply by saying that the technology of writing had absolutely dramatic consequences for what it is to be human, for how we experience and understand ourselves as humans. Among the two, perhaps, most important of these consequences was the systematic emergence of self-reflection and abstract thought.

Can AI play as transformative a role in what it means to be human as writing once did?

Can AI mark the beginning of a whole new, perhaps radically discontinuous chapter for what it is to have a mind, to have interiority, to think? Can it help us think thoughts that are so new and so different that however we understood ourselves up until now become obsolete?

Oh yes, it can! AI absolutely has the potential to be such a major philosophical event.

The perhaps most beautiful, most fascinating and eye-opening way to show this potential of AI is what engineers call “latent space representations.”

When a large language model learns, it gradually distills ever more abstract logical principles from the data it is provided with.

It is best to think of this process as roughly similar to a structuralist analysis: The AI identifies the logical structure that organizes — that literally underlies — the totality of the data it is trained on and stores or memorizes it in the form of concepts. The way it does this is that it discovers the logic of the relations between different elements of the data. So, in text, roughly, that would be the words: What is the closeness between the different words in the training data?

If you will, an LLM discovers the many different degrees of relations between words.

Fascinatingly, what emerges from this learning process is a high-dimensional, relational space that engineers call latent — in the sense of hidden — space.

First, this means that something grows on the inside of an LLM during training. A hidden map of the logic of relations between words that the AI successively discovers. I say on the inside because we humans cannot observe this map from the outside.

The second thing it means is that this map is not just a list but a spatial arrangement.

Imagine a three-dimensional point cloud where each point stands for a word and where the distance between points reflects how close or far words are from one another in the training data.

It is just, and this is the third thing, that this spatial map doesn’t have only the three dimensions — length, width, depth — our conscious human mind is comfortable operating in. Instead, it has many, many more dimensions. Tens of thousands and with the latest models, perhaps millions.

That is, the understanding an LLM has formed is a spatial architecture. It has a geometry that literally determines what, for an LLM, is thinkable.

It is literally the logical condition of possibility — the a priori — of the LLM.
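As a toy picture of such a space, consider the sketch below. The vocabulary, the vectors and the four dimensions are invented purely for illustration; a real model learns its coordinates and, as just described, uses vastly more dimensions.

```python
# Toy latent space: each word is a point, and geometric closeness stands in
# for how strongly words relate in the training data. All numbers are made up.
import numpy as np

embedding = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.7, 0.2, 0.3]),
    "apple": np.array([0.1, 0.1, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Near 1.0: the points lie in the same direction; near 0: unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embedding["king"], embedding["queen"]))  # high: nearby points
print(cosine_similarity(embedding["king"], embedding["apple"]))  # low: distant points
```

The geometry does the work: what can be related to what is constrained by where the points sit relative to one another.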

For all we know, human brains also create latent space representations. The neurons in our brain work in a very similar fashion to how neurons work in a neural network.

Yet, despite this similarity, it appears that the latent space representations that a human brain produces and the latent space representations that an AI can produce are different from one another.

The two latent space representations likely overlap but they also differ significantly in kind and quality because of AI’s far greater dimensional scope.

Now imagine we could build AI so that the logic of possibility that defines the human brain gets extra latent spaces.

Imagine we built AI to add to our human mind logical spaces of possibility that we humans could travel but not produce on our own. The consequence would be that we humans could discover truths and think things that no human could have ever thought before AI. In this case, no one knows where the human mind might end and AI might begin.

We could take any theme and approach it from whole new perspectives. Imagine what this kind of co-cogitation between humans and AI would do to our current concept of interiority! Can you imagine what it would do to how we understand terms like mind, thought, having an idea or being creative?

As I outline this vision, I can hear the critical voices. They tell me that I make AI sound like a philosophical project while the companies building AI have very different motives.

I am entirely aware that I am giving AI philosophical and poetic dignity. And I do so consciously because I think AI has the potential to be an extraordinary philosophical event. It is our task as philosophers, artists, poets, writers and humanists to render this potential visible and relevant.

All this certainly has the makings of a new pivotal age.

Gardels: To grasp how deep learning through what AI scientists call backpropagation — the feeding of new information through the artificial neural networks of logical structures — could lead to interiority and intention, it might be useful to look at an analogy from the materialist view of biology about how consciousness arises. The core issue here is whether disembodied intelligence can mimic embodied intelligence through deep learning.

Where does AI depart from, and where is it similar to, the neural Darwinism described here by Gerald Edelman, the Nobel Prize-winning neuroscientist? What Edelman refers to as “reentrant interaction” appears quite similar to “backpropagation.”

According to Edelman, “Competition for advantage in the environment enhances the spread and strength of certain synapses, or neural connections, according to the ‘value’ previously decided by evolutionary survival. The amount of variance in this neural circuitry is very large. Certain circuits get selected over others because they fit better with whatever is being presented by the environment. In response to an enormously complex constellation of signals, the system is self-organizing according to Darwin’s population principle. It is the activity of this vast web of networks that entails consciousness by means of what we call ‘reentrant interactions’ that help to organize ‘reality’ into patterns.

The thalamocortical networks were selected during evolution because they provided humans with the ability to make higher-order discriminations and adapt in a superior way to their environment. Such higher-order discriminations confer the ability to imagine the future, to explicitly recall the past and to be conscious of being conscious.

Because each loop reaches closure by completing its circuit through the varying paths from the thalamus to the cortex and back, the brain can ‘fill in’ and provide knowledge beyond that which you immediately hear, see or smell. The resulting discriminations are known in philosophy as qualia. These discriminations account for the intangible awareness of mood, and they define the greenness of green and the warmness of warmth. Together, qualia make up what we call consciousness.”

Rees: There are neural processes happening in AI systems that are similar — but not the same — as in humans.

It seems likely that there is some form of backpropagation in the brain. And we just talked about the fact that both biological neural networks and artificial neural networks build latent space representations. And there is more.
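For readers who want to see what backpropagation literally is, here is a from-scratch sketch; the two-layer network, the tanh nonlinearity and the learning rate are arbitrary choices of mine. The error at the output is propagated backward, layer by layer, and each weight is nudged in the direction that reduces it.

```python
# Minimal backpropagation by hand: forward pass, backward pass, weight update.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 1))
x = rng.normal(size=(1, 3))                # one input example
target = np.array([[1.0]])
lr = 0.1

for _ in range(50):
    h = np.tanh(x @ W1)                    # forward pass through the hidden layer
    y = h @ W2                             # network output
    err = y - target                       # error signal at the output
    grad_W2 = h.T @ err                    # backward: gradient for output weights
    grad_h = err @ W2.T                    # propagate the error into the hidden layer
    grad_W1 = x.T @ (grad_h * (1 - h**2))  # chain rule through tanh
    W2 -= lr * grad_W2                     # update: nudge weights against the gradient
    W1 -= lr * grad_W1
```

The loop above is a single backward sweep at training time; whatever backward signaling brains do is surely messier, which is the “similar — but not the same” Rees flags.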

But I do not think that makes them have interiority or intentionality in the way we have come to understand these terms.

In fact, I think the philosophical significance of AI is that it invites us to reconsider the way we previously understood these terms.

And the close connection between backpropagation and reentry that you observe is a great example of that.

The person who did perhaps more than anyone to make the concept of backpropagation accessible and widely known was David Rumelhart, a very influential psychologist and cognitive scientist who, like Edelman, lived and worked in San Diego.

Both Rumelhart and Edelman were key people in the connectionism school. I say this because I think the theoretical impulse between reentry and backpropagation is almost identical: the effort to develop a conceptual vocabulary that allows us to undifferentiate the biological and artificial neural networks in order to understand the brain better and in order to build better neural networks.

Some have suggested that the work of the connectionists was an attempt to think about the brain in terms of computers — but one could just as well say it was an attempt to think about computers or AI in terms of biology.

At base, what matters was the invention of a vocabulary that didn’t need to make distinctions.

There is a space in the middle, an overlap.

It is very difficult to overemphasize how powerful this kind of conceptual work has been over the last 40 years.

Arguably, the work of people like Rumelhart and Edelman has led to a concept of intelligence that can be described in a substrate-independent manner. And these concepts are not just theoretical concepts but concrete engineering possibilities.

Does this mean that human brains and AI are the same thing?

Of course not. Are birds, planes and drones all the same thing? No, but they all make use of the general laws of aerodynamics. And the same may be true for brains. The material infrastructure of intelligence is very different — but some of the principles that organize these infrastructures may be very similar.

In some instances, we likely will want to build AI systems similar to human brains. But in many cases, I presume, we do not. What makes AI attractive, in my thinking, is that we can build intelligent systems that do not yet exist — but that are perfectly possible.

I often think of AI as a kind of very early-stage experimental embryology. Indeed, I often think that AI is doing for intelligence what synthetic biology did for nature. Meaning, synthetic biology transformed nature into a vast field of possibility. The number of things that exist in nature is minuscule compared to the things that could exist in nature. In fact, many more things have existed in the course of evolution than there are now, and there is no reason why we can’t combine strands of DNA and make new things. Synthetic biology is the field of practice that can bring these possible things into existence.

The same is true for AI and intelligence. Today, intelligence is no longer defined by a single or a few instances of existing intelligences but by the very many intelligent things that could exist.

Gardels: Back in the 1930s, much of philosophy from Heidegger to Carl Schmitt was against an emergent technological system that alienated humans from “being.” As Schmitt put it back then, “technical thinking is foreign to all social traditions; the machine has no tradition. One of Karl Marx’s seminal sociological discoveries is that technology is the true revolutionary principle, besides which all revolutions based on natural law are antiquated forms of recreation. A society built exclusively on progressive technology would thus be nothing but revolutionary; it would soon destroy itself and its technology.” As Marx put it, “all that is solid melts into air.”

Does the nature of AI make Schmitt’s perspective obsolete, or is it simply a fulfillment of his perspective?

Rees: I think the answer — and I take that to be very good news — is yes, it makes Schmitt’s perspective obsolete.

Let me first say something about Schmitt. He was essentially apocalyptic in his thinking.

Like all apocalyptic thinkers, he had a more or less definite, ontological and in his case also religious, worldview. Everything in his world had a definite, metaphysical meaning. And he thought the modern, liberal world, the world of the Enlightenment, was out there to destroy the timeless, ultimately, divine order of things. What is more, he thought that when this happened, all hell would break loose, and the end of the world would begin to unfold.

The lines that you quote illustrate this. On the one hand the modern, Enlightenment period, the factory, technology, substanceless, the relativizing quality of money, etc. — and, on the other hand, social, that is, racially defined national traditions, images and symbols.

Schmitt was worried that the liberal order would de-substantize the world. Everything would become relative. And at least if we go by his writings, he thought that Jews were one of the key driving forces of this de-substantification of the world. Famously, Schmitt was a rabid antisemite.

He was so worried about the end of the world that he aligned himself with Hitler and the Nazis and their agendas.

From today’s perspective, of course, it is obvious that the ones who embraced modern technology to de-substantize humans, to deprive them of their humanity and to murder them on an industrial scale, were the Nazis.

It is difficult to suppress a comment on Heidegger here, who sought to “defend being against technology.” That said, I think there are important differences between the two.

But let me go to the second part of my reply, why I think AI renders his world obsolete.

AI has proven that the either-or logic at the core of Schmitt’s thinking doesn’t hold. One example of this is provided by Schmitt’s curious appropriation of Marx.

Famously, Marx described the rise of industry enabled by the combustion engine as a dehumanizing event. Before capitalists discovered how they could use the combustion engine to fabricate goods, most goods were made in artisanal sweatshops. Maybe these sweatshops were harsh places. But, or so Marx suggests, they were also places of human dignity and virtuosity.

Why? Well, because at the center of these sweatshops were humans who used tools. As Marx saw it, tools are nothing in themselves. What one can do with a tool depends entirely on the imagination and the virtuosity of the human who uses it.

With the combustion engine, everything changed. It gave rise to factories in which goods were made by machines rather than by artisans. However, the machines were not entirely autonomous. They needed humans to assist them. That is, what the machines needed were not artisans. What they needed was not human imagination and virtuosity. On the contrary, what was needed were humans that could function as extensions of the machine. That made these humans mindless and reduced them to mere machines.

That is why Marx described the machine as the “other” of the human and the factory as the place where humans are deprived of their own humanity.

Schmitt appropriated this for his own argument to juxtapose his kind of substance thinking with the modern, technical world. The net outcome is that you now have a juxtaposition of timeless, substantive, metaphysical truth on the one hand — and, on the other, the modern world of machines, of technology, of functionality, of relativity of values, of substance-less humans.

Hence, technology, for Schmitt, comes into view as an unnatural violence against the metaphysically timeless and true.

Schmitt’s distinction was most certainly not timeless but intrinsic to the modern period and deeply indebted to its paradigm of the new machine versus the old human.

The deep-learning-based AI systems we have today defy and escape the “either-or” distinction of Schmitt — or of Marx and of Heidegger and all those who come after them.

AI clearly and beautifully shows us that there is a whole world in between these distinctions. A world of things, of which AI is just one, that have some qualities of intelligence and some qualities of machine — and that are reducible to neither. Things that are at once natural and built.

AI invites us to rethink ourselves and the world from within this in-between.

Let me say that I understand the wish to render human life meaningful. To render thought and intellectual insight critical and, so too, art, creativity, discovery, science and community. I totally get it and share it.

But I think the suggestion that all these things are on the one side, and AI and those who build it are on the other, is somewhat surprising and unfortunate.

A critical ethos grounded in this distinction reproduces the world it says it is against.

The alternative to being against AI is to enter AI and try to show what it could be. We need more in-between people. If my suggestion that AI is an epochal rupture is only modestly accurate, then I don’t really see what the alternative is.

This video, “overflow,” was generated with the Limn AI system based on a prompt enticing the AI to categorize an ambiguous drawing. The video is reflective of the AI’s effort to work through its learned categories of representation — without ever arriving at a stable representation, instead exploring the hidden spaces between existing categories of representation. (LIMN/Noema Magazine)

III. In-Betweenness & Symbiogenesis

Gardels: I’m wondering if there is a correspondence between your “in-betweenness” point and Blaise Agüera y Arcas’ idea that evolution advances not only by natural selection but through “symbiogenesis” — the mutual transformation that conjoins separate entities into one interdependent organism through the transfer of new information, for example, DNA fragments carried by bacteria that are “copy and pasted” into the cells they penetrate. What results is not either/or, but something new created by symbiosis.

Rees: I believe Blaise, like me, was influenced by an essay the American computer scientist Joseph Licklider published in 1960 called “Man-Computer Symbiosis.”

This is how the essay begins:

“The fig tree is pollinated only by the insect Blastophaga grossorun. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative ‘living together in intimate association, or even close union, of two dissimilar organisms’ is called ‘symbiosis.’”

Licklider goes on: “At present (…) there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”

What does symbiosis mean? It means that one organism cannot survive without another that belongs to a different species. More specifically, it means that one organism depends on functions performed by the other organism. More philosophically put, symbiosis means that there is an indistinguishability in the middle: an impossibility of saying where one organism ends and the other (or the others) begin.

Is it conceivable that this kind of interdependence will in the future occur between humans and AI?

The traditional answer is: Absolutely not. The old belief is that humans belong to nature and, more specifically, to biology, to living things that can self-reproduce. Computers, on the other hand, belong to a totally different ontological category: the artificial, the merely technical. They don’t grow; they are constructed and built. They have neither life nor being.

Symbiosis, in that old way of thinking, is only possible within the realm of nature, between living things. In this way of thinking, there cannot possibly be a human-computer symbiosis.

I think there was also a sense that what Licklider meant was an enrollment of humans into the machine concept. Perhaps like a cyborg. And as humans are supposedly more than, or different from, machines, that would mean a loss of that which makes us human, of that which sets us apart from machines.


But as we have discussed, AI renders insufficient this old, classically modern distinction between living humans or beings and inanimate machines or things.

AI leads us into a territory that lies outside of these old distinctions. If one enters this territory, one can see that things, things like AI, can have agency, creativity, knowledge, language and understanding without either being alive or being human.

That is, AI affords us an opportunity to experience the world anew and to rethink how we have thus far organized things in the world, and the categories to which we have assigned them.

But here is the question: Is human-AI symbiosis possible from within this new, still emergent territory — this in-between territory — in the sense of the indistinguishability just described?

I think so. And I am excited about it. A bit like Licklider, I am looking forward to a “partnership” that will allow us to “think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”

When we can think thoughts we cannot think without AI, and when AI can process data in ways it cannot on its own, then no one can say where humans end and AI begins. Then we have indistinguishability, a symbiosis.

Let me add that what I describe here — with Licklider — is not a gradual human dependency on AI, where we outsource all thinking and decision-making to AI until we are barely able to think or decide on our own.

Quite the opposite. I am describing a situation of maximal human intellectual curiosity. A state where being human is being more than human. Where the cognitive boundary between humans and AI becomes meaningfully indistinct.

Is this different, in an ontologically meaningful way, from fungi-tree relationships?

Their relationship is essentially a communication, in which they cogitate together. Neither party can produce or process the information exchanged in this communication alone. The actual processing of the information — cognition — happens at the interface between them: Call it symbiosis.

What, if any, is the ontological difference between human-AI symbiosis and this fungi-tree symbiosis? I fail to see one.

Gardels: Perhaps such a symbiosis of inorganic and organic intelligence will spawn what Benjamin Bratton calls “planetary sapience,” where AI helps us better understand natural systems and align with them?

Rees: What if we linked AI to this fungi-tree symbiosis? AI could read and translate the chemical and electrical signals that run through fungi-tree-soil networks. These signals carry information about ecosystem health, nutrient flows and stress responses. That is, AI could make the communication between fungi and trees intelligible to humans in real time.

We humans could then understand something — and possibly pose questions and thereby communicate — that we simply couldn’t otherwise, independent of AI. And, simultaneously, we could help AI ask the right questions and process information in ways it cannot on its own.
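To make the shape of that idea concrete, here is a minimal, purely hypothetical Python sketch of such a “translation” step: it reduces a raw electrical trace from an imagined fungi-tree-soil sensor to simple spike statistics and renders them as a plain-language note. Every name, threshold and heuristic in it is invented for illustration; reading real mycorrhizal signals is open research, not a solved API.

```python
# Hypothetical sketch only: nothing here comes from the interview or an
# existing system. It illustrates the division of labor Rees describes,
# with a machine compressing a signal and a human reading the summary.
import numpy as np

def spike_stats(voltage: np.ndarray, threshold: float = 0.5) -> dict:
    """Summarize a voltage trace by counting threshold crossings ("spikes")
    and measuring their mean above-threshold amplitude."""
    above = voltage > threshold
    # A spike onset is a sample above threshold whose predecessor is not.
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return {
        "spike_count": int(onsets.size),
        "mean_amplitude": float(voltage[above].mean()) if above.any() else 0.0,
    }

def describe(stats: dict, baseline_count: int) -> str:
    """Translate the raw statistics into a plain-language note for a human."""
    if stats["spike_count"] > 2 * baseline_count:
        return (f"Elevated signaling ({stats['spike_count']} spikes): "
                "possible stress response worth a closer look.")
    return f"Activity near baseline ({stats['spike_count']} spikes)."

# Simulated sensor trace: low-amplitude noise plus injected bursts that
# stand in for the stress events a real network might produce.
rng = np.random.default_rng(0)
trace = 0.1 * rng.standard_normal(10_000)
for start in range(1_000, 9_000, 700):  # twelve short bursts
    trace[start:start + 20] += 1.0

print(describe(spike_stats(trace), baseline_count=5))
```

The point of the sketch is only the reciprocity in the passage above: the machine compresses signals humans cannot parse at scale, and the human decides what the summary means and what question to pose next.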

Now let’s expand the scope: What if AI could connect us to large-scale planetary systems that are impossible to know without AI? In fact, what if AI became something like a self-monitoring planetary system into which we are directly looped? As Bratton has put it, “Only when intelligence becomes artificial and can be scaled into massive, distributed systems beyond the narrow confines of biological organisms, can we have a knowledge of the planetary systems in which we live.”

Perhaps in a way where, given that DNA is the best information storage we know, part of the information storage and compute the AI relies on is actually done by mycorrhizal networks?

If anything, I can’t wait for such a whole-Earth symbiotic state, and to be a part of this form of reciprocal communication.

Gardels: What is the first step toward guiding us to a symbiosis between humans and intelligent machines, one that opens up the possibility of AI augmenting the human experience as never before?

Rees: Ours is a time when philosophical research really matters. I mean, really, really matters.

As we have elaborated in this conversation, we live in philosophically discontinuous times. The world has been outgrowing the concepts we have lived by for some time now.

To some, that is very exciting. To many, however, it is not. The insecurity and confusion are widespread and real.

If history is any guide, we can assume that political unrest will occur, with possibly far-reaching consequences, including autocratic strongmen who try to enforce a clinging to the past.

One way to prevent such unfortunate outcomes is to do the philosophical work that can lead to new concepts that allow us all to navigate uncharted pathways.


However, the kind of philosophical work that is needed cannot be done in the solitude of ivory towers. We need philosophers in the wild, in AI labs and companies. We need philosophers who can work alongside engineers to jointly discover new ways of thinking and experiencing that might be afforded to us by AI.

What I dream of are philosophical R&D labs that can experiment at the intersection of philosophical conceptual research, AI engineering and product making.

Gardels: Can you give a concrete example?

Rees: I think we live in unprecedented times, so giving an example is difficult. However, there is an important historical reference, the Bauhaus School.

When Walter Gropius founded the Bauhaus, in 1919, many German intellectuals were deeply skeptical of the industrial age. Not so Gropius. He experienced the possibilities that new materials like glass, steel and concrete offered as a conceptual rupture with the 19th century.

And so he argued, very much against the dominant opinion, that it was the duty of architects and artists to explore these new materials, and to invent forms and products that would lift people out of the 19th and into the 20th century.

Today, we need something akin to the Bauhaus — but focused on AI.

We need philosophical R&D labs that would allow us to explore and practice AI as the experimental philosophy it is.

Billions are being poured into many different aspects of AI but very little into the kind of philosophical work that can help us discover and invent new concepts — new vocabularies for being human — in the world today. The Antikythera project of the Berggruen Institute under the leadership of Bratton is one small exception.

Philosophical R&D labs will not happen automatically. There will be no new guiding philosophies or philosophical ideas if we do not make strategic investments.

In the absence of new concepts, people — the public as much as engineers — will continue to understand the new in terms of the old. Because this doesn’t work, there will be decades of turmoil.

https://www.noemamag.com/why-ai-is-a-philosophical-rupture/


