
MICHAEL POLLAN

THE BIG STORY

FEB 24, 2026



AI Will Never Be Conscious

In his new book, A World Appears, Michael Pollan argues that artificial intelligence can do many things—it just can’t be a person.



The Blake Lemoine incident is remembered today as a high-water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. A conscious AI might lack a clear commercial rationale (how do you monetize the thing?) and create sticky moral dilemmas (how should we treat a machine capable of suffering?). Yet some AI engineers have come to think that the holy grail of artificial general intelligence—a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense—might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI—as a prospect that the public would find creepy—suddenly began to crumble.



The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88-page report titled “Consciousness in Artificial Intelligence,” informally known as the Butlin report. Within days, it seemed, everyone in the AI and consciousness science community had read it. The draft report’s abstract offered this arresting sentence: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”


The authors acknowledged that part of the inspiration behind convening the group and writing the report was “the case of Blake Lemoine.” “If AIs can give the impression of consciousness,” a coauthor told Science magazine, “that makes it an urgent priority for scientists and philosophers to weigh in.”


But what caught everyone’s attention was that single statement in the abstract of the preprint: “no obvious barriers to building conscious AI systems.” When I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. No, this had to do with our very identity as a species.

What would it mean for humanity to discover one day in the not-so-distant future that a fully conscious machine had come into the world? I’m guessing it would be a Copernican moment, abruptly dislodging our sense of centrality and specialness. We humans have spent a few thousand years defining ourselves in opposition to the “lesser” animals. This has entailed denying animals such supposedly uniquely human traits as feelings (one of Descartes’s most flagrant errors), language, reason, and consciousness. In the last few years, most of these distinctions have disintegrated as scientists have demonstrated that plenty of species are intelligent and conscious, have feelings, and use language and tools, in the process challenging centuries of human exceptionalism. This shift, still underway, has raised thorny questions about our identity, as well as about our moral obligations to other species.

With AI, the threat to our exalted self-conception comes from another quarter entirely. Now we humans will have to define ourselves in relation to AIs rather than other animals. As computer algorithms surpass us in sheer brainpower—handily beating us at games like chess and Go and various forms of “higher” thought like mathematics—we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI may serve as a common adversary, drawing humans and other animals closer together: us against it, the living versus the machines. This new solidarity would make for a heartwarming story and might be good news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human—or animal, I should say—monopoly on consciousness? Who will we be then?


I find this a deeply unsettling prospect, though I’m not entirely sure why. I’m getting comfortable with the idea of sharing consciousness with other animals (and possibly even with plants, in my case) and I’d be happy to admit them into an expanding circle of moral consideration. But machines?

It could be that my discomfort with the idea stems from my background and education. I have been slow-cooked in the warm broth of the humanities, especially literature and history and the arts, and these have always held up human consciousness as something exceptional that is worth defending. Just about everything we value about civilization is the product of human consciousness: the arts and the sciences, high culture and low, architecture, philosophy, religion, government, law, and ethics and morality, not to mention the very idea of value itself. I suppose it is possible that conscious computers could add something new and as yet unimagined to the stock of these glories. We can hope so. To date, poetry written by AIs isn’t much better than doggerel; the absence of consciousness might explain why it lacks even a spark of originality or fresh insight. But how will we feel if (when?) conscious AIs start producing really good poetry?


As a humanist, I struggle with the possibility that the animal monopoly on consciousness might fall. But I have now met other types of humans (some of whom call themselves transhumanists) who are more sanguine about this future. Some AI researchers endorse the effort to build conscious machines because, as entities with feelings of their own, conscious machines are more likely to develop empathy than computers that are merely intelligent.

Building a conscious AI is a moral imperative, as both a neuroscientist and an AI researcher sought to convince me. Why? Because the alternative is the blazingly smart but unfeeling AI that will be ruthless in pursuit of its objectives, because it will lack all of the moral constraints that have arisen from our consciousness and shared vulnerabilities. Only a conscious AI is apt to develop empathy and therefore spare us. I am not exaggerating; this is the argument.

One has to wonder if these people have ever read Frankenstein! Dr. Frankenstein gives his creation the gift of not only life but also consciousness, and therein lies the rub. Mary Shelley’s novel chronicles “the creation of a sensitive and rational animal,” and it is the combination of those two qualities that determines the monster’s fate. It is not the monster’s rationality but his emotional injury that spurs him to seek revenge and turn homicidal.


“Everywhere I see bliss, from which I alone am irrevocably excluded,” the monster complains to Dr. Frankenstein after being driven out of human society. “I was benevolent and good; misery made me a fiend.” The monster’s ability to reason surely helped him realize his demonic scheme, but it was his consciousness—his feelings—that supplied the motive. Why should we assume that conscious machines would be any more virtuous than conscious humans?

Remarkably enough, the Butlin report on artificial consciousness represents something of a consensus view in the field; most of the computer scientists I interviewed endorsed its conclusions. Yet the more time I spent reading it (and interviewing one of its coauthors), the more I began to question its conclusion that artificial consciousness is right around the corner. To their credit, the authors are scrupulous about setting forth their assumptions and methods, both of which make me wonder if they haven’t erected their bold conclusion atop a dubious foundation.

Right on page one, these computer scientists and philosophers set forth their guiding assumption: “We adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis.” Computational functionalism takes as its starting point the idea that consciousness is essentially a kind of software running on the hardware of what could be a brain or a computer—the theory is completely agnostic. But is computational functionalism true? The authors aren’t quite prepared to nail themselves to that claim, only to say that it is “mainstream—although disputed.” Even so, they will proceed on the assumption that it is true for “pragmatic reasons.”


The candor is admirable, but the approach demands a tremendous leap of faith that I’m not sure we should make.

For the purposes of the report, the “material substrate” of the system—that is, whether it is a brain or a computer—“does not matter for consciousness … It can exist in multiple substrates, not just in biological brains.” Any substrate that can run the necessary algorithm will do. “We tentatively assume that computers as we know them are in principle capable of implementing algorithms sufficient for consciousness,” the authors state, “but we do not claim that this is certain.” The acknowledgment of uncertainty doesn’t go nearly far enough. Unquestioned in the report is the metaphor that brains are computers—the hardware on which the software of consciousness is run. Here, we meet a metaphor parading as fact. Indeed, the whole paper and its conclusions hinge on the validity of this metaphor.

Metaphors can be powerful tools for thinking, but only as long as we don’t forget they are metaphors—imperfect or partial analogies likening one thing to another. The differences between the two things are as important as the similarities, but these differences seem to have gotten lost in the enthusiasm surrounding AI. As cyberneticists Arturo Rosenblueth and Norbert Wiener noted years ago, “The price of metaphor is eternal vigilance.” Beyond the authors of this report, the whole field of AI appears to have let down its guard on this one.


Consider the sharp distinction between hardware and software. The beauty of separating hardware from software in computers is that a great many different programs can run on the same machine; the software and the knowledge it encodes survive the “death” of the hardware. The separation also speaks to our folk intuition that dualism is true—that, following Descartes, we can draw a bright line between mental stuff and physical stuff. But the distinction between hardware and software simply doesn’t exist in brains; there, software is hardware and vice versa. A memory is a physical pattern of connection among neurons in the brain, neither hardware nor software but both.

Indeed, everything that happens to you—everything you experience or learn or remember—changes the physical structure of your brain, permanently rewiring its connections. (In this sense, there is no dualism in the brain; mental stuff can never be completely disentangled from physical stuff.) The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Your brain is materially different from mine precisely because it has been shaped, literally, by different life experiences—that is, by consciousness itself. Brains are simply not interchangeable, neither with computers nor with other brains.


Just about anyplace you push on it, the computer-as-brain metaphor breaks down. Computer scientists treat neurons in a brain as though they are transistors on a chip, switched on or off by pulses of electricity. That analogy has some truth to it, but it is complicated by the fact that electricity is not the only factor influencing the firing of neurons. Brains are also awash in chemicals, including neuromodulators and hormones that powerfully influence the behavior of neurons, not just whether or not they fire but how strongly. This is why psychoactive drugs can profoundly alter consciousness (and have no discernible effect on computers). The activity of neurons is also influenced by oscillations that traverse the brain in wavelike patterns; the different frequencies of these oscillations correlate with different mental operations, such as consciousness and its absence, focused attention and dreaming (as well as other stages of sleep).

To liken neurons to transistors is to grossly underestimate their complexity. Compared with transistors on a chip, neurons in the brain are massively interconnected, each one communicating directly with as many as 10,000 others in a network so intricate that we are still decades away from being able to draw even the crudest map of its connections. In computer science, much has been made of the advent of “deep artificial neural networks”—a type of machine-learning architecture, supposedly modeled on the brain’s, that layers a mind-boggling number of processors in such a way that the network can process and learn from vast troves of data. Impressive, for sure, yet a recent study demonstrated that a single cortical neuron can do everything an entire deep artificial neural network can.


Yes, there are plenty of ways in which computers do resemble brains, and computer science has made great strides by simulating various aspects and operations of the brain. But the idea that brains and computers are in any way interchangeable—the premise of computational functionalism—is surely a stretch. And yet this is the premise upon which stands not only the Butlin report but also most of the field. It’s not hard to see why. If brains are computers, then sufficiently powerful computers should be able to do whatever brains do, including becoming conscious. The premise all but guarantees the conclusion. Put another way, it is the authors themselves who have removed the biggest “barrier” to building a conscious AI—the barrier that says brains differ from computers in crucial ways.


There is a second aspect of the report that makes me wonder how seriously to take its conclusion, and that is the standard it proposes for deciding if an AI is actually conscious or not. This is a serious challenge. Citing the Lemoine incident (fairly or not), the authors point out that AIs can easily dupe humans into believing they are conscious when they are not. (It’s probably more accurate to say that we dupe ourselves into this belief, thanks to our weakness for anthropomorphism and magic.) “Reportability” (philosophical jargon for just asking the AI itself) won’t work when the AI has been trained on pretty much everything that’s been said and written about consciousness. One approach to this dilemma would be to remove all references to consciousness (and presumably feeling and emotion as well) from the dataset on which the AI has been trained and then see if it can still speak convincingly about being conscious.

Instead, the authors propose that we look for “indicators” of AI consciousness that match the predictions of the various theories of consciousness in play. So, for example, if the design of an AI included a workspace that brought together various streams of information, but only after those streams had competed to enter it, that would look a lot like global workspace theory and so might qualify as conscious. The report reviewed a half-dozen theories of consciousness, identifying the “indicators” that an AI would have to exhibit to satisfy each of them and, by doing so, be deemed potentially conscious.


The problem here (well, one of them) is this: None of the theories of consciousness that it proposes we measure AIs against are even remotely close to being proved to anyone’s satisfaction. So what kind of standard of proof is that? What’s more, many of these theories can be simulated in the design of an AI, which should come as no surprise, because they’re all based on the idea that consciousness is a matter of computation. Round and round we go.

By the time I finished digesting the Butlin report, the Copernican moment I’d worried about seemed more distant than the report’s bold conclusion had led me to believe. After reviewing the half-dozen or so theories of consciousness covered by the report, it seemed clear that all of them stacked the deck by taking for granted that consciousness could be reduced to some kind of algorithm.

I was also struck by what was missing from the theories under consideration. None of them had anything to say about embodiment—the idea that consciousness might depend on having both a body and a brain—or, for that matter, anything remotely biological. Nor did the theories have anything to say about the conscious subject. Who or what, exactly, is the recipient of the information that is broadcast in the global workspace? Or the information that is integrated in integrated information theory (IIT)? And what about the role of feelings in rendering experience conscious?


This last point was not lost on the authors, who noted the absence of “affect” from most current theories and recommended that the field pay more attention to the issue of whether conscious machines would have “real” feelings, because if it turns out they do, we will have a moral and ethical crisis on our hands. “Any entity which is capable of conscious suffering deserves moral consideration,” the report states. (But isn’t suffering always conscious?) “This means that if we fail to recognize the consciousness of conscious AI systems,” the report continued, “we may risk causing or allowing morally significant harms.” What would we owe machines that can suffer? And do we really want to bring any more suffering into the world?

Apart from this sort of highly speculative discussion of feeling (as a troublesome by-product of making machines conscious), in the AI community, the conversation about consciousness is as relentlessly abstract—as bloodless, bodiless, and utterly oblivious to biology—as one would expect. When I posed the suffering-computer conundrum to a researcher seeking to build a conscious AI, he waved away the problem, explaining it could be offset with a simple fix to the algorithm: “There’s no reason we couldn’t just turn up the dial on joy.”

Adapted from A World Appears: A Journey into Consciousness by Michael Pollan. Copyright © 2026 by Michael Pollan. Published by arrangement with Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC.
