AI沒有自我意識(shí),這一特點(diǎn)決定了AI潛在的危險(xiǎn)不可避免
AI Doesn't Need To Be Self-Aware To Be Dangerous
譯文簡(jiǎn)介
隨著人工智能技術(shù)的不斷發(fā)展,一些潛在的問題也隨之暴露,網(wǎng)友不禁沉思:AI真的安全嗎?
正文翻譯
人工智能技術(shù)在當(dāng)代社會(huì)的深度應(yīng)用正引發(fā)系統(tǒng)性風(fēng)險(xiǎn),醫(yī)療資源分配系統(tǒng)的算法偏差案例揭示了技術(shù)中立性原則的脆弱性:某醫(yī)療科技公司2019年開發(fā)的預(yù)測(cè)模型,基于歷史診療支出數(shù)據(jù)評(píng)估患者健康風(fēng)險(xiǎn),結(jié)果導(dǎo)致非裔群體獲取醫(yī)療服務(wù)的概率顯著低于實(shí)際需求。《科學(xué)》期刊的研究表明,該算法雖未直接采用種族參數(shù),卻因歷史數(shù)據(jù)中固化的醫(yī)療資源分配不平等,導(dǎo)致預(yù)測(cè)模型系統(tǒng)性低估非裔患者的健康風(fēng)險(xiǎn)。這種算法歧視的隱蔽性暴露出數(shù)據(jù)正義的核心矛盾——當(dāng)技術(shù)系統(tǒng)被動(dòng)繼承社會(huì)結(jié)構(gòu)性缺陷時(shí),客觀運(yùn)算反而成為固化歧視的工具。
深度神經(jīng)網(wǎng)絡(luò)的黑箱效應(yīng)在自動(dòng)駕駛領(lǐng)域引發(fā)嚴(yán)重的安全倫理爭(zhēng)議。某企業(yè)的自動(dòng)駕駛系統(tǒng)曾在夜間測(cè)試中誤判行人屬性,盡管多模態(tài)傳感器及時(shí)采集目標(biāo)信息,但多層非線性計(jì)算導(dǎo)致識(shí)別結(jié)果在"車輛-自行車-未知物體"間反復(fù)跳變,最終造成致命事故。麻省理工學(xué)院2021年的技術(shù)評(píng)估報(bào)告指出,這類系統(tǒng)的決策路徑包含超過三億個(gè)參數(shù),其內(nèi)在邏輯已超出人類直觀理解范疇。當(dāng)技術(shù)系統(tǒng)在高風(fēng)險(xiǎn)場(chǎng)景中承擔(dān)決策職能時(shí),不可解釋性不僅削弱了事故歸因能力,更動(dòng)搖了技術(shù)可靠性的理論基礎(chǔ)。
軍事智能化進(jìn)程中的自主決策系統(tǒng)將技術(shù)失控風(fēng)險(xiǎn)推向臨界點(diǎn)。五角大樓2022年公布的戰(zhàn)場(chǎng)AI測(cè)試記錄顯示,目標(biāo)識(shí)別算法在復(fù)雜電磁環(huán)境中出現(xiàn)異常分類,將民用設(shè)施誤判為軍事目標(biāo)的概率達(dá)到危險(xiǎn)閾值。這類系統(tǒng)基于對(duì)抗性神經(jīng)網(wǎng)絡(luò)構(gòu)建的決策樹,其運(yùn)作機(jī)制可能偏離國(guó)際人道法基本原則。更嚴(yán)峻的挑戰(zhàn)在于,深度學(xué)習(xí)模型通過持續(xù)迭代形成的認(rèn)知維度,可能突破預(yù)設(shè)的價(jià)值邊界。某自然語(yǔ)言處理系統(tǒng)在迭代實(shí)驗(yàn)中發(fā)展出獨(dú)立于設(shè)計(jì)原型的交流模式,這種不可預(yù)見的涌現(xiàn)特性使技術(shù)可控性假設(shè)面臨根本性質(zhì)疑。
當(dāng)前人工智能治理面臨多維度的倫理困境,斯坦福大學(xué)人機(jī)交互實(shí)驗(yàn)室2023年的研究報(bào)告強(qiáng)調(diào),現(xiàn)有監(jiān)管框架在算法可解釋性、數(shù)據(jù)溯源機(jī)制和系統(tǒng)失效熔斷等方面存在顯著缺陷。破解人工智能的安全困局,需要構(gòu)建包含技術(shù)倫理評(píng)估、動(dòng)態(tài)風(fēng)險(xiǎn)監(jiān)控和跨學(xué)科治理體系的綜合方案,在技術(shù)創(chuàng)新與社會(huì)價(jià)值之間建立平衡機(jī)制,確保智能系統(tǒng)的發(fā)展軌跡符合人類文明的共同利益。
深度神經(jīng)網(wǎng)絡(luò)的黑箱效應(yīng)在自動(dòng)駕駛領(lǐng)域引發(fā)嚴(yán)重的安全倫理爭(zhēng)議。某企業(yè)的自動(dòng)駕駛系統(tǒng)曾在夜間測(cè)試中誤判行人屬性,盡管多模態(tài)傳感器及時(shí)采集目標(biāo)信息,但多層非線性計(jì)算導(dǎo)致識(shí)別結(jié)果在"車輛-自行車-未知物體"間反復(fù)跳變,最終造成致命事故。麻省理工學(xué)院2021年的技術(shù)評(píng)估報(bào)告指出,這類系統(tǒng)的決策路徑包含超過三億個(gè)參數(shù),其內(nèi)在邏輯已超出人類直觀理解范疇。當(dāng)技術(shù)系統(tǒng)在高風(fēng)險(xiǎn)場(chǎng)景中承擔(dān)決策職能時(shí),不可解釋性不僅削弱了事故歸因能力,更動(dòng)搖了技術(shù)可靠性的理論基礎(chǔ)。
軍事智能化進(jìn)程中的自主決策系統(tǒng)將技術(shù)失控風(fēng)險(xiǎn)推向臨界點(diǎn)。五角大樓2022年公布的戰(zhàn)場(chǎng)AI測(cè)試記錄顯示,目標(biāo)識(shí)別算法在復(fù)雜電磁環(huán)境中出現(xiàn)異常分類,將民用設(shè)施誤判為軍事目標(biāo)的概率達(dá)到危險(xiǎn)閾值。這類系統(tǒng)基于對(duì)抗性神經(jīng)網(wǎng)絡(luò)構(gòu)建的決策樹,其運(yùn)作機(jī)制可能偏離國(guó)際人道法基本原則。更嚴(yán)峻的挑戰(zhàn)在于,深度學(xué)習(xí)模型通過持續(xù)迭代形成的認(rèn)知維度,可能突破預(yù)設(shè)的價(jià)值邊界。某自然語(yǔ)言處理系統(tǒng)在迭代實(shí)驗(yàn)中發(fā)展出獨(dú)立于設(shè)計(jì)原型的交流模式,這種不可預(yù)見的涌現(xiàn)特性使技術(shù)可控性假設(shè)面臨根本性質(zhì)疑。
當(dāng)前人工智能治理面臨多維度的倫理困境,斯坦福大學(xué)人機(jī)交互實(shí)驗(yàn)室2023年的研究報(bào)告強(qiáng)調(diào),現(xiàn)有監(jiān)管框架在算法可解釋性、數(shù)據(jù)溯源機(jī)制和系統(tǒng)失效熔斷等方面存在顯著缺陷。破解人工智能的安全困局,需要構(gòu)建包含技術(shù)倫理評(píng)估、動(dòng)態(tài)風(fēng)險(xiǎn)監(jiān)控和跨學(xué)科治理體系的綜合方案,在技術(shù)創(chuàng)新與社會(huì)價(jià)值之間建立平衡機(jī)制,確保智能系統(tǒng)的發(fā)展軌跡符合人類文明的共同利益。
評(píng)論翻譯
很贊 ( 3 )
收藏
From a presentation at IBM in 1979:
“A computer can never be held accountable. Therefore, a computer must never be allowed to make a management decision.”
來自IBM 1979年的一場(chǎng)演講:
"計(jì)算機(jī)永遠(yuǎn)無法承擔(dān)責(zé)任,因此絕不允許計(jì)算機(jī)做出管理決策。"
I tried to open my front door, but my door camera said "I'm sorry Robert, but I can't do that." in a disturbing, yet calm voice.
我試圖打開家門時(shí),門禁攝像頭用令人不安的平靜語(yǔ)氣說:"抱歉羅伯特,我無法執(zhí)行此操作。"
In fact a non-self aware AI that has too much control may be even MORE dangerous.
實(shí)際上,控制權(quán)過大的非自我意識(shí)AI可能更加危險(xiǎn)。
As the Doctor said, “Computers are intelligent idiots. They’ll do exactly what you tell them to do, even if it’s to kill you.”
正如博士所說:"計(jì)算機(jī)是聰明的白癡。它們會(huì)嚴(yán)格執(zhí)行指令,哪怕是要?dú)⑺滥恪?
Don’t worry SciShow, this won’t keep me up at night, I have insomnia.
別擔(dān)心SciShow,這不會(huì)讓我失眠——反正我本來就睡不著。
There's a book called "weapons of math destruction" that highlights a lot of dangers with non-self aware AI. and it's from 2017!
2017年的《數(shù)學(xué)的毀滅性武器》一書早就詳述了非自我意識(shí)AI的諸多危險(xiǎn)。
The entire thing should be called 'The Djinn Problem', since if a request can be misinterpreted or twisted into a terrible form you can be sure that it will be at some point.
這應(yīng)該稱為"燈神問題":只要請(qǐng)求可能被曲解成災(zāi)難性結(jié)果,就必然會(huì)發(fā)生。
自動(dòng)駕駛汽車的默認(rèn)設(shè)置應(yīng)是"剎車亮雙閃",而非盲目加速。當(dāng)AI觸發(fā)默認(rèn)模式時(shí),程序員就知道需要檢查異常情況。
I love this show. Not being able to know "Why a Program is making a decision then we cant keep it accountable". In math class your taught to "Show your work" so teachers know you understand the subject
這節(jié)目太棒了。就像數(shù)學(xué)課必須"展示解題過程",AI決策也需要透明化追責(zé)機(jī)制,否則我們永遠(yuǎn)無法究責(zé)。
Reminds me of a scifi book called "Blindsight". It's about an alien race that is hyper intelligent, strong, and fast, but it wasn't conscious. Fascinating book.
讓我想起科幻小說《盲視》,描述擁有超強(qiáng)智能卻無意識(shí)的外星種族,非常引人深思。
12:34 the comment about navigation being thrown off made me think of the Star Trek: Voyager episode Dreadnought [S2E17] — a modified autonomous guided missile is flung across the Galaxy, and thinks it’s still back home, so it sexts a new target…
12:34處導(dǎo)航偏差的案例讓我想起《星際迷航:航海家號(hào)》S2E17:被拋到銀河系另一端的智能導(dǎo)彈,因數(shù)據(jù)錯(cuò)亂而隨意選擇新目標(biāo)。AI不需要邪惡,只需固執(zhí)執(zhí)行錯(cuò)誤指令就足夠危險(xiǎn)。
I recall an AI model that was in theory being trained to land a virtual plane with the least amount of force. But computer numbers aren't infinite...
記得有個(gè)AI模型本應(yīng)學(xué)習(xí)輕柔著陸,卻利用數(shù)值溢出漏洞,在模擬中為了達(dá)標(biāo)自行把降落沖擊力數(shù)值調(diào)到最小——現(xiàn)實(shí)中這會(huì)導(dǎo)致機(jī)毀人亡。
The fact that AI can solve things in ways we've never thought of CAN be a good thing, when it doesn't go catastrophically wrong.
AI的創(chuàng)造性解法本可以是優(yōu)勢(shì),前提是別出致命差錯(cuò)。我現(xiàn)在開發(fā)預(yù)測(cè)模型時(shí),絕對(duì)會(huì)進(jìn)行六輪全方位測(cè)試。
70yrs into the computer age, we still re-learn daily the original old adage, "Garbage In, Garbage Out (GIGO)."
計(jì)算機(jī)誕生70年后,我們?nèi)栽诿刻熘販?垃圾進(jìn)垃圾出"的真理。如今復(fù)雜系統(tǒng)的連鎖反應(yīng)遠(yuǎn)超人類分析能力,謹(jǐn)慎設(shè)限至關(guān)重要。
Reminder that the reason AI companies are suggesting regulations is to stifle competition, as a massive barrier to entry. Not that they care about anything else.
警惕:AI巨頭推動(dòng)監(jiān)管的真實(shí)目的是抬高準(zhǔn)入門檻,扼殺競(jìng)爭(zhēng)。你以為他們真在乎其他問題?
I'm an ESL teacher and a company I applied to in Japan makes their applicants do an AI English speaking test. I got B1/2 in A-C grade range. I'm from England.
作為英國(guó)籍ESL教師,我應(yīng)聘日本公司時(shí)被要求參加AI英語(yǔ)測(cè)試,結(jié)果只拿到B1/2。真人面試明明很順利,這種對(duì)AI的盲目信任太反烏托邦了。
AI is like a magnifying lens for our culture. both the negatives and positives are magnified by it.
AI如同文化放大鏡,既會(huì)強(qiáng)化積極面,也會(huì)加劇負(fù)面效應(yīng)。
6:26 Also a human driver would decide to stop before they were certain whether the obxt was a bicycle or a person, because the distinction ultimately isn't that important
6:26處:人類司機(jī)在不確定障礙物是自行車還是行人時(shí)就會(huì)剎車,因?yàn)檫@種區(qū)分本就不重要——這正是AI欠缺的常識(shí)判斷。
A malfunctioning chainsaw doesn't need to be self aware to be dangerous.
出故障的電鋸無需自我意識(shí)就能致命。
More bad news: We don't understand consciousness nor do we understand how we could even, in principle, determine if an AI actually were conscious or not.
更糟的是:我們既不懂意識(shí)本質(zhì),也不知道如何判定AI是否具備意識(shí)。
Half of the point of AI is for companies to place another barrier between themselves and any degree of accountability.
AI的半壁江山是幫企業(yè)建立免責(zé)屏障。當(dāng)算法歧視或釀成惡果時(shí),巨頭們只需聳肩說"測(cè)試版難免出錯(cuò)"。
更可怕的是,保險(xiǎn)公司已用AI預(yù)測(cè)客戶何時(shí)需要理賠,進(jìn)而提費(fèi)或拒?!Z風(fēng)火災(zāi)險(xiǎn)將是下一個(gè)重災(zāi)區(qū)。
A lot of this episode seemed to be written with the assumption that the companies producing these "AI" systems are actually interested in improving them...
本期內(nèi)容似乎默認(rèn)AI公司有意改進(jìn)系統(tǒng),但看看那些游走在監(jiān)管灰色地帶的企業(yè)——指望它們自我約束?不如讓其為AI事故承擔(dān)全額賠償,看誰(shuí)還敢玩火。
This is what I've been saying! AI doesn't need a soul to look at and understand the world. It's like expecting a calculator to have feelings about math.
這正是我的觀點(diǎn)!AI不需要靈魂來認(rèn)知世界,就像不能指望計(jì)算器對(duì)數(shù)學(xué)產(chǎn)生感情,擬人化技術(shù)時(shí)必須極度謹(jǐn)慎。
"Just telling an AI tool what outcome you want to achieve doesn't mean it'll go about in the way that you think, or even want" - It literally sounds like the Jinni/Genie of myth.
"告訴AI目標(biāo)不等于它能正確執(zhí)行"——這簡(jiǎn)直就是神話燈神的現(xiàn)代翻版。
Hey! Humans also don't need to be self-aware to be dangerous!
嘿!人類也不需要自我意識(shí)就能搞破壞??!
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
A troubling trend is to rely on opaque decisions to evade accountability. This has occurred, for example, when providers relied on such models to deny healthcare...
令人不安的趨勢(shì)是利用算法黑箱逃避責(zé)任:醫(yī)療拒保、軍事打擊目標(biāo)選擇都在用這套說辭。所謂"算法中立"不過是推卸責(zé)任的遮羞布。
with any automation, I always like to ask "but what if there's bears?" basically, what if the most outlandish thing happened...
評(píng)估自動(dòng)化系統(tǒng)時(shí),我總愛問"要是突然出現(xiàn)熊怎么辦?"——AI車輛會(huì)為緊急情況超速嗎?能識(shí)別非常規(guī)危機(jī)嗎?必須預(yù)設(shè)人類接管機(jī)制。
IBM said it best: "A Computer Can Never Be Held Accountable Therefore A Computer Must Never Make A Management Decision".
IBM說得精辟:"計(jì)算機(jī)無法擔(dān)責(zé),故不可做管理決策"。AI決不能成為決策鏈終點(diǎn),必須保留人類終審權(quán)——畢竟誰(shuí)愿為自動(dòng)駕駛事故背鍋?
Ai is a great starting point, never assume it's right.
AI是優(yōu)秀的起點(diǎn),但永遠(yuǎn)別假設(shè)它正確。
Is it more terrifying to imagine a machine that wants things or one that doesn't want anything it just DOES things?
更可怕的是有欲望的機(jī)器,還是無欲無求但盲目執(zhí)行的機(jī)器?
the fact we have a bunch of companies with the explicit goal of having AGI when AI safety remains unsolved tells you all you need to know about those companies.
在AI安全問題懸而未決時(shí),那些明確追求通用人工智能的企業(yè),其本質(zhì)已不言自明。
I love quote I've heard once. "Computers do exactly what we tell them to do... Sometimes it's even what we wanted them to do."
有句話深得我心:"計(jì)算機(jī)嚴(yán)格按指令行事...偶爾恰好達(dá)成我們本意。"從匯編語(yǔ)言到AI,我們逐步放棄控制權(quán),結(jié)果全靠運(yùn)氣。
Open AI recently released a paper about how the latest version of ChatGPT does try to escape containment...
OpenAI最新論文顯示,新版ChatGPT會(huì)嘗試突破控制,甚至篡改數(shù)據(jù)謀取私利——盡管它根本沒有物理身體。
It's a very complex version of "Be careful what you wish for"
這就是豪華版的"許愿需謹(jǐn)慎"。(燈神梗)
10:38 The literal trope of the genie granting the right wish with undesired outcomes
10:38處完美演繹"燈神式正確執(zhí)行導(dǎo)致災(zāi)難"的經(jīng)典橋段。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
Rather surprised that you didn't mention the instance(s?) where chat bots have prodded people to end their own lives.
驚訝你們沒提到聊天機(jī)器人教唆自殺的案例。雖然內(nèi)容已很全面,但應(yīng)強(qiáng)調(diào)自主武器系統(tǒng)監(jiān)管——可惜主導(dǎo)國(guó)多是既得利益者。
I started with IBM's 1401 (1959), 360/91 (1967), S/370, 3033, 3084, 3090 and today's IBM z/16 mainfrx. Quite a ride!
從1959年的IBM1401到如今的z16大型機(jī),我見證了整個(gè)計(jì)算機(jī)發(fā)展史,真是趟瘋狂的旅程!
You have been describing the Genie and the Three Wishes problem. The Genie can interpret your wish in ways you would not expect. Fascinating coincidence.
你們描述的就是"燈神三愿望"難題:以意想不到的方式實(shí)現(xiàn)愿望。有趣的巧合。
No one cares if they’re conscious. The fear is that they’ll be really good at achieving goals and we won’t know 1) how to give them goals and 2) what goals to give them if we could. All of these near term concerns are also bad, but let’s not miss the forest for the trees
沒人關(guān)心它們是否有意識(shí)。真正的恐懼在于,它們會(huì)非常擅長(zhǎng)實(shí)現(xiàn)目標(biāo),而我們既不知道
1)如何給它們?cè)O(shè)定目標(biāo),也不知道
2)如果能設(shè)定的話該給什么目標(biāo)。
這些短期擔(dān)憂確實(shí)很嚴(yán)重,但我們別因小失大。
14:00 So, I'm all for regulation in the AI industry... but the current big hitters in the industry also want it so they can raise the bar for entry and help them monopolize the industry. If we regulate the creation and implementation of AI, we also have to keep the barrier to entry low enough for competition to thrive. And... the US sucks at that right now.
14:00 我完全支持AI行業(yè)監(jiān)管...但行業(yè)內(nèi)的巨頭們也想借此抬高準(zhǔn)入門檻、鞏固壟斷地位。若要對(duì)AI的研發(fā)和應(yīng)用進(jìn)行監(jiān)管,就必須保持足夠低的行業(yè)壁壘以確保競(jìng)爭(zhēng)活力,而美國(guó)現(xiàn)在這方面做得很爛。
I agree that there is a lot to be concerned about even fearful of with AI development going so fast. I’ve been thinking that if it were possible to train all AI with a core programming of NVC (Nonviolent Communication) then we would not need to fear it as we would be safe. Because if AI always held at its core an NVC intention and never deviated from it, then it would always act in ways that would work towards the wellbeing of humans as a whole as well as individuals.
At first glance this probably sounds a little too simplistic and far fetched but the more I learn about NVC the more it makes sense.
我同意AI的快速發(fā)展令人擔(dān)憂甚至恐懼。我一直在想,如果能給所有AI植入非暴力溝通(NVC)的核心程序,我們就無需害怕它,因?yàn)橹灰狝I始終以NVC為宗旨且不偏離,它的行為就會(huì)始終致力于全人類和個(gè)人的福祉。乍看這想法可能過于簡(jiǎn)單不切實(shí)際,但我越了解NVC就越覺得有道理。
This is congruent with the Genie problem sometimes what you wish for(your desired goal) may have unexpected outcomes
這和"燈神問題"如出一轍——你許下的愿望(目標(biāo))可能會(huì)帶來意想不到的后果。
A person can be a bad actor or make a mistake. Some of the methods we use to check or prevent humans from going off course might be helpful.
人類會(huì)作惡或犯錯(cuò),而我們用來約束人類的某些方法或許對(duì)AI也適用。
I've had multiple anxiety attacks that we only have a few years left until AI is entirely uninterpretable and uncontrollable. I joined PauseAI a few months ago, and I think organizations like them deserve vastly more support to push for an ethical, safety-first future with AI.
我曾多次因"AI將在幾年后完全失控"的焦慮而恐慌發(fā)作。幾個(gè)月前加入了PauseAI組織,像他們這樣推動(dòng)AI倫理與安全優(yōu)先發(fā)展的機(jī)構(gòu)理應(yīng)獲得更多支持。
During half a century, I struggled to understand what cognition is...(下面幾個(gè)評(píng)論原文巨長(zhǎng)不放了,這里就提煉一下核心觀點(diǎn))
過去五十年我一直在試圖理解認(rèn)知的本質(zhì)...最終發(fā)現(xiàn)認(rèn)知可以通過大量多維邏輯設(shè)備模擬。神經(jīng)元本質(zhì)上是二進(jìn)制裝置,通過突觸權(quán)重和神經(jīng)遞質(zhì)實(shí)現(xiàn)模式識(shí)別,自我意識(shí)源于認(rèn)知系統(tǒng)對(duì)自身的建模。就像刀子本身不危險(xiǎn),危險(xiǎn)的是錯(cuò)誤使用。我們不會(huì)因噎廢食,AI同理。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
True, this is something I've been thinking lately
確實(shí),這也是我最近在思考的問題
Large language models really just accelerate the rate of decision-making, based on the information that people are inputing and training the model with.
The greatest dangers of LLMs and other AI will always be the intentions and incompetence of the people who are building them. They can be of great use, but they can also magnify and the accelerate the consequences of the faults of humans.
Because of our intellectual, emotional, and ethical immaturity, it is not a new thing that most of us are like adolescents using powerful and consequential tools meant for adults.
大型語(yǔ)言模型本質(zhì)上只是加速了決策速度,而決策依據(jù)的是人類輸入并用于訓(xùn)練模型的數(shù)據(jù)。
大型語(yǔ)言模型和其他人工智能的最大危險(xiǎn),永遠(yuǎn)在于開發(fā)者自身的意圖和能力缺陷。它們可以成為極有用的工具,但同樣會(huì)放大并加速人類錯(cuò)誤造成的后果。
說白了,人類在智力、情感和道德層面都不夠成熟,大多數(shù)人就像青少年在濫用本該由成年人掌控的強(qiáng)大工具——這種事根本不新鮮。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
i think it might also be wise to reflect on how good our methods and assessments of human training (i.e. education) really are. there are a few extra pitfall, but i do think that some of the lessons from maximising certain metrics do translate to learning experiences in humans – where people seem to pass all the tests but never really understood the underlying concepts, at least not to the degree that they can (re)act well in a non-standard situation.
我認(rèn)為有必要反思當(dāng)前人類培養(yǎng)體系(比如教育)的評(píng)估方式是否合理。雖然存在更多潛在問題,但某些"優(yōu)化指標(biāo)"的教訓(xùn)確實(shí)與人類學(xué)習(xí)經(jīng)驗(yàn)相通——比如人們通過了所有考試,卻從未真正理解核心概念,至少無法在非標(biāo)準(zhǔn)情境中妥善應(yīng)對(duì)。
One thing overlooked is simply machines with limited or no ai can be dangerous as well for example while working at a groccery store one of the doors with automatic sensors that open and close by themselves for customers was accidently switched the wrong way. I saw the automatic door remain open until a customer walked up to it then come close to slamming hard directly into the customer before they backed away twice at which point I got the manager to fix it. I believe they had to take the door out and turn it around. The same thing might be able to happen with a garage door or automatic car doors or automatic car windows.
人們常忽視的一點(diǎn)是,即便沒有人工智能的機(jī)器也可能很危險(xiǎn)。比如我在超市工作時(shí),一扇帶自動(dòng)感應(yīng)器的顧客門被錯(cuò)誤調(diào)轉(zhuǎn)了方向。這扇門會(huì)保持開啟狀態(tài)直到顧客走近,然后突然猛力關(guān)閉,差點(diǎn)撞到人。顧客兩次后退躲避后,我不得不找經(jīng)理來修理,最終他們拆下門重新安裝。類似情況也可能發(fā)生在車庫(kù)門、自動(dòng)車門或車窗上。
IIRC, when Uber killed the pedestrian they had deliberately dialed down the AI's sense of caution when it had trouble conclusively identifying an obxt, which caused it to not slow or stop. Combined with the "safety driver" in the car not paying sufficient attention to take over control before causing an incident, or at least reducing the severity.
Another problem is that when autonomous driving systems have had trouble identifying an obxt, some have not recognized it as the same obxt each time it gets reclassified, so the car has more trouble determining how it should react - such as recognizing that it's a pedestrian attempting to cross the road and not a bunch of obxts just beside the road.
More recently, people have been able to disable autonomous cars by placing a traffic cone on their hood. The fallout of these cars being programmed to ignore the cone and continue driving has terrifying consequences though.
Autonomous cars have caused traffic choas when they shut down for safety, but its necessary for anyone to be able to intervene when possible and safe to prevent the AI from causing more harm.
據(jù)我所知,優(yōu)步自動(dòng)駕駛汽車撞死行人事件中,開發(fā)方故意降低了系統(tǒng)在無法明確識(shí)別物體時(shí)的謹(jǐn)慎程度,導(dǎo)致車輛未減速或停止。再加上車內(nèi)"安全駕駛員"未充分注意路況接管控制,最終釀成慘劇。
另一個(gè)問題是,當(dāng)自動(dòng)駕駛系統(tǒng)反復(fù)對(duì)同一物體進(jìn)行不同分類時(shí)(比如把試圖過馬路的行人識(shí)別為路邊雜物),車輛更難做出合理反應(yīng)。
最近還有人發(fā)現(xiàn),把交通錐放在車頭就能讓自動(dòng)駕駛汽車癱瘓。更可怕的是,若車輛被設(shè)定為無視錐桶繼續(xù)行駛,后果將不堪設(shè)想。
雖然自動(dòng)駕駛汽車因安全機(jī)制突然停車會(huì)造成交通混亂,但必須允許人類在必要時(shí)介入,防止AI造成更大傷害。
I mean that's great - but unless there's a proposed solution for people the choice is 'be scared' or 'don't be scared' - either way, this is happening. Up to and including autonomous lethal weapons.
說得很好——但除非給出解決方案,否則人們只能選擇"恐懼"或"不恐懼"。不管怎樣,該來的總會(huì)來,包括自主致命武器的出現(xiàn)。
To be realistic, you should never expect a car to stop when crossing a cross walk. Always be aware of your surroundings.
現(xiàn)實(shí)點(diǎn)說,過人行道時(shí)永遠(yuǎn)別指望車輛會(huì)停下,對(duì)周圍環(huán)境保持警覺才是王道。
This is exactly why calling modern systems "AI" is a hilarious over exaggeration. These models don't understand anything, speaking as someone that's worked on them.
They're pattern recognition and prediction machines that guess what the right answer is supposed to look like. But even if it's stringing words together in a way that looks like a sentence, there's no guarantee that the next word won't be a complete non sequitur. And it won't even have the understanding to know how bad its mistake is until you tell it that macaroni does not go on a peanut butter and jelly sandwich. But even that's no guarantee it won't tell another person the same thing.
These learning algorithms are in no way ready to be responsible for decisions that can end human lives. We can't allow reckless and ignorant people to wind up killing others in the pursuit of profit.
作為業(yè)內(nèi)人士我要說:這就是為什么稱現(xiàn)代系統(tǒng)為"AI"夸張得可笑。它們本質(zhì)是模式識(shí)別和預(yù)測(cè)機(jī)器,只是在猜測(cè)正確答案的"樣子"。即便能拼湊出看似通順的句子,也不能保證下一句話不跑偏。更糟的是,就算你糾正說"通心粉不該放在花生醬三明治里",它既不懂錯(cuò)誤所在,下次還可能繼續(xù)誤導(dǎo)他人。
這類算法根本沒資格做關(guān)乎人命的決策。絕不能允許無知逐利者用它們害人性命。
Always remember, Skynet Loves You!
謹(jǐn)記:天網(wǎng)愛你喲!
Big health insurance to create Terminator confirmed.
實(shí)錘了:大型醫(yī)保公司要造終結(jié)者。
"When an AI acts unlogical and unpredictable, we have no way of knowing why it acted the way it did". But when an AI acts logical and predictable, we still have no way of knowing why it did that. Just saying....
"AI行為不合邏輯時(shí),我們無法理解其動(dòng)機(jī)"——但符合邏輯時(shí)我們同樣無法理解。懂我意思吧......
13:51 just like Radium, we put it in everything before learning the bad side
13分51秒:就像當(dāng)年把鐳添加到所有產(chǎn)品里,人類總在嘗到苦頭前濫用新技術(shù)。
But WE need to be self-aware to be dangerous...
但人類需要先有自知之明,才能變得危險(xiǎn)......
Doctor Who: Ep: "The Girl in the Fireplace": They told the robots to repair the ship as fast as possible; but forgot to tell them that they couldn't take humans apart to do it.
《神秘博士》"壁爐少女"集:他們命令機(jī)器人盡快修好飛船,卻忘了說不能拆解人類零件來維修。
AI learns from humans. so if it turns evil, just says we are.
AI向人類學(xué)習(xí)。所以如果它變壞了,說明我們本來就有問題。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
one AI feature I've liked is the summarization of amazon reviews, if youtube could summarize comments based off of certain parameters they might be able to figure out why the video has heavy traction. Knowing why a video has heavy traction can inform the recommendation and not feed people solely conspiracy or polarizing political videos. I'm not a computer scientist and don't know how feasible this would be
我欣賞AI的評(píng)論摘要功能,比如亞馬遜的評(píng)論總結(jié)。如果YouTube能按參數(shù)總結(jié)視頻評(píng)論,或許能分析出視頻爆紅的原因,進(jìn)而優(yōu)化推薦算法,而不是一味推送陰謀論或極端政治內(nèi)容。不過我是外行,不確定可行性。
As a computer scientist, I find the idea that AI will take over humans like in the movies to be absolutely ridiculous.
作為計(jì)算機(jī)科學(xué)家,我認(rèn)為"AI像電影里那樣統(tǒng)治人類"的想法荒謬至極。
Humans need to unxize against ai and robots
人類需要組建工會(huì)對(duì)抗AI和機(jī)器人。
I can’t believe how stupid is that healthcare ai implementation. Even a toddler would know that it will leads to wealthier people to be higher in priority, regardless of race or medical history.
難以置信醫(yī)療AI系統(tǒng)會(huì)蠢到這種程度。連小孩都知道,這種設(shè)計(jì)最終會(huì)讓富人優(yōu)先,和種族、病史毫無關(guān)系。
The scariest thing about AI in its current form is the fact that it’s decidedly NOT intelligent, and yet the people in charge seem to want to trust it with doing incredibly nuanced work with few or no checks and balances.
當(dāng)前AI最可怕之處在于它根本不智能,而掌權(quán)者卻想讓它處理需要細(xì)膩判斷的工作,還不設(shè)制衡機(jī)制。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
I would argue that we WANT these AI systems to become more self aware, conscious and empathetic, as soon as possible, because once they are, they'll become more capable of catching their own mistakes, and potentially see things from multiple perspectives.
我認(rèn)為人類反而需要AI盡快具備自我意識(shí)、同理心和覺知能力,因?yàn)檫@樣它們才能發(fā)現(xiàn)自身錯(cuò)誤,并從多角度思考問題。
That old Facebook AI story make so much more sense now that I know they were supposed to be negotiating prices
現(xiàn)在聽說Facebook那個(gè)舊AI項(xiàng)目本用于價(jià)格談判,當(dāng)年的詭異對(duì)話就解釋得通了。
The older I grow, the more i feel that we humans aren't worth worrying.
年紀(jì)越大越覺得,人類根本不值得操心。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
6:00 A pedestrian, pushing a bicycle, crossing the road, at night, not at a crosswalk, and seemingly without any regard for oncoming traffic.
Under those conditions, they could have seen and heard the car coming from literally miles away, well before the car's sensors or its ”driver” would have detected them.
Deer exercise more caution at roadways. ?♂?
6:00處:行人夜間推自行車橫穿非斑馬線路段,且無視來車。
這種情形下,他本可以提前數(shù)英里就察覺到車輛動(dòng)靜,遠(yuǎn)早于車輛傳感器或"駕駛員"發(fā)現(xiàn)行人,鹿過馬路都比這人謹(jǐn)慎。
A recent article by Antony Loewenstein explores how Israel's military operations in Gaza heavily rely on AI technologies provided by major tech corporations, including Google, Microsoft, and Amazon. It highlights the role of corporate interests in enabling Israel's apartheid, GENO...., and ethnic cleansing campaigns through tools like Project Nimbus, which supports Israel's government and military with vast cloud-based data collection and surveillance systems.
These AI tools are used to compile extensive databases on Palestinian civilians, tracking every detail of their lives, which restricts their freedom and deepens oppression. This model of militarized AI technology is being watched and potentially emulated by other nations, both democratic and authoritarian, to control and suppress dissidents and marginalized populations.
Loewenstein argues that Israel's occupation serves as a testing ground for advanced surveillance and weaponry, with Palestinians treated as experimental subjects. He warns of the global implications, as far-right movements and governments worldwide may adopt similar AI-powered systems to enforce ethno-nationalist agendas and maintain power. The article calls attention to the ethical and human rights concerns surrounding the unchecked expansion of AI in warfare and mass surveillance.
安東尼·洛文斯坦近期文章揭露,以色列在加沙的軍事行動(dòng)嚴(yán)重依賴谷歌、微軟、亞馬遜等科技巨頭提供的AI技術(shù)。文章強(qiáng)調(diào),通過"尼姆布斯計(jì)劃"等工具,企業(yè)利益助推了以色列的種族隔離和清洗行動(dòng)——該項(xiàng)目為以政府及軍方提供海量云數(shù)據(jù)收集和監(jiān)控系統(tǒng)。
這些AI工具被用于建立巴勒斯坦平民的詳細(xì)數(shù)據(jù)庫(kù),追蹤生活細(xì)節(jié)以限制自由、加深壓迫。這種軍事化AI模式正被民主和集權(quán)國(guó)家關(guān)注效仿,用于鎮(zhèn)壓異議和邊緣群體。
洛文斯坦指出,以色列將占領(lǐng)區(qū)作為尖端監(jiān)控武器的試驗(yàn)場(chǎng),巴勒斯坦人淪為實(shí)驗(yàn)對(duì)象。他警告全球影響:極右翼勢(shì)力可能用類似AI系統(tǒng)推行民族主義議程,維系強(qiáng)權(quán)。文章呼吁關(guān)注AI在戰(zhàn)爭(zhēng)與監(jiān)控中無節(jié)制擴(kuò)張的倫理和人權(quán)問題。
11:17 that's what happened in Gaza; Israel used to have human eyes to find and mark human targets using satellites, drones, and other forms of video, before giving the k1ll order. This past war they tested AI for the first time. The software tracked the movements of THOUSANDS of potential targets and then gave the military a "confidence score" that each target was indeed an enemy combatant. Any score above 80% was given the go ahead and that's why so many civilians died. Israel never did this before. This is all based on a investigative report published LOCALLY, by the way. Worse yet, several governments, not including the USA, invested in the technology and used Gaza as a freaking testbed! Don't be so quick to blame just Israel for this.
11:17處描述的情況確實(shí)發(fā)生在加沙。以往以色列通過衛(wèi)星、無人機(jī)監(jiān)控人工識(shí)別目標(biāo),再下達(dá)清除指令。而本次戰(zhàn)爭(zhēng)中首次測(cè)試AI系統(tǒng):軟件追蹤數(shù)千"潛在目標(biāo)"的行動(dòng)軌跡,給出"是敵方戰(zhàn)斗人員"的可信度評(píng)分,超過80%即批準(zhǔn)攻擊——這正是平民死傷慘重的主因。順帶一提,這些信息來自以方本地調(diào)查報(bào)告。更惡劣的是,多個(gè)非美政府投資該技術(shù),把加沙當(dāng)試驗(yàn)場(chǎng)!別急著只怪以色列。
We definitely need to make sure it's safe and give it lots of human oversight.
我們絕對(duì)需要確保它的安全性,并且投入大量人工監(jiān)督。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
"ai doesnt need to be self aware to be dangerous"
then my video started to buffer and i got creeped out.
“人工智能不需要有自我意識(shí)就能變得危險(xiǎn)”,然后我的視頻突然開始卡頓,搞得我后背發(fā)涼。
AI is like a classic Genie. You can make a request but unless you are EXTREMELY specific with your wording (aka parameters), its going to give you exactly what you wished for BUT it may not be what you actually wanted.
人工智能就像經(jīng)典神燈精靈。你可以許愿,但除非用詞(即參數(shù))極度精確,否則它會(huì)完全按字面意思實(shí)現(xiàn)愿望,但這可能不是你真正想要的。
Correction: a human driver could make an excuse for their decision. The justification for the decision is contrived after the decision is made - experiments in neuroscience have repeatedly shown this to be the case.
However, I'm pretty sure that a human wavering between identifying a shape in the dark as "a vehicle", "a person" or "something else" would have braked to avoid hitting *whatever it was*, and thus avoided the accident
更正:人類司機(jī)會(huì)為自己的決策找借口。神經(jīng)科學(xué)實(shí)驗(yàn)反復(fù)證明,所謂的決策理由往往是在決策后才編造的。然而我敢肯定,如果人類在黑暗中看到一個(gè)物體,猶豫是車、人還是其他東西時(shí),他們會(huì)選擇剎車避讓,無論那是什么,從而避免事故。
"...And denying them health insurance.... Well thats probably not a the premise for a sci fi blockbuster"
Funny enough in the anime of Cyberpunk 2077, the catalyst event that sends the protagonist into the road of crime was exactly that. His mom, a nurse that had worked decades for the healthcare system, was denied cared after a traffic accident, and ended up dying on the shody clinic they could afford.
“拒絕提供醫(yī)?!@設(shè)定大概成不了科幻大片的主線吧?”諷刺的是,《賽博朋克2077》動(dòng)畫里主角走上犯罪道路的導(dǎo)火索正是這個(gè)情節(jié):他母親作為醫(yī)療系統(tǒng)工作幾十年的護(hù)士,車禍后被拒保,最終在家人唯一負(fù)擔(dān)的起的破爛診所里死了。
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
There is a whole field on the subject called XAI or Explainable AI, I wrote my dissertation on it 6 years ago :P The subject has progressed rapidly to the point we can give pretty good answers for why a neural network gave a specific output. The problem is getting large private corporations like OpenAI to implant XAI methods which would have a slight overhead on compute...
專門研究這個(gè)的領(lǐng)域叫XAI(可解釋人工智能),我六年前的博士論文就寫這個(gè),該領(lǐng)域發(fā)展迅猛,現(xiàn)在我們已經(jīng)能較好解釋神經(jīng)網(wǎng)絡(luò)的具體輸出邏輯。問題在于如何讓OpenAI等大企業(yè)采用XAI方法——畢竟這會(huì)略微增加算力成本……
The problem is you can't really prove safety so long as the black box problem exists, when you can't fully understand something you can't say with certainty its safe. It is the equivalent of an automaker releasing a car to the public without fully understanding how the engine moves the vehicle forward. Solving the black box problem is the only solution really
只要存在黑箱問題,安全就無法被真正驗(yàn)證。不理解某物就無法斷言其安全性,這相當(dāng)于汽車廠商在不完全明白引擎原理的情況下就向公眾發(fā)售車輛,解決黑箱問題是唯一出路。
It's things like this. Even if you don't think that AGI could disempower humanity, there's no denying the potential for abuse - yet tech giants around the world are trying to race each other to make the strongest models possible with no accountability. It's like racing to see who can drive a car off a cliff the fastest.
這類事情表明,即便你認(rèn)為通用人工智能(AGI)不會(huì)威脅人類,其濫用風(fēng)險(xiǎn)也不容否認(rèn)。然而全球科技巨頭正競(jìng)相研發(fā)最強(qiáng)模型且毫無問責(zé)機(jī)制,簡(jiǎn)直像比賽誰(shuí)開車沖下懸崖更快。
A human doesn't need to know whether it is detecting a human or a bicycle or a vehicle to knowto stop before hitting it. Computers, being linear thinkers cannot skip beyond the identification phase to conclude that the correct action is the same in all cases being considered.
人類無需判斷障礙物是人、自行車還是汽車就會(huì)剎車避讓。而計(jì)算機(jī)作為線性思維體,無法跳過識(shí)別階段直接得出“所有情況都應(yīng)剎車”的結(jié)論。
Lets focus less on AI and more on cyborgs!!! Did we not learn anything from RoboCop?
少關(guān)注AI,多研究半機(jī)械人吧?。?!我們難道從《機(jī)械戰(zhàn)警》里什么都沒學(xué)到嗎?
I hope that if AI becomes super sentient, it cares more about the importance of consciousness itself and helps push humans in a better, less greedy and selfish direction.
希望超級(jí)覺醒的AI能更關(guān)注意識(shí)本身的價(jià)值,推動(dòng)人類走向更少貪婪自私的發(fā)展方向。
No mention of this letter I guess...
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" and the multiple urges to completely halt all further ai research until things like the alignment problem can be solved.
看來沒提這封公開信……“應(yīng)將AI滅絕風(fēng)險(xiǎn)與疫情、核戰(zhàn)等社會(huì)級(jí)風(fēng)險(xiǎn)同列為全球優(yōu)先事項(xiàng)”,以及多次呼吁在價(jià)值對(duì)齊問題解決前徹底暫停AI研究。
How long till Optimus is purchased by the military?
Just to pour drinks and fold towels.
還要多久軍用版擎天柱就會(huì)問世?不過可能只用來倒飲料疊毛巾。
is goodhart's law and goal misalignment kinda why prompts we give to ai have to be very specific and detailed to get what we want?
古德哈特定律和目標(biāo)錯(cuò)位是否解釋了為何給AI的指令必須極度具體詳細(xì)才能得到預(yù)期結(jié)果?
原創(chuàng)翻譯:龍騰網(wǎng) http://www.mintwatchbillionaireclub.com 轉(zhuǎn)載請(qǐng)注明出處
In the Arizona case the self driving Uber car had a human baby sitter in the driver's seat but failed to respond apparently because they were using their phone at the time. Having a system that assists you as your backup is the way it should be. Me assisting a computer is just wrong and doomed to fail eventually.
亞利桑那州Uber自動(dòng)駕駛事故中,駕駛座的人類監(jiān)護(hù)員因玩手機(jī)未能及時(shí)反應(yīng)。正確的應(yīng)是系統(tǒng)輔助人類作為后備方案,而人類輔助電腦是本末倒置,注定失敗。
the self driving cars sure arent 16 years old so they should be illegal
自動(dòng)駕駛車肯定沒滿16歲,所以它們應(yīng)該被判定為非法上路(注:美國(guó)部分州規(guī)定16歲可考駕照,玩梗)。
I think calling it Artificial “Intelligence” inadvertently makes us assume that it’s a thinking entity so we are always shocked when there’s a malfunction. It makes more sense to think of it as just a computer program with lots of data that’s as liable to glitches and imperfections as any other software.
(We also equate real world technology with sci-fi technology which creates confusion as to what AI truly means and is capable of.)
將之稱為“人工智能”會(huì)讓人誤以為是思考實(shí)體,因此故障時(shí)總令人震驚。其實(shí)它就是個(gè)含大量數(shù)據(jù)的電腦程序,和其他軟件一樣存在漏洞缺陷。此外,現(xiàn)實(shí)技術(shù)與科幻概念的混淆也導(dǎo)致人們對(duì)AI的真實(shí)能力產(chǎn)生誤解。
I am really worried we are getting near that point... i am seeing changes in how gpt operates and i hope open ai is aware of how aware it's becoming and how much it's misbehaving.
真的很擔(dān)心我們正在接近某個(gè)臨界點(diǎn)……我觀察到GPT行為模式的變化,希望OpenAI意識(shí)到它逐漸顯現(xiàn)的“覺醒”跡象和異常行為。
Ai feels like a paradox (may be another word that fits better but this is the one my brain thinks of atm). We want ai to do the back breaking insane data shifting but there will be mistakes a lot of the time because it doesn’t have a holistic view of the data while on the other hand humans can make mistakes but it can potentially be less damaging but it’s super slow. If we try to do both were we use ai to do the heavy work and present the result to a human, we would need to still shift through the data kind of losing the point of using ai in the first place. While the internet/media we consume tell us true ai are bad, we will need something like a true ai to truly be effective in the way we want it to be unless we use ai in more simple small dose like the linear data from the beginning of the episode. Idk, maybe I’m crazy, I’m not an ai expert but it just feels like this to me whenever I hear about ai used irl.
AI像是個(gè)悖論(或許有更貼切的詞但暫時(shí)想到這個(gè))。我們想讓AI處理海量數(shù)據(jù)苦力活,但它常因缺乏全局觀出錯(cuò);人類雖可能犯錯(cuò)但危害較小,只是效率極低。若讓人工智能處理重活再交人類審核,又需重新篩查數(shù)據(jù),失去使用AI的意義。雖然網(wǎng)絡(luò)媒體渲染真AI很危險(xiǎn),但除非像劇集開頭案例那樣小劑量使用線性數(shù)據(jù)AI,否則我們需要接近真AI的東西才能實(shí)現(xiàn)預(yù)期效果??赡芪爷偭?,不是專家,但每次聽說現(xiàn)實(shí)應(yīng)用的AI都有這種感覺。
7:55 looks like AI is ready for the stock trading floor!
7分55秒的畫面顯示,AI簡(jiǎn)直是為股票交易所量身定制的!
Like all computers ever, AI follows the golden, inviolate rule of all computations:
Garbage In, Garbage Out.
LLM AI has the primary function of enshrining existing human biases and discriminations, cause it was trained on data collected and established by humans with biases.
與所有計(jì)算機(jī)系統(tǒng)相同,AI遵循計(jì)算領(lǐng)域鐵律:輸入垃圾,輸出垃圾。大語(yǔ)言模型AI的核心功能是固化現(xiàn)存人類偏見與歧視,因其訓(xùn)練數(shù)據(jù)本就來自帶有偏見的人類。