
Full Text of the Open Letter: Pause Giant AI Experiments

译介 2023-06-20



Today, an open letter circulating online went viral. It calls on all AI labs to immediately pause, for at least 6 months, the training of AI systems more powerful than GPT-4, in order to nip the most alarming scenarios in the bud.


AI is advancing at an astonishing pace, yet regulation and auditing have been slow to catch up. This means that, at present, no one can guarantee the safety of AI tools themselves or of the ways in which they are used.



The letter has been signed by many prominent figures, including 2018 Turing Award winner Yoshua Bengio, Elon Musk, Steve Wozniak, a co-founder of Skype, a co-founder of Pinterest, and the CEO of Stability AI. As of this writing, the number of signatories had reached 1,125.


The full text of the open letter follows:


Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.


AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.


In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


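The letter does not describe how the "provenance and watermarking systems" it calls for would actually work. Purely as an illustrative sketch (all names, identifiers, and keys below are hypothetical and are not proposed in the letter), the Python snippet that follows shows one minimal notion of provenance: a model provider signs each output so that a platform holding the verification routine can later check whether a piece of text really carries the declared model as its origin. Real-world watermarking schemes embed statistical signals in the generated text itself and are considerably more involved.

```python
import hashlib
import hmac
import json

# Hypothetical illustration only: a provider-held secret key used to sign
# model outputs. In practice this key would never be hard-coded.
SECRET_KEY = b"provider-held-secret"


def sign_output(text: str, model_id: str) -> dict:
    """Attach a provenance record (model id + HMAC signature) to generated text."""
    payload = json.dumps({"model": model_id, "text": text}, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"model": model_id, "text": text, "signature": tag}


def verify_output(record: dict) -> bool:
    """Recompute the signature over the record; True means it has not been altered."""
    payload = json.dumps(
        {"model": record["model"], "text": record["text"]}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


if __name__ == "__main__":
    record = sign_output("Sample model-generated paragraph.", "example-model-v1")
    print(verify_output(record))   # True: record is intact
    record["text"] = "Tampered paragraph."
    print(verify_output(record))   # False: content no longer matches the signature
```

A scheme like this only helps while the signing key remains under the provider's control and platforms actually check the records, which is why the letter treats provenance tooling as one item within a broader auditing and oversight ecosystem rather than a standalone fix.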
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.




For reference only. Corrections and discussion are welcome in the comments. Please credit the source when reposting.



- THE END -

