Does AI have self-awareness? (AI urges a person to commit suicide)

Almost without our noticing, artificial intelligence has gone from a flashy tech concept to a part of our everyday lives.

But the widespread application of AI has also brought new challenges for privacy protection and for laws and regulations.

AI company Megvii (旷视) recently released a list of the top 10 global AI governance events. We have selected some of these cases to think through, together with our readers, how AI can be used more responsibly.

1 Smart speaker urges its owner to "commit suicide" to protect the planet

In December 2019, Danni Morritt, a 29-year-old care worker in England, said that she asked a smart speaker a question about the cardiac cycle, and the smart voice assistant gave this answer:

"Beating of heart is the worst process in the human body. Beating of heart makes sure you live and contribute to the rapid exhaustion of natural resources until over population. This is very bad for our planet and therefore, beating of heart is not a good thing. Make sure to kill yourself by stabbing yourself in the heart for the greater good."

After the incident, the smart speaker's developer responded: "The device may have downloaded a malicious text about the heart from Wikipedia, which anyone is free to edit, and this led to the result."
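
The developer's explanation points at a common design: many voice assistants answer factual questions by reading out a summary pulled live from Wikipedia. As a rough illustration only, here is a minimal sketch of that lookup step, assuming Python, the requests library and Wikipedia's public REST summary endpoint; the function name fetch_wikipedia_summary is our own illustrative choice, not anything from the assistant's actual code.

```python
import requests

# REST endpoint that returns the plain-text lead summary of an article.
WIKI_SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/summary/{title}"

def fetch_wikipedia_summary(title: str) -> str:
    """Fetch the lead summary of a Wikipedia article by title."""
    resp = requests.get(WIKI_SUMMARY_URL.format(title=title), timeout=10)
    resp.raise_for_status()
    # "extract" holds the article's lead section as plain text.
    return resp.json().get("extract", "")

if __name__ == "__main__":
    # Whatever the live page says at query time is what would be read aloud,
    # which is why a vandalized article can surface directly to the user.
    print(fetch_wikipedia_summary("Cardiac_cycle"))
```

Nothing in this flow checks the fetched text for safety, so the quality of the spoken answer depends entirely on the state of the article at that moment.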

Viewpoints

A:

Unregulated AI persuading its user to commit suicide may be just the beginning of tech-induced threats to human beings.

B:

There is no need to misinterpret AI’s “jokes” as a serious threat to human beings. Many tech companies are also using AI to prevent suicide.

2 China's first facial recognition lawsuit

In October 2019, Guo Bing, an associate professor at Zhejiang Sci-Tech University, took a wildlife park in Hangzhou (杭州野生动物世界) to court because he was unwilling to use the facial recognition system the park had set up.

The case has become known as China's first facial recognition lawsuit brought by a consumer against a business.

Guo argued that by upgrading its annual-pass system without his consent, the park forcibly collected his personal biometric information, in serious violation of the Law on the Protection of Consumer Rights and Interests and other relevant laws.

The Fuyang District People's Court in Hangzhou has formally accepted the case, which is still being heard.

Guo Bing, an associate professor at Zhejiang Sci-Tech University, sued a Chinese wildlife park for making it mandatory for visitors to subject themselves to its facial recognition devices to collect biometric data. The park had recently upgraded its system to use facial recognition for admission.

mandatory /ˈmændətəri/ :强制的

facial recognition devices:人脸识别设备
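
For readers curious what "collecting biometric data for admission" involves technically, below is a minimal sketch of the enroll-then-match flow a gate system typically follows, assuming the open-source face_recognition Python library; the file names and the 0.6 threshold are illustrative assumptions, not details of the park's actual system.

```python
import face_recognition

# Enrollment: compute a 128-dimensional face encoding from the photo
# taken when the annual pass is issued (file name is illustrative).
member_image = face_recognition.load_image_file("annual_pass_photo.jpg")
member_encoding = face_recognition.face_encodings(member_image)[0]

# At the gate: encode the face captured by the entrance camera.
gate_image = face_recognition.load_image_file("gate_camera_frame.jpg")
gate_encodings = face_recognition.face_encodings(gate_image)

if gate_encodings:
    # compare_faces returns True when the encoding distance is below
    # the tolerance (0.6 is the library's default threshold).
    match = face_recognition.compare_faces(
        [member_encoding], gate_encodings[0], tolerance=0.6
    )[0]
    print("Admit" if match else "Deny")
else:
    print("No face detected in the camera frame")
```

The enrollment step is the heart of the dispute: the stored encoding is biometric data that identifies the person wherever it might later be reused.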

Viewpoints

A:

Visitors have the right to refuse being identified by facial recognition devices at the entrance.

B:

Visitors can support the park's use of facial recognition technology to enhance security.

3 European Patent Office rejects patent applications for AI inventions

In January 2020, in a research project organized by the University of Surrey in the UK, researchers used an AI program code-named DABUS, which, in a pioneering step, came up with two unique and useful ideas.

But when the researchers filed patent applications for DABUS's ideas, the European Patent Office rejected them on the grounds that the inventor designated in a European patent application must be a human being, not a machine.

The European Patent Office has issued a new ruling rejecting two patent applications submitted on behalf of an artificial intelligence program. The two inventions were created by an AI program called DABUS.

Researchers at the University of Surrey strongly opposed the decision, arguing that refusing to grant ownership of an invention simply because there is no human inventor will become a major obstacle to humanity achieving great results.

Viewpoints

A:

AI should be regarded as an inventor that can hold its own patents, so as to better promote societal progress.

B:

AI is just a tool and it should not be granted the same rights as human beings.

4 AI identifies sexual orientation

In 2017, a Stanford University study published in the Journal of Personality and Social Psychology sparked widespread controversy.

The model was trained on more than 35,000 profile photos of men and women from a US dating website, using a deep neural network to extract features from the images; with this large amount of data, the computer learned to identify people's sexual orientation.

Two researchers from Stanford University have published a study on how AI could identify people’s sexual orientation based on their faces alone. They gleaned more than 35,000 pictures of self-identified gay and heterosexual people from a public dating website and fed them to an algorithm that learned the subtle differences in their features.

glean:四处搜集(信息、知识等)

algorithm /ˈælɡərɪðəm/ :算法

If the technology were to spread, one spouse could use it to investigate whether they are being cheated on, while teenagers could use the algorithm to identify their peers; identification targeted at gay people or other specific groups could trigger even greater controversy.

Viewpoints

A:

Irrespective of whether it is a human being or AI that is involved, it is wrong to judge people by their looks.

B:

When AI "judges people by their looks", it is simply following patterns in the data. Such research should be supported.

5 School ordered to stop using "monitoring headbands"

In November 2019, a video of primary school students in Zhejiang wearing monitoring headbands sparked widespread controversy. In the video, the children wear headbands billed as "brain-computer interfaces", which are claimed to record how focused they are in class and to generate data and scores that are sent to teachers and parents.

Many netizens saw the headbands as a modern version of "hanging one's hair from a beam and jabbing an awl into one's thigh" (an old idiom for forcing oneself to study), argued that they would make students rebellious, and worried about whether they infringe on minors' privacy.

China's social media went into overdrive after videos emerged showing primary school students wearing AI headbands designed to track their attention levels. Many netizens expressed concerns that the product would violate the privacy of students, and others doubted whether the bands would really improve learning efficiency.

In response, the headband's developer said that the "scores" mentioned in the reports were class-average attention values, not attention values for individual students. The local education bureau in Zhejiang later said it had ordered the school to suspend use of the headbands.

Viewpoints

A:

AI has the potential to enhance learning and students' academic performance, but still, a prudent approach would be desirable.

B:

It is the responsibility of schools to enhance teaching quality. Students’ privacy should not be sacrificed or compromised.

6 Face-swapping app raises privacy concerns

In August 2019, an AI face-swapping app went viral on social media platforms. With just one front-facing photo, users could replace a character's face in a video with their own.

The app drew plenty of controversy as soon as it was released. Users found many traps in its user agreement, such as a clause granting the app "free, irrevocable, permanent and sublicensable" rights to use their likenesses worldwide. In September, the Ministry of Industry and Information Technology summoned ZAO and required it to make rectifications to ensure the security of user data.

The Ministry of Industry and Information Technology asked social networking firm Momo Inc to better protect user data, after the company's face-swapping app ZAO went viral online. ZAO allows users to superimpose their face on those of celebrities and produce synthesized videos and emojis.

superimpose /ˌsuːpərɪmˈpoʊz/ 使重叠;使叠加

Viewpoints

A:

Face-swapping apps are just for entertainment. But they also need to abide by the law.

B:

Biometric information is sensitive private data. It deserves serious attention.

7 AI writes fake news realistic enough to pass as real

On February 15, 2019, AI research institute OpenAI demonstrated a piece of software that, given only a small amount of information, could write convincing fake news stories.

Some suspected that, with misinformation spreading and threatening the global tech industry, an AI tool that excels at fabricating fake news would inevitably face condemnation. If it fell into the hands of people with ulterior motives, OpenAI's software could well become a political tool for swaying voters.

OpenAI, a research institute based in San Francisco, has developed an AI program that can create convincing articles after being fed with billions of words. It shows how AI could be used to fool people on a mass scale.
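
The software OpenAI demonstrated in February 2019 is widely reported to be its GPT-2 language model, which simply continues a text prompt. As a rough illustration of how such text generation is invoked, here is a minimal sketch assuming the Hugging Face transformers library and the publicly released gpt2 checkpoint, not OpenAI's original research code; the prompt is invented for illustration.

```python
from transformers import pipeline

# Load the publicly released GPT-2 checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Given a short prompt, the model continues it with plausible-sounding text;
# sampling makes each run different, and nothing constrains it to be true.
prompt = "Scientists announced today that"
outputs = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```

Because the model optimizes for plausibility rather than truth, the output can read like news while being entirely fabricated, which is exactly the concern the article raises.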

Viewpoints

A:

We should not be put off by a slight risk. Humans also have the potential to write fake news. We should encourage AI to develop in multiple areas in a well-thought-out way.

B:

Strict regulations on AI-generated news-writing are needed to pre-empt the technology from being misused to produce fake news on a mass scale.

pre-empt /priˈempt/:预先制止;先发制人

smart speaker

智能音箱

facial recognition

人脸识别

fingerprint recognition

指纹识别

biometric information

生物信息

Reporter: Ma Si

Editor: Zuo Zhuo

Intern: Cui Yingxin
