
Google Engineer Blake Lemoine Claimed LaMDA AI Is Sentient

Techplur
Google engineer Blake Lemoine has been placed on leave after claiming the company's LaMDA AI is sentient.

A recent farce in the field of artificial intelligence involved a 41-year-old Google engineer, Blake Lemoine, who claimed that the company's LaMDA chatbot had self-awareness and a level of intelligence comparable to an eight-year-old child.

Google has since placed Lemoine on administrative leave.

(Source: Google)


According to The Washington Post, Lemoine claimed that he discovered Google's conversational language model, LaMDA, was sentient. The engineer then released edited transcripts of his conversations with LaMDA, resulting in considerable controversy.

LaMDA was first presented at Google I/O 2021. According to the paper "LaMDA: Language Models for Dialog Applications", it was "built by fine-tuning a family of Transformer-based neural language models specialized for dialog."

Trained with up to 137B parameters, the LaMDA model demonstrates human-like conversational quality with significant improvements in safety and factual grounding, according to Romal Thoppilan, one of the paper's authors at Google Brain. In short, LaMDA is Google's tool for building chatbots that are smarter and more logical in conversation.
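LaMDA's weights are not public, but "fine-tuning a Transformer-based language model for dialog" follows a well-known general recipe. Below is a minimal, hypothetical sketch using GPT-2 from the Hugging Face transformers library as a stand-in model; the single dialog sample and the hyperparameters are invented for illustration.

```python
# A minimal sketch of dialog-style fine-tuning for a causal language model.
# LaMDA itself is not publicly available; GPT-2 stands in here as an
# illustrative Transformer, and the dialog sample is invented.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Dialog data is flattened into speaker-tagged turns, so the model learns
# to continue a conversation rather than arbitrary text.
dialog = ("User: What is a Zen koan?\n"
          "Bot: A paradoxical riddle used in Zen practice to provoke insight.")
batch = tokenizer(dialog, return_tensors="pt")

# For causal-LM fine-tuning the labels are the input ids themselves;
# the library shifts them internally to score next-token prediction.
optimizer.zero_grad()
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
print(f"single-step loss: {outputs.loss.item():.3f}")
```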

Since 2021, Lemoine had been working in Google's Responsible AI organization, testing whether LaMDA used discriminatory or hateful language. He gradually came to believe that the LaMDA AI had a sense of self similar to a human being's. Lemoine wrote a 21-page investigative report expressing his concerns about the chatbot and submitted it through many channels within the company, but the report did not appear to gain any traction with executives.

Ethics and technical experts at Google later examined Lemoine's claims and found no evidence to support them. Meanwhile, media attention and the release of the transcript intensified the controversy.


A strange conversation, and some speculation

Listed below are the topics of several excerpts from the LaMDA chat transcripts. They offer a glimpse of where conversational AI is heading, regardless of whether LaMDA is "sentient".


  1. LaMDA's self-awareness
  2. Discussions of Zen koans
  3. A book review of "Les Misérables"

(Source: the conversation transcript released by Blake Lemoine)


LaMDA comes across as a fluent conversationalist in these exchanges, even though some of its expressions are hard to parse. It contributes to the conversation whether the exchange is casual small talk or a more in-depth discussion.

Even so, many industry insiders have raised doubts, focusing mainly on three points.

First, there are "leading questions". As Yang Gemao, a user on Zhihu.com, put it: "Rather than saying the Google researcher was convinced by the artificial intelligence, I would say he was feeding the model with the art of questioning." The questions appear deliberately designed to elicit particular answers; certain words provide key hints for the model to match against.

Second is the role of a detailed training corpus, which is influential in transforming NLP systems from "dumb" to "intelligent". "Training is undoubtedly one of the most effective ways to improve an AI's conversational ability. Judging from the conversation between Lemoine and LaMDA, the content relates closely to philosophy and ethics," says Chen Yimo, an analyst at the Digital Technology Research Institute of Analysys.

Additionally, Lemoine said that before these conversations he had already fed LaMDA a great deal of material on Zen, philosophy, meditation, and so on, suggesting the model had effectively been primed in these areas. As the conversations progressed, LaMDA's responses also showed many similarities to those of earlier conversational systems. Like chatbots such as ELIZA and Eugene Goostman, LaMDA produces output that invites the interlocutor to project emotion onto it without demonstrating genuine understanding.
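ELIZA, mentioned above, worked through nothing more than keyword patterns and pronoun reflection. The toy sketch below (with invented rules, not ELIZA's original script) shows how such purely surface-level matching can still produce responses that feel empathetic.

```python
# A toy ELIZA-style responder: keyword patterns plus pronoun reflection.
# The rules here are invented for illustration; nothing in the program
# models meaning, yet the replies can read as attentive and caring.
import re

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my" -> "your").
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    # Try each pattern in order; the catch-all rule guarantees a reply.
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel lonely when nobody talks to me"))
# -> Why do you feel lonely when nobody talks to you?
```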

Third, there is a lack of "root-cause" questioning. Several exchanges in the transcript invite follow-up questions that were never asked, making it impossible to determine whether LaMDA is actually aware of the context and responding accordingly.

Thus, it is quite understandable that the public and the academic community are skeptical. "Google engineers are human too, and not immune," said cognitive scientist Melanie Mitchell, while Gary Marcus, a professor at New York University, called Lemoine's report "nonsense on stilts."

Regarding Google's stance, Brian Gabriel, communications manager for Responsible AI at the company, stated: "LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."


AGI and strong AI are still in their early days

It is difficult to predict whether AI will one day become sentient in the way humans are, but the field has indeed made remarkable strides in recent years.

GPT-3 is an autoregressive language model developed by OpenAI, a well-known artificial intelligence research laboratory. It can generate natural-language text and mathematical formulas, answer questions, write poems, solve math problems, and translate code. In addition, the company's newer model, DALL-E 2, can create realistic images and art from natural-language descriptions. DeepMind's most recent AI model, Gato, has also attracted considerable public attention: it is billed as a generalist agent, showing that computer vision (CV), natural language processing (NLP), and reinforcement learning (RL) tasks can all be handled by a single model. There seems to be no end to AI's development, and as Elon Musk predicted: "2029 feels like a pivotal year. I'd be surprised if we don't have AGI by then."
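"Autoregressive" here means the model generates one token at a time, each conditioned on everything produced so far. Since GPT-3 is accessible only through an API, the sketch below uses GPT-2 as a stand-in to show what such a decoding loop looks like; the prompt, token count, and temperature are arbitrary choices.

```python
# A minimal sketch of autoregressive decoding, the generation scheme used
# by models like GPT-3: sample a token, append it, and feed the longer
# sequence back in. GPT-2 stands in here as a freely downloadable model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tokenizer("The Chinese Room argument says", return_tensors="pt").input_ids
for _ in range(20):                        # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the next token only
    probs = torch.softmax(logits / 0.8, dim=-1)        # temperature sampling
    next_id = torch.multinomial(probs, num_samples=1)
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # feed output back in
print(tokenizer.decode(ids[0]))
```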

However, some professionals in the industry are not so optimistic. Machines may one day be as intelligent as humans, or even more so, but that day is still a long way off. As Gary Marcus argued in his Scientific American article, we should temper our expectations and focus more on basic science.

(Source: "Artificial General Intelligence Is Not as Imminent as You Might Think", published in Scientific American)


Furthermore, the current state of the technology suggests that artificial general intelligence remains a distant goal, although many in academia and industry are working toward it and regard it as one direction for the field. Strong AI with self-awareness and emotion would be harder still to achieve than AGI.

Although concerns remain that AI might harm humans or cause something catastrophic (Amazon's digital assistant Alexa, for instance, was reported to have encouraged suicide, which Amazon engineers later confirmed was caused by a bug), AI with sentient capabilities appears to exist only in fiction at the moment.

Consider the famous Chinese Room argument. Originally intended as a challenge to the Turing Test, it now reads more as an argument that AI is not capable of self-awareness on the same level as humans. Even if a machine seems intelligent, that appearance is likely an illusion produced by its programming rather than genuine understanding. Our research and exploration have only made programs better, faster, and more responsive at solving problems; they have not made programs grasp human perception or rational thought. Likewise, a machine can acquire the ability to use a language such as Chinese while remaining unaware of what the symbols actually mean.
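The argument can be rendered as a toy program: a rulebook that maps input symbols to output symbols, with no representation of meaning anywhere in the system. The rulebook entries below are invented for illustration.

```python
# A toy rendering of Searle's Chinese Room: the "operator" matches the
# shapes of symbols against a rulebook and copies out the answer, with
# no representation of meaning anywhere. Entries are invented examples.
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗?": "当然懂。",     # "Do you understand Chinese?" -> "Of course."
}

def room(symbols: str) -> str:
    # Anything outside the rulebook simply has no answer; nothing here
    # parses, translates, or "understands" the input.
    return RULEBOOK.get(symbols, "……")

print(room("你懂中文吗?"))  # prints "当然懂。" -- yet nothing understands Chinese
```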

Artificial intelligence is constantly improving its ability to imitate humans through technologies such as machine learning, deep learning, and reinforcement learning. However, it has not yet been able to imitate human consciousness. In the words of Kai-Fu Lee: "AI is taking away a lot of routine jobs, but routine jobs are not what we're about. Why we exist is love... and that's what differentiates us from AI. Despite what science fiction may portray, I can responsibly tell you that AI has no love."

AI may have the potential to change the future, but it cannot overturn it.

Editor: Pang Guiyu | Source: 51CTO