Ray Kurzweil hits back at calls to pause AI research

Ray Kurzweil, a prominent futurist and director of engineering at Google, issued a rebuttal to the open letter calling for a pause in AI research, offering reasons why the proposal is impractical and would deprive humanity of medical advances and innovations that benefit it profoundly.

International letter to stop the development of AI

An open letter signed by scientists and celebrities from around the world (published on FutureOfLife.org) called for a complete pause in the development of AI more powerful than GPT-4, the latest version created by OpenAI.

In addition to halting AI development, they also called for the development of security protocols overseen by independent third-party experts.

Some of the points made by the authors of the open letter:

AI poses a profound risk, and AI development should only proceed once the beneficial applications of the technology are listed and justified.

AI development should only proceed if "we" (the thousands of signatories to the letter) are confident that the risks of AI are manageable.

AI developers are called upon to work together with policymakers to develop AI governance systems consisting of regulatory bodies.

Watermarking technologies should be developed to help identify content created with AI and to control the spread of the technology.

A system is needed for allocating liability for harms caused by AI.

Institutions should be built to deal with disruptions caused by AI technology.

The letter seems to come from the point of view that AI technology is centralized and can be stopped by the few organizations that control the technology. But AI is not exclusively in the hands of governments, research institutes and corporations.

AI is now an open source, decentralized technology, developed by thousands of individuals on a global collaborative scale.

Ray Kurzweil: Futurist, author and director of engineering at Google

Ray Kurzweil has been designing software and machines focused on artificial intelligence since the 1960s, has written many popular books on the subject, and is famous for making predictions about the future that are usually correct.

Of the 147 predictions he made about life in 2009, only three, about 2%, were wrong.

Among his predictions in the 1990s was that many physical media, such as books, would lose popularity as they became digitized. At a time in the 1990s when computers were big and bulky, he predicted that computers would be small enough to wear by 2009, which turned out to be true (How My Predictions Are Faring – 2010 PDF).

Ray Kurzweil’s recent predictions focus on all the good that AI will bring, especially medical and scientific advances.

Kurzweil also focuses on the ethics of AI.

In 2017 he was one of the participants (along with OpenAI CEO Sam Altman) who produced an open letter known as the Asilomar AI Principles, also published on the Future of Life website, which set out guidelines for the safe and ethical development of artificial intelligence technologies.

Among the principles he helped create:

“The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.

Investments in AI should be accompanied by funding for research on ensuring its beneficial use.

There should be a constructive and healthy exchange between AI researchers and policymakers.

Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.

Superintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity rather than one state or organization.”

Kurzweil’s response to the open letter calling for a pause in AI development comes from a lifetime of building innovative technology and seeing the good it can do for humanity and the natural world.

His response focused on three main points:

The call for a pause is too vague to be practical.

All nations must agree to the pause, or its objectives are defeated from the start.

A pause in development ignores benefits like identifying cures for diseases.

Too vague to be practical

His first point is that the letter is too vague, because it calls for a pause on AI "more powerful than GPT-4," which assumes that GPT-4 is the only type of AI.

Kurzweil wrote:

“Regarding the open letter to ‘pause’ research on AI ‘more powerful than GPT-4’, this criterion is too vague to be practical”.

Nations will opt out of the pause

His second point is that the demands outlined in the letter can only work if all researchers around the world cooperate voluntarily.

Any nation that refuses to sign on would gain the upper hand, which is probably what would happen.

He writes:

“And the proposal faces a serious coordination problem: Those who agree to a pause may fall far behind corporations or nations that disagree.”

This point makes it clear that the goal of a complete pause is not feasible, because nations will not cede any advantage, and furthermore, AI is democratized and open source, in the hands of individuals all over the world.

AI brings significant benefits to humanity

There have been editorials dismissing AI as having very little benefit to society, arguing that increasing worker productivity isn’t enough to justify the risks its critics fear.

Kurzweil’s final point is that the open letter calling for a pause in AI development completely ignores all the good that AI can do.

He explains:

“There are enormous benefits to advancing AI in critical fields such as medicine and health, education, the search for renewable energy sources to replace fossil fuels, and many other fields.

… more nuance is needed if we are to unlock the profound benefits of AI for health and productivity, while avoiding the real dangers.”

Dangers, fear of the unknown and benefits to humanity

Kurzweil highlights how AI can benefit society. His point that there is no way to pause AI is a solid one.

His explanation of AI emphasizes the profound benefits to humanity that are inherent in AI.

Could implementing OpenAI’s AI as a chatbot trivialize AI and overshadow the benefit to humanity, while scaring away people who don’t understand how generative AI works?

Featured image by Shutterstock/Iurii Motov
