Google released an AI Policy Agenda document outlining a vision for the responsible deployment of AI and suggestions for how governments should regulate and encourage the industry.
Google AI Policy Agenda
Google announced the publication of an AI policy agenda with suggestions for responsible AI development and regulation.
The paper notes that government AI policies are being developed independently around the world and calls for a cohesive AI agenda that strikes a balance between protecting against harmful outcomes and not holding back innovation.
Google writes:
“Getting AI innovation right requires a policy framework that ensures accountability and enables trust.
We need a comprehensive AI strategy focused on:
(1) unlocking opportunities through innovation and inclusive economic growth;
(2) ensuring accountability and enabling trust; and
(3) protecting global security.
A cohesive AI agenda must advance all three goals, not any at the expense of the others.”
Google’s AI Policy Agenda has three main goals:
Opportunity
Responsibility
Security
Opportunity
This part of the agenda calls on governments to stimulate the development of AI by investing in:
Research and development
Creating a frictionless legal environment that unleashes AI development
Planning educational support for building an AI-ready workforce
In short, the agenda calls for governments to get out of the way and get behind AI in order to help advance the technology.
The policy agenda observes:
“Historically, countries have excelled when they maximize access to technology and leverage it to achieve major public goals, rather than trying to limit technological progress.”
Responsibility
Google’s policy agenda argues that the responsible deployment of AI will depend on a mix of government laws, corporate self-regulation and input from non-governmental organizations.
The policy agenda recommends:
“Some challenges can be addressed through regulation, ensuring that AI technologies are developed and deployed in accordance with responsible industry practices and international standards.
Others will require fundamental research to better understand the benefits and risks of AI, how to manage them, and to develop and deploy new technical innovations in areas such as interpretability and watermarking.
And others may require new organizations and institutions.”
The agenda also recommends:
“Encourage the adoption of common approaches to the regulation and governance of AI, as well as a common lexicon, based on the work of the OECD.”
What is the OECD?
The OECD is the OECD.AI Policy Observatory, which has the support of corporate and government partners.
OECD government stakeholders include the US Department of State and the US Department of Commerce.
Corporate stakeholders include organizations like the Patrick J. McGovern Foundation, whose leadership team is filled with Silicon Valley investors and tech executives who have a vested interest in how the technology is regulated.
Google advocates less corporate regulation
Google’s policy recommendation on regulation is that less regulation is better and that corporate transparency requirements could come at a cost.
It recommends:
“Focusing regulation on the highest-risk applications can also deter innovation in the highest-value applications where AI can deliver the most significant benefits.
Transparency, which can support accountability and fairness, can come at a cost in accuracy, security, and privacy.
Democracies must carefully assess how to strike the right balances.”
It then recommends that efficiency and productivity be taken into account:
“Require regulatory agencies to consider trade-offs among different policy goals, including improving efficiency and productivity, transparency, fairness, privacy, security, and resilience.”
There has always been, and always will be, a tug-of-war between corporate entities fighting against oversight and government regulators seeking to protect the public.
AI can solve humanity’s most difficult problems and provide unprecedented benefits. Google is right that a balance must be struck between the interests of the public and those of corporations.
Sensible recommendations
The paper contains sensible recommendations, such as suggesting that existing regulatory agencies develop AI-specific guidelines and consider adopting new ISO standards in development (such as ISO 42001).
The policy agenda recommends:
“a) Directing sectoral regulators to update existing oversight and enforcement regimes to apply to AI systems, including how existing authorities apply to the use of AI and how to demonstrate the compliance of an AI system with existing regulations using multi-stakeholder international consensus standards such as the ISO 42001 series.
b) Directing regulatory agencies to issue periodic reports identifying capacity gaps that make it harder both for covered entities to comply with regulations and for regulators to conduct effective oversight.”
In some ways, these recommendations state the obvious, which is that agencies will develop guidelines so that regulators know how to regulate.
Hidden in this statement is the recommendation of ISO 42001 as a model of what AI standards should look like.
It should be noted that the ISO 42001 standard is being developed by the ISO/IEC Committee on Artificial Intelligence, which is chaired by a veteran Silicon Valley tech executive along with others from the tech industry.
AI and security
This is the part that addresses the real danger of malicious use of AI to create misinformation and disinformation, as well as cyber-based harms.
Google describes the challenges:
“Our challenge is to maximize the potential benefits of AI for global security and stability while preventing threat actors from exploiting this technology for malicious purposes.”
It then offers a solution:
“Governments must simultaneously invest in R&D and accelerate public- and private-sector adoption of AI while controlling the proliferation of tools that can be abused by malicious actors.”
Recommendations for governments to combat AI-based threats include:
Develop ways to identify and prevent election interference
Share information about security vulnerabilities
Develop an international trade control framework to deal with entities involved in AI research and development that threaten global security
Cut red tape and increase government adoption of AI
The paper then argues for streamlining government adoption of AI, including more investment.
“Reforming government procurement policies to leverage and foster world-leading AI…
Examine the institutional and bureaucratic barriers preventing governments from breaking through data silos and adopting best-in-class data governance to harness the full power of AI.
Leverage data insights through human-machine teaming, creating agile teams with the skills to rapidly build/adapt/leverage AI systems that no longer require computer science degrees…”
Google AI Policy Agenda
The policy agenda offers thoughtful suggestions for governments around the world to consider when formulating regulations around the use of AI.
AI is capable of making many positive advances in science and medicine, advances that can provide solutions to climate change, cure disease, and extend human life.
In a way, it’s a shame that the first AI products released into the world are the comparatively trivial ChatGPT and Dall-E apps that do very little to benefit humanity.
Governments are trying to understand AI and how to regulate it as these technologies are adopted around the world.
Interestingly, open source AI, arguably the most consequential form of the technology, is mentioned only once.
The only context in which open source is addressed is in the recommendations for dealing with the misuse of AI:
“Clarifying potential liability for the misuse/abuse of general-purpose and specialized AI systems (including open source systems, as appropriate) by various participants: researchers and authors, creators, implementers, and end users.”
Considering that Google is reportedly spooked by open source AI and believes it has already lost to it, it is curious that open source is mentioned only in the context of misuse of the technology.
Google’s AI Policy Agenda reflects legitimate concerns about over-regulation and inconsistent rules imposed around the world.
But the organizations the policy agenda cites to help develop industry standards and regulations are stacked with Silicon Valley insiders. This raises questions about whose interests the resulting rules and regulations will reflect.
The policy agenda successfully communicates the need and urgency to develop meaningful and fair regulations to avoid harmful outcomes while allowing beneficial innovation to move forward.
Read Google’s announcement of the policy agenda:
A Policy Agenda for Responsible AI Progress: Opportunity, Responsibility, Security
Read the AI Policy Agenda (PDF)
A Policy Agenda for Responsible AI Progress
Featured image by Shutterstock/Shaheerrr