It seems like just yesterday (though it’s been almost six months) that OpenAI launched ChatGPT and started making headlines.
ChatGPT reached 100 million users within three months of launch, making it the fastest-growing app in decades. By comparison, it took TikTok nine months and Instagram two and a half years to reach the same milestone.
ChatGPT can now use GPT-4 along with web browsing and plug-ins from brands like Expedia, Zapier, Zillow and more to respond to user requests.
Big tech companies like Microsoft have partnered with OpenAI to build AI-powered customer solutions. Google, Meta and others are building their own language models and AI products.
More than 27,000 people, including technology CEOs, professors, research scientists and politicians, have signed an open letter calling for a pause on the development of AI systems more powerful than GPT-4.
Now, the question may not be whether the US government should regulate AI, but whether it is too late.
Below are recent developments in AI regulation and how they may impact the future of AI advancement.
Federal agencies are committed to fighting bias
Four key US federal agencies issued a joint statement affirming their strong commitment to curbing bias and discrimination in automated systems and AI: the Consumer Financial Protection Bureau (CFPB), the Civil Rights Division of the Department of Justice (DOJ-CRD), the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC).
These agencies have emphasized their intention to apply existing regulations to these emerging technologies to ensure they uphold the principles of fairness, equality and justice.
The CFPB, responsible for consumer protection in the financial marketplace, reaffirmed that existing consumer financial laws apply to all technologies, regardless of their complexity or novelty. The agency has been clear in its stance that the innovative nature of AI technology cannot be used as a defense for violating these laws.

The DOJ-CRD, the agency charged with protecting against discrimination in various facets of life, applies the Fair Housing Act to algorithm-based tenant screening services. This exemplifies how existing civil rights laws can be applied to automated systems and AI.

The EEOC, which is responsible for enforcing laws against discrimination in employment, issued guidance on how the Americans with Disabilities Act applies to AI and software used to make employment decisions.

The FTC, which protects consumers from unfair business practices, expressed concern about the potential for AI tools to be inherently biased, inaccurate or discriminatory. It has warned that deploying AI without a proper risk assessment, or making unsubstantiated claims about AI, could be considered a violation of the FTC Act.
For example, the Center for Artificial Intelligence and Digital Policy has filed a complaint with the FTC regarding OpenAI’s release of GPT-4, a product that it says “is biased, deceptive, and a risk to privacy and public safety.”
Senator asks AI companies about security and misuse
US Senator Mark R. Warner sent letters to leading AI companies, including Anthropic, Apple, Google, Meta, Microsoft, Midjourney and OpenAI.
In the letters, Warner expressed concern about security considerations in the development and use of artificial intelligence (AI) systems. He asked the recipients to prioritize these safety measures in their work.
Warner highlighted a number of security risks specific to AI, including data supply chain issues, data poisoning attacks, adversarial examples and the potential misuse or malicious use of AI systems. These concerns were raised against the backdrop of the growing integration of AI into various sectors of the economy, such as healthcare and finance, which underscores the need for security precautions.
The letter asked 16 questions about measures taken to ensure the safety of AI. It also implied the need for some level of regulation in the field to prevent harmful effects and ensure that AI does not advance without adequate safeguards.
AI companies were asked to respond by May 26, 2023.
The White House meets with AI leaders
The Biden-Harris administration announced initiatives to foster responsible innovation in artificial intelligence (AI), protect citizens’ rights and ensure security.
These measures align with the federal government’s push to manage the risks and opportunities associated with AI.
The White House aims to put people and communities first, promoting innovation in AI for the public good and protecting society, security and the economy.
Top administration officials, including Vice President Kamala Harris, met with leaders from Alphabet, Anthropic, Microsoft and OpenAI to discuss this obligation and the need for responsible and ethical innovation.
Specifically, they discussed the obligation for corporations to ensure the security of LLMs and AI products prior to public deployment.
The new steps would ideally complement the extensive measures already taken by the administration to promote responsible innovation, such as the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and plans for a National AI Research Resource.
Additional steps have been taken to protect users in the age of AI, such as an executive order to eliminate bias in the design and use of new technologies, including AI.
The White House noted that the FTC, CFPB, EEOC and DOJ-CRD have collectively committed to leveraging their legal authority to protect Americans from AI-related harm.
The administration also addressed national security concerns related to AI cybersecurity and biosecurity.
The new initiatives include $140 million in National Science Foundation funding for seven national AI research institutes, public assessments of existing generative AI systems, and new policy guidance from the Office of Management and Budget on use of AI by the US government.
AI oversight hearing explores the regulation of AI
Members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held an AI oversight hearing with prominent members of the AI community to discuss AI regulation.
Approaching regulation with precision
Christina Montgomery, IBM’s chief privacy and trust officer, emphasized that while AI has advanced significantly and is now integral to both the consumer and business arenas, the increased public attention it is receiving requires careful assessment of its potential social impact, including bias and misuse.
She supported the government’s role in developing a strong regulatory framework, proposing IBM’s “precision regulation” approach, which focuses on rules for specific use cases rather than the technology itself, and described its main components.
Montgomery also acknowledged the challenges of generative AI systems and argued for a risk-based regulatory approach that does not stifle innovation. She emphasized the crucial role of companies in deploying AI responsibly, detailing IBM’s governance practices and the need for an AI ethics board at every company involved with AI.
Addressing the potential economic effects of GPT-4 and beyond
Sam Altman, CEO of OpenAI, highlighted the company’s deep commitment to the safety, cybersecurity and ethical implications of its AI technologies.
According to Altman, the company conducts continuous internal and third-party penetration testing and regular audits of its security controls. OpenAI, he added, is also pioneering new strategies to harden its AI systems against emerging cyber threats.
Altman seemed to be particularly concerned about the economic effects of AI on the labor market, as ChatGPT could potentially automate some jobs. Under Altman’s leadership, OpenAI is working with economists and the US government to assess these impacts and design policies to mitigate potential damage.
Altman mentioned his proactive efforts to explore policy tools and support programs, like Worldcoin, that could soften the blow of future technological disruption, for example by modernizing unemployment benefits and creating worker assistance programs. (A fund in Italy, meanwhile, recently set aside 30 million euros to invest in services for workers most at risk of displacement by AI.)
Altman emphasized the need for effective AI regulation and pledged OpenAI’s continued support to help policymakers. The company’s goal, Altman said, is to help formulate regulations that encourage safety and allow broad access to the benefits of AI.
He emphasized the importance of collective participation of various stakeholders, global regulatory strategies and international collaboration to ensure the safe and beneficial evolution of AI technology.
Exploring AI’s potential for harm
Gary Marcus, a professor of psychology and neural science at NYU, expressed growing concern about the potential misuse of AI, especially powerful and influential language models like GPT-4.
He illustrated his concern by showing how he and a software engineer rigged the system to invent an entirely fictitious narrative about aliens controlling the US Senate.
This illustrative scenario exposed the danger of AI systems convincingly fabricating stories, raising alarm about the possibility of such technology being used for malicious activities such as election interference or market manipulation.
Marcus highlighted the inherent unreliability of current AI systems, which can have serious consequences for society, from promoting baseless accusations to offering potentially harmful advice.
One example was an open source chatbot that appeared to influence a person’s decision to take their own life.
Marcus also pointed to the advent of “datocracy,” where AI can subtly shape opinions, possibly surpassing the influence of social media. Another alarming development he drew attention to was the rapid release of AI extensions, such as OpenAI’s ChatGPT plugins and the subsequent AutoGPT, which have direct internet access, code-writing capability and enhanced automation powers, all of which may raise security concerns.
Marcus closed his testimony with a call for closer collaboration between independent scientists, technology companies and governments to ensure the safety and responsible use of AI technology. He warned that while AI presents unprecedented opportunities, the lack of proper regulation, corporate irresponsibility and inherent unreliability could lead to a “perfect storm”.
Can we regulate AI?
As AI technologies push the boundaries, regulatory demands will continue to increase.
In a climate where Big Tech partnerships are on the rise and applications are expanding, an alarm is sounding: Is it too late to regulate AI?
Federal agencies, the White House, and members of Congress will need to continue to investigate the urgent, complex, and potentially risky landscape of AI while ensuring that promising AI advances continue and that competition with Big Tech isn’t regulated out of the market entirely.
Featured Image: Katherine Welles/Shutterstock