The National Telecommunications and Information Administration (NTIA), a division of the United States Department of Commerce, called for public comment on strategies to foster accountability in trustworthy artificial intelligence (AI) systems.
The goal was to solicit feedback from stakeholders to inform an upcoming report on AI assurance and accountability frameworks. Those suggestions could guide future federal and non-governmental regulations.
Promoting trustworthy AI that upholds human rights and democratic principles was a primary federal focus, according to NTIA’s request. However, gaps remain in ensuring that AI systems are accountable and adhere to trustworthy AI standards for fairness, security, privacy, and transparency.
Accountability mechanisms such as audits, impact assessments, and certifications could provide assurance that AI systems meet trust criteria. But NTIA noted that implementing effective accountability still presented challenges and complexities.
NTIA’s request discussed a range of considerations, including trade-offs among trustworthy AI goals, obstacles to implementing accountability, complex AI supply and value chains, and difficulties in standardizing measurements.
Over 1,450 comments on AI accountability
Comments were accepted until June 12 to help shape NTIA’s future report and guide potential policy developments around AI accountability.
More than 1,450 comments were submitted.
The comments, searchable by keyword, occasionally include links to articles, letters, documents, and lawsuits about the potential impact of AI.
Tech companies respond to NTIA
Among the submissions were responses from the following tech companies, each of which is developing AI products for the workplace.
OpenAI letter to the NTIA
In its letter, OpenAI welcomed NTIA’s framing of the issue as an “ecosystem” of AI accountability measures needed to ensure trustworthy artificial intelligence.
OpenAI researchers believed that a mature AI accountability ecosystem would consist of general accountability elements that apply broadly across domains and vertical elements customized for specific contexts and applications.
OpenAI has focused on developing foundation models: broadly applicable AI models that learn from large datasets.
The company argued for a safety-focused approach to these models, regardless of the specific domains in which they may be used.
OpenAI detailed several of its current approaches to AI accountability:
- It publishes “system cards” to provide transparency about important performance issues and risks of new models.
- It performs qualitative “red team” testing to probe capabilities and failure modes.
- It performs quantitative evaluations of various capabilities and risks.
- It maintains clear usage policies that prohibit harmful uses, along with enforcement mechanisms.
OpenAI recognized several important unsolved challenges, including evaluating potentially dangerous capabilities as model capabilities continue to evolve.
The company discussed open questions about independent third-party evaluations of its models and suggested that registration and licensing requirements may be necessary for future foundation models that pose significant risks.
While OpenAI’s current practices focus on transparency, testing, and policy, the company appeared open to working with policymakers to develop stronger accountability measures. It suggested that tailored regulatory frameworks may be needed for highly capable AI models.
Overall, OpenAI’s response reflected its belief that a combination of self-regulatory efforts and government policies would play a vital role in developing an effective AI accountability ecosystem.
Letter from Microsoft to the NTIA
In its response, Microsoft stated that accountability should be a core element of frameworks to address the risks posed by AI while maximizing its benefits. Companies that develop and use AI should be held accountable for the impact of their systems, and oversight institutions need the authority, knowledge, and tools to exercise appropriate oversight.
Microsoft outlined the lessons from its responsible AI program, which aims to ensure that machines remain under human control. Accountability is embedded in its governance structure and Responsible AI Standard and includes:
- Conducting impact assessments to identify and address potential harms.
- Additional oversight of high-risk systems.
- Documentation to ensure systems are fit for purpose.
- Governance and data management practices.
- Advancing human direction and control.

Microsoft also described how it red teams its systems to uncover potential harms and flaws, and publishes transparency notes for its AI services. Microsoft’s new Bing search engine applies this responsible AI approach.
Microsoft made six recommendations for advancing accountability:
- Leverage NIST’s AI Risk Management Framework to accelerate the use of accountability mechanisms such as impact assessments and red teaming, especially for high-risk AI systems.
- Develop a legal and regulatory framework based on the AI technology stack, including licensing requirements for foundation model and infrastructure providers.
- Advance transparency as an enabler of accountability, such as through a registry of high-risk AI systems.
- Invest in building capacity for lawmakers and regulators to keep up with AI developments.
- Invest in research to improve AI evaluation benchmarks, explainability, human-computer interaction, and safety.
- Develop and align with international standards to support an assurance ecosystem, including ISO AI standards and content provenance standards.
Microsoft generally appeared ready to partner with stakeholders to develop and implement effective approaches to AI accountability.
Letter from Google to the NTIA
Google’s response welcomed NTIA’s request for comments on AI accountability policies. The company recognized the need for both self-regulation and governance to achieve trustworthy AI.
Google highlighted its own work on AI safety and ethics, such as a set of AI principles focused on fairness, security, privacy and transparency. Google also implemented responsible AI practices internally, including conducting risk assessments and fairness assessments.
Google endorsed the use of existing regulatory frameworks where applicable and risk-based interventions for high-risk AI. It encouraged the use of a collaborative consensus-based approach to developing technical standards.
Google agreed that accountability mechanisms such as audits, assessments, and certifications could provide assurance that AI systems are trustworthy. But the company noted that such mechanisms face implementation challenges, including assessing the many factors that affect an AI system’s risks.
Google recommended focusing accountability mechanisms on key risk factors and suggested using approaches that target the most likely ways in which AI systems could significantly affect society.
Google recommended a model for AI regulation in which industry regulators oversee AI implementation with guidance from a central agency such as NIST. The company supported clarifying how existing laws apply to AI and encouraged proportionate, risk-based accountability measures for high-risk AI.
Like the others, Google believed that advancing AI accountability will require a combination of self-regulation, technical standards, and limited, risk-based government policies.
Anthropic letter to the NTIA
Anthropic’s response described its belief that a robust AI accountability ecosystem requires mechanisms tailored to AI models. The company identified several challenges, including the difficulty of rigorously evaluating AI systems and of accessing the sensitive information needed for audits without compromising security.
Anthropic expressed support for the following measures:
- Model evaluations: Current evaluations are an incomplete patchwork and require specialized expertise. Anthropic recommended standardizing capability evaluations focused on risks such as deception and autonomy.
- Interpretability research: Grants and funding for interpretability research could enable more transparent and understandable models. However, regulations requiring interpretability are currently unworkable.
- Pre-registration of large AI training runs: AI developers should report large training runs to regulators, under appropriate confidentiality protections, to keep them informed of new risks.
- External red teaming: Mandatory adversarial testing of AI systems before launch, either through a centralized organization such as NIST or through researcher access. However, red-teaming talent currently resides in private AI labs.
- Auditors with technical expertise, security awareness, and flexibility: Auditors need deep machine learning expertise and must prevent leaks or hacking, while operating within constraints that promote competitiveness.

Anthropic recommended accountability measures based on a model’s capabilities and demonstrated risks, assessed through targeted capability evaluations. It suggested clarifying intellectual property frameworks for AI to allow fair licensing, as well as providing guidance on antitrust issues to enable safety collaborations.

Overall, Anthropic highlighted the difficulty of rigorously assessing, and accessing information about, advanced AI systems because of their sensitive nature. The company argued that funding for capability evaluations, interpretability research, and access to computational resources is critical to an effective AI accountability ecosystem that benefits society.
What to expect next
Responses to NTIA’s request for comment show that while AI companies recognize the importance of accountability, open questions and challenges remain around implementing and scaling effective accountability mechanisms.
They also indicate that both corporate self-regulatory efforts and government policies will play a role in developing a robust AI accountability ecosystem.
Going forward, the NTIA report is expected to make recommendations for advancing the AI accountability ecosystem by building on and leveraging existing self-regulatory efforts, technical standards, and government policies. Stakeholder input through the feedback process will likely help shape these recommendations.
However, implementing recommendations into concrete policy and industry practice changes that could transform the way AI is developed, deployed, and monitored will require coordination among government agencies, technology companies, researchers, and other stakeholders.
The road to mature AI accountability promises to be long and difficult. But these first steps show that there is momentum to achieve this goal.
Featured Image: EQRoy/Shutterstock