AI Frameworks Database

Framework/Other | Type | Company/Organization | Organization Type | Organization City | Organization State/Province | Organization Country | Publish Month | Publish Year | Description | Main Website | PDF Link | Organization LinkedIn
AI Risk Repository
Database
MIT - Massachusetts Institute of Technology
University
Cambridge
Massachusetts
United States
August
2024
"A comprehensive living database of over 700 AI risks categorized by their cause and risk domain."
Mapping the Ethics of Generative AI
Database Summary
University of Stuttgart
University
Stuttgart
Germany
February
2024
"The advent of generative artificial intelligence and the widespread adoption of it in society engendered intensive debates about its ethical implications and risks. These risks often differ from those associated with traditional discriminative machine learning. To synthesize the recent discourse and map its normative concepts, we conducted a scoping review on the ethics of generative artificial intelligence, including especially large language models and text-to-image models. Our analysis provides a taxonomy of 378 normative issues in 19 topic areas (accessible online) and ranks them according to their prevalence in the literature. The study offers a comprehensive overview for scholars, practitioners, or policymakers, condensing the ethical debates surrounding fairness, safety, harmful content, hallucinations, privacy, interaction risks, security, alignment, societal impacts, and others. We discuss the results, evaluate imbalances in the literature, and explore unsubstantiated risk scenarios."
MITRE ATLAS
Database
MITRE
Non-Profit
Bedford
Massachusetts
United States
October
2024
"ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a globally accessible, living knowledge base of adversary tactics and techniques against Al-enabled systems based on real-world attack observations and realistic demonstrations from Al red teams and security groups."
AI Vulnerability Database
Database
AI Risk and Vulnerability Alliance
Non-Profit
United States
September
2022
"An open-source knowledge base of failure modes for Artificial Intelligence (AI) models, datasets, and systems."
The AI Risk Taxonomy (AIR 2024)
Database
Virtue AI
Private Company
San Francisco
California
United States
June
2024
"We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China and 16 company policies worldwide, making a significant step towards establishing a unified language for generative AI safety evaluation. We identify 314 unique risk categories, organized into a four-tiered taxonomy. At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. The taxonomy establishes connections between various descriptions and approaches to risk, highlighting the overlaps and discrepancies between public and private sector conceptions of risk. By providing this unified framework, we aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems."
AI Incident Database (AIID)
Database
Responsible AI Collaborative
Non-Profit
Los Angeles
California
United States
November
2020
"The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes."
OECD AI Incidents Monitor (AIM)
Database
OECD - Organization for Economic Co-operation and Development
Government
Paris
France
January
2024
"Documents AI incidents and hazards to help policymakers, AI practitioners, and all stakeholders worldwide gain valuable insights into the risks and harms of AI systems."
AIAAIC Repository
Database
AIAAIC - AI, Algorithmic, and Automation Incidents and Controversies
Non-Profit
Cambridge
England
October
2024
"AIAAIC is an independent, non-partisan, grassroots public interest initiative that examines and makes the case for real AI, algorithmic, and automation transparency and openness." Their repository is an "independent, open, public interest resource detailing incidents and controversies driven by and relating to AI, algorithms, and automation."
MITRE ATT&CK®
Database
MITRE
Non-Profit
Bedford
Massachusetts
United States
September
2018
"MITRE ATT&CK® is a globally-accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies in the private sector, in government, and in the cybersecurity product and service community. With the creation of ATT&CK, MITRE is fulfilling its mission to solve problems for a safer world — by bringing communities together to develop more effective cybersecurity. ATT&CK is open and available to any person or organization for use at no charge."
The Database of AI Litigation
Database
George Washington University
University
Washington, D.C.
United States
January
2022
"This database presents information about ongoing and completed litigation involving artificial intelligence, including machine learning. It covers cases from complaint forward – as soon as we learn of them – whether or not they generate published decisions. It is intended to be broad in scope, covering everything from algorithms used in hiring and credit and criminal sentencing decisions to liability for accidents involving autonomous vehicles."
AI Risk Management Framework
Framework
NIST - National Institute of Standards and Technology
Government
Gaithersburg
Maryland
United States
January
2023
"The AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems."
AI Risk Management Framework: Generative Artificial Intelligence Profile
Framework
USAISI - U.S. AI Safety Institute and its Consortium (within NIST - National Institute of Standards and Technology)
Government
Gaithersburg
Maryland
United States
July
2024
"This document defines risks that are novel to or exacerbated by the use of GAI. After introducing and describing these risks, the document provides a set of suggested actions to help organizations govern, map, measure, and manage these risks"
AI & Inclusive Hiring Framework
Framework
PEAT - The Partnership on Employment & Accessible Technology
Government
Washington, D.C.
United States
September
2024
"Based on the NIST AI Risk Management Framework (AI RMF)—federal guidance on AI governance—this PEAT resource outlines how various organizations can maximize benefits and better manage risks when obtaining and using AI hiring technology. Those who may find this Framework helpful include leaders overseeing AI, human resources (HR) teams, hiring managers, DEIA (diversity, equity, inclusion, and accessibility) practitioners, accessibility programs, procurement and vendor relations groups, legal and compliance teams, and job seekers and workers."
Framework to Advance AI Governance and Risk Management in National Security
Framework
The White House
Government
Washington, D.C.
United States
October
2024
"The Framework to Advance AI Governance and Risk Management in National Security (“AI Framework”) builds on and fulfills the requirements found in Section 4.2 of the National Security Memorandum on Advancing the United States’ Leadership in AI, Harnessing AI to Fulfill National Security Objectives, and Fostering the Safety, Security, and Trustworthiness of AI (“AI NSM”), which directs designated Department Heads to issue guidance to their respective components/sub-agencies to advance governance and risk management practices regarding the use of AI as a component of a National Security System (NSS). This AI Framework is intended to support and enable the U.S. Government to continue taking active steps to uphold human rights, civil rights, civil liberties, privacy, and safety; ensure that AI is used in a manner consistent with the President’s authority as commander-in-chief to decide when to order military operations in the nation’s defense; and ensure that military use of AI capabilities is accountable, including through such use during military operations within a responsible human chain of command and control."
Responsible AI (RAI) Strategy and Implementation (S&I) Pathway
Framework
DoD - United States Department of Defense
Government
Washington, D.C.
United States
June
2022
"To maintain our military advantage in a digitally competitive world, the United States Department of Defense (DoD) must embrace AI technologies to keep pace with these evolving threats. Harnessing new technology in lawful, ethical, responsible, and accountable ways is core to our ethos. Those who depend on us will accept nothing less... DoD must demonstrate that our military's steadfast commitment to lawful and ethical behavior apply when designing, developing, testing, procuring, deploying, and using Al. The Responsible AI (RAI) Strategy and Implementation (S&I) Pathway illuminates our path forward..."
DoD AI Ethical Principles (2020)
Framework
DoD - United States Department of Defense
Government
Washington, D.C.
United States
February
2020
"The U.S. Department of Defense officially adopted a series of ethical principles for the use of Artificial Intelligence today following recommendations provided to Secretary of Defense Dr. Mark T. Esper by the Defense Innovation Board last October. The recommendations came after 15 months of consultation with leading AI experts in commercial industry, government, academia and the American public that resulted in a rigorous process of feedback and analysis among the nation’s leading AI experts with multiple venues for public input and comment. The adoption of AI ethical principles aligns with the DOD AI strategy objective directing the U.S. military lead in AI ethics and the lawful use of AI systems."
RAI Toolkit
Framework
CDAO - Chief Digital and Artificial Intelligence Office
Government
Washington, D.C.
United States
November
2023
"The Responsible Artificial Intelligence (RAI) Toolkit provides a centralized process that identifies, tracks, and improves alignment of AI projects to RAI best practices and the DoD AI Ethical Principles, while capitalizing on opportunities for innovation. The RAI Toolkit provides an intuitive flow guiding the user through tailorable and modular assessments, tools, and artifacts throughout the AI product lifecycle. The process enables traceability and assurance of responsible AI practice, development, and use."
DIU Responsible AI Guidelines
Framework
DIU - Defense Innovation Unit
Government
Mountain View
California
United States
February
2022
"The RAI Guidelines consist of specific questions that should be addressed at each phase in the AI lifecycle: planning, development, and deployment. They provide step-by-step guidance for AI companies, DoD stakeholders, and program managers to ensure AI programs align with the DoD’s Ethical Principles for AI and ensure that fairness, accountability, and transparency are considered at each step in the development cycle. DIU is actively deploying the RAI Guidelines on a range of projects that cover applications including predictive health, underwater autonomy, predictive maintenance, and supply chain analysis."
Inspect: Open-source framework for large language model evaluations
Framework
AISI - The AI Safety Institute
Government
London
England
July
2024
"Governments have a key role to play in ensuring advanced AI is safe and beneficial. The AI Safety Institute is the first state-backed organisation dedicated to advancing this goal. We are conducting research and building infrastructure to test the safety of advanced AI and to measure its impacts on people and society. We are also working with the wider research community, AI developers and other governments to affect how AI is developed and to shape global policymaking on this issue."
Army AI Layered Defense Framework Request For Information
Framework
United States Army
Government
Arlington
Virginia
United States
September
2024
"This Request for Information (RFI) is issued by the Assistant Secretary of the Army for Acquisition, Logistics & Technology (ASA(ALT)) Office of the Deputy Assistant Secretary of the Army (DASA) for Data, Engineering, and Software (DES) to provide the United States (U.S.) Army with a better understanding of industry capabilities, potential sources, and best practices relevant to the definition and implementation of an Artificial Intelligence Layered Defense Framework (AI-LDF) for the U.S. Army. The AI-LDF is to be a thorough theoretical and practical framework for mitigating risks to AI Systems. The Army does not foresee establishing a single program to develop the AI-LDF. The goal is to improve the Army’s methodology for building a comprehensive library of risks and mitigations unique to or inherent in AI systems which will inform and guide the development and implementation of subsequent AI models and software."
Project Maven
Project
NGA - National Geospatial-Intelligence Agency
Government
Springfield
Virginia
United States
February
2023
"Project Maven is an AI-induced information technology for military applications initiated by the United States Department of Defence (DoD) in 2017 and originally signed on to a civilian contractor, namely Google. However, this initiative raised massive resistance from a substantial amount of Google employees, eventually leading to the contract's annulation. This article uses narrative analysis to investigate enabling and constraining arguments of AI for military purposes that appeared in the debate following the public announcement of Project Maven. In addition, the article highlights the co-production of ethics, technology, and the complex issues that arise from civilian-military exchanges in technology development. Enabling arguments associated with consequentialist ethics are identified as narratives of accuracy and maintenance. Accuracy constitutes a guiding principle for saving civilian lives, while maintenance is directed at keeping the power balance intact. In contrast, constraining arguments proceed from deontological ethics that emphasize disengagement and ambivalence. Disengagement amplifies a civilian/military divide, while ambivalence exhibits conflicting views concerning the prospect of supplementing technological solutions that have the potential to contribute to war and civilian casualties. Conclusively, security narratives and technological storytelling are important aspects to consider since they hold a performative function that influences the framing and mobilization of security and technology development."
Principles of Artificial Intelligence Ethics for the Intelligence Community
Framework
US Government
Government
Washington, D.C.
United States
June
2020
"Artificial Intelligence (AI) can enhance the intelligence mission, but like other new tools, we must understand how to use this rapidly evolving technology in a way that aligns with our principles to prevent unethical outcomes. This is an ethics guide for United States Intelligence Community personnel on how to procure, design, build, use, protect, consume, and manage AI and related data. Answering these questions, in conjunction with your agency-specific procedures and practices, promotes ethical design of AI consistent with the Principles of AI Ethics for the Intelligence Community. This guide is not a checklist and some of the concepts discussed herein may not apply in all instances. Instead, this guide is a living document intended to provide stakeholders with a reasoned approach to judgment and to assist with the documentation of considerations associated with the AI lifecycle. In doing so, this guide will enable mission through an enhanced understanding of goals between AI practitioners and managers while promoting the ethical use of AI."
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Framework
US White House
Government
Washington, D.C.
United States
October
2023
"Artificial intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society. My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society. In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built. I firmly believe that the power of our ideals; the foundations of our society; and the creativity, diversity, and decency of our people are the reasons that America thrived in past eras of rapid change. They are the reasons we will succeed again in this moment. We are more than capable of harnessing AI for justice, security, and opportunity for all."
Blueprint for an AI Bill of Rights
Framework
US White House
Government
Washington, D.C.
United States
October
2022
"The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was published by the White House Office of Science and Technology Policy in October 2022. This framework was released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered world.” Its release follows a year of public engagement to inform this initiative."
EU AI Act
Framework
European Union
Government
International
March
2024
"The AI Act is a European regulation on artificial intelligence (AI) – the first comprehensive regulation on AI by a major regulator anywhere. The Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated."
Artificial Intelligence Law of the People’s Republic of China
Framework
Chinese Government
Government
China
May
2024
"The following document is a preliminary draft of China’s proposed AI Law that has circulated among legal scholars. The draft law specifies various scenarios in which AI developers, providers, or users are liable for misuse of AI tools. It also allows for the use of copyrighted material for model training in most cases, and provides intellectual property protections for content created with the assistance of AI technology."
UK AI National Strategy
Framework
United Kingdom
Government
International
February
2024
The AI Impact Navigator
Framework
Australian Government
Government
Australia
October
2024
"The AI Impact Navigator is a framework for companies to use in assessing and measuring the impact and outcomes of their use of AI systems. Using a continuous improvement cycle known as Plan, Act, Adapt, the Navigator provides a way for company leaders to communicate and discuss what’s working, what they’ve learned, and what their AI impact is."
TERRAIN Framework
Framework
TERRAIN AI
Private Company
McLean
Virginia
United States
April
2024
"TERRAIN Agile Framework offers a much-needed compass for those embarking on their AI journey. TERRAIN Framework is designed to accommodate the need for coordination and alignment across multiple Agile teams, making it particularly suitable for the complexities of AI projects."
Microsoft Responsible AI Program and Standard
Framework
Microsoft
Public Company
Redmond
Washington
United States
May
2024
"In this report, we share how we build generative applications responsibly, how we make decisions about releasing our generative applications, how we support our customers as they build their own AI applications, and how we learn and evolve our responsible AI program."
AI Risk Assessment for ML Engineers
Framework
Microsoft
Public Company
Redmond
Washington
United States
February
2024
"This document is a first step for organizations to assess the security posture of their AI systems. But instead of adding yet another framework for organizations to follow, we attempted to provide the content in a manner that can be snapped to existing traditional security risk assessment frameworks."
Secure AI Framework (SAIF)
Framework
Google
Public Company
Mountain View
California
United States
June
2023
"AI is advancing rapidly, and it’s important that efective risk management strategies evolve along with it. To help achieve this evolution, we’re introducing the Secure AI Framework (SAIF), a conceptual framework for secure AI systems. SAIF has six core elements"
AI Lifecycle Governance
Framework
IBM - International Business Machines Corporation
Public Company
Armonk
New York
United States
October
2024
"This Research Brief is part of an ongoing series of reports published by the IBM Institute for Business Value (IBM IBV) about generative AI and the opportunities and challenges it presents to organizations worldwide."
AI Security Framework (DASF) Version 1.0
Framework
Databricks
Private Company
San Francisco
California
United States
March
2024
"The Databricks Security team created the Databricks AI Security Framework (DASF) to address the evolving risks associated with the widespread integration of AI globally. Unlike approaches that focus solely on securing models or endpoints, the DASF adopts a comprehensive strategy to mitigate cyber risks in AI systems. Based on real-world evidence indicating that attackers employ simple tactics to compromise ML-driven systems, the DASF offers actionable defensive control recommendations. These recommendations can be updated as new risks emerge and additional controls become available. The framework’s development involved a thorough review of multiple risk management frameworks, recommendations, whitepapers, policies and AI security acts."
AI Security Framework (DASF) Version 1.1
Framework
Databricks
Private Company
San Francisco
California
United States
September
2024
"The Databricks Security team created the Databricks AI Security Framework (DASF) to address the evolving risks associated with the widespread integration of AI globally. Unlike approaches that focus solely on securing models or endpoints, the DASF adopts a comprehensive strategy to mitigate cyber risks in AI systems. Based on real-world evidence indicating that attackers employ simple tactics to compromise ML-driven systems, the DASF offers actionable defensive control recommendations. These recommendations can be updated as new risks emerge and additional controls become available. The framework’s development involved a thorough review of multiple risk management frameworks, recommendations, whitepapers, policies and AI security acts."
Sloan Framework
Framework
MIT - Massachusetts Institute of Technology
University
Cambridge
Massachusetts
United States
January
2024
"A framework based on a “red light, yellow light, green light” approach can help companies streamline AI governance and decision-making."
watsonx.governance
Tool
IBM - International Business Machines Corporation
Public Company
Armonk
New York
United States
May
2023
"Direct, manage and monitor your AI using a single platform to speed responsible, transparent, explainable AI "
Booz Allen Responsible AI Overview
Framework
Booz Allen
Public Company
McLean
Virginia
United States
2024
"Booz Allen’s AI E/ATO Assessment is a powerful and quantitative approach to operationalizing the Responsible AI principles most relevant for today’s government missions. The AI E/ATO Assessment uncovers insights during the ethical and compliance analysis, providing a roadmap to achieve your critical mission objectives more effectively and safely."
Trustworthy AI Framework
Framework
Deloitte
Private Company
International
November
2022
"Deloitte's Trustworthy AI Framework and AI Governance & Risk services help provide strategic and tactical solutions to enable organizations to continue to build and use AI-powered systems while promoting Trustworthy AI."
Apple Intelligence Foundation Language Models
Report
Apple
Public Company
Cupertino
California
United States
July
2024
"Apple Intelligence consists of multiple highly-capable generative models that are fast, efficient, specialized for our users’ everyday tasks, and can adapt on the fly for their current activity. The foundation models built into Apple Intelligence have been fine-tuned for user experiences such as writing and refining text, prioritizing and summarizing notifications, creating playful images for conversations with family and friends, and taking in-app actions to simplify interactions across apps."
Intel Responsible Artificial Intelligence (RAI) Principles
Framework
Intel
Public Company
Santa Clara
California
United States
December
2023
"Intel has long recognized the importance of the ethical and human rights implications associated with the development of technology. This is especially true with the development of AI technology, for which we remain committed to evolving best methods, principles, and tools to ensure responsible practices in our product use, development and deployment. With a foundation built on Intel’s long standing Code of Conduct and Global Human Rights Principles, we are approaching Responsible AI through a comprehensive strategy centered around people, processes, systems, data and algorithms, with the aim of lowering risks while optimizing benefits for our society."
McKinsey & Company Responsible AI (RAI) Principles
Framework
McKinsey & Company
Private Company
New York City
New York
United States
December
2023
"We believe Artificial Intelligence (AI) has the power to transform business and are committed to helping our clients and our people harness that potential with clear principles and ethical guardrails for the responsible use of AI. The pace of change for ourselves and our clients has never been faster and we will continuously update these principles to support world-leading responsible and inclusive AI advancements. We encourage all organizations to establish clear principles for the responsible use of AI and commit to adhering to the following guiding principles..."
Responsible AI at PwC
Framework
PwC - PricewaterhouseCoopers
Private Company
London
England
May
2024
"Responsible AI (RAI) is an approach to managing risks associated with an AI-based solution. Now is the time to evaluate and augment existing practices or create new ones to help you responsibly harness AI and be prepared for coming regulation. Investing in Responsible AI at the outset can give you an edge that competitors may not be able to overtake."
Responsible AI: From principles to practice
Framework
Accenture
Public Company
Dublin
Ireland
March
2021
"Accenture has worked with organizations worldwide to build this trust. How? By defining and implementing solutions across four Responsible AI pillars—moving from principles to practice. In this report we share what we have learned—from practitioners’ pain points and how to address them, to case studies of what good looks like in the real world."
AI Ethics Maturity Model
Framework
Salesforce
Public Company
San Francisco
California
United States
January
2022
"For the last few years, Yoav Schlesinger and I have thought a lot about how to grow and mature our AI ethics practice at Salesforce. We’ve spent time in self-reflection and talking to our peers at other large, U.S. enterprise tech companies that have built their own teams and practices. From this, we’ve identified a maturity model for building an ethical (or “trusted” or “responsible,” choose your own word) AI practice."
RAISE Health (Responsible AI for Safe and Equitable Health)
Framework
Stanford Institute for Human-Centered AI
University
Stanford
California
United States
October
2024
"Powerful new tools of artificial intelligence (AI) have created great excitement in the past two years for their potential to transform medicine and health. The technology has also introduced uncertainty and anxiety about the potential for disruption and the pace of technological change, including the risk of bias and concerns about patient safety. At this pivotal moment, Stanford welcomed a diverse array of speakers and participants from various industries and disciplines to the Palo Alto campus on May 14, 2024, to discuss the ethical integration of AI technologies into biomedicine. It was the first symposium of the RAISE Health (Responsible AI for Safe and Equitable Health) initiative, a collaboration between Stanford Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), which each year will enlist leading experts to define challenges and explore potential approaches to solving some of AI’s biggest challenges."
NeMo Guardrails
Tool
NVIDIA
Public Company
Santa Clara
California
United States
June
2024
"NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems."
Cisco Secure
Framework
Cisco
Public Company
San Jose
California
United States
December
2021
"In the next few pages, we’ll show how Cisco’s effective security aligns with the NIST Cybersecurity Framework. We’ll list each Framework function and category, and explain exactly how Cisco Secure products and services help you accomplish each specific Framework goal. Our solutions are simple, open, and automated to interoperate at every level of the security stack, not only across the Cisco portfolio but also with other vendors’ products. Furthermore, our solutions build industry-leading, actionable Talos threat intelligence directly into them. With Cisco Secure, you can take a new approach to cybersecurity, adopt the Framework, and bolster cyber defenses and readiness."
Cisco Responsible AI Framework
Framework
Cisco
Public Company
San Jose
California
United States
January
2022
"At Cisco, we appreciate that Artificial Intelligence (AI) can be leveraged to power an inclusive future for all. We also recognize that by applying this technology, we have a responsibility to mitigate potential harm. That is why we have developed the six Principles for Responsible Artificial Intelligence of Transparency, Fairness, Accountability, Privacy, Security and Reliability – all necessary for promoting and enabling safe and trustworthy AI. To implement these principles, we have a Responsible AI Framework that can be applied to the development, deployment and/or use of AI by Cisco, whether in developing a product or model for our customers and partners to use, integrating and building upon a third-party model for unique offerings, or providing AI tools or services for our own, internal operations. In practice, we strive to bring these principles to life by combining Security by Design, Privacy by Design, and Human Rights by Design to surface and mitigate risks to provide AI that is responsible and trustworthy."
AI Safety and Security Taxonomy
Framework
Cisco's Robust Intelligence
Private Company
San Francisco
California
United States
"We’re pleased to provide the first AI threat taxonomy that combines security and safety risks. AI security is concerned with protecting sensitive data and computing resources from unauthorized access or attack, whereas AI safety is concerned with preventing harms caused by unintended consequences of an AI application by its designer. Both present business risk which can result in financial, reputational, and legal ramifications. Mitigating these threats requires a novel, comprehensive approach to AI application security."
AWS Responsible AI
Framework
AWS - Amazon Web Services
Public Company
Seattle
Washington
United States
December
2024
"This document shares some recommendations that can be used across four major phases of the AI lifecycle: design, develop, deploy, and operate. The field of responsible AI is a rapidly developing area, so these recommendations should be viewed as a starting point and not the final answer. We encourage readers to consider the spirit and intent behind the recommendations. Responsible AI requires a shared commitment between developers, deployers, and end users of AI systems."
Guiding Principles for Trustworthy AI
Framework
NVIDIA
Public Company
Santa Clara
California
United States
February
2024
"We believe AI should respect privacy and data protection regulations, operate in a secure and safe way, function in a transparent and accountable manner, and avoid unwanted biases and discrimination. We are committed to safe and trustworthy AI, in line with the White House Voluntary Commitments and other global AI Safety initiatives."
Frontier Safety Framework
Framework
Google DeepMind
Private Company
London
England
May
2024
"The Frontier Safety Framework is our first version of a set of protocols that aims to address severe risks that may arise from powerful capabilities of future foundation models. In focusing on these risks at the model level, it is intended to complement Google’s existing suite of AI responsibility and safety practices, and enable AI innovation and deployment consistent with our AI Principles. In the Framework, we specify protocols for the detection of capability levels at which models may pose severe risks (which we call “Critical Capability Levels (CCLs)”), and articulate a spectrum of mitigation options to address such risks. We are starting with an initial set of CCLs in the domains of Autonomy, Biosecurity, Cybersecurity, and Machine Learning R&D. Risk assessment in these domains will necessarily involve evaluating cross-cutting capabilities such as agency, tool use, and scientific understanding. We will be expanding our set of CCLs over time as we gain experience and insights on the projected capabilities of future frontier models."
Llama Responsible Use Guide
Framework
Meta
Public Company
Menlo Park
California
United States
September
2023
"This guide is a resource for developers that outlines common approaches to building responsibly at each level of an LLM-powered product. It covers best practices and considerations that developers should evaluate in the context of their specific use case and market. It also highlights some mitigation strategies and resources available to developers to address risks at various points in the system. These best practices should be considered holistically because strategies adopted at one level can impact the entire system."
Preparedness Framework
Framework
OpenAI
Private Company
San Francisco
California
United States
December
2023
"We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be. To help address this gap, we are introducing our Preparedness Framework, a living document describing OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models."
AI Governance Platform
Tool
Holistic AI
Private Company
San Jose
California
United States
"Take command of your AI ecosystem. A 360 AI Governance solution for AI trust, risk, security, and compliance that empowers companies to adopt AI at scale"
Responsible Scaling Policy
Framework
Anthropic
Private Company
San Francisco
California
United States
October
2024
"In September 2023, we released our Responsible Scaling Policy (RSP), a public commitment not to train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels. We are now updating our RSP to account for the lessons we’ve learned over the last year. This updated policy reflects our view that risk governance in this rapidly evolving domain should be proportional, iterative, and exportable."
The Current State of AI Governance
Report
Babl AI
Private Company
Iowa City
Iowa
United States
March
2023
"As AI, machine learning algorithms, and algorithmic decision systems (ADS) continue to permeate every aspect of our lives and our society, the question of AI governance becomes exceedingly important. From racially biased healthcare algorithms to AI-enabled targeting decisions and from opaque and biased hiring algorithms to self-driving cars, the potential for AI and ADS to cause harm and infringe on both individual and group rights is significant. This is why increasingly more regulations are being proposed to audit or evaluate the impacts of algorithms that make or contribute to morally and legally consequential decisions. Alongside this increase in regulation, there has been a significant uptick in interest regarding the internal governance of AI. Organizations and institutions both large and small, nonprofit and for-profit, private and public, have begun creating and implementing governance tools and structures to ensure: (a) compliance with upcoming regulation, (b) minimization of reputational and financial risks of bad algorithms, and (c) safety and adherence to ethical standards for the responsible use of AI. This report examines the current state of internal governance structures and tools across organizations, both in the private and public sectors and in large and small organizations. This report provides one of the first robust and broad insights into the state of AI governance in the United States and Europe."
Stanford Center for AI Safety
Organization
Stanford Center for AI Safety
University
Stanford
California
United States
January
2021
"The mission of the Stanford Center for AI Safety is to develop rigorous techniques for building safe and trustworthy AI systems and establishing confidence in their behavior and robustness, thereby facilitating their successful adoption in society."
Regulating Under Uncertainty: Governance Options for Generative AI
Framework
Stanford Cyber Policy Center
University
Stanford
California
United States
September
2024
"The revolution underway in the development of artificial intelligence promises to transform the economy and all social systems. It is difficult to think of an area of life that will not be affected in some way by AI, if the claims of the most ardent of AI cheerleaders prove true. Although innovation in AI has occurred for many decades, the two years since the release of ChatGPT have been marked by an exponential rise in development and attention to the technology. Unsurprisingly, governmental policy and regulation has lagged behind the fast pace of technological development. Nevertheless, a wealth of laws, both proposed and enacted, have emerged around the world. The purpose of this report is to canvas and analyze the existing array of proposals for governance of generative AI."
RegLab
Organization
Stanford
University
Stanford
California
United States
"RegLab partners with government agencies to design and evaluate programs, policies, and technologies that modernize government. We are an interdisciplinary team of legal experts, data scientists, social scientists, and engineers who are passionate about building an evidence base and high impact demonstration projects for better government."
MIT Policy Briefs on AI Governance
Framework
MIT - Massachusetts Institute of Technology
University
Cambridge
Massachusetts
United States
November
2023
"This policy brief is motivated by two objectives: 1. Maintaining U.S. AI leadership – which is vital to economic advancement and national security – while recognizing that AI, if not properly overseen, could have substantial detrimental effects on society (including compromising economic and national security interests). 2. Achieving broadly beneficial deployment of AI across a wide variety of domains. Beneficial AI requires prioritizing: security (against dangers such as deep fakes); individual privacy and autonomy (preventing abuses such as excessive surveillance and manipulation); safety (minimizing risks created by the deployment of AI, particularly in already regulated areas such as health, law and finance); shared prosperity (deploying AI in ways that create broadly accessible opportunities and gains from AI); and democratic and civic values (deploying AI in ways that are in keeping with societal norms)."
The Oxford Handbook of AI Ethics
Framework
Oxford Academic
University
Oxford
Oxfordshire
England
July
2020
"This book explores the intertwining domains of artificial intelligence (AI) and ethics—two highly divergent fields which at first seem to have nothing to do with one another."
The AI Risk-Management Standards Profile for General-Purpose AI Systems (GPAIS) and Foundation Models
Framework
UC Berkeley Center for Long-Term Cybersecurity
University
Berkeley
California
United States
November
2023
"This document provides an AI risk-management standards Profile, or a targeted set of risk-management practices or controls specifically for identifying, analyzing, and mitigating risks of GPAIS. This Profile document is designed to complement the broadly applicable guidance in the NIST AI Risk Management Framework (AI RMF) or a related AI risk-management standard such as ISO/IEC 23894."
AI Safeguard
Tool
Holistic AI
Private Company
San Jose
California
United States
"Shield against generative AI risk. Harness the power of LLMs without security or efficacy concerns"
UC Berkeley CLTC AI Security Initiative
Framework
UC Berkeley Center for Long-Term Cybersecurity
University
Berkeley
California
United States
July
2022
"As the capabilities of AI systems increase, we are experiencing a dramatic shift in the global security landscape. For all their benefits, AI systems introduce new vulnerabilities and can yield dangerous outcomes — from the automation of cyberattacks to disinformation campaigns and new forms of warfare. AI is expected to contribute more than $15 trillion to the global economy by 2030, but these gains are currently poised to widen inequalities, stoke social tensions, and motivate dangerous national competition. The AI Security Initiative works across technical, institutional, and policy domains to support trustworthy development of AI systems today and into the future."
Ethics and Governance of AI
Framework
Berkman Klein Center for Internet & Society at Harvard University
University
Cambridge
Massachusetts
United States
July
2018
"The rapidly growing capabilities and increasing presence of AI-based systems in our lives raise pressing questions about the impact, governance, ethics, and accountability of these technologies around the world. How can we narrow the knowledge gap between AI “experts” and the variety of people who use, interact with, and are impacted by these technologies? How do we harness the potential of AI systems while ensuring that they do not exacerbate existing inequalities and biases, or even create new ones?"
Responsible Generative AI: Accountable Technical Oversight
Framework
Berkman Klein Center for Internet & Society at Harvard University
University
Cambridge
Massachusetts
United States
May
2023
"Drawing on years of the center and community’s work on the governance of AI technologies, the Berkman Klein Center is exploring mechanisms for enabling accountable technical oversight of generative AI. Critical topics for generative AI to be explored include: new developments in harms and their impacts, balancing transparency and security in open research, and how to enable meaningful technical oversight within the nascent regulatory landscape. This work will surface and synthesize key themes and questions that regulators and independent technical auditors should understand and be prepared to address. "
AI And Inclusion
Framework
Berkman Klein Center for Internet & Society at Harvard University
University
Cambridge
Massachusetts
United States
February
2018
"The AI and Inclusion track will foster the design and deployment of AI to benefit all members of society, including traditionally underserved communities. It will advance these objectives through research, learning, education, and engagement across local and global communities to close existing digital divides and participation gaps."
Principled Artificial Intelligence
Framework
Berkman Klein Center for Internet & Society at Harvard University
University
Cambridge
Massachusetts
United States
January
2020
"The rapid spread of artificial intelligence (AI) systems has precipitated a rise in ethical and human rights-based frameworks intended to guide the development and use of these technologies. Despite the proliferation of these "AI principles," there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends. To that end, this white paper and its associated data visualization compare the contents of thirty-six prominent AI principles documents side-by-side. This effort uncovered a growing consensus around eight key thematic trends: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. Underlying this “normative core,” our analysis examined the forty-seven individual principles that make up the themes, detailing notable similarities and differences in interpretation found across the documents. In sharing these observations, it is our hope that policymakers, advocates, scholars, and others working to maximize the benefits and minimize the harms of AI will be better positioned to build on existing efforts and to push the fractured, global conversation on the future of AI toward consensus."
Responsible AI
Framework
Carnegie Mellon University
University
Pittsburgh
Pennsylvania
United States
April
2022
"Artificial intelligence (AI) is already impacting many aspects of people’s lives and society at large. At Carnegie Mellon University (CMU), we believe that AI must be designed, developed, and deployed responsibly to ensure accountability and transparency, and lead toward a more just and equitable world."
Toward AI Accountability: Policy Ideas for Moving Beyond a Self-Regulatory Approach
Framework
Carnegie Mellon University
University
Pittsburgh
Pennsylvania
United States
January
2023
"The U.S. Government must strengthen its commitment to the responsible development and use of Artificial Intelligence (AI) by implementing more holistic and proactive regulatory policies and developing new industry incentives and enforcement mechanisms. Responsible development and use of AI has transformative potential to enhance human capabilities, bolster economic growth, and increase quality of life for all. However, existing efforts to build responsible AI systems have been largely reactive and ad-hoc in nature. Enhanced USG efforts should complement and advance the mission of the National AI Initiative and Federal agencies to lead the world in the development and use of trustworthy AI in the public and private sectors. Such efforts must articulate that economic growth can be achieved through AI systems that provide real benefits to end-users while attending to issues of equity and harm reduction."
AI Tracker
Tool
Holistic AI
Private Company
San Jose
California
United States
"Stay informed on the evolving AI landscape. Your all-in-one guide for maximizing the benefits of AI whilst minimizing financial, legal, and reputational risk."
AI Audits
Tool
Holistic AI
Private Company
San Jose
California
United States
"Establish trust in your AI systems. Conduct audits of your AI systems to showcase the trustworthiness of your technology"
The Cambridge Handbook of Responsible Artificial Intelligence
Framework
University of Cambridge
University
Cambridge
England
October
2022
"In the past decade, artificial intelligence (AI) has become a disruptive force around the world, offering enormous potential for innovation but also creating hazards and risks for individuals and the societies in which they live. This volume addresses the most pressing philosophical, ethical, legal, and societal challenges posed by AI. Contributors from different disciplines and sectors explore the foundational and normative aspects of responsible AI and provide a basis for a transdisciplinary approach to responsible AI. This work, which is designed to foster future discussions to develop proportional approaches to AI governance, will enable scholars, scientists, and other actors to identify normative frameworks for AI to allow societies, states, and the international community to unlock the potential for responsible innovation in this critical field."
Raising the Standard of AI Products
Framework
University of Edinburgh
University
Edinburgh
Scotland
1985
"We propose a mechanism for the promotion of high-standards in commercial Artificial Intelligence products, namely an association of companies which would regulate their own membership using a code of practice and the precedents set by previous cases. Membership would provide some assurance of quality. We argue the benefits of such a mechanism, and discuss some of the details including the proposal of a code of practice. This paper is intended as a vehicle for discussion rather than as the presentation of a definitive solution."
Use Artificial Intelligence Intelligently
Framework
University of Toronto
University
Toronto
Ontario
Canada
October
2024
"Artificial intelligence, the use of computers to perform tasks that require intelligence in humans, has greatly improved in recent years. Old expectations about what computers can and can’t do must now be continually updated as computational capacity quickly evolves to perform increasingly more complex tasks. Since 2022, generative AI (the use of AI techniques to generate high-quality text, image or video content) has improved dramatically, and this rapid improvement seems to be accelerating. The usefulness of generative AI continues to increase and it is being built into more systems. It is likely that soon, most systems will include at least some AI components."
Montréal Declaration for a Responsible Development of Artificial Intelligence
Framework
University of Montreal
University
Montreal
Quebec
Canada
2018
"The Montréal Declaration for responsible AI development has three main objectives: 1. Develop an ethical framework for the development and deployment of AI; 2. Guide the digital transition so everyone benefits from this technological revolution; 3. Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development."
Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)
Standard
IEEE - Institute for Electrical and Electronics Engineers
Non-Profit
Piscataway
New Jersey
United States
September
2020
"The goal of The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) is to provide the world’s first (based on our research) specification and body of its kind to enable a badge or mark for A/IS products, services and systems. Specifically, ECPAIS will enable evaluation based on the processes and outcomes of an organization’s products/services and systems using a risk-based approach."
ISO/IEC 42001:2023
Standard
ISO - International Organization for Standardization
Non-Profit
Geneva
Switzerland
2023
"ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems."
ISO/IEC 23894:2023 IT-AI-Guidance on risk management
Standard
ISO - International Organization for Standardization
Non-Profit
Geneva
Switzerland
2023
"This document provides guidance on how organizations that develop, produce, deploy or use products, systems and services that utilize artificial intelligence (AI) can manage risk specifically related to AI. The guidance also aims to assist organizations to integrate risk management into their AI-related activities and functions. It moreover describes processes for the effective implementation and integration of AI risk management."
ISO 31000, Risk Management Guidelines
Standard
ISO - International Organization for Standardization
Non-Profit
Geneva
Switzerland
2018
"ISO 31000 is an international standard that provides principles and guidelines for risk management. It outlines a comprehensive approach to identifying, analyzing, evaluating, treating, monitoring and communicating risks across an organization."
AI Risks - Check List for AI Risks Management
Standard
CEN - European Committee for Standardization
Non-Profit
Bruxelles
Belgium
"This document provides a check list of risk criteria for assessment guidance as well as risk events and their assessment for any system using AI. It does not offer an explicit method or solution, but rather a set of criteria and possibly measures and contingency plan structure. Detailed examples of risks, harms and possible countermeasures are included in annex. This document is applicable by all types of organizations including SMEs, large enterprises, public administration etc."
CSET AI Harm Framework
Framework
CSET - Center for Security and Emerging Technology at Georgetown University's Walsh School of Foreign Service
University
Washington, D.C.
United States
July
2023
"Real-world harms caused by the use of AI technologies are widespread. Tracking and analyzing them improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are deployed. This report presents a standardized conceptual framework for defining, tracking, classifying, and understanding harms caused by AI. It lays out the key elements required for the identification of AI harm, their basic relational structure, and definitions without imposing a single interpretation of AI harm. The brief concludes with an example of how to apply and customize the framework while keeping its modular structure."
CSET AI Triad
Framework
CSET - Center for Security and Emerging Technology at Georgetown University's Walsh School of Foreign Service
University
Washington, D.C.
United States
August
2020
"Each part of the triad offers its own policy levers. Algorithmic progress depends on a nation acquiring and developing talented machine learning researchers. Larger and better datasets require tricky policy choices involving bias, privacy, and cybersecurity. Computing power can provide a point of leverage for export controls in foreign policy, as well as a bottleneck for AI research at home. In order to judiciously wield the levers available in AI policy, policymakers must first understand the technology itself and how it will reshape national security. The concept of the AI triad is one framework for doing so."
The Policy Handbook
Framework
CSET - Center for Security and Emerging Technology at Georgetown University's Walsh School of Foreign Service
University
Washington, D.C.
United States
June
2023
"This brief aims to provide a framework for a more systems-oriented technology and national security strategy. We begin by identifying and discussing the tensions between three strategic technology and national security goals: 1. Driving technological innovation. 2. Impeding adversaries’ progress. 3. Promoting safe, values-driven deployment."
Strengthening Resilience to AI Risk: A guide for UK policymakers
Framework
CETaS - Centre for Emerging Technology and Security at The Alan Turing Institute
Non-Profit
London
England
August
2023
"The Centre for Emerging Technology and Security (CETaS) is a research centre based at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence. The Centre's mission is to inform UK security policy through evidence-based, interdisciplinary research on emerging technology issues."
The Zero Trust AI Governance Framework
Framework
AI Now Institute
Non-Profit
New York City
New York
United States
August
2023
"Indeed, a closer examination of the regulatory approaches they’ve embraced – namely ones that forestall action with lengthy processes, hinge on overly complex and hard-to-enforce regimes, and foist the burden of accountability onto those who have already suffered harm – informed the three overarching principles of this Zero Trust AI Governance framework: 1. Time is of the essence – start by vigorously enforcing existing laws. 2. Bold, easily administrable, bright-line rules are necessary. 3. At each phase of the AI system lifecycle, the burden should be on companies to prove their systems are not harmful."
Risk Management Framework + expert certifications
Framework
ForHumanity
Non-Profit
Armonk
New York
United States
February
2022
"Governance, Risk Management and Compliance (GRC) can be a confusing term in the AI, algorithmic and autonomous systems (AAA Systems) space. Sometimes it is thought of as a function, other times as a process and still other times including assurance and performance management. ForHumanity’s risk management framework and processes cover the Governance, Risk management and compliance aspects of GRC with Ethics, Bias, Privacy, Trust and Cybersecurity as key pillars (reflecting instead negative impacts tohumans as the focal point), regardless of the organization’s silos."
Ethics & Algorithms Toolkit (Beta)
Framework
GovEx - Bloomberg Center for Government Excellence at Johns Hopkins University
University
Baltimore
Maryland
United States
September
2020
"Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk. GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit for cities to use to help them understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them."
AI Regulator's Toolbox
Summary
Adam Jones
Independent
London
England
August
2024
"This summary explores specific practices addressing risks from advanced AI systems. Practices are grouped into categories based on where in the AI lifecycle they best ‘fit’ - although many practices are relevant at multiple stages. Within each group, practices are simply sorted alphabetically."
OECD/GPAI AI Principles
Framework
OECD - Organization for Economic Co-operation and Development
Government
Paris
France
May
2024
"The OECD AI Principles are the first intergovernmental standard on AI. They promote innovative, trustworthy AI that respects human rights and democratic values. Adopted in 2019 and updated in 2024, they are composed of five values-based principles and five recommendations that provide practical and flexible guidance for policymakers and AI actors."
AI, Data Governance and Privacy
Framework
OECD - Organization for Economic Co-operation and Development
Government
Paris
France
June
2024
"The report “AI, data governance, and privacy: Synergies and areas of international co-operation” explores the intersection of AI and privacy and ways in which relevant policy communities can work together to address related risks, especially with the rise of generative AI. It highlights key findings and recommendations to strengthen synergies and areas of international co-operation on AI, data governance and privacy"
Framework for the Classification of AI Systems
Framework
OECD - Organization for Economic Co-operation and Development
Government
Paris
France
February
2022
"A user-friendly framework for policy makers, regulators, legislators and others to characterise AI systems for specific projects and contexts. The framework links AI system characteristics with the OECD AI Principles, the first set of AI standards that governments pledged to incorporate into policy making and promote the innovative and trustworthy use of AI."
A Taxonomy of Malicious ICT Incidents
Framework
UNIDIR - United Nations Institute for Disarmament Research
Government
Geneva
Switzerland
2022
"The UNIDIR Taxonomy of Malicious ICT Incidents is a tool that provides the multistakeholder community with an easy-to-read infographic that can help in analysing malicious ICT incidents. It is designed to work towards a baseline of knowledge and common understanding, which could help the international community to build confidence through increased information-sharing about malicious ICT incidents."
NATO AI Strategy
Framework
NATO - North Atlantic Treaty Organization
Government
International
July
2024
"Amid the growing global debate over advanced artificial intelligence (AI) systems such as generative AI, the Hiroshima AI Process was launched by the G7 under Japan’s presidency in May 2023, with the aim of promoting safe, secure, and trustworthy AI. This article delves into the significance of its output, the Hiroshima AI Process Comprehensive Policy Framework—the world’s first international effort toward that end."
Hiroshima Process International Guiding Principles for All AI Actors (Part of the G7 Hiroshima AI Process)
Framework
G7 - The Group of Seven
Government
Hiroshima
Japan
May
2023
"The following 11 principles of the “Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems” should be applied to all AI actors when and as relevant and appropriate, in appropriate forms, to cover the design, development, deployment, provision and use of advanced AI systems, recognizing that some elements are only possible to apply to organizations developing advanced AI systems."
Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems (Part of the G7 Hiroshima AI Process)
Framework
G7 - The Group of Seven
Government
Hiroshima
Japan
May
2023
"On the basis of the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI systems, the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems (henceforth "advanced AI systems")."
The Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems (Part of the G7 Hiroshima AI Process)
Framework
G7 - The Group of Seven
Government
Hiroshima
Japan
May
2023
"The Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems aims to promote safe, secure, and trustworthy AI worldwide and will provide guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems (henceforth "advanced AI systems"). Organizations may include, among others, entities from academia, civil society, the private sector, and the public sector."
G7 Hiroshima Process on Generative Artificial Intelligence: Towards a Common G7 Understanding on Generative AI (Part of the G7 Hiroshima AI Process)
Framework
G7 - The Group of Seven
Government
Hiroshima
Japan
September
2023
"This document was prepared by the Organisation for Economic Co-operation and Development (OECD) Directorate for Science Technology and Innovation (STI) for the 2023 Japanese G7 Presidency and the G7 Digital and Tech Working Group, to inform discussions during the G7 Hiroshima Artificial Intelligence Process and the related interim virtual Ministers' Meeting on generative artificial intelligence on 7 September 2023. The opinions expressed and arguments employed herein do not necessarily reflect the official views of the member countries of the OECD or the G7."
AI Governance Day - From Principles to Implementation 2024 Report
Report
ITU - International Telecommunication Union
Non-Profit
Geneva
Switzerland
May
2024
""AI Governance Day" tackled the step of moving from regulatory frameworks to implementation. How are countries and regions navigating the dual objectives of maximizing AI's benefits while minimizing its risks? Participants shared experiences on what works, what does not work (yet), identified hurdles, and discussed what needs to happen next on the path towards effective regulatory implementation."
Guidelines for AI and Shared Prosperity - Tools For Improving AI’s Impact On Jobs
Framework
Partnership on AI
Non-Profit
San Francisco
California
United States
June
2023
"This is the first version of the Guidelines, developed under close guidance from a multidisciplinary AI and Shared Prosperity Initiative’s Steering Committee and with direct engagement of frontline workers from around the world experiencing the introduction of AI in their workplaces. The Guidelines are intended to be updated as the AI technology evolves and presents new risks and opportunities, as well as in response to stakeholder feedback and suggestions generated through workshops, testing, and implementation."
The AILuminate Assessment Standard
Framework
MLCommons
Non-Profit
San Francisco
California
United States
November
2024
"The AILuminate Assessment Standard is designed to be a complete standard, as it provides a precise set of principles and guidelines to annotate AI systems responses, in addition to a hazard taxonomy for generative AI, supporting definitions, and implementation guidance."
The Responsible AI Certification White Paper
Certification
Responsible AI Institute
Non-Profit
Austin
Texas
United States
October
2022
"The RAII Certification Program is based on a maturity assessment that evaluates AI systems. Recognizing that not all AI systems are the same, this program tailors its tests to specific industries and functions. RAII’s initial focus industries and functions are: finance, health care, HR, and procurement."
The Responsible AI Certification Program Guidebook
Certification
Responsible AI Institute
Non-Profit
Austin
Texas
United States
June
2022
"This guidebook contains all information pertaining to RAII’s Certification Program such as how it was developed, how it works, and the policies and procedures that must be upheld by those who earn and audit the certification. Each of these, and all other sections were written in accordance with the required scheme development criteria in IAF (2022) and ISO (2019) scheme development documents"
AI’s Impact on Our Sustainable Future: A Guiding Framework for Responsible AI Integration Into ESG Paradigms
Framework
Responsible AI Institute
Non-Profit
Austin
Texas
United States
August
2024
"The convergence of artificial intelligence (AI) and Environmental, Social, and Governance (ESG) presents a complex and emerging terrain for organizations. Increasing use of AI complicates making informed and sustainable decisions by layering in additional existing and potential risks. In today’s environment, businesses face a pressing need to understand and leverage their synergies for positive impact, while managing associated risks. This white paper recognizes and addresses the necessity of adopting a comprehensive approach to AI investments and adoption, while stressing the evaluation of their effects on ESG objectives."
AI Policy Template - Build Your Foundational Organizational AI Policy
Framework
Responsible AI Institute
Non-Profit
Austin
Texas
United States
June
2024
"This Template [AI Policy] includes various provisions throughout that must be reviewed and, potentially, revised based on the specifics of your business and your use of artificial intelligence technologies. You are advised to confirm that all pre-populated information is accurate and appropriate for your business."
Best Practices for AI Governance Structures: Executive Oversight and Internal Review
Framework
Responsible AI Institute
Non-Profit
Austin
Texas
United States
April
2024
"With the recent passage of the EU AI Act, looming US regulation, and other global developments happening daily, now is the time to establish or refine your organization’s AI governance. The Responsible AI Institute has created a comprehensive guide to help you navigate the complex landscape of AI governance and drive change within your organization."
AI and Algorithm Auditor Certification Program: Essential Tools for AI and Algorithm Auditing
Certification
Babl AI
Private Company
Iowa City
Iowa
United States
May
2023
"The AI & Algorithm Auditor Certification Program, developed by BABL AI, equips professionals with the skills to evaluate and ensure the responsible development and deployment of AI systems. This certification is globally recognized for its rigor and emphasis on the ethical, technical, and governance challenges of algorithmic systems."
EU AI Act - Quality Management System: A certification course for risk and compliance professionals
Certification
Babl AI
Private Company
Iowa City
Iowa
United States
May
2024
"Introducing our comprehensive certificate program designed for risk and compliance professionals who need to manage a quality management system in compliance with the EU AI Act. This program combines four essential courses to provide a deep dive into the critical aspects of AI risk management and regulatory compliance."
Catalogue of Tools & Metrics for Trustworthy AI
Database
OECD - Organization for Economic Co-operation and Development
Government
Paris
France
December
2022
"These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe."
Credo AI Governance Platform
Tool
Credo AI
Private Company
Palo Alto
California
United States
March
2020
An AI governance platform that helps companies interrogate their AI systems. Credo AI is "on a mission to empower organizations to create AI with the highest ethical standards. Their vision is to enable continuous human oversight of the frontier technologies and democratize AI governance/audit. They are building an intelligent SaaS to monitor, measure and manage AI introduced risks."
Reducing AI Harms and Lawsuits through AI Governance, Risk Management, and Compliance
Framework
Holistic AI
Private Company
San Jose
California
United States
May
2023
"Artificial intelligence is revolutionising industries worldwide, saving time and money, and removing the burden on human workers. However, this is not without risks, as demonstrated by several harms and lawsuits that have been observed in recent years. This white paper explores the risks of using AI, with a focus on HR Tech, Insurtech, biometrics, fintech, healthcare, housing, and social media and generative AI, as well as how AI Governance, Risk, and Compliance can make AI safer and increase trust."
Practical Risk Management in AI: Auditing and Assurance
Framework
Holistic AI
Private Company
San Jose
California
United States
"This paper maps out the auditing process, explaining its verticals and their regulatory significance. We also look at the current financial regulation, likely future financial regulation, and the current proposals for AI regulation to describe how these could and should operate effectively together. Finally, we provide a case study of an audit in financial services: testing a credit scoring system for bias based on protected characteristics."
DIY AI Governance: A Starting Guide
Framework
Holistic AI
Private Company
San Jose
California
United States
September
2023
"A downloadable guide giving an overview of the key aspects of AI GRC, one of the key points is to promote the HAI platform as the solution to the challenges of AI GRC throughout the guide."
LLM Auditing Guide: What It Is, Why It's Necessary, and How to Execute It
Framework
Holistic AI
Private Company
San Jose
California
United States
"Collectively, these auditing methods offer a comprehensive framework for evaluating LLM behaviour, addressing technical, ethical, and societal concerns, as well as guiding refinements to ensure responsible and trustworthy AI deployment."
Towards Auditing Large Language Models: Improving Text-based Stereotype Detection
Framework
Holistic AI
Private Company
San Jose
California
United States
"LLMs can reinforce biases and stereotypes, affecting areas like political polarization, racial bias in legal systems, and opening up organizations to regulatory, financial, and reputational risk. Existing LLM auditing frameworks often separate bias benchmarks from text-based stereotype detection, creating a gap in understanding the interaction between these elements."
Requirements for High-Risk AI Systems
Framework
Holistic AI
Private Company
San Jose
California
United States
"The EU AI Act (EU AIA) proposes a “risk-based approach” for regulating AI systems, where systems are classed as having (1) low or minimal risk, (2) limited risk, (3) high-risk, or (4) unacceptable risk."
AI Ethics White Paper
Framework
Holistic AI
Private Company
San Jose
California
United States
"In this white paper, which is part of our Holistic AI thought experiment series, we pick up on the responsibility of the AI ethics community - or more specifically ‘AI ethicists’, by advocating that the role of the AI ethicist in the public debate comes with a responsibility to educate and inform (to generate questions and possibilities), rather than to lead and dictate (to provide answers and ideology)."
Amazon SageMaker
Tool
AWS - Amazon Web Services
Public Company
Seattle
Washington
United States
November
2017
"Bringing together widely-adopted AWS machine learning and analytics capabilities, Amazon SageMaker delivers an integrated experience for analytics and AI with unified access to all your data. Collaborate and build faster from a unified studio (preview) using familiar AWS tools for model development, generative AI, data processing, and SQL analytics, accelerated by Amazon Q Developer, the most capable generative AI assistant for software development. Access all your data whether it’s stored in data lakes, data warehouses, third-party or federated data sources, with governance built-in to meet enterprise security needs."
DataRobot
Tool
DataRobot
Private Company
Boston
Massachusetts
United States
June
2012
"DataRobot delivers the industry-leading AI applications and platform that maximize impact and minimize risk for your business."
Arthur Bench
Tool
Arthur AI
Private Company
New York City
New York
United States
August
2023
"The Most Robust Way to Evaluate LLMs. Bench is our solution to help teams evaluate the different LLM options out there in a quick, easy and consistent way."
Arthur Shield
Tool
Arthur AI
Private Company
New York City
New York
United States
May
2023
"The First Firewall for LLMs. Shield is our solution to help companies deploy their LLMs confidently and safely."
Arthur Scope
Tool
Arthur AI
Private Company
New York City
New York
United States
December
2023
"The Complete AI Performance Solution. With Scope, enterprise teams can optimize ML operations and performance, delivering better results across LLM, tabular, CV, and NLP models."
Arthur Chat
Tool
Arthur AI
Private Company
New York City
New York
United States
December
2023
"Fast, Safe, Custom AI for Business. As a completely turnkey AI chat platform built on top of your enterprise documents and data, Arthur Chat is the fastest way to unlock the value of your LLM."
AI Auditing Definitions
Framework
IAAA - International Association of Algorithmic Auditors
Non-Profit
International
October
2024
"This report outlines essential aspects of AI auditing, including scope, frequency, metrics, explainability, redress mechanisms, auditor independence, and reporting standards. By providing this resource, the IAAA reaffirms its commitment to advancing the field of AI auditing and promoting the development of trustworthy AI systems."
Auditors Code of Conduct
Framework
IAAA - International Association of Algorithmic Auditors
Non-Profit
International
"The Code of Conduct of the International Association of Algorithmic Auditors (IAAA) sets forth the ethical and professional standards expected from its members. This Code serves as a guide to integrity, transparency, and respect within the field of algorithmic auditing, reflecting the core values and mission of the IAAA."
FLI AI Safety Index 2024
Briefing
Future of Life Institute
Non-Profit
Campbell
California
United States
December
2024
"Rapidly improving AI capabilities have increased interest in how companies report, assess and attempt to mitigate associated risks. The Future of Life Institute (FLI) therefore facilitated the AI Safety Index, a tool designed to evaluate and compare safety practices among leading AI companies. At the heart of the Index is an independent review panel, including some of the world’s foremost AI experts. Reviewers were tasked with grading companies’ safety policies on the basis of a comprehensive evidence base collected by FLI. The index aims to incentivize responsible AI development by promoting transparency, highlighting commendable efforts, and identifying areas of concern."
Towards Effective Governance of Foundation Models and Generative AI
Report
The Future Society
Non-Profit
Boston
Massachusetts
United States
March
2024
"In our report, we share highlights from the panels, fireside chats, and other dialogues at the fifth edition of The Athens Roundtable and present 8 key recommendations that emerged through discussions."
Frontier Model Forum
Organization
Frontier Model Forum
Non-Profit
Washington, D.C.
United States
July
2023
"The Frontier Model Forum draws on the technical and operational expertise of its member companies to benefit the entire AI ecosystem, advancing AI safety research and supporting efforts to develop AI applications to meet society’s most-pressing needs."
An Overview of Catastrophic AI Risks
Overview
CAIS - Center for AI Safety
Non-Profit
San Francisco
California
United States
October
2023
"This paper provides an overview of the main sources of catastrophic AI risks, which we organize into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, highlighting how human factors and complex systems can increase the chances of catastrophic accidents; and rogue AIs, describing the inherent difficulty in controlling agents far more intelligent than humans."
An Overview of 11 Proposals for Building Safe Advanced AI
Overview
MIRI - Machine Intelligence Research Institute
Non-Profit
Berkeley
California
United States
December
2020
"This paper analyzes and compares 11 different proposals for building safe advanced AI under the current machine learning paradigm, including major contenders such as iterated amplification, AI safety via debate, and recursive reward modeling. Each proposal is evaluated on the four components of outer alignment, inner alignment, training competitiveness, and performance competitiveness, of which the distinction between the latter two is introduced in this paper. While prior literature has primarily focused on analyzing individual proposals, or primarily focused on outer alignment at the expense of inner alignment, this analysis seeks to take a comparative look at a wide range of proposals including a comparative analysis across all four previously mentioned components."
From Principles to Rules: A Regulatory Approach for Frontier AI
Framework
Centre for the Governance of AI
Non-Profit
Oxford
Oxfordshire
England
July
2024
"We recommend that policymakers should initially (1) mandate adherence to high-level principles for safe frontier AI development and deployment, (2) ensure that regulators closely oversee how developers comply with these principles, and (3) urgently build up regulatory capacity. Over time, the approach should likely become more rule-based. Our recommendations are based on a number of assumptions, including (A) risks from frontier AI systems are poorly understood and rapidly evolving, (B) many safety practices are still nascent, and (C) frontier AI developers are best placed to innovate on safety practices."
IAPS - Institute for AI Policy and Strategy
Organization
IAPS - Institute for AI Policy and Strategy
Non-Profit
Washington, D.C.
United States
2021
"The Institute for AI Policy and Strategy (IAPS) is a think tank of aspiring wonks working to understand and navigate the transformative potential of advanced AI. Our mission is to identify and promote strategies that maximize the benefits of AI for society and develop thoughtful solutions to minimize its risks. We aim to be humble yet purposeful: we’re all having to learn about AI very fast, and we’d love it if you could join us in figuring out what the future holds together."
CEN-CENELEC Focus Group Report: Road Map on Artificial Intelligence (AI)
Framework
CEN-CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21)
Non-Profit
Brussels
Belgium
September
2020
"The Focus Group has established an overall framework for European AI standardization, by developing a high-level vision (chapter 1.2). This vision is applicable for the whole AI ecosystem and aims at supporting the European AI industry and mitigate risks for European citizens."
CEN-CENELEC response to the EC White Paper on AI
Response
CEN-CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21)
Non-Profit
Brussels
Belgium
June
2020
"This paper is the official response from CEN-CENELEC on the EC White Paper on AI. It builds on a strong consensus of over 70 experts joined together in the CEN-CENELEC Focus Group on AI."
Ethics in the Age of Disruptive Technologies: An Operational Roadmap
Framework
ITEC - Institute for Technology, Ethics, and Culture at Santa Clara University
University
Santa Clara
California
United States
June
2023
"[O]ffers organizations a strategic plan to enhance ethical management practices, empowering them to navigate the complex landscape of disruptive technologies such as AI, machine learning, encryption, tracking, and others while upholding strong ethical standards."
The Presidio Recommendations on Responsible Generative AI
Framework
World Economic Forum
Non-Profit
Geneva
Switzerland
June
2023
"These 30 action-oriented recommendations aim to navigate AI complexities and harness its potential ethically. By implementing them, we can shape a more innovative, equitable, and prosperous future while mitigating risks."
Towards Unified Objectives for Self-Reflective AI
Framework
Medical University of Vienna
University
Vienna
Austria
May
2023
"Large language models (LLMs) demonstrate outstanding capabilities, but challenges remain regarding their ability to solve complex reasoning tasks, as well as their transparency, robustness, truthfulness and ethical alignment. We devise a model of objectives for steering and evaluating the reasoning of LLMs by unifying principles from several strands of preceding work: structured reasoning in LLMs, red-teaming / self-evaluation / self-reflection, AI system explainability, guidelines for human critical thinking, AI system security/safety, and ethical guidelines for AI. We identify and curate a list of 162 objectives from literature, and create a unified model of 39 objectives organized into seven categories: assumptions and perspectives, reasoning, information and evidence, robustness and security, ethics, utility, and implications. We envision that this resource can serve multiple purposes: monitoring and steering models at inference time, improving model behavior during training, and guiding human evaluation of model reasoning."
Policy Alignment on AI Transparency: Analyzing Interoperability of Documentation Requirements across Eight Frameworks
Framework
Partnership on AI
Non-Profit
San Francisco
California
United States
November
2024
"Partnership on AI’s Policy Alignment on AI Transparency conducts a comparative analysis of eight leading policy frameworks for foundation models, with a particular focus on documentation requirements, which are a critical lever for achieving transparency and safety."
Truthful AI: Developing and governing AI that does not lie
Framework
Future of Humanity Institute at Oxford
University
Oxford
Oxfordshire
England
October
2021
"Establishing norms or laws of AI truthfulness will require significant work to: (1) identify clear truthfulness standards; (2) create institutions that can judge adherence to those standards; and (3) develop AI systems that are robustly truthful. Our initial proposals for these areas include: (1) a standard of avoiding "negligent falsehoods" (a generalisation of lies that is easier to assess); (2) institutions to evaluate AI systems before and after real-world deployment; and (3) explicitly training AI systems to be truthful via curated datasets and human interaction. "
Strategic Implications of Openness in AI Development
Framework
Future of Humanity Institute at Oxford
University
Oxford
Oxfordshire
England
March
2016
"This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals)."
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
Framework
Future of Humanity Institute at Oxford
University
Oxford
Oxfordshire
England
February
2018
"This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats."