The 36 member countries of the Organization for Economic Co-operation and Development (OECD) and six other countries formally adopted the first set of intergovernmental policy guidelines on Artificial Intelligence (AI) this month, agreeing to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.
Developed with guidance from an expert group of more than 50 members drawn from governments, academia, business, civil society, international bodies, the tech community and trade unions, the Principles comprise five values-based principles for the responsible deployment of trustworthy AI and five recommendations for public policy and international co-operation. They aim to guide governments, organizations and individuals in designing and running AI systems in a way that puts people's best interests first and ensures that designers and operators are held accountable for their proper functioning.
“Artificial Intelligence is revolutionizing the way we live and work and is offering extraordinary benefits for our societies and economies. Yet, it raises new challenges and is also fueling anxieties and ethical concerns. This puts the onus on governments to ensure that AI systems are designed in a way that respects our values and laws, so people can trust that their safety and privacy will be paramount,” said OECD Secretary-General Angel Gurría. “These Principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all.”
In summary, the OECD Principles on AI state that:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
- AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
The AI Principles have the backing of the European Commission, whose high-level expert group has produced Ethics Guidelines for Trustworthy AI, and they will be part of the discussion at the forthcoming G20 Leaders’ Summit in Japan. The OECD’s digital policy experts will build on the Principles in the months ahead to produce practical guidance for implementing them.
While not legally binding, existing OECD Principles in other policy areas have proved highly influential in setting international standards and helping governments to design national legislation. For example, the OECD Privacy Guidelines, which set limits to the collection and use of personal data, underlie many privacy laws and frameworks in the United States, Europe and Asia. The G20-endorsed OECD Principles of Corporate Governance have become an international benchmark for policy makers, investors, companies and other stakeholders working on institutional and regulatory frameworks for corporate governance.