AI Model Standardization Organizations and Initiatives: The Path Towards Reliable AI
Artificial Intelligence (AI) and Machine Learning (ML) technologies are transforming industries at an unprecedented pace. Companies in finance, healthcare, retail, and manufacturing are using AI and ML to drive innovation, improve their operations, and enhance customer experiences. However, as AI and ML become more mainstream, concerns around their reliability and interpretability are also increasing. Poorly designed models can lead to biased and unjust outcomes, misinterpretations of data, and unintended consequences. To address these challenges, various organizations and initiatives are working towards creating standardized frameworks for AI model development, deployment, and evaluation.
Here, we will explore some of the most notable AI model standardization organizations and initiatives, their goals and objectives, and their impact on the AI industry.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems was launched in 2016 as a collaborative platform for developing and advancing ethical norms and standards for AI systems. It brings together a diverse group of AI experts, policymakers, academics, and industry leaders from around the world, and its primary focus is creating Ethically Aligned Design (EAD) standards for AI systems.
The EAD standard framework is designed to help developers and stakeholders consider and address ethical considerations throughout the lifecycle of AI development, from design to deployment. It includes a range of ethical principles such as transparency, accountability, and explainability to ensure that AI systems are developed and deployed in a manner aligned with democratic values and human rights.
In addition to developing the EAD framework, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems also conducts educational programs, workshops, and conferences to raise awareness about AI ethics and promote responsible AI usage.
Partnership on AI
The Partnership on AI is a coalition of multinational corporations, nonprofits, and academic institutions working to advance AI research and its applications while ensuring its socially beneficial use. The partnership was established in 2016 and currently comprises over 100 organizations, including Apple, Facebook, Google, and Microsoft.
The Partnership on AI focuses on creating robust AI-based systems that contribute to society’s overall well-being, including developing AI standards that align with ethical principles such as transparency, fairness, and accountability. It also works to ensure continuous engagement with diverse communities and stakeholders to foster trust, build understanding, and minimize potential risks.
The partnership has built collaborations with policymakers, academics, and researchers to ensure that AI systems are deployed in a fair, transparent, and accountable manner. It has also set up working groups to examine issues such as bias in AI, ethical considerations in AI development, and the impact of AI on labor markets.
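Bias in AI, one of the working-group topics above, is often made concrete through simple group-level metrics. As an illustrative sketch, not a metric endorsed by the Partnership, the demographic parity difference compares positive-decision rates across two groups; all names and data below are hypothetical:

```python
# Minimal sketch: demographic parity difference, one common bias metric.
# All data below is illustrative, not drawn from any real system.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model decisions
    groups:      list of group labels (e.g. "A" or "B"), same length
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch handles exactly two groups"
    rates = []
    for g in labels:
        decisions = [p for p, grp in zip(predictions, groups) if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Toy example: group A receives a positive decision 3/4 of the time, group B 1/4.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A value near zero indicates the two groups receive positive decisions at similar rates; the threshold at which a gap counts as unacceptable is a policy question, which is exactly why such metrics feed into working-group discussions rather than settling them.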
Machine Learning Transparency and Accountability Initiative
The Machine Learning Transparency and Accountability Initiative (MLTA) was launched in 2018 as a project overseen by the Alan Turing Institute, the UK’s national institute for data science and AI research. The initiative aims to develop tools, best practices, and benchmarks for improving the transparency and interpretability of ML models.
The MLTA initiative has four key objectives: to promote the development of transparent and explainable ML models, to encourage the use of benchmarking and diagnostic tests for evaluating model reliability, to explore the ethical implications of ML models, and to increase public awareness about the importance of transparency and accountability in AI systems.
The initiative has published a set of transparency and accountability benchmarks for ML models, intended to give developers clear guidance on evaluating and improving model transparency and interpretability. It has also developed an open-source toolkit called Skater, which helps developers visualize and interpret model outputs and gain insight into how models arrive at their predictions.
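Skater's exact API aside, a representative technique that interpretability toolkits of this kind implement is permutation feature importance: shuffle one feature's values and measure how much model accuracy drops. Below is a minimal, self-contained sketch with a toy model; the names and data are hypothetical and do not reflect Skater's actual interface:

```python
import random

# Sketch of permutation feature importance, a model-agnostic
# interpretability technique. The "model" here is a toy stand-in.

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Drop in accuracy when one feature's column is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, column)]
    return baseline - accuracy(model, X_perm, y)

# Toy "model": predicts 1 iff feature 0 exceeds 0.5; feature 1 is ignored.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, 0, 0]

print(permutation_importance(model, X, y, feature_idx=0))  # decisive feature
print(permutation_importance(model, X, y, feature_idx=1))  # 0.0: feature is ignored
```

Shuffling a feature the model never consults leaves predictions unchanged, so its importance is exactly zero; a feature the model depends on shows a measurable accuracy drop. This is the kind of model-agnostic diagnostic the MLTA benchmarks aim to make routine.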
ISO/IEC JTC 1/SC 42
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) established Subcommittee 42 on Artificial Intelligence (SC 42) under their Joint Technical Committee 1 (JTC 1) in 2017. The committee focuses on developing a range of standards and guidelines to ensure that AI systems are designed and deployed in a reliable, safe, and trustworthy manner.
The committee’s work includes developing standards for data management, transparency and explainability in AI systems, and evaluating the accuracy and reliability of AI models. It also develops guidelines for AI system deployment and assessment to help ensure that systems are developed in compliance with democratic principles and human rights.
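Evaluating the reliability of AI models can be made concrete with a calibration check: a model's predicted confidences should match its observed accuracy. The expected calibration error (ECE) sketch below is purely illustrative, not a procedure prescribed by any SC 42 standard; bin counts and data are arbitrary:

```python
# Minimal sketch of expected calibration error (ECE) for binary
# probabilistic predictions. Illustrative only; the binning scheme
# and data are arbitrary choices, not a standardized procedure.

def expected_calibration_error(probs, labels, n_bins=5):
    """Weighted average gap between predicted confidence and observed accuracy."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    ece = 0.0
    for members in bins:
        if not members:
            continue
        avg_conf = sum(p for p, _ in members) / len(members)
        acc = sum(y for _, y in members) / len(members)
        ece += len(members) / len(probs) * abs(avg_conf - acc)
    return ece

# Well-calibrated toy case: of items predicted at 0.9, 90% are positive.
probs = [0.9] * 10
labels = [1] * 9 + [0]
print(expected_calibration_error(probs, labels))  # ~0.0
```

A model can be highly accurate yet badly calibrated (confident even when wrong), which is why accuracy and reliability are treated as separate evaluation targets.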
The work of ISO/IEC JTC 1/SC 42 has the potential to shape the global AI industry, as its standards and guidelines are likely to be adopted by companies and policymakers around the world.
Conclusion
AI model standardization organizations and initiatives are playing a critical role in advancing responsible AI development and deployment. The frameworks, best practices, and benchmarks they develop provide companies with clear guidance on how to build and evaluate AI systems in a robust, reliable, and accountable manner. They also foster trust and promote transparency through continuous engagement with diverse stakeholders and communities.
The impact of these organizations is broad and long-term, and their efforts are likely to influence AI development globally. As AI increasingly becomes a part of our everyday lives, it is essential to ensure that it is developed and deployed in a manner that is aligned with democratic values, human rights, and societal well-being. AI stakeholders should remain engaged in the work of these standardization organizations and push for responsible AI development and deployment practices.