
AI and meaningful public input

Directions and aspirations for moving ahead

Active Involvement of the Public in AI Development and Decision Making

The UK's AI Safety Summit, held in November 2023, brought together experts and policymakers to discuss the future of artificial intelligence (AI). As Professor Hélène Landemore, Nigel Shadbolt, and John Tasioulas have argued, the summit highlighted the question of how genuine public deliberation and accountability will be brought into AI-related decision-making.

The summit's agenda focused on 'frontier' AI models: systems with newer, more powerful, and potentially dangerous capabilities. As these models become more prevalent, it is crucial to ensure that they are developed and deployed in ways that benefit society as a whole and minimise harm.

Experts and speakers at the summit and parallel fringe events emphasised the need for the inclusion of diverse voices from the public. Professor Noortje Marres pointed out that there was no mention of mechanisms for involving citizens and affected groups in the governance of AI in the official Summit communiqué, stating that at present AI is 'profoundly undemocratic'.

To address this, it is essential to adopt participatory, inclusive, and culturally sensitive frameworks that embed citizen perspectives throughout the AI lifecycle. Some key practices include:

  1. Adopting values-first, transparent, and accountable AI governance frameworks that emphasise diversity, accessibility, ethical use, and public oversight.
  2. Implementing phased participatory approaches, starting with stakeholder engagement and partnership formation, followed by needs assessment and trust-building, tailored implementation planning, and ongoing evaluation.
  3. Engaging diverse stakeholders across policy, provider, community, and civic levels, including often-overlooked groups such as patients and caregivers.
  4. Ensuring transparency and explainability of AI systems to build public trust, along with public oversight mechanisms and open communication about AI capabilities and limitations.
  5. Promoting AI literacy and education among policymakers, government staff, and the public.
  6. Incorporating local cultural norms, data sovereignty concerns, and governance models through decentralised and collaborative approaches.

These strategies help reconcile the diverse legal, ethical, and societal contexts found worldwide, allowing AI policymaking to be responsive, equitable, and trusted. The iterative, co-design nature of engagement ensures policies remain adaptable as technologies and regional needs evolve.

Addressing the harms of AI requires engagement with the perspectives of those who stand the greatest risk of being harmed. Yet few civil society organisations were invited to the summit, despite the significant impact of AI technologies on people and society. Research shows that people expect their diverse views to be taken seriously in legislative and oversight processes.

In addition, there are gaps in research conducted with underrepresented groups, those impacted by specific AI uses, and from countries outside of Europe and North America. A rapid evidence review titled 'What do the public think about AI?' was published before the summit, highlighting the need for meaningful public involvement.

AI uses that involve accessing government services, or that require health and biometric data, demand serious and long-lasting engagement with the public. The People's Panel on AI, a randomly selected, jury-style group of members of the public, recommended a system of AI governance for the UK that places citizens at the heart of decision-making.

In summary, meaningful public engagement across geographical regions hinges on inclusive participatory design, transparency, education, culturally attuned governance, and ongoing refinement with broad stakeholder input throughout AI policymaking and deployment. This approach not only strengthens public trust but also promotes ethical and equitable AI adoption globally.


  1. To address the potential risks and ensure the beneficial societal impact of AI technology, it is crucial to embrace transparent, diverse, inclusive, and culturally sensitive AI governance frameworks that prioritise public oversight, AI literacy, and ongoing engagement with a variety of stakeholders, including those often overlooked.
  2. As AI systems become more prevalent and their uses extend to sensitive areas such as government services and health data, it is essential to implement participatory, phased, and accountable strategies that prioritise citizen perspectives, education, and trust-building, and that adapt in response to regional needs and the ongoing evolution of the technology.
