Joint Statement on Artificial Intelligence and Human Rights

June 2025

We, the undersigned members and the observer of the Freedom Online Coalition (FOC), reaffirm our commitment to protecting human rights and fundamental freedoms, both online and offline. Artificial Intelligence (AI) holds both transformative and disruptive potential and can advance sustainable development. When AI complies with international human rights law and is governed responsibly throughout its lifecycle,¹ it can enhance efficiency, transparency, participation, and trust in democratic processes. At the same time, since the FOC’s 2020 Joint Statement,² rapid developments in AI have increased the scale and urgency of human rights risks and impacts.

Today, AI systems are used systematically to suppress dissent, manipulate public discourse, amplify gender-based violence, enable unlawful and arbitrary digital surveillance, and reinforce inequalities and discrimination. These trends are no longer isolated; in some instances, they are becoming embedded in governance and law enforcement systems, with fewer checks, less transparency, and greater cross-border and geopolitical impact.

At the same time, AI governance frameworks are emerging globally—through regional policy, legislation, international initiatives, and standard-setting bodies. This is a pivotal moment: decisions made now will shape how international law, including international human rights law, is taken into account throughout the lifecycle of AI systems. The FOC is committed to striving for frameworks that are not shaped by authoritarian interests or solely by commercial priorities, but are instead firmly rooted in, and in compliance with, international law, including international human rights law; developed responsibly through inclusive, multistakeholder processes; and designed to serve human needs and interests while respecting the full enjoyment of human rights and fundamental freedoms.

We welcome the progress made since 2020, including the UNGA Resolution 78/265³ on trustworthy AI, and the ongoing implementation and follow-up processes of the Global Digital Compact including the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI. We further welcome the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law. We seek to ensure that these processes remain focused on strengthening the protection of human rights online and offline, serving as inclusive platforms for dialogue and understanding. 

AI systems can support inclusive development, strengthen social and political participation and improve public service delivery when responsibly governed.⁴ Yet, if used without adequate safeguards and transparency—especially in high-impact areas such as health, education, social welfare and justice—they pose serious risks to human rights and fundamental freedoms, including freedom of expression, freedom of assembly, freedom of association, freedom of religion or belief, and the right to privacy.

We are also aware that the environmental impact of AI—including the energy and resource intensity of large-scale training of AI models—is potentially significant,⁵ with growing implications for sustainability and human rights, particularly in climate-vulnerable and resource-affected communities.

We stress that special attention must be paid to the specific rights of individuals and communities who are most at risk of AI-related harms,⁶ in particular women and girls in all their diversity. AI-related harms include a lack of access and representation in training data, and discrimination through algorithmic bias. Balanced training data and bias mitigation are key to supporting accuracy and fairness in AI systems. AI systems that advance, protect, and preserve linguistic and cultural diversity, taking into account multilingualism in their training data and throughout the AI system’s life cycle, should be promoted.

Furthermore, AI systems enable technology-facilitated gender-based violence through online harassment and image-based abuse. This includes the creation, possession, or sharing of, or threats to share, non-consensual intimate imagery. Misuse of AI systems for gender-based violence, arbitrary and unlawful surveillance, spreading disinformation and promoting hatred, and other forms of repression undermines democratic governance and public trust and threatens efforts to achieve gender equality.

We recognize that private actors play a central role in every part of the lifecycle of AI systems. They should take meaningful responsibility for respecting human rights, guided by frameworks such as the UN Guiding Principles on Business and Human Rights, and by incorporating safety-by-design principles into their design and governance models in order to identify, mitigate and prevent adverse human rights impacts linked to their activities. This is particularly important in contexts where respect for human rights and the rule of law is weak, and where the private sector may enable state repression and violations of human rights through digital tools and platforms. Even in democratic states, private technology companies can have an outsized impact on policy and influence on politics, highlighting the importance of requisite accountability frameworks.

States must abide by their obligations under international law to ensure human rights are fully respected and protected. States should adopt or maintain measures to ensure that the activities within the lifecycle of AI systems are consistent with obligations to respect, protect and promote human rights, recognizing that harm can arise not only from direct use, but also through indirect or cumulative impacts, including discrimination, de-skilling or cognitive atrophy. This also includes, where appropriate and relevant, the recognition and appropriate protection of intellectual property rights and personal data across the lifecycle of AI systems.

Free and pluralistic information ecosystems, including independent journalism, and the work of human rights defenders, are essential to democratic participation.⁷ AI-enabled disinformation and manipulated content can be, and are being, used to undermine these foundations.

We also stress the need for rights-respecting standardization and refer to the Joint Statement on Technical Standards and Human Rights in the Context of Digital Technologies.⁸ In line with the Global Digital Compact, standards development organizations should collaborate with human rights experts and civil society to promote the development and adoption of interoperable AI standards that uphold safety, reliability, sustainability and human rights.

Mitigating risks while harnessing opportunities is essential to unlocking the full potential of AI for sustainable development while supporting the UN Sustainable Development Goals. A human rights-respecting, human-centric approach to AI governance rooted in international law, including international human rights law, as well as accountability, is essential to build public trust in AI systems and ensure sustainable, inclusive and safe innovation.     

Call to Action

To meet the challenges and opportunities of AI in a human rights-respecting manner, we call on all governments, institutions, and technology actors to work constructively together towards the following actions:

  • Take steps to promote the use of AI in compliance with or in alignment with international law, including international human rights law and obligations, including the right not to be subject to arbitrary or unlawful interference with privacy, the rights to freedoms of expression, association, and peaceful assembly, the right to equal protection of the law without discrimination, freedom of thought, freedom of religion or belief, non-discrimination, due process, and effective remedy, as well as the appropriate protection of intellectual property rights, in particular for independent artists, writers, and the creative industry.
  • Explore coordinated reporting mechanisms on the misuse of AI, allowing for early warning and response, and improved transparency across jurisdictions.
  • Promote algorithmic transparency and refrain from developing, deploying, or exporting AI systems that can predictably be used to undermine human rights, including systems used for, or with the potential to be used for, arbitrary and unlawful surveillance, information control and information operations.
  • Develop and implement proportionate, human rights-respecting safeguards, in compliance with international human rights law, for AI in high-risk settings, including, where appropriate, risk assessments and transparent and meaningful human oversight, with clear procedures for redress and independent evaluation.
  • Promote balanced and representative training of AI systems, alongside ongoing efforts to mitigate output bias, by ensuring development and use practices that incorporate traceability and explainability, enabling access to information on system reasoning, capabilities, and limitations.
  • Promote and adopt robust legal, technical, and organizational safeguards, in accordance with international law, including international human rights law, to address the growing use of AI-driven disinformation, political microtargeting, and other forms of targeted manipulation that threaten democratic processes and social cohesion.
  • Ensure development of locally relevant AI models to protect cultural diversity and equal access to technological advancement. When AI systems primarily reflect dominant global languages and perspectives, they limit people’s ability to fully exercise their cultural rights in their native languages and to benefit from AI innovations, undermining fundamental freedoms of expression, information access, and non-discrimination in an increasingly digital everyday life.
  • Promote and safeguard the safety of journalists, media pluralism as well as media and information literacy, access to diverse content online, and AI and digital literacy—including in local languages—as a practical means of mitigating the impacts of disinformation and manipulated content in AI-mediated information environments.
  • Increase capacity and awareness of the evolving privacy risks posed by AI systems that can be used to infer sensitive personal information from seemingly innocuous data patterns.
  • Encourage and, where appropriate, require AI developers and deployers to carry out human rights due diligence across the full lifecycle of AI systems, in line with the UN Guiding Principles on Business and Human Rights. This includes risk identification, stakeholder consultation, mitigation planning, and ongoing monitoring, including demonstrating the ability to reverse or deactivate systems in the event of unforeseen harms, with special attention to high-risk contexts and groups or populations that may be at heightened risk of becoming vulnerable or marginalized.
  • Protect individuals and communities most at risk of AI-related harm,⁶ particularly where systems reinforce or exacerbate existing vulnerabilities or discrimination.
  • Support inclusive multistakeholder approaches to AI governance frameworks with meaningful participation from civil society, independent experts, and affected communities, especially from underrepresented regions and persons who may be in vulnerable situations.     
  • Seek to ensure, and encourage, that the UN’s Global Dialogue on AI Governance and the Independent International Scientific Panel on AI embed human rights in their mandates, structures, deliberations, priorities, and outputs, with diverse, meaningful multistakeholder participation throughout, including stakeholders from underrepresented regions and groups.
  • Promote information-sharing between the UN Independent International Scientific Panel on AI and UN-affiliated environmental bodies, including the Intergovernmental Panel on Climate Change (IPCC), to ensure AI-related environmental risks—such as emissions, energy intensity, and resource extraction—are systematically assessed. This supports a human rights-based, sustainable development approach in line with the 2030 Agenda and the Sustainable Development Goals (SDGs). 
  • Promote collaboration on using AI for strengthening human rights, achieving the SDGs, and gender equality and facilitating knowledge exchange at existing fora such as the Internet Governance Forum (IGF) and the International Telecommunication Union’s (ITU) AI for Good Global Summit with all relevant stakeholders.
  • Strengthen the role of the OHCHR in global AI governance by supporting efforts to involve it in the follow-up to the Global Digital Compact, including the Global Dialogue on AI Governance and the UN Independent International Scientific Panel on AI. Stakeholders, especially states, are encouraged to consider increased support, including for initiatives like the Digital Rights Advisory Service, to enhance the OHCHR’s capacity to monitor AI-related human rights impacts.
  • Explore the development of appropriate legal frameworks that include enforceable provisions for access to remedy, which could include independent oversight and avenues for individuals and communities to contest AI-related harms and seek redress.
  • Advance implementation of emerging governance frameworks through risk-based and context-based approaches, transparency, and democratic oversight.      
  • Welcome the adoption of the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law⁹ and recognize its potential to contribute to the promotion and protection of human rights in the development and use of AI. We support efforts that align with its principles and objectives.


Conclusion

AI systems are reshaping core aspects of our societies. Their safe, responsible governance is now central to the protection and strengthening of human rights and gender equality, the integrity of democratic institutions, and the stability of our international legal order.

We reaffirm our commitment to a rights-respecting, human-centric, safe, secure, trustworthy and ethical AI future grounded in human rights, accountability, and the rule of law. We call on all states and stakeholders, from the private sector, organized civil society, and the academic sector, to join us in building global AI governance that earns trust, safeguards freedoms, and serves the public good.

Footnotes

  1. This includes the design, procurement, development, evaluation, deployment, use and decommissioning of the system.
  2. FOC Joint Statement on Artificial Intelligence and Human Rights (2020)
  3. UNGA Resolution on AI, A/78/L.49 (2024)
  4. FOC Joint Statement on Responsible Government Practices for AI Technologies (2024)
  5. United Nations Environment Programme (2024). Artificial Intelligence (AI) end-to-end: The Environmental Impact of the Full AI Lifecycle Needs to be Comprehensively Assessed – Issue Note. https://wedocs.unep.org/20.500.11822/46288.
  6. This includes women, children, LGBTQIA+ persons, persons belonging to national, ethnic, religious and linguistic minorities, as well as persons with disabilities.
  7. FOC-TFIIO Blueprint on Information Integrity (2024)
  8. FOC Joint Statement on Technical Standards and Human Rights in the Context of Digital Technologies (2024)
  9. Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (2024)

Signed:

Armenia
Austria
Canada
Chile
Colombia
Costa Rica
Czechia
Denmark
Estonia
Finland
France
Georgia
Germany
Iceland
Ireland
Latvia
Luxembourg
Mexico
The Netherlands
Poland
Slovakia
Sweden
Switzerland
United Kingdom
Taiwan (FOC Observer)