On 22 June, ahead of the 2025 Internet Governance Forum (IGF), Estonia, Chair of the Freedom Online Coalition (FOC) for 2025, convened the second Strategy & Coordination Meeting (SCM) of the year. The meeting brought together FOC Member States, Observers, the Advisory Network, and external stakeholders in Lillestrøm, Norway, to review progress under the 2025 Program of Action and coordinate upcoming activities, including efforts to advance FOC priorities in the context of the WSIS+20 process.
The SCM program featured a diplomatic simulation exercise on digital public infrastructure, a joint roundtable discussion with the multistakeholder FOC Advisory Network, strategic exchanges on WSIS+20 engagement and the Coalition’s future priorities, and a presentation from the Digital Defenders Partnership. The sessions provided a platform to build capacity on priority issue areas, share expertise, strengthen coordination, and advance efforts to promote an open, inclusive, and rights-respecting digital environment. To view the summary report from the June SCM, visit the following link.
The Coalition also hosted two open forums and a workshop as part of the IGF program, which took place from 23–27 June. In addition to its public sessions, FOC Members issued two statements during the IGF week: one on Artificial Intelligence and Human Rights, and one on Protecting Human Rights Online and Preventing Internet Shutdowns During Times of Armed Conflict. Summaries and recordings of the IGF sessions are available below:
Shaping Global AI Governance Through Multistakeholder Action
Moderator: Zach Lampell, Senior Legal Advisor & Coordinator, Digital Rights, ICNL
Speakers:
- Rasmus Lumi – Director General, Department of International Organisations and Human Rights, Ministry of Foreign Affairs of Estonia (opening remarks)
- Ernst Noorman – Ambassador for Cyber Affairs, the Netherlands
- Maria Adebahr – Director for Cyber Foreign and Security Policy, Germany
- Divine Selase Agbeti – Director-General, Cyber Security Authority of Ghana
- Erica Moret – Director UN & International Organisations, Microsoft
The session convened a diverse set of stakeholders to discuss the urgent need for rights-based, multistakeholder approaches to global AI governance. Framed around the launch of the 2025 Joint Statement on Artificial Intelligence and Human Rights, panellists emphasized the transformative power of AI and the associated human rights risks, including algorithmic bias, mass surveillance, disinformation, and the suppression of democratic participation. Speakers highlighted that unchecked AI deployment, especially when driven by commercial or authoritarian interests, threatens core freedoms, including freedom of expression and freedom of assembly and association. The speakers stressed that AI governance must be rooted in international law and grounded in inclusive, transparent processes that prioritize those most at risk, especially women, girls, and other marginalized communities.
Speakers from government, civil society, and the private sector outlined a range of measures to address these challenges, including national algorithm registries, human rights impact assessments, procurement reform, and responsible AI principles aligned with the UN Guiding Principles on Business and Human Rights. All panellists underscored the need for safeguards across the AI lifecycle, from design to deployment, echoing the Joint Statement. Examples such as the EU AI Act, the UNESCO Recommendation on the Ethics of Artificial Intelligence, and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law were cited as promising frameworks to inspire global alignment. Speakers also stressed the importance of multistakeholder collaboration to build trust, accountability, and legitimacy into AI systems and policymaking.
The session underscored that human rights are not a barrier to technological progress but a precondition for sustainable and inclusive development. Participants also called for stronger mechanisms to ensure civil society engagement in governance, especially in repressive contexts, and for the private sector to take a proactive role in minimizing harms and enhancing transparency. The meeting concluded with a collective call to action for governments, companies, and civil society to work together to ensure that AI earns trust, safeguards freedoms, and serves the public good.
Key takeaways:
- AI must be governed with human rights at the core – AI must be developed and deployed in strict alignment with international human rights law, upholding freedoms such as privacy, expression, assembly, and non-discrimination. The misuse of AI for repression, surveillance, and manipulation increasingly threatens democratic processes and vulnerable populations.
- Inclusive and accountable governance frameworks developed through multistakeholder collaboration are needed – Inclusive, multistakeholder approaches involving civil society, independent experts, and underrepresented communities are essential. States and private actors must implement accountability measures, including transparency, human rights due diligence, and safeguards in high-risk applications.
- Sustainable, equitable, and human rights-based AI development is essential – AI systems should support sustainable development, promote gender equality, and reflect cultural diversity. Cross-sector collaboration with UN bodies and scientific panels is needed to ensure AI contributes to the SDGs while respecting environmental and human rights standards.
Calls to action:
- Support the FOC Joint Statement on AI and Human Rights and advocate for human rights-based AI governance.
- Advocate for inclusive, multistakeholder approaches to AI governance, involving civil society, independent experts, and underrepresented communities, to ensure governance models are not dominated by authoritarian or purely commercial interests.
- Engage in dialogue and information exchange with the FOC and FOC Advisory Network on the topic of AI governance, across all stakeholder groups, including governments, civil society, and the private sector.
How Technical Standards Shape Connectivity and Inclusion
Moderator: Laura O’Brien, Senior International Counsel, Access Now
Speakers:
- Rasmus Lumi – Director General, Department of International Organisations and Human Rights, Ministry of Foreign Affairs of Estonia (Opening Remarks)
- Divine Agbeti – Director General of the Cyber Security Authority of Ghana
- Stephanie Borg Psaila – Director for Digital Policy, Diplo Foundation
- Natálie Terčová – At-Large Advisory Committee (ALAC), ICANN / Founder and Chair of IGF Czechia
- Alex Walden – Global Head of Human Rights, Google
- Rose Payne – Policy and Advocacy Lead, Global Partners Digital
The session brought together stakeholders from diverse backgrounds to address the pressing need for greater inclusivity and participation in technical standard-setting processes. Participants underscored that these processes are often dominated by technologists and structured in ways that alienate civil society, non-engineers, and marginalized communities. Language barriers, complex jargon, financial costs, and opaque procedures were identified as key factors that prevent broader engagement. Emphasizing the importance of inclusivity, speakers proposed a range of solutions including removing membership fees, creating seats for underrepresented groups, ensuring virtual access, offering real-time translation, and establishing youth and diversity panels. These measures aim to foster broader participation and democratize decision-making within technical forums.
The discussion highlighted how inclusive standardization can have real-world impact, with examples like M-PESA, Aadhaar, and web accessibility standards showing how well-designed technical systems can bridge digital divides and serve vulnerable populations, while systems lacking such safeguards pose threats to human rights and fundamental freedoms. A recurring theme was the importance of aligning technical standards with international human rights principles, ensuring that these processes do not unintentionally enable surveillance or exclusion. The session also delved into the geopolitical and infrastructural dimensions of connectivity, particularly the crucial role of subsea cables in global data transmission. Speakers emphasized the need for cross-sector cooperation to safeguard this infrastructure and ensure resilient connectivity. Representatives from companies like Google echoed the necessity of embedding human rights considerations into infrastructure investments and highlighted the private sector’s role in promoting inclusive, rights-respecting innovation.
Throughout the session, participants emphasized the need for sustained multistakeholder collaboration that bridges gaps between technologists and civil society actors. They stressed that effective technical standardization must reflect the lived experiences of end-users and communities most impacted by digital technologies. The meeting concluded with a shared vision of standard-setting processes that are not only technically sound but also equitable, transparent, and aligned with the broader goals of digital inclusion and the Sustainable Development Goals.
Key Takeaways:
- Inclusivity in technical standard-setting is essential to ensure that marginalized voices, including civil society and non-engineers, are represented and can meaningfully contribute.
- Technical standards have real-world human rights implications, affecting access to critical services and the risk of surveillance or exclusion.
- Multistakeholder collaboration and accessibility measures, such as real-time translation and reduced financial barriers, are crucial to fostering global and equitable participation.
Calls to Action:
- Lower entry barriers for underrepresented stakeholders in technical standard-setting bodies by lowering or eliminating membership fees and offering virtual participation options.
- Integrate international human rights frameworks into all stages of technical standards development to ensure ethical and inclusive outcomes.
- Establish ongoing support mechanisms, such as capacity-building initiatives and dedicated inclusion panels, to facilitate sustained participation from diverse communities.
Universal Principles, Local Realities: Multistakeholder Pathways for DPI
Moderator: Sabhanaz Rashid Diya, Executive Director, Tech Global Institute
Speakers:
- Armando Manzueta – Vice Minister for Public Innovation and Technology at the Ministry of Public Administration of the Dominican Republic
- Rasmus Lumi – Director General of the Department of International Organizations and Human Rights, Ministry of Foreign Affairs of Estonia
- Keith Breckenridge – Standard Bank Chair in African Trust Infrastructures at the University of the Witwatersrand, South Africa
- Smriti Parsheera – Research Fellow at the Interledger Foundation
- Bidisha Chaudhari – Assistant Professor of Government, Information Cultures and Digital Citizenship at University of Amsterdam
- Sheo Bhadra Singh – Principal Advisor, Telecommunication Regulatory Authority of India
- Luca Belli – Professor at FGV Law School, Rio de Janeiro and Director of the Center for Technology and Society
The panel discussion on Digital Public Infrastructure (DPI) underscored the complex and often contested nature of digital transformation across global contexts. The conversation highlighted a range of experiences, from Brazil to South Africa, India to Estonia, that reflect both the opportunities and challenges of designing inclusive, rights-based digital systems.
Brazil’s PIX payment system featured prominently as an example of how public digital infrastructure can drive innovation, reduce transaction costs, and disrupt monopolistic financial structures. This case demonstrated how state-led platforms, when designed with openness and interoperability in mind, can foster competition and broaden access to services. However, the discussion also made clear that the effectiveness of such systems hinges on governance models that prioritize empowerment over control.
From South Africa, cautionary insights were shared about the unintended consequences of DPI, particularly for vulnerable communities. While digital platforms promise financial inclusion and improved service delivery, they can also open pathways to predatory lending, online gambling, and data exploitation in the absence of adequate regulatory oversight. These concerns highlighted the urgent need for liability frameworks and consumer protections to ensure that digital systems serve the public interest rather than deepen existing inequalities.
India’s large-scale deployment of DPI, encompassing digital identity, payments, and data exchange, was presented as an ambitious model of state-driven digital infrastructure. The conversation acknowledged the scale and speed of implementation, but also emphasized that success must be measured not just in coverage, but in how systems are experienced by users—particularly in terms of accessibility, consent, and grievance redress.
Across these contexts, participants emphasized that DPI should not be seen purely as a technical intervention. Rather, it is a civic and political project that requires transparency, public participation, and ongoing oversight. Estonia’s longstanding digital governance experience illustrated how trust, human-centered design, and rights-based principles can be foundational to building resilient digital ecosystems.
The discussion returned repeatedly to the idea that the value of DPI lies not in its technological sophistication alone, but in its alignment with democratic norms, its responsiveness to social needs, and its capacity to create meaningful, inclusive change. For DPI to fulfill its promise, it must be embedded in legal, institutional, and ethical frameworks that ensure accountability, equity, and public trust.
Key Takeaways:
- DPI is not a monolithic technology, but a context-specific ecosystem requiring careful, nuanced implementation.
- Successful digital transformation demands a people-centered approach that prioritizes human rights, privacy, and meaningful societal impact.
- Trust, ethical considerations, and multistakeholder collaboration are fundamental to DPI success.
Calls to Action:
- Develop rights-respecting DPI governance frameworks that prioritize transparency, accountability, and civil society participation, creating robust protocols for data protection and meaningful stakeholder engagement.
- Create comprehensive liability systems that protect vulnerable populations from digital risks, involving safeguards against fraud, exploitation, and unintended technological consequences.
- Redefine “inclusion” beyond technological metrics, focusing on genuine socio-economic empowerment to ensure digital infrastructure creates tangible improvements in people’s lives.