
Ensuring data privacy and minimising risks to national and international security are considered the most important responsibilities for public service media (PSM) when procuring commercial AI products for use in journalism, a new report finds.
The industry report, titled PSM and AI Part 2: Governance, Geopolitics, and Procurement, was authored by Professor Kate Wright, Chair in Media and Communications at the University of Edinburgh, and co-authored by Kristian Porter, the CEO of the Public Media Alliance. It is the second industry report from the Responsible AI in International Public Media project, funded by BRAID (Bridging Responsible AI Divides) Fellowships, with support from the UKRI Arts and Humanities Research Council (AHRC) and the University of Edinburgh.
The first report mapped which AI tools public media organisations use in journalism production and how they use them. The second examines the challenges PSM face in trying to procure AI ‘responsibly’, and how the procurement and use of AI is embedded in their governance structures.
Thirteen public media organisations of varying income levels, from five continents, took part in the interviews and data collection, which were conducted throughout 2024 and 2025.
Key findings:
Over half of the AI tools cited by PSM come from US-based companies. This could be problematic for public media, given the Trump administration’s removal of risk-based AI regulation and other actions that could make US-based AI companies vulnerable to political influence.
PSM have specific concerns about what it means to procure AI ‘responsibly’, especially given their heightened risk of cyberattacks. Key threats include criminal gangs, terrorist groups and hostile states, especially Russia and China.
PSM that have legislated roles as ‘critical infrastructure’ during emergencies and crises have particularly strong, distinctive security concerns. However, they also worry that some governments could use this designation to undermine their independence.
Despite their shared concerns, many PSM are reluctant to talk openly with one another about their experiences with specific AI tools, in case this exposes their organisations to further privacy and security risks. Informal conversations between PSM with established relationships tend to exclude low-income organisations directly threatened by authoritarian states.
When procuring AI tools, PSM feel a strong responsibility to audiences to ensure data is properly managed. But uncertainty remains over whether PSM should use AI tools accused of breaching creators’ copyright.
To minimise data privacy and security risks, high-income PSM prefer to develop tools in-house and/or pay premium rates for AI products from large technology companies. Middle- and low-income PSM, however, are more likely to invest in small and medium-sized AI companies as a way of supporting local start-ups and meeting budgets.
Investigating and piloting new AI tools is prohibitively time-consuming and costly, particularly for low-income PSM. There is interest in creating an actively maintained database of AI tools, including the considerations relevant to PSM.
Recommendations:
On the basis of these findings, PSM are urged to:
- Consider prioritising AI providers based in full democracies, wherever possible.
- Regularly audit AI providers’ privacy policies and the locations where data is stored and processed, as well as vulnerabilities arising from the foundation models on which the tools are built.
- Be aware that their respective governments could use security concerns to try to compromise their operational independence.
This story first appeared on RadioInfo.asia
