Our future: Artificial France-Presse?

Monday 13 October 2025

AFP has enthusiastically embarked upon a future centered on machines that think in terms of probabilities. Even as management intends to cut 70 jobs to save money, it is encouraging us to work with Microsoft’s Copilot to suggest headlines, proofread, and even “expand” what we’ve written. Copilot does this without any journalistic ethics, and AI models have been shown to carry biases. Our founding statute dictates that AFP must not succumb to influence “liable to compromise the exactitude or the objectivity of the information it provides.” Why are we in such a rush to adopt a tool that could do just that?

First, let’s recall a quickly forgotten fact: AI is not a miracle solution to the world’s problems. Environmentally, it monopolizes water and electricity for its data centers, which are multiplying exponentially. Each of these can hold up to a million servers containing chips made from critical minerals whose mining can also cause environmental damage. Socially, it is not the machine that learns and invents on its own: humans label and verify information day in and day out to help AI interpret and sort content. AI creates nothing new; it endlessly reproduces the models we feed it, siphoning off internet content produced by others without compensating the authors. Ultimately, does AI help the journalist, or the other way around? And while AI can do many things faster than humans, since we are supposed to verify everything it produces, any time savings will come only from cutting corners.

The limitations, excesses, and problems raised by AI are piling up. First, it’s clear that each geopolitical bloc has its own models, proof of their ideological biases and their potential for information warfare and espionage. Management states that “By default, Copilot runs on a Microsoft-customized version of GPT-4, a large language model (LLM) developed by OpenAI.” Could you ever imagine AFP choosing DeepSeek? AFP is normally fiercely proud of its independence, yet it is leaving the barn door wide open by allowing journalists to paste full stories into Copilot. This is supposedly a secure workspace under our contract with Microsoft, but during the webinar on using Copilot it was disclosed that chat queries use Google search, so our use of the tool isn’t confidential. Given that tech giants are reluctant to pay AFP for its content and cuddle up to a politician who ejects journalists from the Oval Office, how can we choose to become ever more dependent upon them? How could we ever take OpenAI to court over copyright in a future where its tools are key to our production?

Shooting ourselves in the foot?

How can we ensure the healthy use of AI? Is this even possible? We can only note the contrast between Copilot’s real capabilities (reading a document, conducting an internet search, cross-referencing data) and management’s fantasies about the “technological revolution” embodied by this “colleague” unlike any other. Yet it appears this may be the only colleague we will be able to count on in the future, as the newsroom empties of human journalists. Robots (so we’re led to believe) cost less. But they are not capable of critical thinking, which is a crucial aspect of journalism! If Copilot can write for us, AFP risks becoming nothing more than an avatar generating dispatches from press releases. Why would clients choose us over OpenAI? According to our sales colleagues, we have already shot ourselves in the foot commercially with our Mistral deal: Mistral gets our news with a four-hour delay, a window our media clients say is too short. That makes Mistral a competitor to them for current news and drives down their reader traffic. Earning less from AFP news, they are now asking for discounts.

We’re not fooled: AI will only solve the problems of managers seeking obedience at a low cost, since it in no way replaces the humans who live and describe the world. SUD took the liberty of asking Copilot directly what use a news agency should make of AI. According to Copilot, this assistance “should never be at the expense of editorial independence or of jobs that are not replaced without reflection.” AI won’t fill the gaps left by staff reorganizations as management hopes. The time savings it offers are likely to be measured in minutes or hours – insufficient to replace a journalist. And these time savings come with risks to accuracy and quality.

SUD calls for a clear AI usage charter developed together with staff. We demand a firm commitment to preserving threatened functions such as proofreading, data visualization, and even writing, where AI could compromise the accuracy and quality of our news and thus undermine the fulfillment of our public service mission. We want similar guarantees for technical and administrative staff. The CSE also needs to be able to carry out effective monitoring of contracts signed with AI companies and the associated business strategy, including author and neighboring rights. The greatest caution is required when using tools that disguise themselves as innocent: by trying to make our work too easy, we risk seeing its value disappear!

Paris, October 6, 2025
SUD-AFP (Solidarity-Unity-Democracy)