The Impact of Artificial Intelligence on Press Freedom and the Media

Author of article: Lamin Jahateh

Protocol:

Without much ado, I wish to thank the Gambia Press Union for the opportunity to speak on a topic that is shaping the current practice and future trajectory of journalism: Artificial Intelligence.

Artificial Intelligence is now integrated into the daily operations of newsrooms. It is transforming the way we create, distribute, and consume news. And as it continues to evolve, AI’s relationship with journalism and freedom of the press is growing more complex, raising both immense possibilities and serious ethical questions. So, how is AI reshaping the media landscape? Let me start by looking at the positive side: AI offers valuable tools for journalists in different ways. 

Firstly, AI tools have tremendously enhanced news production. Let me explain this with an anecdote: When I started journalism as recently as 15 years ago, we used an analogue mini-cassette recorder to record interviews and then transcribed them by hand. I remember we used to struggle after covering the evening sessions of the National Assembly, which at the time used to end around 9 or 10pm; from there we would go back to the office and spend long hours listening and transcribing from our tape recorders. This process was not only laborious but also very time-consuming and prone to errors. AI has saved us from that. Now there are tools to which you simply upload your recording, and they transcribe it in a matter of seconds or minutes. Some of these tools even transcribe in real time. Unlike manual transcription, AI-enabled transcription services offer a faster and more accurate alternative, freeing up journalists to focus on the more creative and analytical aspects of news production.

Similarly, AI tools are making translation so seamless, breaking language barriers, enabling international journalism to be consumed across borders almost instantly. These innovations increase efficiency, reduce costs, and open doors for storytelling that might otherwise never happen.

Furthermore, in newsrooms, AI tools can automatically generate news and summarise long reports in seconds. This comes in handy when you are working on large data sets like leaked government documents or corporate records. Talking of leaked records, I think of the Panama Papers. I know of colleagues who worked on the Panama Papers, and it took them months going through pages and pages of the huge volume of documents, trying to make sense of it all. AI tools can do that in minutes, much faster than any human analyst. This enables deeper investigative reporting. It means media organisations can produce content faster and more efficiently. It is for this reason that renowned international media outlets like the Associated Press, Reuters, and Bloomberg, for example, have started using elements of AI in their news production. 

With regard to news consumers, AI-driven algorithms deliver personalised news based on individual preferences, behaviours, and demographics. This level of personalisation in news recommendation helps media outlets reach audiences more effectively and engage users who might otherwise scroll past the headlines. This enhances online engagement, and with that comes online revenue.

Combating disinformation: AI tools help journalists verify sources, detect fake news, and combat disinformation campaigns. They can help detect misinformation by analysing text patterns and verifying facts at scale. Fact-checking organisations like Full Fact and PolitiFact are already using AI-assisted fact-checking to counter disinformation. Similarly, AI can help identify coordinated manipulation campaigns on social media, alerting journalists and the public to potential threats. This strengthens the credibility of news reporting.

Challenges:

Having said all these good things about AI, there are aspects of it that are, however, not so good.

Job losses: One of the most profound negative impacts of AI in the newsroom is job displacement. As AI automates content creation, traditional journalism jobs are at risk. Automation could replace not only reporters but also designers, editors, and distribution staff. With fewer journalists on the ground, we risk losing investigative reporting, local news coverage, and the rich storytelling that defines quality journalism. Already, several media organisations across the world have laid off staff. In spite of this, I like the optimism of the International Federation of Journalists that “AI cannot replace human journalists…”

Revenue losses: There are specific concerns about the use of journalistic works to feed AI without proper compensation. (Recall The Standard newspaper example on Copilot… it gave me the news without sending me to the actual news site – zero clicks.) In this way, AI tools divert revenue from the journalism industry, as money that could be earned from subscriptions and advertisements by the news media goes instead to the AI companies. This undermines not just the ability to produce quality journalism but also the underlying business models of the entire news media.

Algorithm manipulation: Earlier, I spoke of how AI algorithms help readers get personalised news updates. While this enhances engagement, it also raises concerns about filter bubbles and echo chambers – where individuals are only exposed to information or online communities that align with their existing beliefs, hence promoting polarisation and a lack of tolerance, which is inimical to democracy and peaceful co-existence. 

More dangerously, algorithms are increasingly used by social media platforms and governments to filter, moderate, and suppress content. While moderation is necessary to combat harmful content, it can also be weaponised to silence dissenting voices and restrict press freedom.

Surveillance and censorship: AI’s power doesn’t stop at content creation – it extends to surveillance and control, posing serious threats to press freedom.

Advanced facial recognition and data-mining tools enable state actors to track journalists and their sources. In authoritarian regimes, this technology has been used to intimidate or silence critical voices. Whistleblowers and investigative reporters are increasingly at risk, as AI-enhanced surveillance removes the veil of anonymity that once protected them. This creates a chilling effect, discouraging investigative reporting and whistleblowing.

Moreover, AI-driven censorship systems can monitor online platforms in real time. In countries with restricted internet freedoms, these systems flag and remove content deemed politically sensitive – sometimes within seconds of being published. This automated censorship not only stifles dissent but makes it harder to hold governments accountable.

Deepfake dilemma: Another major concern regarding AI in the newsroom is the rise of deepfakes. A deepfake is AI-generated video or audio that appears very authentic but is completely fabricated. (The recent video of Ibrahim Traoré, the leader of the military junta in Burkina Faso, in which he spoke so critically of the West is a case in point.) Deepfakes have been used to impersonate political leaders, spread false narratives, and discredit journalists. As the technology becomes more sophisticated, it undermines public trust in what we see and hear.

The danger here is twofold: First, it creates confusion among audiences about what is real. Second, it gives bad actors a tool to discredit legitimate journalism. Imagine a world where any video or audio clip can be dismissed as fake, no matter how real it is. It is perhaps for this reason that the IFJ’s Secretary General said, “deepfakes are a direct attack on democracy and on people’s fundamental right to reliable and independent information”. 

Legal and ethical uncertainties: Furthermore, the legal and ethical frameworks around AI and journalism are an increasing concern. Questions abound: Who is liable if an AI-generated article contains false information? How do we ensure that AI tools respect journalistic ethics, like accuracy, fairness, and accountability? What happens when algorithms prioritise clickbait over substance?

There is also the issue of transparency. Many news organisations are using AI tools without fully disclosing them to their audiences. Shouldn’t readers have the right to know when an article is written – or at least partially constructed – by an AI? We can discuss this later!

The way forward:

The future of AI and journalism hinges on balance: between innovation and responsibility, automation and editorial judgment, efficiency and ethics. But to ensure that AI enhances rather than erodes press freedom, we must take proactive steps:

  • We need a code of conduct on the use of AI in news gathering and reporting: A new report from the Thomson Reuters Foundation, based on findings from more than 70 countries in the Global South and emerging economies, found that more than 80% of journalists use AI tools in their work. Given such a high usage of AI in the newsroom, it is incumbent upon journalism bodies like the GPU to work with partners to establish ethical guidelines for AI use in news gathering and reporting. We need robust frameworks to ensure transparency and accountability in how AI is used in media. Newsrooms should disclose when and how AI is involved in content creation as a matter of accountability.
  • Investing in AI literacy: Journalists must be trained to understand and leverage AI while remaining vigilant against its misuse. It is imperative that MAJAC and UTG, as well as the GPU, train journalists not only to use AI tools but also to understand their risks. Meanwhile, I want to call on journalists to begin educating themselves about the ethical use of AI in their journalistic work. According to Reuters research, nearly 58% of AI users are self-taught, so it is possible to learn the tools on your own.
  • Public awareness and media literacy: Citizens should be educated on AI’s role in media to critically evaluate news sources and combat misinformation.
  • To address the revenue losses, the news media should begin to make specific agreements with the relevant AI companies to ensure that journalists are fairly compensated for their contributions.
  • But beyond the media and The Gambia, there is a need for a continental approach to policy and regulation for the ethical use of AI in Africa. This is already the case in jurisdictions like Europe, where the European Union has enacted the AI Act and the Digital Services Act. Both acts have relevant provisions on the way AI is used and deployed in the media. I think there is a lesson or two there that the AU and African countries could learn from.

Conclusion:

To conclude, artificial intelligence is neither inherently good nor evil. It is a tool – a powerful and fast-evolving tool. In the hands of responsible actors, it can be used to empower journalists and enhance press freedom, support investigative reporting, and improve access to information. But in the wrong hands, it can be used to amplify censorship, spread disinformation, and threaten the independence of the press.

In essence, the impact of AI on press freedom and the media depends on how we choose to use it. Let us embrace AI’s potential while remaining vigilant against its risks. Let us ensure that AI serves the public good.

I have a confession to make: About 30% of this statement was generated by AI.

Thank you.

Lamin Jahateh.

Media and communications researcher.