OTTAWA — A recent study from McGill University’s Centre for Media, Technology and Democracy finds that artificial intelligence (AI) systems rely heavily on Canadian journalism for the information they provide to users, without offering compensation or attribution in return. The researchers based their conclusions on an analysis of how major AI models handled 2,267 Canadian news stories.
All four AI models tested—ChatGPT, Gemini, Claude, and Grok—displayed extensive knowledge of Canadian current events, suggesting they had ingested a considerable volume of Canadian news reporting. Yet when the models were asked about Canadian news events drawn from their training data, they failed to provide source attribution approximately 82 percent of the time, a gap the study says raises concerns about fairness and transparency in the use of journalistic content.
When the platforms were asked about specific articles with web access enabled, most models delivered responses that reproduced enough of the original reporting that users often had little reason to visit the news source itself. About half of the responses included at least one Canadian link, but only 28 percent specifically attributed information to a Canadian source, meaning people reading the AI-generated answers are frequently unaware of where the underlying journalism came from.
The report asserts that AI companies now extract value from journalism in several ways: they ingest news archives as training data, produce derivative content without crediting the original sources, and deliver answers that reduce consumers’ need and motivation to consult the primary sources. The researchers say this arrangement contributes to the ongoing economic decline of the very journalism sector the technology depends on.
The study's release coincided with a national summit on AI and culture organized by the federal government in Banff. Culture Minister Mark Miller opened the summit by raising "legitimate questions" about copyright, market-based licensing, and the influx of AI-generated content into the marketplace. Artificial Intelligence Minister Evan Solomon said recent consultations on the government’s AI strategy made clear that creators need assurance that AI development will come with proper safeguards, and acknowledged pressing concerns around copyright, ownership, and data-mining practices.
In a related development, a coalition of Canadian media outlets—including The Canadian Press, Torstar, The Globe and Mail, Postmedia, and CBC/Radio-Canada—is suing OpenAI in Ontario, alleging the company unlawfully uses their news content to train ChatGPT. The outlets argue the practice infringes their copyright and that OpenAI profits from their material without permission or compensation.
To address the shifting landscape of journalism, Ottawa passed the Online News Act in 2023, mandating that platforms like Meta and Google compensate media outlets for displaying their content. In response to the legislation, Meta removed news from its platforms, while Google has begun making payments to comply with the act.
The researchers' policy brief argues that AI systems pose a different challenge to journalism than social media did. Where social media platforms captured advertising revenue by aggregating attention around news content, AI companies absorb the substance of the journalism itself and deliver it directly to consumers as their own product, making a visit to the original source not just less appealing but often unnecessary.



