Cypriot financial analyst, founder and CEO of Wheelerson Management, and Chairman of the Board of Osome Group, Grigory Burenkov, on the public challenges posed by the development of AI and online news platforms.
The development of artificial intelligence and online platforms has radically transformed the information environment. The question of how the subjects of machine-generated publications can protect their reputations has become particularly acute, because the old rules no longer work as well as they once did. Alongside its unique opportunities, the use of AI at times poses a direct threat to the reputation of individuals and companies alike.
The scale of change is striking: in 2024, NewsGuard researchers identified more than a thousand news websites that operate exclusively on AI-generated content. These platforms produce about 60,000 articles daily, roughly 7% of all global news content.
There is no doubt that in 2025 the number of such materials has grown significantly, but their quality has remained low. The articles are filled with factual errors and disinformation, which not only undermines trust in the media as a whole but also seriously damages the reputation of those who become the targets of such publications.
Grigory Burenkov: “We are witnessing the industrialization of reputational attacks”
A telling example here is the story of Dave Fanning: an algorithm linked his photograph with the name of a criminal, and the material ended up on MSN’s homepage. The news was quickly removed, but catastrophic reputational damage had already been done, and the matter went to court. This is a vivid example of how closely AI and news platforms are intertwined: an algorithmic error can deliver no less of a blow than a deliberate attack.
An even more dramatic case involved the Norwegian Arve Hjalmar Holmen. AI described him as a man who had murdered his own children and been sentenced to 21 years in prison, mixing real details about his family with a completely fabricated criminal story.
However, an even more serious threat comes from the deliberate use of AI technologies in fraudulent schemes. Criminals have long turned the creation of defamatory content into a profitable business model.
A case in point is the "Synthetic Echo" network uncovered by DoubleVerify in 2025. More than 200 fraudulent sites imitated well-known news brands, using domains such as espn24.co.uk, nbcsportz.com, and cbsnews2.com. All of the material was AI-generated, and the goal was twofold: stealing advertising budgets and creating credible-looking platforms for publishing defamatory content.
The FBI officially warned about the growing scale of the problem, noting that “generative AI reduces the time and effort criminals must spend to deceive their victims.” The technology makes it possible to create huge volumes of plausible content at minimal cost, making extortion schemes easily scalable.
“We are witnessing the industrialization of reputational attacks,” notes Burenkov. “But this also creates demand for more advanced protection systems.”
AI Errors and Billion-Dollar Losses
The scale of financial losses from AI-generated publications is already measured in billions of dollars. The most dramatic cases show how quickly an algorithmic error can turn into a corporate catastrophe.
In February 2023, Google demonstrated its chatbot Bard in a promotional video in which the AI incorrectly stated that the James Webb Space Telescope had taken the first photograph of an exoplanet. The factual error led to a $100 billion drop in the market capitalization of the Alphabet holding company in a single day, possibly the most expensive artificial intelligence error in history.
Equally telling is the story of the fake image of an explosion near the Pentagon in May 2023. The AI-generated photo spread through verified Twitter accounts as "breaking news about a possible terrorist attack." Within 20 minutes, before an official denial was issued, automated trading systems and alarmed investors triggered a 0.3% drop in the S&P 500 index.
Fraudulent schemes have also reached industrial scale. One network used artificial intelligence and Telegram bots to create phishing websites imitating popular marketplaces and dating apps. Total losses amounted to $64.5 million in compensation paid by legitimate platforms to thousands of victims.
“We are seeing a new type of systemic risk,” Burenkov observes. “One algorithmic error can instantly affect billions of dollars in market capitalization and wipe out reputations built over years and decades. This requires fundamentally new approaches to risk management.”
Reputation Protection Technologies
Imperfect technology and regulation, algorithmic errors, and the use of AI by fraudsters are the most pressing challenges, and answers to them are already being prepared. For example, the largest international news organizations have signed agreements committing to transparency, human oversight, and ethical principles in the deployment of these technologies.
“Instead of demonizing AI, newsrooms are looking for ways to use these tools for fact-checking and improving the quality of their material while maintaining high editorial standards,” Burenkov notes.
While some algorithms are becoming better at generating machine-made content, others are learning to detect it. This technological race stimulates innovation in both areas.
“The task of detection is fundamentally different from traditional information verification,” explains Grigory Burenkov. “We must distinguish between authored and machine content, which requires completely new approaches. The good news: the same technologies that improve generation also develop systems for identifying fake content.”
Media outlets that deploy modern algorithms to detect AI-generated information gain a market advantage in the form of audience trust.
“Users are increasingly seeing warnings: ‘this material was created using AI,’” notes Grigory Burenkov. “Although such measures are still imperfect, they are shaping a new culture of conscious information consumption and reputation protection.”
Governments, too, are being forced to respond to the rise of AI-generated content. And here the same themes stand out: transparency, accountability, and protection of reputation.
In September 2024, the US Federal Trade Commission launched Operation AI Comply, aimed at combating fraud involving artificial intelligence. The EU adopted its AI Act, which phases in requirements gradually, giving time for technological adaptation. At the same time, mechanisms are being developed for cooperation among international groups set up to find and shut down fake media outlets that cause reputational damage.
New Rules of the Game
The combination of technological progress, regulatory initiatives, and evolving professional standards suggests that the current reputational challenges mark a transitional period rather than a permanent new reality.
“History shows that markets ultimately reward reliability and transparency,” Burenkov believes. “We are going through an initial growth phase and moving toward more mature competition, where quality and the ability to confirm the credibility of information become key assets.”
The key to successfully navigating this new reality lies not in fighting the technology but in using it wisely. Reputation in the digital age is still built on trust, but the mechanisms by which it is formed have changed dramatically. Instead of slowly accumulating authority over years of work, what matters now is the ability to adapt quickly, maintain transparency, and constantly reconfirm one's credibility.
“Although this transition comes with risks, it also opens up unique opportunities for those ready to play by the new rules,” Burenkov concludes.