Trading Human Rights for Ad Dollars
January 28, 2026
In 2024, Sam Altman described the combination of advertising and artificial intelligence as “uniquely unsettling” and said that he hated the idea as “an aesthetic choice.”
But after internal OpenAI documents obtained by The Information projected that the company would lose $14 billion by the end of 2026, Altman appears to have reversed course.
On January 16, OpenAI announced it would start testing ads for users in the United States on ChatGPT’s free and $8-a-month tiers. The company was at pains to note that the ads would be clearly labeled, kept separate from the organic response, and would not appear near “sensitive or regulated topics,” such as mental health discussions. According to Wired, advertisers would have access to aggregate performance metrics but would not be able to see user data to target their advertising more precisely.
This sounds like reassurance. But tech companies have a long, well-documented history of not letting ethics get in the way of their advertising bottom line. A 2021 investigation by the Wall Street Journal, for instance, found that Facebook continued to run advertising that was fraudulent and damaging to the mental health of younger users even after the company knew about the harms.
The inner workings of large language models (LLMs) are far more intricate than the algorithms that govern a social media platform’s “For You” page. Researchers and Big Tech companies have spent billions of dollars and thousands of hours trying to solve the “alignment problem”: ensuring that AI systems serve human interests rather than pursuing unintended goals. Cases of AI-induced psychosis and of chatbots encouraging suicide demonstrate how difficult the problem remains, and there is significant debate within the AI community over whether it is solvable at all.
The alignment challenge becomes even more complex when financial incentives enter the picture. If an LLM is trained or fine-tuned to generate revenue through advertising, how can users trust that responses prioritize accuracy over profit? OpenAI’s promise to avoid ads near “sensitive topics” raises another question: How is it possible for an LLM like ChatGPT to effectively distinguish what counts as a “sensitive topic” among the billions of highly personal questions that people submit daily?
OpenAI could counter that the ads will be clearly marked and distinguished from the original answer. But as Alberto Romero notes in The Algorithmic Bridge, this transparency is largely meaningless to average users. Without access to OpenAI’s training data or fine-tuning processes, users cannot audit whether responses have been subtly tweaked to favor one product over another.
What’s more, it’s highly doubtful that any advertiser would be content with a clearly labeled box promoting their product, when the strength of ChatGPT (and what draws millions of users back to it, day in, day out) is its highly personalized, conversational engagement. That creates pressure for ads that become part of the chatbot’s answer rather than sitting in a separate box, a possibility that Fidji Simo, OpenAI’s CEO of Applications, suggested in a January blog post.
“Conversational interfaces create possibilities for people to go beyond static messages and links,” Simo noted. “For example, soon you might see an ad and be able to directly ask the questions you need to make a purchase decision.”
Overall, the introduction of ads into ChatGPT is in keeping with the theory of “enshittification” laid out by digital activist Cory Doctorow, in which platforms first entice users with a good experience, tilt toward business customers once they have achieved scale, and then squeeze everyone for more money. Unlike previous tech pivots, however, this one involves systems that millions rely on for information, decision-making, and even emotional support. The stakes of biased AI responses extend far beyond annoying ads to potentially manipulated advice on health, finances, and personal relationships.
Without regulatory intervention requiring transparency in AI training and advertising integration, we risk normalizing a future where our most trusted information sources are quietly optimized for corporate profit rather than human benefit. California’s SB 942, which requires clear disclosures for AI-generated content, and the Federal Trade Commission’s work to prohibit deceptive synthetic advertising offer legal frameworks to build on. Regulators could require regular independent audits of AI advertising systems, mandate strict separation between ad-optimization and model-alignment teams, and explicitly prohibit paid influence over chatbot responses.
Given the trillions of dollars being poured into AI systems, and Big Tech’s repeated promises that these systems herald the future, it is imperative that any form of AI advertising be subject to effective regulatory oversight, so that it keeps to its initial promises rather than quietly creeping beyond them.
