From Announcement to Manipulation: The Evolution of Advertising

A sepia-toned illustration of a town crier ringing a bell that emits hypnotic spirals, symbolising how early advertising evolved from public announcements into psychological influence.

I grew up in the 1980s, when television advertising still had a kind of charm. I remember the jingles, the mascots, the catchy slogans that managed to lodge themselves in your head for weeks. Even as a child, I knew they were trying to sell me something, but at least they did it with some flair. They felt like part of the entertainment itself.

Something has changed since then. Advertising is no longer something that interrupts culture; it has become the culture. Every space, every platform, and every idle moment now feels colonised by a hidden intention to sell. To understand how we arrived here, it is worth tracing how advertising has evolved from a loud street-side performance to an invisible system of persuasion that shapes our sense of self.

The Loud Salesmen

The earliest form of advertising was brutally honest. Ancient merchants shouted in markets, painted signs on walls, or hung banners above their stalls. When mass printing emerged in the 1800s, advertising became more widespread but no less direct. Newspapers were filled with promises of miracle tonics, soap that made you beautiful, and pills that cured everything from toothache to heartbreak. These were primitive, manipulative, and often fraudulent, but at least you knew what you were looking at. Someone was selling, and you were free to walk away.

The Mad Men Era

The 20th century transformed advertising into an art form. With the rise of radio and television, storytelling became the new language of persuasion. Campaigns no longer sold only a product; they sold an identity, a dream, a way of life. The Coca-Cola Santa Claus, the Marlboro Man, and the perfect suburban family all came from the same creative laboratories.

This was the era of the “ad man,” immortalised in cultural artefacts like Bewitched and, later, Mad Men. Advertising was portrayed as a glamorous profession. These were the people who didn’t just reflect society; they helped build it. The line between commerce and culture began to blur.

The 80s and 90s: Ads as Entertainment

By the 1980s and 1990s, advertising had taken on a theatrical quality. It was playful, colourful, and memorable. Mascots like Tony the Tiger, slogans like “Just Do It,” and tunes you could hum all day made adverts feel like short pieces of performance art. They were still manipulative, of course, but they wore their intentions openly.

Looking back, perhaps this is why many people from my generation recall old ads with a strange fondness. They were transparent. They worked hard to win your attention rather than simply steal it.

The Weird and Annoying Years

Somewhere in the late 1990s and early 2000s, advertising lost its balance. It became surreal, loud, and deliberately irritating. Think of Crazy Frog, the Budweiser frogs, or the unnerving Burger King mascot. Annoyance became a marketing tool. If something got stuck in your head, even out of frustration, the job was done.

This was the period when “going viral” became a goal before social media even existed. The absurdity was the message.

The Internet Disruption

When the internet arrived, advertising was clumsy but eager. Early banner ads were brightly coloured, flashing boxes that you could easily ignore. But the industry adapted quickly. As data collection improved, advertising became personal. It stopped shouting to the crowd and began whispering to the individual.

This marked the rise of surveillance capitalism. Every click, search, and pause became a data point. You were no longer a passive audience member; you were a psychological profile to be targeted. The salesman had followed you home and was now reading your mind.

The Age of Disguise

By the 2010s, advertising learned to hide in plain sight. Sponsored posts, influencer endorsements, and “native” content made it difficult to tell where information ended and manipulation began. Search engines, news sites, and social platforms quietly filled with ads disguised as genuine results.

South Park once parodied this perfectly with its storyline about intelligent ads (Season 19). It was satire, but it was also prophecy. Today, even image searches are littered with sponsored results. The ad no longer wants to be seen; it wants to be believed.

Culture as Commerce

This is the stage we now find ourselves in. Advertising has stopped orbiting culture and instead absorbed it completely. Everything is for sale, including identity itself.

People no longer ask “What do I like?” but “What do I subscribe to?” We define ourselves through brands and platforms: Apple or Android, Nike or Adidas, Netflix or Disney Plus. Even rebellion is commercialised. You can buy “authenticity,” but only if you can afford the price tag.

Advertising has achieved what no political ideology ever could. It has replaced meaning with marketing and turned culture into a series of brand alignments.

Conclusion: From Persuasion to Colonisation

Advertising began as a voice shouting in the marketplace. It evolved into storytelling, then spectacle, then infiltration. Today it is everywhere and nowhere, woven into the fabric of our reality.

The change that occurred over the last century is more than technological. It is philosophical. Advertising no longer sells products; it sells identities. It shapes our desires before we even know we have them.

Perhaps that is why so many of us feel weary. We are not just tired of being sold to; we are tired of living inside the sale itself.

When AI Becomes the Authority

A dark, moody digital painting of a person sitting at a desk, illuminated by the glow of a laptop. Thin puppet strings descend from above, attaching to their body, symbolising unseen control and manipulation through technology.

On the bus home, I overheard a parent talking to her children. I did not quite catch the piece of information she had given them, but the kids questioned it, as kids often do. Her reply made me pause:
“It’s true, ChatGPT says so!”

That simple sentence carries more weight than it appears to. It was not said as a joke. It was said with the tone of final authority. Not “I read it somewhere,” not “I think that is the case,” but “ChatGPT says so,” as though that settled the matter.

The problem with treating AI as truth

I use ChatGPT casually and often. I find it useful, I find it stimulating, I even find it creative. But it is not infallible. I have seen it throw out confident answers that are less than accurate. Sometimes the error is small, sometimes it is glaring. That is because, at its core, ChatGPT is not a library or a fact checker. It is a probabilistic language model that predicts the most likely next words. It sounds authoritative, but sounding right is not the same thing as being right.
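
To make that concrete, here is a deliberately tiny sketch of what “predicting likely words” means. This is not how ChatGPT actually works internally, just a toy bigram model: it counts which word follows which in its training text and repeats the most frequent continuation, with no concept of whether the result is true. The corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data" -- the model only ever sees word sequences, never facts.
# (Invented illustrative text; note it contains a falsehood.)
corpus = (
    "the capital of france is paris . "
    "the capital of freedonia is paris . "  # a falsehood the model happily absorbs
    "paris is a city ."
).split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word -- 'likely', not 'true'."""
    return follows[word].most_common(1)[0][0]

# The model answers fluently either way; it has no way to know
# that Freedonia is fictional.
print(predict("is"))  # -> paris
```

Real models are vastly larger and subtler, but the underlying objective is the same kind of statistical prediction, which is why fluency and truth can come apart.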

Most of the errors are not malicious. They come from the quirks of how AI is built: training data full of human errors, the tendency to fill gaps with plausible-sounding fiction, the limits of knowledge cut-off dates. In the end, a wrong answer is still a wrong answer.

The deeper worry

The everyday mistakes are one thing. The bigger concern is what happens when society decides to place absolute trust in this technology. What happens when “ChatGPT says so” becomes the modern equivalent of “the newspaper says so,” or “the priest says so”?

Who controls the voice of AI? Already, the way models are tuned and filtered reflects the biases and priorities of those who own them. Today, that mostly means corporations trying to avoid lawsuits or public backlash. Tomorrow, it could mean governments steering the flow of truth itself.

A quiet dystopia in the making

It is not hard to imagine where this road leads:

  • Manipulation by design: If AI becomes our main gateway to knowledge, its answers could be quietly weighted towards selling us certain products, services, or lifestyles. Imagine if every “neutral” recommendation subtly nudged us towards a sponsor’s brand.
  • Steering public opinion: If authorities lean on AI providers to promote certain narratives, inconvenient truths could simply disappear. Instead of burning books, it may take only a few lines of code.
  • Illusion of neutrality: Because AI sounds impartial, many will not notice the framing. “The algorithm says so” could become more persuasive than “the news says so.”
  • Feedback loops of control: As people rely more on AI, its outputs shape popular thinking. Then the next model is trained partly on that shaped thinking, reinforcing the bias.

This would not look like a science fiction dictatorship with jackboots in the streets. It would feel comfortable, easy, polite. A velvet cage where questions stop being asked because the answers are always ready to hand.
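
The feedback-loop worry can be sketched with a toy calculation. Every number below is an illustrative assumption, not a measurement: a “model” is reduced to the share of its answers favouring one viewpoint, and each generation is assumed to slightly over-represent the majority view in the text it was trained on.

```python
# Toy feedback-loop sketch -- all numbers are illustrative assumptions.
# A "model" here is just the share of its answers favouring one viewpoint.

AMPLIFY = 1.2  # assumed: each generation over-represents the majority view by 20%

def next_generation(share):
    """Train the next model mostly on the previous model's own output."""
    deviation = share - 0.5                # how far from a balanced 50/50
    new_share = 0.5 + AMPLIFY * deviation  # the majority view gets amplified
    return min(1.0, max(0.0, new_share))   # shares stay within [0, 1]

share = 0.55  # generation 0: a barely noticeable 55/45 tilt
for gen in range(1, 11):
    share = next_generation(share)
    print(f"generation {gen:2d}: {share:.0%} of answers favour one side")
```

Under these assumptions, the barely visible 55/45 tilt compounds to roughly 81/19 within ten generations. The exact figures do not matter; the point is that a multiplicative loop can turn a small editorial bias into a dominant one.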

What we need instead

AI can be a tool. It can be helpful, creative, and even liberating. But it must never be treated as an unquestionable authority. To prevent that slide, we need:

  • Decentralisation: open source models that anyone can run and check.
  • Transparency: clarity about how these systems are trained and filtered.
  • Critical thinking: a culture where people are encouraged to question AI, not bow to it.
  • Diversity of sources: books, journalism, lived experience, and human reasoning must remain part of the conversation.

AI is here to stay, and it will almost certainly become a central part of how we live and learn. But whether it becomes a tool of empowerment or a velvet cage of manipulation depends not only on the companies that build it, but on us: on how much we insist on questioning, cross-checking, and keeping the human spirit of doubt alive.