How AI is changing the landscape for content strategists and designers

Changing search behaviour, the enshittification of the web, and threats to intellectual property. Keeping our jobs isn’t the only challenge AI poses for content folk.

To start, let’s define what I actually mean when I’m talking about AI here. Jane Ruffino wrote in her excellent post, We deserve better than an AI-powered future, that ‘what people mean by “AI” most often is an LLM/generative tool’. And that’s exactly what I’m talking about. Things like ChatGPT, Perplexity, Google’s Gemini and AI Overviews, and how they are affecting the practices of content design and content strategy.

I’m not a technologist or a futurist, and I’m not trying to look far into the future. I’m looking at what I can see happening now and what I suspect might happen soon. I’m writing because I’m half interested, half concerned. And because I feel like there’s not enough focus on certain areas of AI’s implications for our discipline.

The issue I see people talking about is ‘Is AI replacing us?’. And it’s not that I’m not interested in and concerned about this. It’s just that not many people are talking about the other aspects that I’m interested in and concerned about:

  • The impact on users and search behaviour
  • Organic website traffic and user journeys
  • The enshittification of the web
  • The wholesale theft of intellectual property

So I’m looking at those issues, what other sectors are doing in response to AI, and what the content community might want to start thinking about.

How user search behaviour is changing because of AI

No one truly knows the full picture of how AI is changing user behaviour yet. However, I think it’s fair to say that it is having a demonstrable impact on people’s search behaviour in three main ways:

1. 36% of people using AI tools use them to search

People are using tools like ChatGPT and Perplexity instead of a search engine. Ofcom’s Online Nation report found that in 2023, 30% of UK internet users over 16 had used a generative AI tool. 36% of those people were using it to ‘find information or content’ and 22% to ‘seek advice’. (Source: Ofcom)

2. AI assistants are replacing some searches

And we’re seeing more integration of AI in hardware, like Google Gemini and Apple Intelligence. Having a chatbot assistant in your device means people might not end up searching for things they would previously have had to search for, like train times or coffee shop recommendations.

3. AI Overviews are showing on 17% of searches

But perhaps the biggest, most immediate impact on how people search is Google’s AI Overview. AI Overviews are a feature of Google’s search engine. Google describes them like this:

“AI Overviews appear in Google Search results when our systems determine that generative responses can be especially helpful — for example, when you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.”

This is what they look like in practice:

A Google search engine results page showing the results for the query ‘What is AI overview’. Directly below the search box there is a highlighted result titled ‘AI Overview’ with the definition. There are links/citations to the side, and traditional search results below.

Research from Authoritas found that AI Overviews appear for 17% of searches — a significant proportion. (Source: Press Gazette)

What does this have to do with me?

Organic search might not be your responsibility (depending on where you work), but it may well be a significant traffic driver for a lot of what you work on in your content role. So changes to search have big implications for us.

Implication 1: Organic traffic will fall

The three new AI search methods mentioned above all make it more likely that people will find an answer to their question in the search engine. And that means they are less likely to click through to your website, content, or product.

This isn’t new; Google’s Rich Results have been using structured data to pull content into search engine results pages for years. But taking content from multiple sources, mixing it up, and spitting it back out as an answer to a question is.
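To make the Rich Results comparison concrete: structured data is machine-readable markup embedded in a page, usually as JSON-LD inside a script tag. Here’s a minimal, hypothetical example using schema.org’s Article type (the headline, author, and date values are invented for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How AI is changing the landscape for content strategists",
  "author": {
    "@type": "Person",
    "name": "Example Author"
  },
  "datePublished": "2024-09-01",
  "description": "A look at AI's impact on content strategy and design."
}
```

The key difference is that structured data like this is something the publisher deliberately provides and controls; AI-generated answers are assembled from content whether or not you opted in.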

There are signals that can give us an idea about what impact this might have:

  1. Gartner has predicted a 25% drop in organic search traffic by 2026. (Source: Gartner)
  2. A study of 23 websites in 2023 found an 18-64% drop in organic traffic as a result of Google’s search generative experience. (Source: Search Engine Land)
  3. I’ve heard anecdotally from a few third sector organisations that their organic website traffic has fallen by 10-20%, coinciding with the launch of AI Overviews on Google.

Does it even matter?

I often think about traffic as a ‘so what?’ metric. How many clicks you get on your content is one thing, but what really matters is where those clicks go next. So if you report to people who care about traffic, now would be a really good time to step up your efforts in showing them why metrics like engaged views, scroll depth, time on page, events, conversion, etc. are far more meaningful.

Implication 2: We need to rethink user journeys and optimisation

Until now, optimising content for search really meant optimising for Google. It’s had a monopoly for such a long time (one that’s legally recognised now) — but things are changing. If people are going to be searching for information in different ways, then we need to be ready to rethink our user journeys and optimise our content for different methods of discovery.

There are already agencies popping up offering ‘ChatGPT optimisation services’ and plenty of articles offering advice. But it’s not just about AI search. People are using social media to find information, too. There’s been lots of talk about how young people are more likely to use TikTok to search, but this seems to be overstated. Again, there’s no definitive source of truth on this, but there are signals:

  • Google’s SVP Prabhakar Raghavan has said: “In our studies, something like almost 40 percent of young people, when they’re looking for a place for lunch, they don’t go to Google Maps or Search. They go to TikTok or Instagram.” (Source: Fortune. My emphasis on ‘young people’, because this keeps getting misquoted as applying to everyone.)
  • An Adobe survey of 808 people in 2024 found that 64% of Gen Zers and 49% of millennials said they’ve used TikTok as a search engine. But only 1 in 10 Gen Zers said they preferred TikTok over Google for search. (Source: Adobe)

Again, it’s not new. People have been using YouTube as a search tool for years — and it’s still the second biggest search engine in the world.

This means it’s extra important to be paying attention to your users and their behaviour. It also means that your internal collaboration needs to be strong. For example, if you work for a charity content team that provides health advice, and you see an increase in people looking for that health advice on Instagram or TikTok, you might need to start providing new support to your social media colleagues.

Implication 3: The web is going to get even more enshittified

I’m worried that the rise in AI is contributing to the ‘enshittification’ of the web.

Enshittification is a term coined by Cory Doctorow in a blog post in 2022 to describe the way that the quality of online products and services has declined as big tech companies seek more and more profit. Cory is talking about things we’ve probably all experienced as users, like the falling quality of the search experience as Google has added more and more ads. Or the falling quality of some content as people compete to rank for organic search terms.

Just look at recipe content. Many recipe sites give you a thousand words of preamble before giving you the thing you came for — the recipe — ‘because it’s good for SEO’, with a ‘Jump to recipe’ link because they know no one wants to read it.

AI will almost certainly bring its own brand of enshittification:

1. Bad results

A big one is the poor-quality results we’ve seen from many AI tools. AI is sycophantic and it hallucinates — it’s focused on responding to the request you’ve typed in, rather than on the quality and reliability of the answer. As fun as it would be, I’m not going to rehash all the AI Overview fails, or talk about how an AI-written foraging book led to accidental poisoning. What I will say is that AI is making some really bad, dangerous, unreliable content.

2. Endless recycling

AI is built on the concept of endless recycling of content. And chewing up and spitting out the same content over and over again is enshittification. This was already happening without AI. For as long as I’ve worked in content — 17 years and counting — a huge amount of content has been created through regurgitation, because companies wanted to rank for a keyword, irrespective of whether they actually had something to add to the topic. AI is just speeding this up. And it’s a race to the bottom. If no one is researching and creating anything original anymore because it’s cheaper to do it with AI, we’re going to end up trapped in a closed loop of crappy content.

And let’s not ignore the fact that AI is enshittifying the world, too. It’s a significant carbon emitter: training a single large language model can produce as much carbon as 125 round-trip flights between New York and Beijing. (Source: Nature)

Implication 4: Your intellectual property is being eroded

As content folk, we/our employers have created a lot of the content that LLMs are learning from and spitting back out unattributed.

I’ve seen this happening with my own work, with models I’ve created appearing in AI Overviews. Yes, the attribution is there — off to one side — but will someone click through? It distances me as the creator, and makes everything feel more anonymous and open source. Why not stick that thing in your presentation or blog post as your own when it’s presented as an ownerless, creatorless bit of common knowledge? Not to mention the fact that it’s all going to pop up in the articles people are asking AI to write for them anyway.

It’s not just an issue for individual creators — this is an issue for organisations too. Years and years of institutional knowledge, intellectual property, and investment in content is being used to train AI and make content for other sites.

How to protect our content and IP

Trying to work out if you can actually keep your content, IP, and data out of AI tools is incredibly complex and confusing. It’s like whack-a-mole — every time you think you’ve got all the data points covered, another one pops up. As far as I can tell, it’s pretty much impossible to keep your content out of LLMs. But here are some things to think about if you want to try:

  • Website: There are methods that you can use to block AI tools from crawling your website content, using instructions in your robots.txt file. But there seems to be a lot of scepticism about whether this actually works. And if your data is already in those LLMs, it’s too late.
  • Social media content: some, but not all, social media products will give you an option to opt out of your content being used for AI training.
  • Other digital tools: you need to check the T&Cs of any tools you use very carefully — a huge number now use any data you enter to train AI. Assume that anything you put into a digital tool — transcription tools, data repositories, conference call providers, as well as AI tools themselves — will become part of an LLM unless you specifically know otherwise.
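The robots.txt approach mentioned in the first bullet might look something like this. A sketch only: the user-agent tokens shown (GPTBot, Google-Extended, CCBot) are crawler names that AI companies have published, but the list of crawlers changes constantly, compliance is entirely voluntary, and — as noted above — it does nothing about content already in the training data:

```text
# robots.txt — ask known AI training crawlers not to index anything
# (voluntary: well-behaved bots respect this; others may not)

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that Google-Extended only opts you out of AI training; it doesn’t stop Googlebot crawling you for regular search, so blocking it shouldn’t affect your search rankings.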

There are other routes. Newspapers are coming to arrangements with AI companies to protect their content. News Corp, Reddit, the FT, and others have made deals so that they are paid for the use of their IP. (Source: DigiDay) It seems wildly unfair that smaller organisations and individuals don’t have access to these options. The only light I can see right now is that some unions are looking at this as part of a wider exploration of the impact of AI on workers’ rights. (Source: Labour Research Department, National Writers Union, TUC) The focus seems to be on creatives at the moment, but I hope that in time, content folk and our employers will be considered as part of this picture too.*

*Note: I have to point out that on this front, I’ve been a massive hypocrite. I’ve used AI to create images for articles because it’s convenient. I won’t be doing this any more.

What can we do about it?

There are three things that I think we should all be focusing on to guard against the potential negative impacts of AI:

1. Just give the user what they want

Rule 1 for content strategy and content design has always been to focus on the user and what they need. This matters now more than ever. In a sea of shitty content, having a deep understanding of your audience and their needs is going to serve you well.

2. Be original

AI can’t create. It can’t come up with knowledge. It doesn’t have expertise. It can only recycle. So make your content as original as possible. Carry out primary research to inform your content. Make the most of your institutional expertise and knowledge. Be imaginative and creative.

3. Own your own audience and become a destination

Try and decrease your reliance on aggregators like search engines and social networks for traffic and engagement. Big tech companies are focused on profit, so it’s in their interest to give you as little traffic for as much money as possible. Focus on building an audience you actually own, and become a destination — make content so good that people will come direct when they need information. And ignore email at your peril: it’s the most consistent, reliable marketing channel. (Source: SparkToro) And email plus content strategy plus content design? What a combo that could be.

Reading list

We deserve better than an AI-powered future, Jane Ruffino

Here’s why AI search engines really can’t kill Google, David Pierce, The Verge

Presentations, Benedict Evans

How Do You Break Up a Company Like Google?, John Herrman, NY Mag 

The Search Engine Wars Are Back On, John Herrman, NY Mag

Why publishers fear traffic, ad declines from Google’s AI-generated search results, Marty Swant, Digiday

How to Protect Your Privacy From ChatGPT and Other AI Chatbots, Jen Caltrider and Zoë MacDonald, Mozilla Foundation
