An intro from Lauren…
When I was canvassing for topics to cover in future articles, prompt engineering (something I know next to nothing about) came up a few times. The first person I thought of who could help was my former boss, Antony Mayfield, CEO of Brilliant Noise. When things are changing fast in digital, Antony is a very good person to know. He’s always at least one step ahead, has his finger on the pulse, and all the other clichés. AI is no exception. His Antonym email newsletter is a great source if you’re trying to wrap your head around where AI is right now, and where it might be headed.
Over to Antony…
Welcome to this article all about prompting: how to get the most from generative AI tools like ChatGPT. I’ve been fascinated by the explosion of these technologies in recent months, and have been experimenting a lot with how content strategists and writers can use them.
There are so many things to say, but these 10 points cover what I think will be most useful to you as a content professional. Let’s dive in!
Prompting is “conversational computing”
Generative AI is a powerful new technology. I like to think of it as being a bit like the sun – an intense source of light and energy. The prompts you write are ways of focusing and using that energy. You could concentrate it into a laser-like beam, or reflect and diffuse it to illuminate a whole building. You can harness it to cook food, fire clay bricks, or grow seeds into plants.
Sunlight is freely available to all. But we each use it in our own unique ways. It’s the same with generative AI – the raw capability is there, but it’s up to you how you leverage it.
Some refer to prompting as “prompt engineering”, which can sound intimidatingly technical. AI researchers have pulled off some incredible feats with carefully crafted prompts, but you don’t need to be a coding wizard to do great things. If you have a curious mind and a facility with language, you’re already well equipped.
As a content professional, language is your stock-in-trade. Prompting, and the generative AI tech that underpins it, is all about language. Some call it “conversational computing” – you can now create tools and shape content through dialogue, in ways that were out of reach to most of us just a few months ago. So let’s claim this space for ourselves.
The most practical way to write any prompt
Learning to get the best out of generative AI is like learning a new language, or at least a new dialect. It takes practice, and that means being comfortable making mistakes and trying again. If you don’t seem to be getting through, rephrase things and have another go — just like you would in a human conversation.
The single most useful thing to know about prompt-writing is the Role-Task-Format (RTF) formula.
- Role: Give the AI a role to play.
- Task: Clearly explain what you want done, leading with the desired outcome.
- Format: Specify a format for the output that will be most useful to you.
Here’s how it works, using the example of having an AI assistant help you improve a blog post:
- Role: Act as an experienced copy editor for a business publication. You could also spell out in detail what you expect from a copy editor. Sometimes more context helps, other times it doesn’t make much difference.
- Task: I want to improve the attached blog post to make it engaging for [target audience] and keep them reading to the end. Please critique the content, readability, style and flow, and suggest improvements.
- Format: List the top 10 improvements as bullet points, quoting the relevant passages and explaining the changes needed. Then please provide a revised draft of the article incorporating your suggestions.
Try this experiment: Test how well the RTF technique works by posing a straightforward query to ChatGPT, then trying again using the formula (start a fresh conversation for the cleanest comparison).
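If you like to tinker, here’s what the same RTF idea looks like in code. This is only a minimal sketch, assuming the official OpenAI Python client and an illustrative model name; the rtf_prompt helper is my own convention for stitching the three parts together, not an official technique.

```python
# Minimal sketch: assemble a Role-Task-Format (RTF) prompt and send it to a chat model.
# Assumes the official OpenAI Python client (`pip install openai`) and an OPENAI_API_KEY
# in your environment. The model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """Combine the three RTF parts into a single prompt string."""
    return f"{role}\n\nTask: {task}\n\nFormat: {fmt}"

prompt = rtf_prompt(
    role="Act as an experienced copy editor for a business publication.",
    task=("I want to improve the attached blog post to make it engaging for "
          "[target audience] and keep them reading to the end. Critique the "
          "content, readability, style and flow, and suggest improvements."),
    fmt=("List the top 10 improvements as bullet points, quoting the relevant "
         "passages, then provide a revised draft incorporating your suggestions."),
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

To reproduce the experiment in code, send the same question once as a bare string and once assembled with rtf_prompt, and compare the two responses.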
There are many other prompt-writing techniques to explore. The Prompt Engineering Guide is a great free resource to learn more. And the More Useful Things website from Wharton’s Dr Lilach Mollick and Dr Ethan Mollick has a wonderful Prompt Library as well as other resources for teachers and students (that are useful for the rest of us too)*.
*More Useful Things is a companion website to Dr Ethan Mollick’s wonderful One Useful Thing newsletter, where he shares all sorts of interesting insights about generative AI. It’s an essential read if you’re interested in generative AI and how it can help us think.
Gen AI is a cognitive accelerator
Generative AI can be thought of as a cognitive accelerator — it dramatically speeds up and scales up certain mental tasks. But it’s important to remember that it will amplify our thoughts, both good and bad. We all do stupid things as well as smart things. Adding AI into the mix heightens the impact of both.
Used thoughtfully, generative AI gives us incredible new powers to explore ideas, solve problems, and create things that would have taken far longer before (if they were possible at all). This has practical as well as ethical implications. In a paper from Harvard Business School called “The Jagged Frontier”, researchers found that some users were lulled into semi-apathy by good initial results and then missed errors later on. They called it “falling asleep at the wheel”. As one of Brilliant Noise’s clients perceptively put it, a good principle is “utilise, don’t rely” on AI tools.
AI has a carbon footprint – here’s how to minimise it
The carbon footprint of generative AI, particularly for compute-heavy tasks like image generation, is substantial. Researchers have estimated that generating a thousand images with an advanced model such as Stable Diffusion XL emits roughly as much CO2 as driving a few miles in a petrol car, and that training a large model like GPT-3 produced on the order of 500 tonnes of CO2 equivalent. And because a deployed model is queried millions of times a day, the operational phase can end up overshadowing the emissions from training.
This is a complex issue, and needs to be addressed in AI strategy by tech companies and their customers. At an individual level, here are some ways to keep your gen AI activities as green as possible:
- Use existing pre-trained models rather than creating your own from scratch. Fine-tune these base models for your needs.
- Choose energy-efficient processing methods, and be selective about when you use resource-intensive approaches.
- Opt for cloud providers and data centres that run on renewable energy or have robust carbon neutrality commitments.
- Reuse models and computing resources as much as possible to minimise waste.
- Keep tabs on the carbon cost of your AI workloads so you can track and manage their environmental impact (a rough sketch of how to do this follows the list).
- Push for greater transparency around the energy demands of developing and operating ML systems.
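To make the “keep tabs on the carbon cost” point practical, here’s a rough sketch using the open-source codecarbon Python package. Treat the API details as assumptions to verify against the package’s current documentation; the project name and the placeholder workload are mine.

```python
# Rough sketch: estimate the carbon cost of an AI workload with the open-source
# codecarbon package (`pip install codecarbon`). API details are assumptions based
# on its documentation; check the current docs before relying on the numbers.
from codecarbon import EmissionsTracker


def run_my_ai_workload() -> None:
    """Placeholder for whatever you want to measure, e.g. a batch of prompts or a fine-tuning run."""
    pass


tracker = EmissionsTracker(project_name="blog-post-rewrites")  # hypothetical project name
tracker.start()
run_my_ai_workload()
emissions_kg = tracker.stop()  # estimated emissions for the run, in kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.4f} kg CO2eq")
```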
For more on this see Harvard Business Review’s How to Make AI Greener and PwC’s How generative AI model training and deployment affects sustainability.
There are tricky ethical questions to navigate
Generative AI raises some thorny ethical issues that content pros need to be aware of:
- Plagiarism: AI models often don’t disclose their training data, so their outputs may reproduce copyrighted material. Avoid publishing AI-generated content verbatim. The best way to avoid plagiarism is by using AI to rework or republish your own content.
- Factual accuracy: AI can authoritatively state false or biased information. Always fact-check, and try not to think of AI as a search engine. AI-powered search tools like Microsoft Copilot and Perplexity give references for their answers and are more reliable than ChatGPT and similar tools.
- Copyright: AI can mimic distinctive writing styles, potentially infringing on intellectual property. Some companies, like Microsoft, will indemnify users against accidental copyright issues.
- Job losses: There are concerns that AI could automate away many content and creative roles, and more than concerns, there already seems to be evidence of it happening. How can we harness it to enhance our work rather than replace us?
- Bias: AI models trained on skewed datasets can bake in and perpetuate harmful biases. Bias can creep in at many stages, from how a model is built to the prompts we write. Projects like Erin Reddick’s ChatBlackGPT are highlighting this problem and are worth following to understand the issues.
Organisations should put robust guidelines in place for judicious use of generative AI. Core principles: respect privacy, apply brand standards, double-check factual claims, and rewrite AI output in your own words.
Prompt yourself!
One day, thinking through the best way to design a workflow, I realised: we aren’t always great at prompting ourselves. Some people, like project managers, developers and creative directors, are trained to break tasks down into clear steps. They naturally start by defining the desired outcome, then work backwards to map out the process. But many of us (myself included) tend to just dive straight in and muddle through by sheer determination and the fear of looming deadlines!
It’s worth building the habit of “prompting yourself” and applying the same rigour as you would when delegating to another person. Be clear on the end goal, timelines, approach and checkpoints.
When working with generative AI, I’ve found it really helpful to unpack my projects into subtasks, consider the modes of thinking required at each stage, and match those to appropriate prompts and models. Describing the work in detail, as if briefing it into an AI, has made my own process more structured and efficient.
If you use a paid version of ChatGPT, you can try out the “AI Task Analyst” custom chatbot I built to help break down and describe complex jobs (confusingly, OpenAI calls these custom chatbots “GPTs”).
Develop your generative AI practice
To get good at prompting, you need to treat it as a practice – something you work at regularly and systematically. A few tips:
- Don’t confine yourself to professional tasks. I learned a lot from helping my partner develop their garden design business, and by making a robot coach to help my children with personal statements for their university applications.
- Decide how much time to dedicate to honing your skills. Build it into your schedule. It will repay the investment surprisingly quickly.
- Keep notes on what does and doesn’t work. Build your own library of proven prompts.
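To make that last point concrete, here’s a minimal sketch of one way a personal prompt library could work: store proven prompts in a small structured file and fill in the blanks for each job. The file name, field names and example prompt are just illustrative conventions, not an established tool.

```python
# Minimal sketch of a personal prompt library: keep proven Role-Task-Format prompts
# in a JSON file (file name and structure are illustrative) and reuse them by name.
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")

def save_prompt(name: str, role: str, task: str, fmt: str) -> None:
    """Add or update a named Role-Task-Format prompt in the library file."""
    library = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    library[name] = {"role": role, "task": task, "format": fmt}
    LIBRARY.write_text(json.dumps(library, indent=2))

def load_prompt(name: str, **values: str) -> str:
    """Fetch a prompt by name and substitute any {placeholders} it contains."""
    entry = json.loads(LIBRARY.read_text())[name]
    text = f"{entry['role']}\n\nTask: {entry['task']}\n\nFormat: {entry['format']}"
    return text.format(**values)

save_prompt(
    "copy-edit",
    role="Act as an experienced copy editor for a business publication.",
    task="Improve the attached blog post to keep {audience} reading to the end.",
    fmt="List the top 10 improvements as bullet points, then provide a revised draft.",
)
print(load_prompt("copy-edit", audience="busy marketing directors"))
```

A spreadsheet or notes app works just as well; the point is to capture the role, task and format that worked, so you can reuse them.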
As tech visionary Douglas Engelbart once said: “The better we get at getting better, the faster we will get better.” By deliberately practising prompting, you’ll improve rapidly and compound the benefits to your work.
Tools to try
There’s a rapidly expanding universe of generative AI tools out there. Here are a few I’ve found consistently impressive and useful:
- ChatGPT – the chatbot that kicked off the current gen AI frenzy, and still the most capable free tool for general-purpose prompting.
- Anthropic Claude – an AI assistant focused on safety and transparency. Free via Poe, paid via Anthropic.
- Mistral – a family of open-source language models from the French startup Mistral AI, especially good at devouring PDF research reports and finding insights for you.
- Perplexity – an AI-powered answer engine built by former OpenAI folks that cites sources for its answers. There’s a free tier, with a paid plan for heavier use.
But this space is evolving lightning-fast, so my recommendations will probably be outdated by the time you read this!
Nobody knows anything… yet
The emergence of large language models and the gen AI tools they power marks a genuine paradigm shift. These technologies are advancing so quickly that everyone is scrambling to make sense of their capabilities and limitations.
Even the creators of the most cutting-edge AI systems admit they don’t fully understand how they work under the hood, or what they might be capable of given the right prompts. We’re all learning as we go.
So while it can feel like there’s a dizzyingly steep learning curve with all this “prompt engineering” stuff, take heart in knowing that we’re all beginners. Don’t be afraid to experiment, break things, and look silly. That’s how breakthroughs happen!
Humanity + AI = superpowers
Whatever your hopes and fears about a future alongside ever-advancing AI, one thing is clear: by learning to “speak AI” through prompting, you will massively augment your capabilities.
You’ll be able to understand and use these new tools to supercharge your thinking, communication and creativity. You’ll be better placed to make well-informed decisions as both an individual and a professional. And you’ll have a voice in shaping how this technology evolves and integrates into our lives.
Ultimately, the real power emerges from the interplay of human and artificial intelligence. By upskilling in prompting, you’re gaining the ability to wield that combined power.
There are real concerns and issues, as with any new technology, but it is clear to me that the best way to shape its use and the future is to learn how it works.
If you would like to read more of our work, check out:
- Antonym – my newsletter, mainly about AI these days
- BN Edition – Brilliant Noise’s newsletter, mainly about AI and marketing