Chat tool prompt hacks are the airport shop management books of the AI world. “Let GPT turn you into a money-making machine”, “Top 22 prompts to code faster and smarter” and, a personal favorite, “50 prompts to become 10 times more productive than Elon Musk”.
There are a few problems:
One: a lot of these prompts actually add to your workload! Variations on “help me become an expert on [SUBJECT]” promise an instant injection of pure knowledge and then deliver a Wikipedia page, but written differently. They’re a good starting point if your job requires writing generalist briefing documents, but not if you want to be Bradley Cooper’s character in Limitless. And most of the prompt wizards are pitching to an audience who absolutely want to be Bradley Cooper’s character in Limitless.
Two: many are selling something. The number of AI-powered apps and plug-ins has mushroomed over the past nine months. And, as with most growth-hack/productivity-bro content, there’s no transparency about what you’re being sold. Sifting through the hundreds of new products dropping every day, in the hope they will make you a human dynamo, will NOT improve your productivity. Especially when many new plug-ins simply repackage and sell features already available to everyone through most large language models (LLMs).
Three: it’s OK not to be productive. And believing AI will make you a super-charged machine will lead to crushing disappointment when you do not, in fact, become 10 times more productive than Elon Musk.
Four: Today’s productivity hacks are tomorrow’s reasons for mass redundancies. The popularity of this career growth content comes from a place of anxiety about careers and earnings, both now and in the future. It may even be a reason you subscribed to this newsletter. AI is heightening those anxieties. So what do we do? Learn everything in the hope we can stay ahead of the AI-powered curve? Ignore it all because the robots are going to win anyway?
We’re going to attempt to tread a middle ground and settle on some general guidance for using prompts, with a focus today on chatbots rather than text-to-image/audio/video tools.
First, some definitions and categories
This is verging on pedantry, but it’s useful to be clear on the difference between LLMs and chatbots.
In reporting, ‘LLMs’ and ‘chatbots’ are often used interchangeably, and ‘chatbots’ is frequently swapped for ‘chat tools’ or ‘language models’. With different services dropping almost weekly, it’s easy to get confused, so:
GPT-4 is the multimodal large language model from OpenAI
ChatGPT Plus is the subscription chatbot powered by GPT-4
ChatGPT is the free chatbot powered by GPT-3.5
Multimodal, in this instance, means handling more than one type of data: e.g. text or image input can produce text output.
ChatGPT (from OpenAI, powered by GPT) and Bard (from Google, powered by LaMDA) have dominated the headlines so far. Claude 2, from Anthropic, was released in the US and UK last month and could rival both. All are commercial services.
Open-source LLMs, including Meta’s Llama, offer opportunities for developers to build their own tools, and it’s open-source LLMs that cause many worries about a difficult-to-regulate AI arms race.
Perceptive prompts
So, having read through all the AI prompt productivity threads (so you don’t have to), what are the main takeaways?
Be specific. Vague questions prompt vague answers.
Lean into your expertise. Let the chatbot know the level you are on.
How do you digest information? If a big ol’ block of text makes you switch off, ask for bullet points or a table. Most chatbots will even generate quizzes on request.
And, my favorite, the Michael Scott method: explain it to me like I’m 8, then like I’m 5, then, I guess, 3. Please chatbot, explain it to me like I’m the slowest learner in the world, now dumb it down.
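The takeaways above can be rolled into a single prompt. Here’s a minimal sketch in Python; the prompt wording and subject matter are invented for illustration, not taken from any of the threads:

```python
# A vague prompt invites a vague, Wikipedia-style answer.
vague_prompt = "Tell me about interest rates."

# A specific prompt states your expertise level, narrows the scope,
# and asks for a format you actually digest (bullets, then a quiz).
specific_prompt = (
    "I'm a financial journalist with ten years' experience, so skip the basics. "
    "Summarize, in five bullet points, how the latest UK base-rate rise "
    "affects fixed-rate mortgage holders. Then quiz me with three questions."
)

print(specific_prompt)
```

The second version hands the chatbot a role, a scope, and an output format in one go, which is most of what the paid “prompt hack” lists boil down to.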
And that’s it. Messing around with the chatbots and figuring out what works best for you will beat every how-to out there.
A note on custom instructions
Last month OpenAI announced custom instructions for ChatGPT. They’re not yet available in the EU or UK and will be rolled out to free-tier users in ‘the coming weeks’. A lot of the ‘hacks’ around custom instructions center on describing your own career/knowledge level and giving some pointers on style, e.g. if you like tables, pros and cons, no niceties, etc. But engineers have experimented with giving specifics of their workspace and tools for more tailored responses. Custom instructions will likely become standard for mainstream LLM chatbots over the coming months.
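To make the ‘describe yourself, then describe the style you want’ pattern concrete, here’s a hypothetical pair of custom instructions. The text itself is invented; in ChatGPT it would go in the two custom-instructions boxes, and the rough equivalent when calling the API directly is a system message (the `role`/`content` fields below are the Chat API’s actual message shape):

```python
# Box 1: what the chatbot should know about you.
about_me = (
    "I'm a backend developer working mostly in Python and PostgreSQL. "
    "Assume intermediate knowledge; don't explain basic syntax."
)

# Box 2: how you want it to respond.
response_style = (
    "Answer concisely. Use tables for comparisons and pros/cons lists "
    "for decisions. Skip pleasantries."
)

# API equivalent: prepend the instructions as a system message.
system_message = {"role": "system", "content": about_me + " " + response_style}
print(system_message["role"])
```

Once set, the instructions ride along with every conversation, which is why they’re a better home for ‘I like tables, no niceties’ than retyping it into each chat.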
Is prompt engineering a thing?
All this talk of prompts leads to one of the buzz phrases of 2023. The Prompt Engineering subreddit gives a good crash course on just how technical some of these engineered prompts are. It also puts paid to any lingering thoughts you may have that simply being good at writing questions in LLM chatbots could land you a six-figure job (most people using ChatGPT since November have had this thought; you are not alone).
We’ll still dive into more complex prompts in future editions. For example, LoRA (Low-Rank Adaptation) is a lightweight fine-tuning technique that can feed an entire aesthetic into an image generator by training it on a set of images, say, all your visual work. It can also capture the style of an artist without their permission (see below).
So prompt engineering is very much a thing; it just looks more complex than asking ChatGPT a concise question.
Deeper dives this week
A good list of commercial LLMs and open-source LLMs, including deeper-dive research
An artist does not want his work used in AI; his fans are not listening