The alchemy of AI
A guide to Ethan Mollick and how to go from a spectator to a creator in the age of AI.
I’ve been using AI, specifically LLMs (large language models) like ChatGPT, for the past 8 months.
When you look at some of my writing on the topic, you might think I’m an expert…
Well, I have a confession to make: like any artist, I’m stealing.
I’m simply taking bits and pieces from the true artists, the pioneers in the space, and I try them on for size. I apply my domain knowledge, I mix and match, and I share the results to get feedback.
One of these pioneers is Ethan Mollick. He’s an “Associate Professor at the Wharton School of the University of Pennsylvania, where he studies and teaches innovation and entrepreneurship, and also examines the effects of artificial intelligence on work and education.”
To me, he’s also one of the most curious people on the planet and a unique thinker.
I’ve read every single one of his 69 newsletters (so far) and with this post I want to give you a map to approach his mind and his writing on anything AI for work, education, and creativity.
Not because I want you to start using AI, but so that, if you’re interested, you can become a more effective, more creative, and more innovative thinker.
I know Ethan’s work did exactly that for me. I used to think ChatGPT could replace me, but I realized pretty quickly how miserably it failed at that. Then I stumbled on his ideas, and my approach changed completely, to the point that I was able to work with 7 clients on fully custom projects at the same time as a solo copywriter – impossible to even fathom up until then.
AI has no instruction manual… but if there’s anything close to it, it’s Ethan’s writing.
If there’s one reframe that has proven helpful in my life like few others, it’s that for something to be useful, it doesn’t necessarily have to be true.
Ethan focuses on this theme a lot as you’ll see. Let’s dive in.
What’s the real impact of AI?
First, it’s important to understand how AI is democratizing our skills. This technology won’t steal your job, but it will replace a lot of the tasks we consider boring and repetitive. We’ve seen this with past technology too. Take spreadsheets…
The introduction of spreadsheets, far from eliminating jobs, actually expanded white-collar work and led to the creation of more high-value jobs such as data analysts (1).
Experts suggest that AI and machine learning may actually create more occupations than they replace, but it will require current workers and those entering the workforce to adapt and acquire the new skills in demand for AI-focused roles (2).
The great news is that to get those skills, you don’t need to be a code guru.
In this post, Ethan explains how AI acts as a leveler of skills: it allows non-experts to perform expert tasks. Imagine both low and top performers stepping onto an escalator together and riding up a floor or two. They’ll both end up at a higher level than the one they got on at.
There are two nuances though:
The average skill level will be higher, making it easier for low performers to do good work, and
Those who find a way to integrate their domain expertise with AI will become the kings and queens of the space, separating themselves from the pack. Take the AI personal trainer creating clients’ personalized fitness or nutrition plans in seconds; or the AI urban planner using simulations for sustainable city development; or the AI linguist/NLP specialist working on language models to improve human-machine communication.
Also our own wisdom is still the only limit, even with AI. That’s because LLMs have a “jagged frontier”:
AI is weird. No one actually knows the full range of capabilities of the most advanced Large Language Models, like GPT-4. No one really knows the best ways to use them, or the conditions under which they fail. There is no instruction manual. On some tasks AI is immensely powerful, and on others it fails completely or subtly. And, unless you use AI a lot, you won’t know which is which.
Imagine a fortress wall, with some towers and battlements jutting out into the countryside, while others fold back towards the center of the castle. That wall is the capability of AI, and the further from the center, the harder the task. Everything inside the wall can be done by the AI, everything outside is hard for the AI to do. The problem is that the wall is invisible, so some tasks that might logically seem to be the same distance away from the center, and therefore equally difficult…
Relying too much on AI can also backfire. It can be so tempting to have it write my copy, for example. Good thing I can still discern when the result is pretty bad (it typically starts with a firehose of flowery metaphors or vague, empty jargon). But the danger of “falling asleep at the wheel” is real:
When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over, instead of using it as a tool.
The not so sci-fi future of AI
Ethan defines the two different types of effective AI integrators as Cyborgs (those fully merging with it, mixing and meshing their skills with AI tools) and Centaurs (those clearly defining boundaries between what they can do and what AI can help them with).
In any case, the only way to truly understand how to use AI and its potential for your use case is to experience it: use it and find what it’s good at and what it’s not so good at. Ethan suggests a minimum of 10 hours with it to get a feel for it.
In my mind, the question is no longer whether AI is going to reshape work, but what we want that to mean. We get to make choices about how we want to use AI to help make work more productive, interesting, and meaningful.
How to think about AI so you can use it and it doesn’t use you
It’s easy to think there’s a danger of AI taking over our world. That’s when you see “War of the Worlds”-like scenarios being thrown around in the news. But what if I told you that AI is not software? What if it’s more like a person?
Ethan argues that AI is more akin to people. Not in a “Terminator”, humanoid sense, but rather in the way it “thinks” and returns results. One of the most important concepts to understand when it comes to LLMs is that – unlike with software – there’s no operating manual.
we should know how to operate a piece of software. Software projects are often highly documented, and come with training programs and tutorials to explain how people should use it. But there is no operating manual for LLMs, you can’t go to the world’s top consultancies and ask them how to best use LLMs in your organization - no one has any rulebook, we are all learning by experimenting.
Get used to the irrationality and often weirdness of these tools. Like a lot of the people in your life, they might not make sense. We’re all irrational creatures making mistakes. AI included.
Thinking of AI not as software is also a great frame if you’re a non-technical person. You shouldn’t avoid it because you can’t program. And you can stand out if you have specific domain expertise. That’s when you can successfully delegate tasks to AI, because your expertise will guide your decisions and your evaluations of what’s a good result and what’s a poor one.
Sounds like a paradox, but the more human we can be with AI, the better we’ll be at helping it help us. Think of it as your intern.
In this piece, Ethan gives us a specific roadmap for task management with AI.
First, decide which tasks to have the intern do and which not: is it a task you’re uniquely qualified to work on, a task you want to delegate to AI, a task you can collaborate on with AI, or a task you want to automate with AI?
Second, pick the online (internet-connected) and offline models to use. In this article, Ethan gives us specific examples and use cases. Keep in mind that a month from this writing these models might be out of date, but the categorization and the “buckets” you put them in will still matter.
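To make the triage concrete, here’s a minimal sketch in Python of how I might sort a week of my own work into those four buckets. The bucket names and the example tasks are my own illustration, not Ethan’s wording.

```python
from enum import Enum

class Bucket(Enum):
    """The four buckets from the roadmap above."""
    JUST_ME = "tasks I'm uniquely qualified to do myself"
    DELEGATED = "tasks I hand to AI, then review"
    COLLABORATIVE = "tasks I work on together with AI"
    AUTOMATED = "tasks AI runs with minimal oversight"

# Hypothetical tasks from a copywriter's week -- purely illustrative.
task_plan = {
    "Negotiate scope with a new client": Bucket.JUST_ME,
    "Draft 20 subject-line variants": Bucket.DELEGATED,
    "Outline a long-form sales page": Bucket.COLLABORATIVE,
    "Summarize last month's call transcripts": Bucket.AUTOMATED,
}

for task, bucket in task_plan.items():
    print(f"{bucket.name:<13} -> {task}")
```

The point isn’t the code itself, it’s forcing yourself to put every task in exactly one bucket before you open a chat window.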
Then it’s time to get into your prompt…
How to use AI in a way that’s not gonna be out of date tomorrow
Before even attempting to write your first prompt, understand this:
There’s no such thing as “prompt engineering”. Prompts are not magic spells. They are a “temporary state of affairs”.
Prompts are shared as if they were magical incantations, rather than regular software code. And even if we do learn some rules, systems are evolving in complex ways that mean that any understanding is temporary.
What will make or break your ability to prompt AI successfully is your level of expertise and your ability to “encode it” in your prompts. Before this LLM revolution, data was the new oil. The people who owned the data were the ones who ruled. Now expertise is what’s valuable. In order to work effectively with AI, all you need is expertise, time with it, and a vision for what you want.
With this in mind, it’s still important to learn how to structure a prompt. There are only two approaches you need to know to do it effectively: conversational and structured prompting.
Conversational prompting: this is where you simply have a normal conversation with AI like it’s a person. You ask for what you want and get (unlimited) responses.
The best way to get a great result from your conversations is context. I love Ethan’s analogy here:
You can (inaccurately but usefully) imagine the AI’s knowledge as a huge cloud. In one corner of that cloud the AI answers only in Shakespearean sonnets, in another it answers as a mortgage broker, in a third it draws mostly on mathematical formulas from high school textbooks. By default, the AI gives you answers from the center of the cloud, the most likely answers to the question for the average person. You can, by providing context, push the AI to a more interesting corner of its knowledge, resulting in you getting more unique answers that might better fit your questions.
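To see what “pushing the AI to a corner of the cloud” looks like in practice, here’s a minimal sketch comparing a bare question with the same question wrapped in context. The wording is my own hypothetical example; paste either version into ChatGPT (or whatever interface you use) and compare the answers.

```python
# A bare question lands in the "center of the cloud": you get the most
# likely answer for the average person.
bare_prompt = "Write a headline for a fitness coaching program."

# Adding context -- who you are, who it's for, what you already know --
# pushes the model toward a more specific corner of its knowledge.
context = (
    "You are a direct-response copywriter. The client is an online fitness "
    "coach whose audience is new parents with no time to cook or train. "
    "Their best-selling offer so far promised '20-minute home workouts'."
)
contextual_prompt = f"{context}\n\nWrite a headline for their new coaching program."

print("--- Without context ---\n" + bare_prompt)
print("\n--- With context ---\n" + contextual_prompt)
```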
Structured prompting: this is when you can have AI repeatedly and reliably (as much as possible) produce the result you are looking for - in a way that’s tailored to your specific needs and skills.
Ethan’s framework is pretty specific and practical (I’ve sketched it as a reusable prompt template right after this list):
Give AI a role and goal
Instruct it step by step
Inject your expertise, your domain knowledge, and your viewpoint
Personalize it by asking AI to ask you questions
Provide a few examples
Request a specific output or format
Appeal to its emotion (remember AI is more like a person…)
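Here’s a minimal sketch of how those seven steps can be strung together into one reusable prompt. The template wording and the copywriting example are my own assumptions, not Ethan’s exact phrasing.

```python
def structured_prompt(role_and_goal, steps, expertise, examples, output_format):
    """Assemble a structured prompt from the seven elements above.

    Personalization and the emotional appeal are baked into the template:
    the AI is asked to interview you before starting, and given a light
    emotional nudge at the end (remember, it's more like a person).
    """
    steps_text = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    examples_text = "\n".join(f"- {e}" for e in examples)
    return (
        f"{role_and_goal}\n\n"
        f"Work step by step:\n{steps_text}\n\n"
        f"Context and expertise to rely on:\n{expertise}\n\n"
        "Before you start, ask me any questions you need to tailor the result.\n\n"
        f"Examples of the style I want:\n{examples_text}\n\n"
        f"Output format: {output_format}\n\n"
        "This matters a lot to me, so please take your time and do your best work."
    )

# Hypothetical copywriting example -- purely illustrative.
prompt = structured_prompt(
    role_and_goal="You are a senior email copywriter. Your goal is to draft a welcome email.",
    steps=["Summarize the offer", "List 3 possible angles", "Draft the email using the best angle"],
    expertise="The audience is busy freelancers; they respond to concrete numbers, not hype.",
    examples=["Subject: The 15-minute invoice fix", "Subject: One client, zero chasing"],
    output_format="A subject line plus a 150-word email body.",
)
print(prompt)
```

Whether you paste the printed prompt into ChatGPT or send it through an API, the idea is the same: the structure, not the exact wording, is what makes the result repeatable.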
How to be creative and innovative with AI
I don’t let ChatGPT write these posts, but that doesn’t mean that it doesn’t help me write them. - Ethan Mollick
AI can help us break through our creativity limits. It gives us tools to come up with more, better ideas, acting as a “prosthetic”. None of this was possible before.
If you truly want to create novel ideas though, it’s important to understand how to use constraints and experiment with weird combinations:
There are lots of potential ways to do this with ChatGPT. One is to play with constraints. In general, and contrary to what most people expect, AI works best to generate ideas when it is most constrained. Force it to give you less likely answers, and you are going to find more original combinations.
AI is amazing for getting unstuck too:
There are many Shadows, the things that stop us from taking action on ideas and, as a result, many good ones never become reality. A great thing about ChatGPT is you can always ask it to help you with the step you are stuck on…
For example, I can skip blank-page anxiety and paralysis entirely. I just enter whatever data I already have to write my copy from, and ask for ideas, angles, thoughts, hundreds of different headline variants, or bullet point copy. In seconds, I’m not stuck anymore.
And when it comes to getting things done, AI helps us avoid the boring, mentally draining, and repetitive tasks. This means we’re free to work on the most exciting and interesting ones.
When it comes to creativity and productivity, AI can give you superpowers. Here’s a quick experiment where, in just 30 minutes, Ethan “did market research, created a positioning document, wrote an email campaign, created a website, created a logo and “hero shot” graphic, made a social media campaign for multiple platforms, and scripted and created a video.”
Anytime you use AI, ask yourself:
What’s the output I got?
What work did I need to input?
What did I get as a product?
Thanks to what sci-fi has made us believe, it’s easy to associate AI with the sexy and exciting. But the first thing we should prioritize, and what AI is actually really good at (right now), is automating the boring stuff.
The secret ingredient to master AI
One of the reasons Ethan’s ideas resonate so much is that they stem from what seems like an inborn, obsessive curiosity. It’s something I’m always striving to cultivate in myself, and he’s a great model to follow.
In this piece, he gives us a way to frame being curious:
Mysteries invoke a psychological trait called specific curiosity - an “intense desire to find an explanation for a puzzling experience or phenomenon.” The most innovative people are not necessarily the smartest, or even the most creative; but they often are the people who are absolutely driven by curiosity to find answers. This connection is very direct, as the paper I linked to shows that people who act with more specific curiosity one day end up being more creative the next.
Mystery helps boost creativity in a second way. New ideas are often recombinations or variations on past ideas. The more chances you have to encounter new ideas or think in new ways, the greater your chance of connecting those new concepts together with your existing knowledge in a new way. That is the basis of many breakthrough ideas.
If you want to use AI successfully, make your life easier, and get more (and better) done with it, never stop being curious. This is true now and in the future, no matter the upgrades or models we’ll have.
That’s how I come up with new prompts to help me in my work and personal life, every single day. I just ask myself “How can ChatGPT help me here? How can I do this better/faster with ChatGPT?”.
“Make resolving mysteries a habit.” - Ethan Mollick
The Tao of Ethan
Here are the main ideas and mental models behind Ethan Mollick’s thinking:
AI is more like a person than software
AI will replace tasks, not jobs
We still need to be hands-on and experience AI, not let it run on autopilot
Integrating AI successfully requires our own domain expertise
Understand both what it can do and what it can’t by using it
And remember, what’s useful doesn’t have to be true.
Take the way Ethan thinks about AI as a person rather than as software. It’s not true, AI is not a human entity, but thinking this way is useful. Or the analogy of AI as a cloud of knowledge that you can steer toward the area of expertise you’re working in. Again, not true, but useful.
Any good entrepreneur knows it. You have to be a little crazy and naive sometimes to get where you want to go.
To work with AI you have to rewire your brain in a way.
I’m doing it every single day. Creating new synapses and connecting the dots between what I can do and what I could do, if, and only if… I get a little more curious. If I dare to go a little further outside my comfort zone of knowledge.
Luckily, with AI used the right way, all of this is easier than ever.