Levelling the Playing Field

7 August 2024
By Nick Dunbar
Co-Intelligence by Ethan Mollick

I have a confession to make. Early last year, I updated my LinkedIn profile to include the skill of being able to code in Python. Although I can code in other computer languages, I had never studied Python. What really happened is that I had started using ChatGPT, which answers coding questions by spewing out prodigious quantities of functional Python. Giddy with excitement, I thought I could pass off the AI’s skill as my own.

I felt better about this when reading Ethan Mollick’s Co-Intelligence, where he notes that many workers are now using AI in secret. He cites three reasons: organisations have instinctively banned the use of AI, people don’t want others to think their ideas come from a machine, and workers fear that their jobs will be replaced by the same machines.

These questions are at the heart of Mollick’s day job as a Wharton Business School professor, as well as of his book. No machine is going to replace Mollick, although he toys with the idea: he used ChatGPT to help with the book, taking its suggestions on chapter drafts, edits and even the ending.

He’s not the first author to use AI’s own voice in a book about AI: Janelle Shane did that in 2019 with You Look Like A Thing And I Love You, using the output of earlier generations of neural-net text generators to show how ‘weird’ AI was. You can’t say that about today’s large language models, which can convincingly become anyone you want them to be.

After starting with a quick tour of transformers and the attention mechanism that powers generative AI, Mollick rolls out his key message: that it is better to learn by doing than by being told. He provides ‘four rules for co-intelligence’. Number one is ‘always invite AI to the table’, using its alien perspective to shake you out of status quo bias.

Number two is ‘be the human in the loop’. Yes, GPT-4 may have been trained on every text known to humans, but Mollick reminds us that the AI is a statistical model that doesn’t actually know anything and can’t explain what it does. Instead, it tries to make us happy with its answers, and often makes things up. Rather than copy-pasting its output, we need to stay in control.

Rule number three: treat AI as a person (but tell it what kind of person it is). Rather than struggle with the fact that ChatGPT can be inconsistent, Mollick urges us to embrace this by giving the chatbot different personas. This prevents bland output and ensures that it offers diverse approaches to a given problem.
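To make rule three concrete, here is a minimal sketch of what persona-setting looks like in practice, assuming the OpenAI Python SDK; the model name and the sceptical-risk-manager persona are my own illustrative choices, not Mollick’s:

# A minimal persona-setting sketch, assuming the OpenAI Python SDK.
# The model name and persona text are illustrative, not from the book.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice of model
    messages=[
        # The system message is where the persona lives.
        {"role": "system",
         "content": "You are a sceptical risk manager who challenges "
                    "every assumption in a proposal."},
        {"role": "user",
         "content": "Critique this plan to automate our weekly reporting."},
    ],
)
print(response.choices[0].message.content)

Swap the system message and the same model becomes a breezy marketing copywriter or a pedantic lawyer, which is the point: the persona is just another input.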

Finally, there’s rule number four: assume this is the worst AI you will ever use. Since AI is constantly improving, any limitations are likely to be transient. This matters because Mollick proposes to divide tasks into three types: human-only tasks (such as writing a book), delegated tasks (boring things like summarising an article) and automated tasks, with Python coding as Mollick’s example.

People who keep these divisions fixed he calls ‘centaurs’, after the mythical half-horse, half-man creature; those who seamlessly blend their own skills with AI he calls ‘cyborgs’.

However, the ever-shifting boundary between what AI can and can’t do means these divisions must shift too. And the nature of work is changing, so Mollick’s prescription is to get in there first and work with AI before others do. He cites research showing that the most able people in a given field gain little from AI, while the least able improve rapidly.

I’ve seen that myself, as my middling coding skills have been turbocharged – or automated – by ChatGPT. It’s the results that count, although I might have to change that LinkedIn ‘skill’ from Python to ‘AI whisperer’.
