When anyone asks me who they should be reading to learn more about generative AI, I always give the same answer: Ethan Mollick. An associate professor of management at Wharton, Mollick writes a Substack about AI, and his LinkedIn and X accounts are full of interesting posts about his experiments with ChatGPT and the latest research on large language models (LLMs). Mollick has co-authored academic papers about the implications of generative AI for work and education, and his newsletter tackles forward-looking topics, like the way organizational structure will have to adapt to AI. Here's our interview with Mollick, lightly edited for length and clarity:

You’ve written about AI’s ‘jagged frontier,’ meaning the technology is good at performing some tasks and bad at performing others, even when those tasks seem to be at the same level of difficulty. Do you have any tips for determining whether or not AI will be useful for a given task?

That is what makes it so interesting. The answer is generally no. Any heuristics I [give] you will be overwhelmed by weirdness inside the system. For example, if I ask it to write a 25-word sentence, it may or may not write 25 words, because it doesn't see words the way we do. But if I ask [it] to write a sonnet, it can write a great sonnet. So in your field, you're going to have to figure it out. It's too complicated. Nobody knows in advance. There isn't a really good heuristic other than: Use it to find out.