A conversation with Mills Baker, Head of Design at Substack
The edited written version from Whoa, Vol. 2: Conversations on AI and Creativity
Mills Baker is the head of design at Substack, where he shapes how millions of writers and readers connect through the platform. This is an edited interview from Whoa, Vol. 2: Conversations on AI and Creativity. To get early access to all other interviews, including David Perell, Robin Sloan, Anu Atluru, and Oliver Burkeman, get a copy of the print + digital zine. Enjoy :)
Alex Dobrenko: So we're really finally doing it.
Mills Baker: We're doing it, man. I'm very excited. Well, I'm sort of excited. I don't want to overstate it.
AD: Just sort of excited, huh? Let's unpack that.
MB: Do you ever actually get enough mental space to be excited about something that's happening? Doesn't it all just kind of fucking wash over you all the time, you know?
(Alex and Mills go on a full on side quest on anxiety, relationships and more. We trimmed for focus, but the full detour is available to premium subscribers.)
AD: Okay, let’s talk about some AI stuff. You write a ton and you think a ton, but it sounds like you don't find AI all that helpful for either.
MB: Well, no, because almost everything I write or think about is either too particular for convergence machines like LLMs to have anything to say about it, or I want to know what I think or what the people I work with think or what the market I work with thinks, which again, is particular. So, you know, it can give me some kind of general point of view on social media. But the general points of view on social media, first off, they're common. I think most people already know them. And second off, they don't apply that directly to Substack. Substack is in peculiar phases of evolution, has different constraints, different dynamics. And I think this is true for the things I write too. If I thought generative AI or LLMs or whatever had an interesting thing to say about something I was writing, I probably would also think, well then I don't need to write this because it's already part of the generalized training data conclusion system.
If I thought generative AI or LLMs had an interesting thing to say about something I was writing, I probably would also think, well then I don't need to write this because it's already part of the generalized training data conclusion system.
AD: I've done the thing where there's enough of my writing out there and I'll be like, “write something in the style of Alex Dobrenko”. And it'll be kind of right, but wrong. But then immediately I worry that I'm judging it as wrong as a defense mechanism because I can't admit that it might actually be right. But maybe that distance is the whole point, and we're never gonna close that distance because there's a person in that distance. You know what I mean?
MB: Yeah. You know Frank Lantz, the Substack writer? He's a professor and a game designer. He's an incredible figure. He writes on Substack and he said this great thing about art, which is that audiences have an adversarial relationship with artists. He was describing the cat and mouse game that occurs with art and meaning. And I think this is exactly true.
So let's assume that it is a defense mechanism. I don't think it is, by the way. I actually think that if you showed it to a friend or a fan, they would go, yeah, I get what it's doing. This is a parody of you, but it doesn't do the thing you do. It's not actually inhabiting your point of view. But let's assume it's a defense mechanism. Even that is a thing that's important because humans will always relocate themselves just outside of the pattern to avoid getting got. And so I'm very pessimistic about LLMs generally.
I mean, I love them. They're really useful. I like to use them to correct photos. But, you know, broadly speaking, they're never going to do the other part.
AD: Do you worry that your pessimism is itself a defense mechanism?
MB: I have every reason to justify skepticism because I have a ton of prior beliefs that would be problematized if AI turned out to be comparable to a human mind or consciousness. So it is a risk. But I also just use this shit and it doesn't do it, you know.
I think back to these technologies that get hyped a lot like crypto. For several years, everywhere you went, people were telling you about how crypto was going to change everything. It was going to end the reign of fiat currency and lead to new political structures. And one of the things I used to say back then is, if any of that's true, you won't have to tell me. I'll simply see it and I can deal with it then.
You remember when the iPhone came out? Nobody ran around saying you've got to get an iPhone. What happened is you would just see an iPhone and you would see how great the pictures were, whatever the hell, and you'd go, I want that. Nobody had to evangelize you or explain, “well, what iPhone does is it co-locates super computing power in your pocket.” Nobody would say shit like that, but that's how they talk about crypto and AI. They go, “you don't understand. This ushers in a new era of essential software.”
What's gonna happen is someday my buddy Paul will get drunk before he drives over and I'll say, damn, you were driving blackout drunk? And he'll go. No, man, agents drive cars now. And I'll go, okay, AI's here. And until that day comes, I don't have to think about it because it's all just theory, you know?
AD: Yeah, the hype of all these things gets in the way of that emergent cultural process, like with a cool new band. They're pouring it on so hard that it's not cool even if I use it.
MB: Yeah, the easiest way to kill something is unjustified hype. It can fuck a band up, it can fuck an artist up of any kind. Gary Marcus, who's an AI critic on Substack, says that if we were doing these things rationally, we would be investing a ton in solving the problems of LLMs. But because the hype is so strong, people just go, “there are no problems with LLMs”.
One of my favorites is when I say I don't think an LLM is good for something, like when I say it's bad that it hallucinates falsehoods all the time. And someone will go, people do that too. And yes, I understand that people do that too, but there is a difference between a person who makes a mistake or a person who's lying to you versus an LLM where there is no distinction between truth and falsity. And they've tried the adversarial thing where they have one LLM grade another LLM. But the grading LLM will hallucinate shit just like the original LLM will. Humans, at the very least, have conjured social and sensory ways of checking ourselves.
there is a difference between a person who makes a mistake or a person who's lying to you versus an LLM where there is no distinction between truth and falsity.
Look, nobody thinks humans are more delusional than I do, but it is quite different from the way LLMs are. And so I think the hype has damaged investment in solving what's wrong with LLMs.
In fairness, a ton of entrepreneurship and artist stuff is this way, where there's no there there yet and you're trying to invent it. Like when Chris and those guys started Substack, they didn't have any users. And they're going out to people saying, hey, give us money, we'll get some users. And there's always a fake it till you make it going on. But the scale of fake it till you make it that's happening with LLMs I find disturbing and crazy.
AD: Yeah, and the underlying belief is often, people don't know what they want. You got to give it to them. And I think actually they do know. Maybe that's gonna make me bad at business. But, I don't know, I think people know, you know.
MB: Yeah, they often do know. There's that Henry Ford quote where he said, if I asked my customers what they wanted, they'd say faster horses. That's only sort of true. If Henry Ford sat down with his customers and he was like, hey, what do you want? And they were like, I'd love a faster horse. And he said, okay, well, just so you know, we have this technology called an engine and we can mount it to wheels. And then it would be like a horse, but it wouldn't need to be fed. And it would have these different properties that carry more weight. They'd go, oh yeah, I want that. This isn't some gigantic leap that only Henry Ford could make.
So if I said to somebody, how about a piece of software that reliably makes good decisions for you and can operate everything in your life? They'd go, yeah, I want that. It's just we don't know how to build that. And what we've built instead are these things which, again, especially in software development, it's insane to be critical because they are incredible for a lot of kinds of software development. But also I was talking to one of our top engineers (at Substack), one of the brainiest guys we've got, who's deep into this particular scene. And I was asking him about a particular project and what he thought the delta was pre-LLM versus post-LLM. And he was like, before LLMs, it could take 10 months. After LLMs, nine and a half months. And I was like, okay, well, that's not a huge thing. And he was like, no, these things don't solve complicated, hard problems in a particular space that well.
What they're really good for is starting from nothing and building stuff that doesn't have to support a lot of weight. So you'll see these guys who post things like, I coded a thing for my fucking TV in two days. And you're like, yeah, it's not that hard to go from zero to a thing for your TV. Now try to modify Facebook so that it works in a different way. Well, it doesn't really understand the way Facebook does this. And it doesn't understand how this is set up and I couldn't get it to work right with this. So it's the particularity, again, just like with creative writing, it's not there. Because they're generalization machines.
AD: Totally. I’m curious if you can share examples of how you use it.
MB: Yeah, see, Nick (Nick is a Product Designer at Substack) would have crushed this. Nick is the best software designer in the world and he does not publicize himself at all.
But okay, one thing I do use occasionally is if we're debating something at work, I'll dispatch deep research to produce a 20-page, extremely detailed and citation-rich report on a subject. For example, types of monetization for artists on the internet. I’ll say I wanna know every single thing, every pro and con, every long-tail dimension of all the possible ways that artists can monetize on the internet. It's almost like a due-diligence completionist step, just to make sure I'm not overlooking something really obvious and huge. Like, what if most writers make most of their money selling merch, and you go, fuck, I never thought about merch. So I'll set it loose on that, but I should say I've never had it produce anything that had a delta. When it comes back, I'm never like, thank God I fucking did that. I'm mostly like, yeah, yeah, yeah, okay, we knew about all that.
AD: Right, this confirms what I believed already.
MB: Yeah, so it's like a mild anxiety reducer.
AD: As the head of design at Substack, you are in the unique position of managing a team of people. What is that like with AI? Are people worried about admitting they’re using it to you?
MB: No, no, because I've always told them, use AI. Use it anywhere you can while still doing what you need to do for the company and for the good of others.
In a tech company, you have the good luck that there's zero resistance to this. Nobody at Substack would say, wait, you used AI for that, that's not good because we want people to do what's effective and use any instrumentality they can.
So they all use it whenever they can or want to. And I don't think there's any domain where they'd feel like they need to keep it a secret.
AD: I think the fear that I felt on the other side of having bosses is if you admit to it, they're gonna be like, well, we don't need you.
MB: Yeah, well I think there's a world where Chris doesn't need me, right? Because most of what I do is generate language and take language in, because I'm a middle manager. Like I don't actually produce anything of value. I just exist as an interface between other valuable parts of the organization. So if that line of thought were to lead anywhere, it would lead to the elimination of me, not the designers, because the designers actually produce things that AI cannot produce. Kellen still has to go in and design the user interface. I'm the one who could be abstracted away if I used AI. I don't use AI because it would take me so much longer to get writing out of AI that I'm happy with than it does for me to just puke out the writing myself.
Where this is salient in the labor market is a lot of people have jobs where they don’t get to say what they think. When I worked at Facebook, this was true. They didn't want to know what I thought. They wanted me to perform some kind of function. In that zone, AI is a tremendous temptation because you might want to say, hey, write a message that says, “I think this is a really exciting direction for the project”, even though I think this is a fucking terrible direction for the project. But I don't think anybody at Substack is in that kind of spot where they're having to pretend things. Chris can text me at 11 o'clock at night about something and I can say exactly what I think about it. And that's true almost all the time at Substack.
So in a very real way, that eliminates the main thing that sucks about having a job, which is having to go, “fuck, Gary really likes to hear that you're excited, so yeah Gary, I’m super pumped about this initiative and I think it's gonna really help us synergize stuff” or whatever. And I have had those jobs and there’s an integrity drain. And God, I hope Chris doesn't can me, you know, because I’d just have to go back to that world and that world's not fun at all.
AD: I'm sure he would be like, no, dude, you're invaluable. That's the vibe I get.
MB: I mean, I hope so, but you never know what Substack's gonna need at different chapters of its evolution. When I started, there were 10 people. And if it continues on the trajectory it is, Substack will be a very large company. And there could come a time when he just wants somebody who's a little bit more serious and operationally heads up.
AD: You have such an interesting vantage point at Substack seeing all the writers and creators. What do you see AI doing to the ecosystem?
MB: Well, there's so many verticals as they say, right? There's finance writers whose newsletters are all about their ability to summarize market trends, and at least in the ingestion phase of data, LLMs are fantastic. So I could see LLMs becoming part of the processes of lots of different writers in lots of different domains. But I'm pretty bearish on LLMs being able to do creative work that audiences care about stably over time or pay for.
So for example, back to those finance letters, let's say you're some wonky quant person and you're extremely good at understanding market data, but you're a terrible writer and you have an LLM write your newsletter. Well, your insights help people on the stock market in whatever way people on the stock market get helped. But the writing in that case is not about the prose. LLMs can write the letter where it's transactional and it doesn’t matter.
On the creative front, I don’t think the slop era is going to be meaningful. We've had these eras before where something suddenly gets cheaper or easier. Photoshop's a great example of this. There's this brief period, I remember in the 90s, where I would go online and I would look up graphics and I'd be like, “ooh, here's a picture of a mountain with lightning behind it, how cool.” And now you couldn't show me a graphic cool enough to excite me because I've gotten back to only caring about the real and creative. And so that'll happen with slop, where that's gonna be a little chapter where people are excited about crazy weird videos and then people will go, that’s cheap and infinite. And that adversarial thing will come back and they'll go, “I wanna see something that's interesting and hard to come by and rare and novel.” And AI-generated shit as such is not novel and it's not hard to come by, it's not rare, it's not gonna change anything about how I think or feel.
And so the scarcity problem of creative work, voices that are interesting, of culture that we care about, of things that resonate with us is just gonna return with a vengeance like it always does. There's basically no escape out of the scarcity problem of meaningful shit. So my vantage point is people are gonna figure out ways to use it, some are gonna be good, some are gonna be bad. In some number of years, it will return to being a peripheral issue in the same way that word processors and Photoshop are.
AD: Mmm. I like that. Because there's this sort of purity conversation that's happening, of like, don't you fucking dare use AI, or you're in big trouble.
MB: Yeah, I think especially in America, the desire to moralize issues is very strong because people want to be good. And so they're always on the hunt for new wedges that they can use to distinguish good from evil. And I don't dispute that there are moral dimensions to AI or to LLMs. For example, I absolutely think it's fair to say that LLMs are plagiarism machines in the sense that they derive their functionality from the work of people who were not credited and are certainly not going to be paid. But, in fairness to LLM companies, if you were to take any individual's contribution to the LLM's training data set, it's so minuscule and so replaceable that we're not talking about people being denied millions of dollars or thousands of dollars or even single dollars. I've been writing on the internet since the 90s, I assume I'm in there somewhere, and I'm probably owed by OpenAI one one-hundred-millionth of a cent. And if things were fair, there would have been this kind of dispersal. It's also true that LLMs are likely to produce more value than they extract.
So if I were someone who hated tech companies and really wanted to resent LLMs, I'd probably be slightly celebratory, because they're gonna fuck up a ton of software engineers' wealth accumulation. They're gonna hurt people like me who work in prod dev, so you're gonna get your own back against tech company people, don't worry. And then, try not to forget, everybody winds up in the dirt anyway. So I don't feel strongly moral about them.
I do think it's fair for audience members to say, hey, these are my requirements. Because for example, in your case, I do think one of the reasons people love to read you is the you inside of it. And the degree to which that's diluted is a reduction in what they're getting from it. Some people call that parasocial. But you also don't know what goes into people and where they pick things up and where phrases come from. Even gestures have this funny property of moving around the world and in populations. One of my heroes Milan Kundera talks about your body having certain propensities, but says that you also clearly see and model things. Someone might say that gesture is Alex in a nutshell and you'd go, I actually picked that up from watching Cheers.
There's a million things like this where when you test it, it's not there. Scott Alexander does this periodically where he'll give people 10 poems, some written by AI, some not, and people can't reliably distinguish. Some of that I think might be the pressure or anxiety of having to make a decision on a test rather than living with something. But that doesn't mean that people can't tell. It might mean that they can't always tell reliably, but in certain contexts, you can absolutely tell. Like, there's zero chance an LLM could fake my wife to me, or vice versa. Your model of people you're close to is so rich. Maybe this is kind of orthogonal, but scientists are always saying that there's no difference between different alcohols. Almost every human who drinks will say things like, “man you don't want to be around me when I'm on tequila. Tequila makes me fight. Wine makes me expansive and philosophical,” and if you try to research this, scientists are always going, “no, none of that's true”. It's all just ethanol that hits your bloodstream. And all I can say is bullshit. The entire world isn't hallucinating the deltas between different alcohol delivery mechanisms just because you can't figure out why that would be. Different demons live in different liquors, okay? I don't know, but something's going on out there.
AD: How would you describe your relationship with AI?
MB: Like a nurse. Imagine I'm a really old, decrepit man and there's a young man or woman who has to wheel me around and give me laudanum sometimes. That's the AI. I float crackpot shit by it and I go, “don't you think it's true that windows have gotten worse?” And it's like, “yes sir, windows aren't like they used to be”.
AD: Ha, it's like old man yells at sky, but now old man yells at AI is the new meme. What do you wish AI could do but fear it never will?
MB: I wish AI could reliably abstract all administrative operations from my life. Like I could tell it to handle all of my official stuff, anything from banks or anything where you think I don't really want to be personally involved, just handle that shit and just ping me with any questions. I think that might actually someday be closer to possible.
AD: Everyone's making predictions about AI. Let's make a prediction about humans. Where do you think humans will be in 10 years?
MB: I think humans will be in the exact same position in 10 years that they have always been in.
AD: What questions about AI and creativity aren't being asked enough?
MB: The question we don't ask enough is how our minds work. Nobody really seems to object to the fact that there is no theory of how minds work. If you ask a scientist, a doctor, a neuroscientist, any biologist: what is a thought? They can't tell you. They can't tell you what the relationship between a thought and the brain is. Where do ideas come from? Why do some people think better than others? We're still missing a core theory of mind.
AD: Do you think we'll ever get it?
MB: I don't. I think it doesn't want to be known.
AD: Last thing, what do you wish people talked about more instead of AI?
MB: Things that they loved. Anytime you're angry about something and you're writing about it, you should try instead to write about the good thing. So if you're mad about something happening with a political development, instead you should write a post about how you think politics should go in an ideal world. If you're mad about a bad movie, you should write about a good movie instead. I just wish more people spent time writing about things that they loved, things that they admired, things that they thought should happen, not contra something else. Because that's how you get entangled in the nightmare of the world and become part of the problem. And I do this, of course, but this is advice I give because it's the advice I need. I spend all of my time thinking about what's not good. And that's ridiculous. If everybody just took their energy and instead wrote about what you should be watching or listening to or reading, I think that would probably be good.
AD: What are, to end then, some things you love?
MB: Well, it's fresh in my mind because I was watching it with the kids this morning and you're a film guy. Terrence Malick movies are phenomenal. So we watched Tree of Life. And I also love Knight of Cups, the movie of his that people like the least, but it hit me like a ton of bricks. I loved that movie.
AD: Alright. I know you got to go. This was amazing, man.
MB: Let's do it again. Later, Alex.
The full unedited conversation is available to paying subscribers below.


