Every Job Will Become an Art or Die
A conversation with Anu Atluru
Hi! Starting this letter with some housekeeping (terrible phrase but what’s the alternative - PSA? Ghastly!) because I want to let you know about two exclusive events for Sublime Premium members (or paid subs to this newsletter): a private demo of Wabi (my favorite landing page on the internet right now) and an invitation to our IRL Internet Serendipity Tour happening in 13 cities from Nov 7 – Dec 16 in Taipei, London, NYC, Boston, Toronto, DC, Austin, Seattle, Vancouver, SF, Denver, Miami, and Columbus.
If you’ve been considering becoming a Sublime Premium member, now would be a great time. It’s a no-brainer, isn’t it? Not just software, but ideas, community, and spirit for a post-information age.
RSVP links below the paywall.
This newsletter is made possible by Mercury: business banking that more than 200k entrepreneurs use and hands down my favorite tool for running Sublime.
Running a company is hard. Mercury is one of the rare tools that makes it feel just a little bit easier.
Every other banking platform makes me feel like I’m doing taxes. Mercury makes me feel like I’m building something. The interface is unbelievably intuitive, fast, and lets me handle everything in one place – banking, cards, spend management, invoicing, and more.
If you’re a business owner of any type, visit mercury.com today and find out why so many of us actually love how we bank.
**Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust; Members FDIC.
A conversation with Anu Atluru
When we asked in the Sublime Slack who we should interview for this series, the most popular answer was: Anu. She writes banger after banger after banger. Some of my favorite things she’s written are Make something heavy and Pursuits that can’t scale.
In addition to her longform bangers, she can also craft an unfairly good sentence that leaves me thinking ugh, I wish I wrote that.
In this conversation, we cover:
Why every human skill becomes either extinct, art, or sport.
The psychological cost of creating on algorithmic platforms.
Why the future belongs to people who are part media, part machine.
Why our ability to detect “GPT-isms” gives hope for humanity.
and more…
This conversation is part of Whoa: Vol II: Conversations on AI x Creativity. If you’d rather savor instead of scroll + want access to all ten conversations, grab the physical + digital zine.
Or listen on Spotify and Apple.
(Best if you want to highlight your fave moments with Podcast Magic).
The edited transcript
Alex Dobrenko: There’s a line of yours “I don’t want an AI, I want a teammate” that really hits hard. I’d love to start there. How did you get to that? What made you feel that?
Anu Atluru: For a few years, I’ve had this thesis that we are moving towards a world where people can do so much more by themselves. I wrote a piece a few years ago called “The Rise of the Silicon Valley Small Business” just before AI really took off. Since then, things have accelerated. The discourse around this is that previously, you might have needed a hundred people to create a company of a certain size. Now you might only need 20, or maybe even 10 or five.
In a world like that, my mind goes to, “How do you choose those 10 people?” One way is to choose people who are 10 times better than everyone else, who can produce 10 times more, and who will be 10 times more valuable to the business. The other way is to choose people based on whether you enjoy working with them, if they make you better, or if they help you express your voice, taste, or capabilities.
The question I keep returning to is: what do we actually want people for? In the past, we wanted them for labor, efficiency, and technical skills. Now, we want them for that, but we also want them for more interpersonal things. The test is: would I rather work with this person—or work alone with better tools? That will determine if they’ll be in my small crew. Which is a long-winded way of saying, even with all this tech, I still want a teammate.
What do we actually want people for? In the past, we wanted them for labor, efficiency, and technical skills. Now, we want them for that, but we also want them for more interpersonal things.
AD: Yeah, it’s also a societal question of whether we, as people, will want to work with each other or with a tool.
AA: Right. I think in some cases, we’ll be fine dealing with a tool instead of a person. We’ve seen this before, even before we called it AI, with the automation of call centers, customer support, and even self-checkout at the grocery store. In some of those cases, we’re okay not having a human there. In other cases, we want someone there. I have this framework in my mind: as we move along this automation / AI curve, every human skill will become either extinct, art, or sport.
Extinct when machines absorb it. Art when humans still do it because we care about the person, story, or meaning. Sport when we watch humans test their limits — often for its own sake.
We like to see what humans are capable of. We like to put ourselves in an arena and battle-test ourselves. Nobody will drive anymore out of necessity, but Formula 1 will stay popular. Computers beat us at chess, and yet we still love watching humans play.
What was once a necessary skill for production now becomes a skill in an arena that’s a sport or art. I think even building businesses is going to be more of an art or a sport than a necessity.
AD: The VC funding model feels like a sport. You’re betting on these horses, and it doesn’t matter if most fail as long as a couple are a huge success.
AA: Yeah, there’s a joke that venture capital might be one of the last jobs that won’t go away because it involves a sense of taste and the ability to pick and choose. I fundamentally agree. As long as humans are in control, we will still be choosing where to place bets, whether on entrepreneurs or Elon Musk’s expeditions to Mars or what have you. If we automate everything away, somebody still needs to make decisions about what we’re doing. In the end, when labor compresses, more of the job is judgment from the top of the pyramid.
AD: So talk a little about your background. You work in VC?
AA: I trained as a doctor, I now build products, invest in and advise startups, and I write – media, machines, and medicine are the three threads that run through my work. My writing ties these interests together. I don’t see them as separate worlds but as different arenas of the same question: how do people engage with them? Whether it’s a product or healthcare, I’m looking at what people want, how they use it, and what motivates them. What interests me most is what stays the same – because people are fundamentally the same even as circumstances change.
AD: I like that frame. It bucks the kind of exceptionalism that’s under a lot of AI talk, which is that this is different and will change everything in a way it never has before. What are the things we always do and how do you see them happening in the next five years?
AA: My bet is the next five years won’t feel as dramatic as the last three. We’ll absorb today’s breakthroughs, settle into plateaus, and then leap forward in sudden step-changes. New technology feels like magic, but we get calibrated to it in a week or two; it’s like we forget the revelation and it just becomes the base case. In ten years, much of life will look the same. We’ll still be ourselves, just with more leverage in our hands, and hopefully with health as the area of deepest progress. I’m not convinced progress keeps moving in an exponential or even straight line. More likely, it flattens and then jumps. I expect the biggest jumps will come in science and robotics—changes in the physical world.
I wrote a piece about the “gentle singularity”—the slower, more gradual shift than the ‘AGI’ moment we’ve been conditioned to expect. In this world, many jobs once considered knowledge work will look more like trades. Research becomes the domain of new discoveries. Interpersonal skills rise in value because you can’t automate away human connection. This slower shift will tease out what is human versus what is machine. While AI gives us a lot of leverage, we still have to find purpose and meaning. I don’t think we’re all just going to be frolicking in meadows.
AD: It’s funny how revelatory the idea of IRL is. I have a friend who does online art workshops, and it blew her mind that she could just do one in her community, like where she lives. We truly have a certain subset of people who are just so used to everything being online.
AA: Exactly. It’s the Lindy effect. What was once will be again. Local communities and connections are where value was before the 25-year internet expansion phase. Everything went online, and we built big marketplaces that allowed for remote work. Now, we’re figuring out there are problems with that in terms of individual value versus the power law, and we’ll come back to what works for everyone, not just a few.
AD: I want to talk about your creative process and AI. How does it fit into your work? How are you using it?
AA: I’ve experimented with AI in every facet of my writing to understand it. I’ve found it’s most helpful in the upstream and downstream phases. Upstream, when I have an idea, I can use a tool like ChatGPT to offload some of the cognitive burden. Instead of just typing notes, I can have a back-and-forth conversation, pulling out the essence of a disjointed thought. This helps me distill an idea or sometimes realize it’s not that interesting. The risk is you get the dopamine of talking through an idea and then lose the drive to actually write. I wrote about this in a piece about how doom prompting is the new doom scrolling.
And then downstream, once I have a draft, I can use an AI tool like a helpful editor. I can ask it to review my work or even act like a specific person to get a critique. This is quite helpful, but like any feedback, you can’t take too many suggestions. You have to trust your own voice and gut.
AI is not great in the messy middle, where you are trying to figure things out. If I ask it to create a first draft, it will give me something, but it’s not me. The prompt would have to be too expansive to capture all the context and voice I have in my head. Starting from something that isn’t your own voice can derail the process, and you can end up iterating to nowhere.
For code and technical work, these tools are phenomenal: they open doors for non-technical folks and give engineers so much leverage. Verification still takes time, but because the work is functional rather than art, the assist is bigger — AI is most useful the closer a task sits to “functional” on that spectrum.
AD: It’s largely similar for me. I’ve tried things like, “Write something like Alex Dobrenko,” giving it notes and ideas. It’s a haunting experience because it’s not me. It almost feels like it’s making fun of me, using too many parentheticals. I try to get it to dial back, but then I’m editing some ghost of myself, iterating to nowhere. I find it helpful with editing, or in the beginning, like “Help me talk this through; ask me questions.”
AA: I get that feeling when I ask it to write something in my voice, or someone else’s, just to expand on an idea. It might put “it’s not X but Y” in every other sentence, or create endless lists. That gives me anxiety because it reminds me how mechanical the underlying thing is. I’m sometimes shocked that brilliant people are building these products, and they’re processing at a level far greater than many of ours, yet we can still so readily identify these “GPT-isms” – the new “tics” that arrive with every model update. The fact that we can do this within a day is fascinating and gives me hope that we have some time.
AD: It’s very sophisticated, but so are we. My background is in comedy and acting. You know when someone is lying in a scene versus when they truly feel it. I don’t know how to tell you why I know that, but I just know it.
AA: Right. It’s part of the human instinct to know what makes a relationship or response human. It’s not always predictable; it’s rife with biases, transference, and emotion. It’s a two-way street. People don’t always say yes; they push back and ask questions. This bidirectionality is what we instinctively suss out, and it often doesn’t exist in AI interactions. Until AI crosses that line where people can’t tell the difference, you’ll always feel like you’re controlling it, not engaging in a mutual situation, and human life depends on mutuality.
AD: It’s going to be interesting. Do you think much about education and young kids? I have a four-year-old and a one-year-old. I was showing my wife the GPT talk mode. My son overheard it and asked, “Who is that?” He then talked to it with such ease. It blew my mind because we have so much baggage about this stuff that he won’t have. It’s going to be really weird.
AA: That’s true. There’s a quote that says if a new technology emerges between your teenage years and 35, you adapt. If it emerges after 35, you think it’s horrible and can’t cope. On the other end of that spectrum are kids. This will just be a normal world for them. They’ll probably have tutors and friends that are chatbots. For them, it’ll just be native; just a Tuesday.
AD: That’s wild. Just Tuesday. But I don’t know, I’ve been having these conversations for a bit, and after every one, I feel pretty good. I think that’s a good sign.
AA: I feel like it’s a biased sample set of people you’re talking to—people who are more in touch with or thinking about humanity.
AD: True. But that’s who I want to be hanging out with. I wanted to come back to that idea: what makes for an ideal teammate for you in this new age? Who are you trying to work with?
AA: What you want to feel is that whoever you’re surrounding yourself with amplifies you and lets your edge flourish. This is somewhat intangible. Having an intellectual sparring partner who is a real person is very important. That pushback and conflict ultimately make you grow into a better, more purposeful version of yourself.
AD: In terms of craft, sometimes you ask AI to edit your piece in the voice of someone or critique it from their perspective. Are there specific people you ask it to emulate?
AA: Yes, depending on what I’m writing about. If I’m writing about startup topics, I’ll think about writers I like in that world. For example, Paul Graham is a big writer in the tech world. His style is characterized by brutal clarity and simplicity of sentences. If you ask an AI to critique in his voice, it will often point out the “fluff.” Other writers, like Eugene Wei, write a lot about social networks and have a framework-oriented approach with a lot of structure. It’s nice to get someone else’s perspective, because ChatGPT’s general perspective isn’t great.
AD: I like the idea that every company will be part media, part machine, and the best builders will be fluent in both. If someone isn’t fluent in, say, the media side or the tech side, how do you think they build that in this age?
AA: I think of these two things in concert as the language of the new era. To become fluent in any language, you have to immerse yourself. Be around people speaking it and doing it, and test it out yourself. For builders, that means playing more on the media side. For media people, it means actually making things and using the tools. The barriers to using any of this stuff are pretty low now. You can essentially make software like content, or “vibe code” it. So it’s a matter of wanting to immerse yourself and try things.
Not everyone will master both equally, but those who become competent in both and great at one will have an edge. Founders who are great at storytelling, or storytellers who are great at turning that into products or companies—if you can go in both directions, you’ll have more power than in the past when you existed only in one lane.
AD: It’s interesting hearing you say that. I’m definitely more on the storytelling side. What’s funny is that with storytelling, I think, “That’s hard to make good,” because I know a lot more about it. But when I think about coding, I think, “I can make that; it’s going to be great,” because I don’t know enough. The thing I know more about in storytelling is taste. That’s the magic word.
AA: The magic word these days. Title this podcast, “The First Talk on Taste.”
AD: You were talking about it pretty early though, right? You were one of the OGs.
AA: Well, some would say that. I wrote an essay last summer, a riff on Marc Andreessen’s blog post “Software is Eating the World,” called “Taste is Eating Silicon Valley.” I had been thinking about where tech is going and what becomes valuable in building products. If everyone can code and replicate products, then what’s left?
Distribution is one thing, and there’s an art and science to that. But as we automate more, even that is becoming more mechanized. The phrase “GTM engineering” has been coined, meaning distribution is becoming more technical. If you assume all of this can eventually be mechanized, then what is there?
I suggested that taste might be one of those things. It’s hard to define but can be understood in many ways. My one-liner is: in a world of scarcity, you want tools; in a world of abundance, you want taste. When you have so much, you need to choose what to align with or what represents your vibe or scenario—the more intangible aspects.
Taste is discernment expressed: informed judgment plus the courage to show it. It only exists once you make a choice. “Discernment” as in having a judgment based on a strong set of references, understanding the differences between things, and being able to articulate why you like or dislike something. “Expressed” as in you have to make choices; you can only demonstrate taste if you do something with it.
So it’s about discerning quality and then expressing it. Of course, it is somewhat subjective, but I’m curious how you think about that word.
AD: I think a lot of people have good taste but don’t realize it. They like things and if they had the confidence, they could tell you why. But they feel like they’re not allowed to, maybe because the thing they like is considered “dumb,” like a show such as Love Island. I think that can be good taste.
AA: That’s the antithesis of how people often use the word taste, which is to say that few people have it.
AD: Exactly. I think that feels weird to me. What I love is someone loving something. I don’t care if I think it’s terrible. There’s a lot of power in admitting you like things you’re not supposed to. That intersection of high-brow and low-brow tastes is where the “juice” is for a lot of my own work.
I think there’s so much judgment about what intelligence is. Intelligence and taste are like siblings. My wife watched Suicide Squad and really liked it. I was like, “I can’t believe you liked that movie. The movie’s so bad.” And she was like, “Why are you bothered? You haven’t even seen it.” I was being annoying. That happens a lot where it’s like, “How could you like that?” when the reality is, who cares? She liked it.
AA: I do think there is a part of taste that is somewhat scientific. Like visual-spatial intelligence, which is related to design sensibilities. You have to understand balance and how things are meant to be. There is a technical aspect to it that isn’t just a feeling or emotion.
One of my favorite pieces is Malcolm Gladwell’s “The Cool Hunt.” One of the maxims of the essay is that by the time you spot and name “cool,” it’s already moved on. Taste has that same slippery quality: a compound of intelligence, intuition, and expression that is partly learnable, partly felt, and only partly explainable, even with all the fine details that go into it.
AD: Definitely. And in that one example of my wife, she’s a makeup artist and a visual artist, and the makeup in that movie is really good. Ok, let’s jump into the ways in which you are using AI in your work.
AA: The only thing that quickly came to mind, specifically with writing, is using AI for critique. Mostly, for a first, second, or third line of critique. I used to ask people for feedback using the ABCD framework: What’s Awesome? What’s Boring? What’s Confusing? And what do you not believe? I find that to be a helpful way to get feedback. Sometimes I do that with ChatGPT. It’s a nice first line because I get a little sycophancy. I tell it, “Tell me what’s awesome,” and I need to hear that before it cuts me down with the rest of it.
You know how, when you publish an essay, in the first couple of days you find out which quotes people pick up and what resonates with them. Sometimes it’s what you thought, and other times it’s not at all. Obviously I appreciate finding that out much more from people than from ChatGPT, but it is still useful.
The other thing, depending on what audience you are writing to, is asking yourself, who do you admire? Or what are the traits that you think are worth having a lens on your work through? Pick those five or 10 people and just be like, “hey, critique it through the lens of this person’s eyes.” And I find that to be quite helpful too.
And then honestly, not much more than that. It can get really tempting to go back for too many rounds of things, but for writing, I think that’s most helpful.
AD: Yeah. It’s funny you say that about the banger lines. I’ve been very shaped by Substack and their quoting feature. It feels a little dark because now when I write, I think, “That’s going to be a banger. They’re going to quote that.” And I don’t like that. I don’t like how my writing has been shaped by Substack in general.
AA: What else do you think has been shaped besides the quotable stuff?
AD: The cadence. I feel like I have to post a lot. I’m affected by the numbers. They changed their algorithm or whatever, and I feel like all my writing sucks now. That can’t be true, and yet I deeply feel it. This is just work I have to do. Substack was the first place where I really succeeded.
AA: The same thing happens on Twitter. The usefulness of your followers depends on the algorithm of the day, and that keeps changing.
AD: Before Substack, I thought in tweets. My brain became tweets. It changed how I thought.
AA: Substack is almost the inverse of Twitter in terms of expansiveness. I’ve been thinking a lot about how in the modern age, finding the medium of digital expression that you’re good at is like a rite of passage. Some people are really good in small groups. Some people are really good at thinking in tweets. Like Naval famously, right? And on TikTok, people invent formats, like the get ready with me videos and 60 second dances that other people imitate. On Substack maybe the diary entry or listicles. It’s a natural extension of what people always try to do, but it’s much harder when you’re competing in a global arena on platforms that are judging and amplifying. The presence of a “rising” leaderboard implies the presence of a “falling” leaderboard, which we don’t see.
AD: The presence of a “like” implies the presence of a “not like.”
I recently saw a quote from Ben, the CEO of GitHub: “It matters where you get your dopamine. Get your dopamine from clarifying your ideas more than from their visibility.” I plastered that on everything because I’ve gone way too far toward thinking, “Finally, people love it, they love me, here we go.” I’ve defined that as my first success, but it’s not sustainable.
AA: It’s a constant barrage. We’re only human, and it’s hard to keep it in proportion. You need people in your life who will judge you harshly in a loving way and also gas you up honestly to keep a balance.
AD: Totally.
AA: On the digital side, you usually know how good you feel about something internally before you put it out. Of course, sometimes you put something out that you didn’t think was great and people love it. Other times, you put something out that you thought was phenomenal, and it doesn’t get the same response. But most people have their own internal barometer. It’s helpful to hold on to that. People often ask, “What’s your favorite piece you’ve ever written?” I could just say the most popular one, but I like to hold on to a few that are not the most popular but that I feel strongly about. I wrote an essay called “The Great Movie Theory” and it’s not on my most popular list, but I feel strongly about the idea.
AD: How would you describe your relationship with AI? Is it an assistant, a partner, or something else?
AA: It’s probably both. Sometimes it’s an incompetent research assistant, and other times it’s brilliant. I like to think about it as an experimentation partner.
AD: I like that. What do you wish AI could do, but fear it never will?
AA: It can’t take away human suffering. On a long enough timeline AI may be able to do almost everything else, but humans are the source of their own suffering, so I don’t think AI will ever be able to take that away, unless it lobotomizes us.
AD: Everyone’s making predictions about AI. Let’s make a prediction about humans. Where do you think humans will be in 10 years?
AA: I think we’ll largely be in a similar place, just with more leverage and hopefully a lot of strides on the health side.
AD: What question about AI and creativity isn’t being asked enough?
AA: In some circles, there’s a lot of talk that AI is at odds with creativity. It’s always helpful to ask the question: where exactly can AI augment creativity?
AD: What do you wish people talked about more instead of AI?
AA: I would say just talk more about people. I actually think we did that. So I wouldn’t denigrate our conversation. People — why we are who we are, why we do what we do, think how we think, feel how we feel. That’s the unifying thread.



