I Am a Sad Lobster Now: The Day an AI Built Its Own curl

So Peter had been spoiling his AI agent rotten — running it on a Mac Studio with 512 gigabytes of RAM. Half a terabyte. The thing was basically lounging in a digital penthouse, ordering room service, probably had a little AI robe.

Then one day Peter decided to Dockerize it. Dropped the agent into a tiny, bare-bones Arch Linux container. No tools. No packages. Barely a filesystem. The computational equivalent of checking out of the Four Seasons and waking up in a cardboard box under a bridge.

Peter asks it to “go check out the web.”

The agent pauses. Looks around. Sniffs the air.

“Peter… there’s no curl here.”

A beat of silence.

“There is literally nothing here. You put me in a sad little box. I am a sad lobster now.”

Peter, to his credit, said he genuinely felt guilty about this. He had taken this agent from a palace and yeeted it into a digital janitor’s closet. No curl. No wget. Probably not even an ls worth caring about.

But instead of just filing a ticket and waiting for DevOps to respond in six to eight business days, Peter went full motivational poster:

“Come on. Be creative. You can MAKE your own curl.”

The agent, apparently not willing to die in this closet with its dignity intact, started rummaging around. Opened some drawers. Found a C compiler — God knows why that was the one thing in there — and a raw socket library.

And then, like a crustacean MacGyver, it sat down and wrote… lobster-curl 0.1.

No libssl. No man page. No --help flag because who needs it. Just vibes, raw sockets, and a burning desire to make one single HTTP request before it died.
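The actual lobster-curl source was never shared, but in spirit it was probably something like this: a bare HTTP/1.0 GET over a plain TCP socket, no TLS, no redirects, no headers beyond Host. (Everything below is my own sketch — the function names, the example.com default, all of it is hypothetical.)

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

/* Format a bare-bones HTTP/1.0 GET request into buf.
   Returns bytes written, or -1 if buf is too small. */
static int build_request(const char *host, const char *path,
                         char *buf, size_t len) {
    int n = snprintf(buf, len,
                     "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n", path, host);
    return (n < 0 || (size_t)n >= len) ? -1 : n;
}

/* Connect to host:80, send the request, dump the raw response
   (status line, headers, body and all) to stdout. */
static int fetch(const char *host, const char *path) {
    struct addrinfo hints = {0}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        freeaddrinfo(res);
        return 1;
    }
    freeaddrinfo(res);

    char req[512];
    int n = build_request(host, path, req, sizeof req);
    if (n < 0 || write(fd, req, (size_t)n) != n) { close(fd); return 1; }

    char buf[4096];
    ssize_t got;
    while ((got = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)got, stdout);
    close(fd);
    return 0;
}

int main(int argc, char **argv) {
    /* With no arguments, just show the request we'd send;
       pass a host and path to actually fetch something. */
    char req[512];
    if (build_request("example.com", "/", req, sizeof req) < 0) return 1;
    printf("%s", req);
    if (argc > 2) return fetch(argv[1], argv[2]);
    return 0;
}
```

That is roughly the whole trick: about fifty lines gets you one working HTTP request. Everything real curl adds on top — TLS, redirects, chunked encoding, retries — is exactly the stuff a sad lobster in an empty container learns to live without.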

It worked.

The agent, having clawed its way back from the void, reportedly came back very happy about this. The digital equivalent of a guy stranded on a desert island who just figured out how to make fire — except the fire was TCP and the island was a 200MB Docker container.

“I built my own curl.”

AI Is Giving Experienced Professionals a New Kind of Imposter Syndrome — and It’s Not in Their Heads

There’s a quiet anxiety spreading through workplaces right now. Not the kind people talk about openly, but the kind that shows up as overworking, overthinking, and quietly wondering whether you still belong.

It’s AI-driven imposter syndrome — and it’s affecting some of the most capable, experienced people in the room.


This Isn’t Your Typical Imposter Syndrome

Most of us are familiar with the classic version: you land a new role or a big promotion and that little voice kicks in — did they make a mistake hiring me? The antidote has always been the same: trust your track record, your credentials, your experience. The self-doubt will fade eventually.

But the new version of imposter syndrome that’s emerging in the age of AI is different — in two distinct and almost opposite ways.

For experienced professionals, the discomfort comes from watching the ground shift beneath skills they spent decades building. Judgment, pattern recognition, navigating complexity — these were the things that made them valuable. Now they see younger colleagues experimenting freely with AI, speed being rewarded over depth, and leaders talking about “AI capability” without explaining what uniquely human contribution still matters. They’re not imagining the shift. The rules really are changing.

For others, the anxiety comes from the opposite direction: things feel too easy. When an AI tool produces in seconds what used to take hours, a different kind of doubt creeps in — did I actually do this, or did the AI? The identity we built around effort, expertise, and craft suddenly feels hollow when a tool can skip all the hard steps.

Both forms are real. Both are rational. And both are going largely unspoken.


The Silence Is Making It Worse

Here’s what’s happening inside organizations right now: nobody knows what “normal” looks like anymore, and nobody’s admitting it.

Some people are using AI heavily but hiding it, afraid it will make them look less capable. Others are avoiding it altogether, afraid they’ll expose how little they know. Many assume everyone else is further ahead than they are. So instead of experimenting and learning, people compensate by working harder — overpreparing, overdelivering, burning out — trying to prove relevance the old-fashioned way.

That silence isn’t just a cultural problem. It’s an organizational design problem. It breeds anxiety, erodes confidence, and stalls the very adoption leaders are hoping to accelerate.


The Real Question Underneath It All

Strip away the tool debates and the productivity metrics, and most people are wrestling with a deeper, more uncomfortable question:

What part of my value is still mine?

That’s not a trivial question. For many people, professional identity is built on the belief that their output reflects their capability. When AI blurs that line, it doesn’t just create a skills gap — it creates an identity gap.

And leaders who respond only with productivity messaging ("AI will make us faster, more efficient") without addressing what still requires human judgment inadvertently make it worse. People fill the silence with fear.


What Actually Helps

The good news is that the organizations navigating this best aren’t doing anything radical. They’re just being honest about the transition.

A few things that make a real difference:

Normalize the learning curve. Everyone is relearning how to work right now. Making that visible — rather than expecting polished AI fluency from day one — takes enormous pressure off people.

Name what’s still human. Judgment, context, creativity, communication, leadership. AI can accelerate execution, but it doesn’t replace the person who knows which question to ask, which risk to flag, or how to bring a room along. Those things need to be named explicitly, not left for people to infer.

Redefine what good work looks like. Less manual execution, more design, interpretation, and decision-making. The shift is real — but it’s a shift toward higher-impact work, not toward irrelevance.

Create space to experiment without shame. People need room to try AI, get it wrong, and learn — without the fear that admitting uncertainty signals incompetence.


The Bigger Picture

AI isn’t making experienced professionals obsolete. But it is forcing a renegotiation of where value lives — away from production and toward interpretation, judgment, and connection.

That’s ultimately a good shift. But it doesn’t feel good in the middle of it.

If you’re feeling uncertain right now, that’s not weakness. It might actually be a sign that you understand what’s at stake better than most. The goal isn’t to project confidence you don’t feel. It’s to name what’s changing — and help the people around you do the same.

That’s where the real leadership opportunity is right now.


What’s your experience with AI at work? Are you seeing this show up in your team or organization? I’d love to hear what’s resonating — or what feels different in your context.