Warning: Occasional pithy humor and light-hearted sarcasm ahead.

I have a confession to make…

Lately, I’ve been thinking about writing a blog post that is not about AI.

Is it safe to do this?

Or will it pull me further away from the Light?

A cropped version of Michelangelo's The Creation of Adam. On the right, God reaches out from a cluster of figures, labeled "AI" in bold white Impact-style text. On the left, Adam reclines on the ground, reaching back, labeled "THE REST OF TECHNOLOGY." The two hands nearly touch, parodying the original scene to suggest AI as a dominant, godlike force eclipsing all other technology.

The Debate About the Possible

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right.

When he states that something is impossible, he is very probably wrong.”

- Arthur C. Clarke’s First Law

I often tell colleagues that I don’t have a great photographic memory for verbal conversations (and therefore tools like Teams meetings with recordings, transcripts, AI recaps and the ability to ask questions retroactively about the meeting are an absolute game-changer and life-saver for me).

My lack of great conversational photographic memory is definitely a “me” thing, because I have other colleagues who have an almost spooky ability to remember every single aspect of what was said in a conversation—in many ways, those types of folks are their own personal Copilot.

I am telling you this because the conversations I do tend to remember the most are ones tied to emotions that lodge themselves into my amygdala.

And because of that, I have a vivid recollection of what I was thinking and how I felt in 2023.

When ChatGPT became popular, the public discourse was starting to rage about what LLM-based AI could and couldn’t do.

Further, people began delving into what AI would be able to do in the future and what would remain in the realm of “Sci-Fi” (at least, in our lifetimes).

2023, frankly, was a bit of a mentally scary time for critical thinkers, because any hint of skepticism about what AI could or couldn’t do was often met with the retort:

“Well, it’s not possible yet.”

What Has Been Possible for a Long Time

A meme-style still from 2001: A Space Odyssey shows a man seated at a futuristic video console, smiling as he speaks to a child on a screen. Overlaid text on the left reads, "CAN'T YOU THINK OF ANYTHING ELSE YOU WANT FOR YOUR BIRTHDAY? SOMETHING VERY SPECIAL?" Overlaid text on the right, next to the child’s image, reads, "SOMETHING THAT OUTPERFORMS HUMANS ON COMPLEX REASONING BENCHMARKS."

“The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”

- Arthur C. Clarke’s Second Law

Machine learning is not new; it has existed in some shape or form for many decades, with papers going back to the 1940s.

Further, computer science and software engineering are not new, and have been deployed for many years in many facets of our existence to help solve real problems and (hopefully, but not always) make our lives better and improve the human condition.

But this always raises the question of what else is possible given the technology we have now, and will have in the future.

In 2025, I’ve seen enough of AI over the past couple of years to form some opinions.

And so have others.

“I see no line of sight into AI completely replacing programmers.”

- Mark Russinovich

I would rather be confident in my opinions and be wrong and change my mind in the future, than to forever remain waffle-y about what is possible.

I also believe, like Mark, that hope is not a strategy, and I am not going to make decisions based on the pure dream that “One day, AI will handle all of this.”

In addition to that, we already have so much existing technology today that has made what once would have been considered “the impossible,” possible.

However, most of that progress happens by venturing a little way beyond today’s limits, and yet we have folks engaging in wild speculation about what happens far beyond today’s limits: not just about what is possible, but also on what timeframes, and in what ways.

An Alien From Outer Space 👽

A meme-style image inspired by 2001: A Space Odyssey: a group of early hominids stand and crouch among rocky terrain at sunrise, surrounding a tall black monolith in the center. Overlaid white Impact-style text reads at the top, "SOON YOU WON'T BE MANAGING GATHERERS," and at the bottom, "YOU'LL BE MANAGING AGENTS—ER, I MEAN HUNTERS," humorously comparing modern AI agent hype to prehistoric evolution.

“Any sufficiently advanced technology is indistinguishable from magic.”

- Arthur C. Clarke’s Third Law

This debate about the possible versus the impossible plays out in many corners of the internet, but the technical folks who delve into aspects of AI tend to debate it the most intensely.

Which raises the question: Where does this leave the rest of humanity that may not be technical, and for whom AI is a totally foreign entity?

Many are now trying to (or perhaps, due to fear, in many cases, consciously or unconsciously, trying not to) wrap their heads around this brand new “thing.”

Folks often liken the arrival of modern LLM-based AI to the creation of something analogous to HAL 9000 from the movie 2001: A Space Odyssey.

However, I often wonder if the arrival of AI is less like HAL, and more like the Monolith that appears in multiple parts of that movie.

In many ways it feels like an alien technology showed up in our neck of the woods in this Milky Way galaxy, and everyone is trying to wrap their heads around it.

Another unfortunate byproduct of this is that, because AI may feel like “magic” to many people, they pin all of their hopes and dreams onto the technology, and hallucinate things that AI can and can’t do.

The problem is that these human hallucinations aren’t just incorrect—they’re expensive. They turn into roadmaps, budgets, and organizational decisions. And that’s when “magic” stops being fun.

And it is also how you get the weirdest part of this moment: not the technology itself, but the mythology we’re building around it.

Which is why I flinch a little every time I see a certain phrase making the rounds…

Beyond the Infinite ♾️

A blurred, dramatic close-up of a wide-eyed astronaut inside a helmet, lit by reflections and glowing lights, with bold white Impact-style text across the center reading, "THIS IS THE WORST AI IS EVER GOING TO BE."

“This is the worst AI is ever going to be.”

- A trite saying floating around the internet

Dissecting that annoying line could fill up a whole additional blog post. (Is it just the models getting better? And purely through more data? Or is it new model techniques? Or is it the techniques we use around the models that get better? What do you mean?!)

This utterly useless phrase basically amounts to: “Things improve over time.”

Wow.

Beyond the Infinite(ly) stupid.

Everything tends to improve over time, not just AI… For example, model weights are one thing, but few people appreciate the immense amount of work that goes into model-serving technologies like Ray Serve and other libraries that make models like LLMs actually usable, and improvements will keep landing in those surrounding technologies without getting their full due.

But the worst thing about this phrase is the simultaneous sense of wonder and awe and existential dread it instills in people, and it is completely unnecessary…


Even if this truly is the worst AI is ever going to be, I already find AI immensely useful.

But finding AI useful does not automatically negate the utility of everything else in the technology world that came before it, and is still alive and well and being used today.

My coffee maker at home is useful.

This blog post that I am writing right now and host on GitHub Pages with Cloudflare in front of it is (hopefully) useful, and fueled by 100% organic pure human thought and creativity—the only AI help for this post came from creating alt text for the images to improve accessibility (which is a great use case for AI).

My dev machine setup scripts are useful, to me and to others. (AI helped me write the latest versions of those, but AI didn’t instill the philosophy of design, nor did it provide the years of learning of what “good” looks like to hone my approach, nor does it perform the actual setup.)

And for all of these decades’ worth of useful technologies that we’ve accumulated, and will continue to accumulate independently of AI, we need open ways to talk about them, without those discussions being dismissed out of hand because they are somehow “boring” and not aligned with the prevailing AI narrative.

For example, to get MCP to work properly, you have to build up an understanding of a web of interrelated IETF specifications around OAuth and adjacent technologies, which to some may feel “boring,” but to us engineers is essential to produce something that is not a walking security hazard with no auth that vibe-coded its way out of Lovable or Replit.
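To make that web of specs a little more concrete: one small piece of the puzzle is OAuth 2.0 Protected Resource Metadata (RFC 9728), which MCP servers can use to tell clients where their authorization server lives. Here is a minimal sketch (the function name and example URL are my own, purely illustrative) of how a client might derive the well-known metadata URL from a resource URL, per the path-insertion rule in that spec:

```python
from urllib.parse import urlsplit, urlunsplit

def protected_resource_metadata_url(resource_url: str) -> str:
    """Build the RFC 9728 well-known metadata URL for an OAuth protected resource.

    The spec inserts the well-known segment between the host and the
    resource's path component, rather than appending it to the end.
    """
    parts = urlsplit(resource_url)
    path = "/.well-known/oauth-protected-resource"
    if parts.path and parts.path != "/":
        path += parts.path  # preserve the resource's path after the segment
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

# Hypothetical MCP server endpoint:
print(protected_resource_metadata_url("https://mcp.example.com/mcp"))
# → https://mcp.example.com/.well-known/oauth-protected-resource/mcp
```

A client would fetch that URL, read the advertised authorization server, then walk RFC 8414 authorization server metadata from there—which is exactly the kind of unglamorous plumbing that makes the difference between working auth and no auth.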

In fact, most of the time I’ve spent at work with our teams delivering AI for the enterprise involves using technologies and techniques around data engineering and data science and automation and full-stack software engineering and cloud infrastructure (and more) that have nothing to do with AI whatsoever.


Thankfully, I don’t think I will be smote by a bolt of lightning if I write a future blog post that has nothing to do with AI.

(And I won’t be smote for this one because there are 52 mentions of AI in this blog post.)

AI didn’t create our current set of technology—rather, our existing technology helped us create AI.

The introduction of AI is additive, not subtractive, to the existing technology we have.

And AI is more than likely going to help us do amazing things in the future.

With this in mind, maybe, just maybe, the mental model of how we got here, and where we’re going, needs to be thought about a little bit differently, and should allow more space for more discussion about all types of technology, not just AI.

And maybe, we can try to take some things a little less seriously and remember to have a little bit of fun along the way, too—let’s flip the script:

A meme-style reinterpretation of Michelangelo's The Creation of Adam. On the left, Adam reclines with the label "AI" in bold white text. On the right, God reaches out from a cloud of figures labeled "THE REST OF TECHNOLOGY." Their outstretched hands nearly touch, parodying the original composition to suggest AI is not above the rest of technology.