Warning: Occasional pithy humor and light-hearted sarcasm ahead.
I have a confession to make…
Lately, I’ve been thinking about writing a blog post that is not about AI.
Is it safe to do this?
Or will it pull me further away from the Light?
![](_meme.jpg)
## The Debate About the Possible
> “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
>
> - Arthur C. Clarke’s First Law
I often let colleagues know that I don’t have a great photographic memory for verbal conversations (which is why tools like Teams meetings with recordings, transcripts, AI recaps, and the ability to ask questions retroactively about a meeting are an absolute game-changer and life-saver for me).
My lack of great conversational photographic memory is definitely a “me” thing, because I have other colleagues with an almost spooky ability to remember every single aspect of what was said in a conversation—in many ways, those folks are their own personal Copilot (whereas folks like me very much need to lean on note-taking, tools, and systems to stay organized).
I am telling you this because the conversations I do tend to remember the most are ones tied to emotions that lodge themselves into my amygdala.
And because of that, I have a vivid recollection of what I was thinking and how I felt in 2023.
When ChatGPT became popular, the public discourse was starting to rage about what LLM-based AI could and couldn’t do.
Further, people began delving into what AI would be able to do in the future and what would remain in the realm of “Sci-Fi” (at least, in our lifetimes).
2023, frankly, was a bit of a mentally scary time for critical thinkers, because any hint of skepticism about what AI could or couldn’t do was often met with the retort:
“Well, it’s not possible yet.”
This dead-end line of discussion, I felt, wasn’t helpful. Worse was the oft-pushed notion that AI would solve ‘X’—where ‘X’ was, in many cases, an already-solved problem with existing technology—dismissing the years of innovation and approaches that have been solving problems for people for a very long time; the humble IF/ELSE of deterministic logic has propelled us forward as a society and shouldn’t be tossed out with the proverbial bathwater.
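To put a deliberately contrived sketch behind that point (the rules and names below are mine, purely for illustration), plenty of problems people now propose pointing an LLM at are already well served by plain, testable, deterministic logic:

```python
# A contrived refund-policy example (rules and names are hypothetical).
# Deterministic logic like this is cheap, auditable, and unit-testable;
# no model, no prompt, no hallucinations.
def refund_decision(days_since_purchase: int, item_opened: bool) -> str:
    if days_since_purchase <= 30 and not item_opened:
        return "full refund"
    elif days_since_purchase <= 90:
        return "store credit"
    else:
        return "no refund"

assert refund_decision(10, False) == "full refund"
assert refund_decision(45, True) == "store credit"
assert refund_decision(200, False) == "no refund"
```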
## What Has Been Possible for a Long Time

> “The only way of discovering the limits of the possible is to venture a little way past them into the impossible.”
>
> - Arthur C. Clarke’s Second Law
Machine learning is not new, and has existed in some shape or form for many decades, with papers going back to the 1940s.
Further, computer science and software engineering are not new, and have been deployed for many years in many facets of our existence to help solve real problems and (hopefully, but not always) make our lives better and improve the human condition.
But that always raises the question of what else is possible given the technology we have now, and will have in the future.
In 2025, I’ve seen enough of AI over the past couple of years to form some opinions.
And so have others:
> “I see no line of sight into AI completely replacing programmers.”
It is ironic that the relatively mild and reserved “hot takes” about AI technology have become the “provocative” or “controversial” ones.
I would rather be confident in my (rather mild and grounded) opinions and be wrong, and change my mind in the future, than to forever remain waffle-y about what is possible (or not possible).
I also believe, like Mark, that hope is not a strategy, and I am not going to make decisions based on the pure dream that “One day, AI will handle all of this.”
In addition to that, we already have so much existing technology today that has made what used to be considered “the impossible” possible.
However, most of that progress happens by venturing a little bit beyond today’s limits; in stark contrast, we have folks engaging in wild speculation about what could happen way beyond them, not just about what is possible, but also on what timeframes, and in what ways.
## An Alien From Outer Space 👽

> “Any sufficiently advanced technology is indistinguishable from magic.”
>
> - Arthur C. Clarke’s Third Law
This debate about the possible versus the impossible happens in many corners of the internet, but it is typically the technical folks who delve into aspects of AI who engage most intensely.
Which raises the question: Where does this leave the rest of humanity that may not be technical, and for whom AI is a totally foreign entity?
Many are now trying to wrap their heads around this brand-new “thing” (or perhaps, out of fear, consciously or unconsciously, trying not to).
Folks often liken the arrival of modern LLM-based AI to the creation of something analogous to HAL 9000 from the movie *2001: A Space Odyssey*.
However, I often wonder if the arrival of AI is less like HAL, and more like the Monolith that appears throughout that movie.
In many ways it feels like an alien technology showed up in our neck of the woods in the Milky Way, and everyone is trying to make sense of it.
Another unfortunate byproduct: because AI can feel like “magic” to many people, they pin all of their hopes and dreams onto the technology and hallucinate about what AI can and can’t do.
The problem is that these human hallucinations aren’t just incorrect—they’re expensive. They turn into roadmaps, budgets, and organizational decisions.
And that’s when the “magic” stops being fun, and often turns into a CFO discussion of “What ROI and/or value are we getting out of this?”
And it’s not because the technology doesn’t have potential, but because it was deployed in a “wishful thinking” way that dead-ended, whether due to very real data, technical, or even economic constraints, or simply because it didn’t solve real problems for real people, or any of a myriad of other potential issues. (This is yet another rabbit hole suitable for its own blog post.)
Maybe the alien life form not only gave us something perceived as “magic,” but also infected us with “AI Rabies”: we are metaphorically foaming at the mouth about what is possible, losing our sense of rationality, hallucinating (once again) about the possible, or, barring that, simply losing our common sense about what can actually help meet real user needs.
Deployment issues aside, this is also how you get to one of the weirdest parts of this moment: not the technology itself, but the mythology we’re building around it.
Which is why I flinch a little bit every time I see a certain phrase making the rounds…
## Beyond the Infinite ♾️

> “This is the worst AI is ever going to be.”
>
> - A trite saying floating around the internet
Dissecting that annoying line could fill a whole additional blog post. (Is it just the models getting better? And purely through more data? Or new kinds of modeling techniques? Or improvements in the techniques we use around the models? What do you mean?!)
This utterly useless phrase basically amounts to: “Things improve over time.”
Wow.
Beyond the Infinite(ly) stupid.
Everything tends to improve over time, not just AI…
The humble washing machine has improved over time.
I could cite a ton of additional mundane examples here, but you get the idea…
Bringing it back to a less mundane example: models will improve over time, but model layers and weights are only one piece of the puzzle. Few appreciate the immense amount of work that goes into model-serving technologies like Ray Serve and the other libraries that make models like LLMs actually possible to use. Improvements will be realized across all of this surrounding technology, which is absolutely essential to making AI usable, yet rarely gets its full due or a meaningful share of the open discussion.
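To give a small taste of that serving layer, here is a minimal Ray Serve sketch; the “model” below is a stand-in I made up, and real LLM serving adds batching, GPU scheduling, autoscaling, and much more on top:

```python
# A minimal sketch of serving a "model" over HTTP with Ray Serve.
# The model here is a placeholder; a real deployment would load actual
# weights and configure batching, autoscaling, and GPU placement.
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=2)
class TinyModel:
    def __init__(self):
        # Stand-in for loading a real model.
        self.predict = lambda text: text.upper()

    async def __call__(self, request: Request) -> str:
        payload = await request.json()
        return self.predict(payload["text"])

# Exposes the deployment at http://localhost:8000/ by default.
serve.run(TinyModel.bind())
```

Even this toy hides a proxy, health checks, and replica management underneath, which is exactly the kind of unglamorous, essential work that phrase papers over.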
But the most insidious thing about this idiotic phrase is the simultaneous sense of wonder, awe, and existential dread it instills in people as we all collectively barrel through the daily psychological onslaught of new AI advancements, and it is completely uncalled for and unnecessary…
Even if this truly is the worst AI is ever going to be, I still find it immensely useful right now.
But finding AI useful does not automatically negate the utility of everything else in the technology world that came before it, and that is still alive and well and being used today.
My coffee maker at home is useful.
This blog post, which I am writing right now and hosting on GitHub Pages with Cloudflare in front of it, is (hopefully) useful, and fueled by 100% organic, pure human thought and creativity—the only AI help for this post came from creating alt text for the images, to improve accessibility (which is a great use case for AI).
My dev machine setup scripts are useful, saving days of setting up a machine by hand, both for me and for others I know who are using them. (AI helped me write the latest versions of those, but AI didn’t instill the design philosophy or the essence of simplicity I wanted to achieve in the latest iterations, nor did it provide the years of learning and the acquired taste for what “good” looks like that honed my approach, nor does it perform the actual setup.)
And considering the span of all these decades of useful technologies that we’ve accumulated, and will continue to accumulate independently of AI, we need open ways to talk about them, without those discussions being dismissed out of hand because they are somehow “boring” and not aligned with the prevailing AI narrative.
For example, to get MCP to work properly, you have to build up an understanding of a web of interrelated IETF specifications around OAuth and adjacent technologies. That may feel “boring” to some, but for us engineers it is essential to producing something that is not a walking security hazard with no auth (or dubious auth) that vibe-coded its way out of its containment zone of Lovable or Replit.
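As one small, concrete illustration (a sketch under assumptions, not a complete implementation: the issuer, audience, and JWKS URL below are hypothetical placeholders, and a real server follows the full metadata-discovery flow from the relevant RFCs), even the “boring” step of validating an OAuth bearer token on an HTTP-exposed MCP server touches several of those specs at once:

```python
# A sketch of bearer-token validation with PyJWT for an HTTP-exposed
# MCP server. The issuer, audience, and JWKS URL are placeholders.
import jwt  # pip install "PyJWT[crypto]"
from jwt import PyJWKClient

jwks_client = PyJWKClient("https://auth.example.com/.well-known/jwks.json")

def validate_bearer_token(auth_header: str) -> dict:
    if not auth_header.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    token = auth_header.removeprefix("Bearer ")
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # jwt.decode verifies the signature and expiry, and checks that the
    # issuer and audience match; skipping any of these checks is how you
    # ship the walking security hazard described above.
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        issuer="https://auth.example.com",
        audience="https://mcp.example.com",
    )
```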
In fact, most of the time I’ve spent at work with our teams delivering AI for the enterprise involves using technologies and techniques around data engineering and data science and automation and full-stack software engineering and cloud infrastructure (and more) that have nothing to do with AI whatsoever.
Thankfully, I don’t think I will be smote by a bolt of lightning if I write a future blog post that has nothing to do with AI.
(And I won’t be smote for this one because by my count there are 38 mentions of the word “AI” in this blog post, including the one in this sentence—but please count for yourself, because I am a human that can make mistakes!)
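If you’d like to check my arithmetic, a few lines of Python will do it. The filename below is hypothetical (point it at wherever you saved this post), and your count may differ slightly depending on how you treat compound phrases like “AI Rabies”:

```python
# Count whole-word occurrences of "AI" in a local copy of this post.
# "post.md" is a hypothetical filename for wherever you saved it.
import re

text = open("post.md", encoding="utf-8").read()
print(len(re.findall(r"\bAI\b", text)))  # I counted 38, but verify!
```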
AI didn’t create our current set of technology—rather, our existing technology helped us create AI.
The introduction of AI is additive, not subtractive, to the existing technology we have.
And, coupled with existing technology, AI is more than likely going to help us do amazing things in the future.
With this in mind, maybe, just maybe, the mental model of how we got here, and where we’re going, needs to be thought about a little differently, and should allow more space for discussion about all types of technology, not just AI.
And perhaps we can try to take some things a little less seriously and remember to have a little bit of fun along the way, too—so in the spirit of fun, let’s flip this script:
![](_meme_reversed.jpg)