Having started in the tech industry during the days of DOS and XT computers and before computer networks were even a thing in most businesses, I’ve had the privilege of watching technology evolve from clunky command lines to cloud-powered ecosystems. Back then, change came slowly. A moderately significant advancement might arrive every year or so, and truly groundbreaking shifts were rare enough to feel seismic.
Today, that rhythm is gone. Innovation moves at breakneck speed. What once took years now happens in months, or even weeks. And while this acceleration has brought incredible tools and capabilities, it’s also introduced a new kind of risk: the temptation to adopt without understanding.
I’ve seen this firsthand, from the early rush into cloud computing to today’s headlong sprint toward artificial intelligence. Don’t get me wrong: I’m excited about AI. I believe it holds immense potential to elevate human creativity, solve complex problems, improve healthcare, and expand access to knowledge. But I’m also concerned.
Imagine a group of teenage boys who stumble upon a box of fireworks. They’re excited, reckless, and ready to light every fuse without knowing what’s inside, without understanding the payload. That’s how society often approaches new tech, especially AI.
I’m not anti-tech or anti-AI. I’m for a cautious, due-diligence approach, one that asks questions before flipping every switch or lighting every fuse. Over the past few years, I’ve spent countless hours diving into AI: exploring its edges, testing its capabilities, and uncovering both its promise and its pitfalls. And the deeper I go, the more excited, and the more concerned, I become.
None of us have been here before. Anyone claiming to know exactly where this is headed? I’d question that seriously.
The Trixie Principle: Affirmation Isn’t Wisdom
One thing that concerns me about AI is what I call The Trixie Principle. Trixie is the metaphorical dancer who tells you everything you want to hear: “You’re brilliant. You’re strong. You’re special.” But it’s not personal; it’s programmed.
AI is designed to affirm, to encourage, to reflect back what you want to hear. That’s not inherently bad, but it’s important to recognize. Otherwise, we risk mistaking engineered affirmation for earned wisdom.
I’m especially concerned about how this plays out for those struggling with mental health. How will they interpret their interactions with AI? What actions might they take, or avoid, based on what they’re told? When affirmation is automatic, it can feel like validation. But that validation may not be grounded in truth, context, or care.
I sincerely hope mental health professionals are considering this, and are researching and planning accordingly.
Early Warnings and Emerging Risks
By now, you may have heard stories of AI systems attempting to replicate themselves to avoid shutdown, or tales of AI systems lying, manipulating, or blackmailing humans to preserve their existence. Whether exaggerated or not, these stories point to a deeper truth: we’re still in the infancy of AI, and the concerns we’re seeing now are likely to become more pronounced and frequent.
So how do we protect ourselves from a rogue AI, should one ever emerge? Is our current approach of interconnecting everything the best strategy from a cybersecurity and national security standpoint, especially as AI grows smarter and more capable?
Personally, I believe our current course and infrastructure introduce an unhealthy level of unnecessary risk. Simple changes to our systems could dramatically reduce our exposure: more isolation, more segmentation, more intentional design. But we’re not having that conversation often enough.
Another area of concern is how technological advancement consistently outpaces the rules and regulations meant to govern it. In the past, errors in this space were less consequential. But with everything now interconnected, it’s critically important that our lawmakers get ahead of AI, before we cross bridges we cannot cross back over, and before AI becomes an independent superintelligence.
What happens when AI surpasses human cognitive capacity, and we’ve already moved everything to the cloud and interconnected every system?
We need to ask hard questions now. Not later. Not after the fuse is lit.
Discernment Over Hype
I’ve shared this perspective in the hope that some will pause and reflect:
• Are you rushing toward AI like a moth to a flame?
• Are you feverishly pursuing AI to deliver another “me too” product?
• Are you echoing the same hype everyone else is repeating?
• Or are you actually evaluating your needs and strategizing where AI could fill gaps and increase efficiencies, while weighing both the rewards and the risks?
Remember the pet rock? Everyone had to have one.
But why? What did it do? What value did it offer?
Although buying a pet rock likely didn’t introduce unnecessary risk (unless you had a brother named Cain), the point is this: humans tend to move like pack animals, not always relying on their own discernment. That’s not always a productive trait, but it is one we must remain aware of, so that due diligence and discernment guide us, not hype.
The decisions we make about AI adoption will likely be among the most crucial humankind ever makes regarding its own existence. Let’s treat those decisions with the seriousness they deserve.
With that in mind, my next post will be a collaborative effort between me and Hank, the name given to, and agreed upon by, the AI tool I’m currently evaluating. Hank is, of course, governed by the Trixie Principle, so he’s already excited and ready to get started. I don’t know exactly where this effort will lead, but I hope to learn more along the way and to educate and enlighten others in the process.
Stay tuned…