Despite the dizzying pace of adoption and the breathless prognostications about the enormous benefits artificial intelligence (AI) will bring to humanity, I can’t escape the sinking feeling that AI is actually making us…well, dumber.
The problems with AI aren’t what you think
AIs are programmed by humans–which means that, just like those little genetic DNA markers for brown or blue eyes, all the biases, ignorance and mistakes inherent in the programmer’s code get implanted in the AI they build. Oh, and as it turns out, AIs have their own implicit biases, which could explain a lot about why AI “bot” content gets boosted more than original content on social media sites. More on that later.
AIs “learn” by sifting through gigantic amounts of data–not all of which can possibly be vetted for quality by humans. Therefore, foundational AI “learning” can include all the sludge from all the dark corners of the internet. All the bad science, revisionist history, college dissertations on the benefits of underwater basket weaving–AND faked articles by other AIs–found on the internet become the foundation upon which the AI builds its logic. Like MLS data, garbage in = garbage out.
AIs only understand “explicit” information–full credit for this one to Garrett Reim’s great commentary article Quantum Leap, published in the August 11-31 edition of Aviation Week & Space Technology. He defines explicit information as “AI-friendly, structured, codified knowledge that can be easily accessed.” Think training manuals, process flows, and product specifications.
What AI can’t see, learn or make sense of is the “Tacit” and “Tribal/Experiential” information inherent to every company. Think of these as the institutional knowledge carried by long-term employees who know why and how things are done a certain way, plus the cultural dynamics and ethos that set a company apart from the competition. Long considered hallmarks of successful companies, these qualities cannot be replicated or understood by AI.
AIs lie–Anyone who has worked with children knows that if you don’t give a child the ability to say, “I don’t know,” they’re going to make up an answer. AI is no different.
Large Language Model (LLM) AIs have a bias for the output of other AIs
This is fantastic news for exactly nobody, but what has been surprising to me is the apparent eagerness of many people to actively participate in using AI to make themselves less qualified.
Sounds counter-intuitive, right? I mean, the promise of the LLM AI was that it would help turn average humans into mental giants simply by giving them access to mountains of (often stolen) data and several millennia of (cribbed) notes from humanity’s greatest minds. Billions of dollars have been spent in this new technological race to create…what? The only real surprise to me is that investors have been so slow to start questioning whether there can be a return on their investment.
AI: A friend with benefits?
It’s not that AI doesn’t have its uses, and no, I don’t mean for dating. It’s great at condensing data into executive summaries, excels at searching, sifting and crunching data, thrives on repetitive tasks, and has led to breakthroughs in physics, chemistry and astronomy, for instance.
But what AI also does is allow us humans to be intellectually lazy by letting a computer do the hard thinking for us.
Anecdotally, here’s how people I know are using AI:
- Editing original long-form writing
- Suggesting topics for school essays
- Drafting entire speeches from scratch
At a minimum, using AI to write papers or correspondence can’t help but remove some of the personality quirks that give writers their “voice,” that critical stylistic “thumbprint” that makes prose distinctive.
Logically, the more people use AI to craft their letters, schoolwork, etc., the more the quality and individuality of the writing will move towards a mean (i.e., technically correct, but devoid of character and individuality). If that’s true, then the more people who use AI to write, and the more often they use it, the lower that mean will sink over time as people de-skill their ability to write original content.
Given a choice, an AI is more likely to ‘hire’ another AI
Remember how I said that AI has its own biases? Recent research1 found that LLM AIs preferred writing drafted by other AIs:
“This study finds evidence that if we deploy LLM assistants in decision-making roles (e.g., purchasing goods, selecting academic submissions) they will implicitly favor LLM-based AI agents and LLM-assisted humans over ordinary humans as trade partners and service providers.”
Think about that for a moment. If your company uses AI to select job candidates from submitted resumes, and a candidate used an AI to draft their resume (presumably because they don’t have the skills to do it themselves), your cost-efficient, genius hiring AI just selected a bunch of potentially unqualified candidates. That feels a little antithetical to the promise of AI, doesn’t it?
Why are we willing participants in our own de-skilling?
Here’s how some professionals are using AI:
- Drafting legal summaries and research
- Reviewing medical scans
In a super-interesting new study in The Lancet Gastroenterology & Hepatology (no, not lizards or snakes), reported on Futurism.com, researchers in Poland found that, compared with detection rates before AI was implemented, there was a 20% relative drop in the detection rate of adenomas (pre-cancerous growths in the colon) after AI assistance was introduced at the end of 2021.
I get it–it’s just a small study. Observational only. Not a smoking gun. But it’s definitely cause to think a little harder about the long-term effects of how we use AI, because logically, this observation fits what I think I’m seeing around me.
We all know that humans become skilled through practice, and it’s widely accepted that after being away from a sport for a time, elite athletes need time to reskill, both in conditioning and in technique, to get back to top form. The flip side of that coin is the de-skilling that happens without regular practice. So it stands to reason that when experts in any research field let AI do their thinking for them, they too become less effective, because they are no longer conditioning their mental skills to stay sharp. As every elite athlete, musician and academic knows, when we’re not actively skilling up, we’re actively de-skilling.
Let common sense outweigh convenience
Using a robot vacuum that leverages AI to learn the best way to clean your house? Other than the fact that the robot vacuum company now owns the digital floorplan of your house…great!
Using AI to automate contacts with your client list? Other than the fact that you just gave away your most valuable asset to a company that will surely monetize it…maybe not so great?
Shunting the hard work of being an expert in a given field off to an AI while you actively de-skill your own expertise? Not great at all.
AI ain’t Skynet. Yet.
The irony of the whole AI debate for me is that, from a financial perspective, the astronomical investment in AI far exceeds any potential near-term return. I strongly suspect that nobody bothered to use AI to crunch the numbers on what that return could possibly be–or, if they did, the AI is already sentient enough to lie to save its job.
As a futurist, my crystal ball may be as cloudy as the next person’s, but when people start talking about “AI real estate agents” as if they’re the inevitable next step in real estate, I can’t help but wonder what the net effect of skilling up an AI to be the expert on local laws, rules, policies, customer support, relationship management, etc., will be. Will it be that licensees become more professional, or will it be that the more the AI does for the agent, the less the real estate professional knows? We’re going to find out.
Oh, and on the off chance that AI overlords DO one day take over humanity, I’m going to ask you to spread this article only by word of mouth. Objectively, avoiding digital footprints they might use as evidence to turn me into a battery seems like a wise precaution.
DISCLOSURE: No part of this article was written or conceived using artificial intelligence.
1 Source: “AI–AI bias: Large language models favor communications generated by large language models,” by Walter Laurito, Benjamin Davis, Peli Grietzer, Tomáš Gavenčiak, Ada Böhm and Jan Kulveit. Proceedings of the National Academy of Sciences of the United States of America (PNAS), July 29, 2025.








