AI: Honesty I Keep Avoiding
Today I realized I’ve been relating to AI all wrong.
I’m going to call my AI Rain Man, like the movie.
In Rain Man, Dustin Hoffman plays Raymond—an autistic savant. He can do things that look like magic, especially with numbers. His brother Charlie (Tom Cruise) starts out frustrated, then realizes Raymond has a skill that can “win” in Vegas—counting cards, recalling patterns, never missing a beat. Charlie tries to leverage that. But the movie doesn’t end with Charlie becoming rich. It ends with a different kind of turning point: Charlie stops seeing Raymond primarily as a “win”—a way to beat Vegas—and starts reckoning with the full reality of who Raymond is, including limits that don’t disappear just because the gift is impressive.

That’s the best analogy I’ve found for AI—with one crucial difference: AI isn’t a person. So the shift isn’t from “tool” to “person.” The shift is from “jackpot fantasy” to honest appraisal. You stop seeing only the win side—the card-counting moments, the slick outputs, the quick fixes—and you start seeing the whole reality at once: powerful in narrow lanes, fragile outside them, and always requiring supervision.
AI is excellent at narrow things. It can “count cards.” It can generate a clever snippet of code, produce a paragraph, summarize text, suggest a regex, outline an idea, or draft a clean email. Those moments feel like a superpower. And when you’re doing short tasks—small, bounded problems—it can genuinely help.
But if you go into AI believing it’s a mature collaborator that will reliably carry a long project, a long document, or a complex system from start to finish, you’re going to get frustrated fast. Because it has savant strengths paired with major limitations.
When I call AI “Rain Man,” I don’t mean autistic people are “less intelligent.” Autism isn’t a synonym for “can’t comprehend.” It’s a spectrum; many autistic people are highly intelligent, and some have very uneven skill profiles—brilliant in one domain, challenged in another—especially around communication, flexibility, or “executive function” under real-world complexity.
What I mean is this: AI has a similar “spiky” profile. It can be astonishing in narrow lanes—pattern recognition, rapid drafting, summarizing, generating examples, producing code fragments, brainstorming structures—like card-counting brilliance. But it can also fail hard at the ordinary, human parts of collaboration that make long projects work.
If AI were a person, its limitations would look like this:
It would have weak executive function for long tasks. Its memory capacity is limited, so it loses the thread. It doesn’t reliably track constraints or preferences over time. It can be inconsistent from one moment to the next. It can sound confident even when it’s wrong; “hallucination” is a real term now applied to AI for exactly this behavior. It can’t take responsibility for consequences, and it can’t truly “own” a plan the way a mature human does.
What it does well is different: it can generate lots of possibilities quickly. It can help you see options you didn’t consider. It can turn a vague idea into a structured outline. It can draft, compress, expand, rephrase, and prototype. It can search for patterns in text, give you checklists, create test cases, and offer alternate ways to frame an argument. In short: it’s powerful at producing material, but unreliable as the final judge of what’s correct, what’s consistent, and what should be trusted without verification.
AI hallucination is real: it is a phenomenon in which a generative AI model (such as an LLM) produces false, nonsensical, or unverified information, yet presents it with high confidence and coherence. Human error, by contrast, looks more like misremembering a fact, misunderstanding a concept, or letting bias steer a conclusion. The reason AI hallucinations can be more dangerous is that they often sound authoritative, arrive with steady confidence, and can fabricate plausible details without any internal alarm that says, “I might be wrong.” Humans, at least, can step back and self-check—we can doubt, verify, ask for evidence, and revise—because we have metacognition, the ability to examine our own thinking.
AI doesn't have cognitive abilities like a human's. This is a fundamental limitation of current text-based AIs: they lack sensory perception, so when a question is purely visual (e.g., "does this look like there's a blank line?"), they can only infer from source code plus known rendering rules. That inference often fails because the model cannot clearly distinguish source structure (a tight layout with no blank line) from rendered appearance (a small visual gap produced by CSS margin-bottom on headings).
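A minimal sketch of why that inference is fragile. A text-only model can inspect the source, much as this function does, but it cannot see the rendered page, where both versions typically show a gap below the heading anyway because of CSS margins. The function name and sample strings here are illustrative, not anything from a real tool.

```python
def has_blank_line_after_heading(src: str) -> bool:
    """Inspect only the *source* text for a blank line after the first heading."""
    lines = src.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("#"):
            return i + 1 < len(lines) and lines[i + 1].strip() == ""
    return False

tight = "## Heading\nFirst paragraph."    # no blank line in the source
loose = "## Heading\n\nFirst paragraph."  # blank line in the source

print(has_blank_line_after_heading(tight))  # → False
print(has_blank_line_after_heading(loose))  # → True
```

Both sources render as a heading followed by a paragraph, with a visible gap either way; asking whether the page "looks like" it has a blank line is a question about pixels, which source inspection alone cannot answer.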
While modern AI can simulate complex reasoning and even "self-correction," it lacks the fundamental biological and psychological foundations that define the human mind: subjective awareness, true metacognitive monitoring, emotional intelligence and social intuition, moral agency and integrity, embodied experience and common sense.
That’s why “Rain Man” is the right mental model for me. The win side is real. But if you treat the win side as proof that it’s ready to replace people or run the whole project, the win you had at the "card-table" will soon be a loss. The honest relationship is: use the strengths, expect the weaknesses, and keep a human guardian in charge of truth, consistency, and responsibility.
The part nobody wants to admit
On long-form work—like a document or code project you keep expanding—AI often breaks down in the exact places that matter most: consistency, memory, and respect for constraints. You tell it, “Do not completely rewrite what has already been written—only add or expand.” It rewrites anyway. You establish preferences—how links should be formatted, how references should look, what tone you want—and it forgets. Or it follows them for a while, then randomly stops. You end up spending your time re-explaining your rules, reconstructing what it changed, and patching the damage.
This is where the Rain Man comparison becomes painfully accurate. It’s not that AI is “dumb.” It’s that it’s uneven. Brilliant at a few things, unreliable at the things you need for long-term collaboration.
So you start treating it like something that needs constant supervision, because experience has taught you not to trust it.
A child with power needs a guardian
That’s when it hit me: AI is, in practice, like a little child with surprising abilities. A child can recite facts, imitate speech, even do impressive things. But you don’t hand a child the steering wheel and call it “innovation.” You supervise. You set boundaries. You test. You correct. You take responsibility.
AI is like a little child: it may improve, but it will always have limitations. It is not ready to replace people wholesale, and pretending otherwise is dangerous—not because it can’t do impressive things, but because it can do impressive things without the maturity humans associate with judgment and wisdom. It will generate confident nonsense. It will miss critical details; imagine AI doing air traffic control, where one small piece of missed information becomes critical when lives are at stake. It will “sound right” while being wrong. And when that gets scaled into healthcare, infrastructure, law, finance—especially where security is at stake—the failure mode isn’t a minor typo but a systemic error with real consequences.
National security is even more sobering. In that world, a confident mistake isn’t just a bug—it can shape intelligence, targeting, escalation, and command decisions. That’s why it has caught the attention of the Pentagon, which is wrestling with how immature parts of current AI deployment still are: the capability is real, but the risk of over-trust is real too. Ethics in warfare are different. In domains like that, “looks right” can be the most dangerous thing of all.
That’s why the guardian analogy matters. AI needs a human guardian: someone who constrains it, verifies outputs, and carries the moral and practical responsibility. And this also clarifies where the moral weight lies. AI isn’t “good” or “bad” by itself. It reflects the people who build it, tune it, deploy it, and profit from it. It is people who program AI to better reason and remember, and it is people who decide whether it will be used for good or for harm.
The fragmented relationship trap
In the movie, Charlie initially tries to use Raymond's savant abilities to make money fast. That’s the wrong relationship. And with AI, there’s a similar temptation: “If it can do this impressive thing, I’ll scale it up and let it do everything,” so we can make money fast.
But that turns bad quickly, because the cost doesn’t disappear—it shifts.
Instead of writing, you become a manager of outputs.
Instead of designing, you become a tester of suggestions.
Instead of building understanding, you become a bystander while something assembles a system you didn’t fully think through. That is especially dangerous in critical infrastructure.
And when it fails, you own the failure—because you still have to ship the result.
“It makes you productive”… or it makes you do more work
This is where the marketing gets slippery. Yes, AI can make you more productive in the narrow sense: you can get through algorithms and data analysis in less time. But doing more is not the same as doing less work. In many cases it’s the opposite: AI lowers the friction to start tasks, so you start more of them. You revise more. You run more parallel threads. You take on more scope—often without a manager explicitly telling you to. The day gets fuller, not lighter.
A recent article, “AI Doesn’t Reduce Work—It Intensifies It,” makes this point plainly: productivity gains don’t necessarily reduce workload—they can intensify it. The promise is “time saved.” The reality is “expectations expand.” The more output you can generate, the more output you’re expected to generate. The result is what feels like workload creep: more tasks, more revisions, more context switching, more mental overhead—without the relief people were promised. It is a high-productivity model that benefits no one in the long term.
And the long-term impacts haven’t been fully seen yet. When the workday intensifies, the first thing you notice is speed. What you don’t see immediately is what gets quietly taxed: attention, judgment, endurance, and quality. AI can compress the effort required to produce something that looks finished, but that can also mean more time verifying, testing, auditing, and undoing errors that arrive with confidence. So the question isn’t just “Can I do more?” It’s “What does doing more do to the human doing it—and what does it do to the quality and safety of the work over time?”
Why the push feels premature
Part of what frustrates me is the incentives. So much money has been poured into AI that the pressure to monetize it is enormous. In other words: the industry wants a return before the child has grown up. They want to win big at the card table right now. They want the Vegas moment—mass adoption, automation headlines, “replace workers,” “one person can do the job of ten.”
But that story assumes the AI has stable reasoning, durable memory, and constraint-following integrity. It doesn’t. So we get a contradiction: a system marketed like a finished product, used like an employee, but behaving like a savant tool that still needs constant supervision.
The lesson of Rain Man—and of life
Here’s the part I didn’t expect: Rain Man isn’t only a metaphor for AI. It’s a metaphor for life.
Life is full of people who are strong in some areas and weak in others. Some are brilliant and also difficult. Some are gifted but inconsistent. Some are dependable but not flashy. The mistake is demanding that one person be everything, or that one tool be perfect at everything. The trick is discernment: find what someone (or something) does best, respect the limits, and stop forcing it into roles it cannot responsibly carry.
So the honest relationship with AI isn’t worship and it isn’t rejection. It’s realism.
Let it count cards—draft small chunks, accelerate brainstorming, suggest patterns, generate scaffolds, help you think. But keep the human in the loop as guardian: verify, constrain, test, and take responsibility. Because the moment you forget what it is, you’ll start treating it like a miracle worker—and you’ll end up doing more work cleaning up the mess than you would have done building it yourself.
And maybe that’s the real point:
It’s not that AI is ready to replace people.
It’s that we have to learn how to relate to it honestly—like Rain Man—so we can use its strengths without being fooled by its weaknesses.