AI : When It Feels Like A Net Negative
Something that seems beneficial up front but ends up negating its own benefit can be described as:
- A net negative (overall negative after you add up gains and losses)
- Counterproductive (works against its intended goal)
- Self-defeating (undermines itself)
- A false economy (saves time or effort initially but costs more later)
- A poisoned gift (helpful in form, harmful in effect)
- A Pyrrhic victory (a “win” that costs so much it’s effectively a loss)
- A costly convenience (convenient, but the downstream cost erases the benefit)
- A double-edged sword (something with both favorable and unfavorable consequences; it can help and harm)
I often say my vocation in software engineering is a love-hate relationship. I get genuinely frustrated when a system never attempts to improve the "process." My brain has been trained to think in terms of action-item reviews and retrospectives that improve the system. I can't work with people who care more about metrics and outward appearances than about improving things.
Refusing to improve the process, the way we collaborate and work, or to allow access to a tool that could improve things is not efficient at all. The hate side is that workarounds bypass the real issue and frustrate me. The love side is the joy of figuring things out, of seeing the end product of your work come to fulfillment.
AI is a challenge to my sanity, because the end goal should be a net positive. Workarounds, by contrast, are often a net negative for overall efficiency: if you do not have the right tool, or a license for the tool, or the tool is buggy, you spend more time on workarounds. A tool is meant to increase productivity, the way a screwdriver is usually more efficient when it is electric. A lack of a tool, or the wrong tool for the job, frustrates me.
The end goal matters more than any good achieved along the way, so something that starts as a win but ends in loss is a net negative in my mind. I believe in work-life balance, and my health is important. If you end up emotionally frustrated because all your work has been lost, or something important was missed in the analysis of your data, or the AI hallucinates... it is a loss of trust, of integrity, and eventually of time. It is a double-edged sword.
A case study. I spent the whole day and have the final AI-assisted draft of a paper. I love AI here: it helps me do powerful research, formulate my thoughts, and align them in a structured way that is very impressive (no, this article is not AI-assisted). But then comes the false economy. In producing the "final" draft, the AI rewrites the whole paper.
I try to come to an agreement with it, setting some global preferences to define the phases so this doesn't happen again. It says it will store and reuse these each session, but that is a hallucination. Testing my sanity. It is not just my time but my health. Would you keep an employee who repeatedly did what you asked them not to do, after many discussions trying to get them to understand, while they only kept making excuses? Where do you draw the line when it comes to being counterproductive?
Another case study. Limited memory capacity (chat or conversation memory) causes your previous chat to scroll off into oblivion, lost forever. Note that I am using a paid Pro version. And there is no warning, so you find the gift is a poisoned one. At first its ability to do amazing work is a precious gift, but the end reality is poison: all my previous work lost. If only it had warned me to save it first...
The inability to reason well is a central issue with AI. It cannot replace a human brain; it is not even close. Maybe in the future? So it is just frustrating at times. Take, for example, an AI that summarizes text. How does it know how to pick out the major points and ideas? How can it, unless the text is written in a structured way? Its reasoning misses important things. It is powerful at analyzing data, but it can miss important things. Let that sink in.
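To make the worry concrete, here is a minimal sketch of a frequency-based extractive summarizer (a toy illustration, not how any particular AI product works). It ranks sentences by how common their words are, so a rare-but-critical sentence, like a single warning, scores low and gets dropped:

```python
# Toy extractive summarizer: keep the n highest-scoring sentences,
# where a sentence's score is the average corpus frequency of its words.
# A critical outlier sentence (rare vocabulary) tends to be excluded.
from collections import Counter
import re

def summarize(text, n_sentences=2):
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    freq = Counter(re.findall(r'\w+', text.lower()))
    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    keep = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return [s for s in sentences if s in keep]  # preserve original order

text = (
    "The quarterly report shows revenue grew and revenue targets were met. "
    "Revenue growth in the revenue segment exceeded revenue forecasts. "
    "Hull sensor 7 detected a slow leak."  # the important outlier sentence
)
summary = summarize(text)
# The leak warning uses words that appear nowhere else in the text, so it
# scores low and is dropped, even though it is the sentence that matters.
```

Real systems are far more sophisticated, but the failure mode is the same in spirit: "important" is inferred statistically, not understood, so the one sentence a human would never skip can be exactly the one the summary omits.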
A theoretical story. An AI-assisted cargo ship hits a small iceberg, but the AI misses the warning. The leak causes one compartment to be sealed off, but the warning would have triggered the standard procedure of also sealing the surrounding compartments. The incoming water slowly overflows the bulkheads into those compartments, the ship takes on too much water, and it sinks. Luckily there were only robots on board, but millions in cargo are lost.
AI missed important information in my data analysis, as I later found out. It made me wonder: how does an AI determine what is important? How does it reason? How important is the information it missed? Can I trust it now? Do I need a human to verify the AI, and does that downstream cost erase the benefit? I admit it can be beneficial up front, and I am not paying much for the AI, but I wonder if it is a Pyrrhic victory.
Where do you go from here? Is AI at a point where it is beneficial, a net positive in the long run, or is it just a novelty? Can it do what it is meant to do well? It claims to be an AI that helps me navigate, so I tell it something simple:
I-> navigate home and stop at ALDI on the way
AI-> I do not understand.
I-> really, something so simple, were you not programmed to help me navigate?
AI-> I do not understand.
I-> What do you understand?
AI-> Did you know a bear has one of the keenest senses of smell...
I-> I do not understand... help me!