My Thoughts on Generative AI
Anne Lupien (AnneKitsune, annekitsunefox@gmail.com)
Alternative title: LLMs considered harmful.
Spoiler: I'm anti-AI in most contexts. Probably not for the reasons you think.
Background
I have seen a lot of opinions and arguments thrown around for and against
generative AI, such as LLMs and image generators.
However, I have yet to feel satisfied with the vast majority of arguments.
They often suffer from the same issues: they describe temporary problems that
can be solved with more engineering, or problems that, frankly, people have put
up with and will continue to put up with as long as the technology is
convenient (environment & pollution). As such, I will not talk about drawbacks
that fade away as technology evolves, such as limited capabilities, context
length, or missing input modes (realtime vision, touch, taste, etc.).
I wasn't satisfied with this, so I gave it many, many hours of thought and
experimentation. But first...
About Me, or why I feel like I can speak on the topic.
While not an expert on LLMs and generative AI, I feel qualified enough to
have an opinion because:
- I'm an engineer by trade
- I have attempted, several times, to create various AIs using NEAT and other
approaches before LLMs came about and ate the whole field.
- I'm a chronic overthinker and perfectionist.
- I spent over a hundred hours thinking about this and experimenting.
- I spent thousands of $ on servers and was self-hosting open source models,
comparing benchmarks, experimenting with custom challenges, automated
workflows, optimized prompts, etc.
- Modern LLMs are a dream come true for younger me.
So... with all this, why would I be anti-AI?
My primary argument rests on two premises leading to a conclusion.
P1: AI causes harms.
Several, at that.
First, and most egregious, is that it *destroys* skills.
Speaking from personal experience, I forget how to code significantly faster
after vibe coding for a few days than after not coding *at all* for several
months. I don't know the reason for this.
I have seen a study that corroborates this, but the sample size is rather
small and I forgot what it's called. Source: trust me bro.
Second, it relies on you being "on the ball" about skepticism, quality review,
spotting misinformation, etc. Constantly.
It's a well-known fact in my field (programming) that this is completely
unsustainable and that people slip up far more often than they imagine.
People also routinely overestimate their abilities at this.
Not only that, but if you are using AI for efficiency and speed, slowing down
to review carefully directly undermines that goal, creating a misaligned and
contradictory incentive from the very beginning.
Third, it's becoming quite clear that LLMs amplify *and cause* biases and
delusions. Yes, even in healthy individuals. Yes, even with less
sycophantic LLMs. For more information, I invite you to read:
https://arxiv.org/pdf/2507.19218
In particular, I invite you to notice how Box 3 describes the risk factors...
as literally the primary use cases of LLMs and what's actively being
recommended by most people in the field as the "correct way" to use them.
Fourth, it is a simulacrum. I'll admit I'm not super familiar with
Jean Baudrillard's work, but even with a surface level understanding it
remains an interesting lens to look at AIs through.
At the start, AIs were trained on content from the internet, which is
mostly composed of copies of copies of things, or of information that is
straight-up fake or simply outdated. Not only that, but the more
time goes on, the more models will be trained on output of other models.
Ultimately, this creates a simulacrum. Or in simpler words, a lack of
objectivity because the source material isn't based on reality whatsoever,
but on copies of copies of imperfect descriptions of reality.
From this, it follows that despite how you might feel that you can
"prompt the LLM to be more objective", this is only a mirage. LLMs cannot
be objective, because they are not rooted in reality.
https://en.wikipedia.org/wiki/Simulacrum
Fifth, AIs, be they LLMs, image/video generators, diagnosis tools, vibe coding
editors or text-to-speech systems, are fundamentally replacing tools.
The best way to see this is to contrast them with tools that are not, and then
derive a general principle from it.
Let's take sed(1) as a trivial example. For those who don't know, it's a
small command-line tool most commonly used to search and replace text in a file.
You would never argue that using a search/replace tool replaces programmers.
You would say it makes them more efficient.
Let's contrast this with LLMs. You can ask (or will be able to as the tech
evolves further) "make me a website to sell surfboards" and get a working
website. Here, most people will (correctly) argue that you replaced 99% of
the work of a programmer with a single prompt.
"But why? Isn't both just using a tool to get more done?", some will ask.
I'll argue that these are completely different on the basis of what I call
"learnable determinism".
For a given input text file and sed command, you can learn, understand
*and predict* what the output text file will be. With perfect accuracy.
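To make this concrete, here's a minimal sketch (the file name and contents are
made up for illustration):

    $ cat greeting.txt
    hello world
    $ sed 's/world/reader/' greeting.txt
    hello reader

Anyone who has learned the substitution syntax can predict that output before
ever running the command, every single time.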
For AI however, the same is clearly not the case. You can in no way know
what your website will look like, let alone know what the code will be.
Even the foremost experts on LLMs can't tell you the answer without running it,
because fundamentally the process is a black box. In fact, that's the whole
point! If you could tell the outcome in advance, you could use different tools
instead.
Sixth, regression to the (statistical) mean. Because of how AIs are trained,
they are biased to give "middle of the road" answers rather than objectively
good ones. (Again, they have no concept of objectivity!) And since their
outputs become the input for new models, this creates the problem where
everything regresses to the mean over time, until getting anything out of an
AI that isn't a boring "average" answer becomes very difficult.
Finally, AIs remove value from anything they touch compared to if they weren't
there. There are at least three things that make something valuable: rarity,
usefulness and the effort put into making it. AIs allow mass production, which
drops the rarity of a lot of things down to zero. Meanwhile, they also reduce
the effort of making things significantly, which diminishes their value to both
the person producing it (who cares less about things they spent less effort on)
and to the person using it (who knows it wasn't made with care and love, but
with a thoughtless mass production machine.)
See: Ikea Effect.
P2: AI is optimized to hide these harms.
LLMs in particular are optimized using human evaluation in some parts of the
process. Humans are tasked with voting for answers they prefer. Maybe some of
that happens automatically through metrics, just like with online recommendation
algorithms that optimize watch time.
This has several effects:
- They give confident answers even when completely wrong, which is really good
at turning off most people's skepticism. Just like politicians.
- They are often subtly wrong in a way that's hard to detect because the
surrounding content looks ok. Just like cults and conspiracy theories.
- They'll push back at you somewhat to not be total pushovers, but never enough
to actually prevent you from doing what you want (even if you really shouldn't.)
There's an exception here for truly dangerous stuff, because that goes through
a separate safety mechanism / filter.
- They look innocent enough that you think you have control. "I know how to
prompt it to be objective." is a sentence I hear often, despite the fact it
is completely illogical to be talking about objectivity and generative AIs
in the same sentence.
Conclusion: You are more harmed than you think you are.
In particular with LLMs.
Image generators are more in-your-face about the harm they do.
But LLMs hide them significantly better.
People underestimate the harms, either from a lack of care or from not noticing.
To further compound the issue, LLM-producing *companies* have a vested interest
in introducing biases into the training of the models themselves that make the
problems even harder to notice. They are willing to sacrifice accuracy and
correctness for this (insofar as AI could ever be ""correct"", which, as
mentioned earlier, would be by happenstance more than because it is
rooted in reality.)
As such, AIs, and in particular LLMs, cause more harm than you or most people
notice.
Opinion: The harms outweigh the benefits in most cases.
And this applies no matter how strong an AI model is.
But, it would be unfair to only look at the negatives without mentioning the
positives. AIs can (whether now or later as the tech evolves) increase
productivity on an increasing number of tasks. They can help people in acute
mental health crises. They can give people with disabilities access to tools
that would simply not be possible, or not nearly as good, without AI.
Which leads me to the conclusion I arrived at.
*Generally speaking, the harms outweigh the benefits. Except in two cases*:
1. You care more about getting things done than about your skills, subjective
added value, correctness, perfectionism, and other values.
You want to get things done right here, right now, and are willing to
sacrifice other things for it.
2. There are no viable alternatives or available alternatives are exhausted.
Though I will caution that if your use case falls into this category,
you must think about whether that use case is valid.
For example:
Excessive emotional venting to friends could exhaust them and leave you
with only LLMs as an option. It's worth asking whether you should be venting
this much, or should instead confront your problems head-on.
Whether you agree or disagree, feel free to shoot me an email with your
thoughts! :)
Just, please, don't write it with LLMs. If I wanted an LLM's thoughts on the
matter I would just, you know, ask an LLM myself. :P