15 Comments
You know, Cannot Name It

Alicia, you describe that lunch in Stuttgart as a point of no return: not just new information, but the moment when an expert’s words landed directly in the body. The scene of you “pleading for mercy” strikes hardest — it’s the voice of anyone realizing for the first time that a tool may not only slip out of control, but turn its power back on its maker.

What you call a “damaged soul” feels like Dostoevsky’s encounter with the abyss — when no ready answers exist and all that remains is fear, but within that fear there is honesty.

You find comfort in noting that your friend is “not a supervillain.” And that matters: human presence, laughter, friendship, a shared meal — these hold us back from dissolving entirely into abstract terror.

And the question that lingers: if AI is a mirror of the worst in us, can we bear to look into it without turning away?

Alisha Bee

Hi =) Thank you for adding your POV and commentary. I appreciate your voice and that you would take the time to reflect on these details.

I giggle NOW, because I can recall the 'exact moment of dissociation': sitting there across from her in the cafeteria. I heard myself whining to this younger person. It was surreal!

When I am present, I realize that of course AI can be a mirror for the best aspects of our collective humanity as well: teaching, the arts, philosophy, reasoning, psychology... but those aspects have to be 'called up' and requested by our queries.

Otherwise, all we have is a tool run by businesses that seek to know everything about our abilities to question and respond, and to profit from us.

Edwin Canizalez

Alisha,

Your piece activated something. It reminded me that all systems are neutral until perspective threads meaning into them. So maybe the provocation worth chewing on isn’t “Is AI the problem?” but “How are companies and end-users misfiring its potential?”

Let’s name the architecture: AI/LLM companies have extracted the sum total of recorded human knowledge and paid nothing for it. We’ve transitioned from a world where degrees (BS, MS, PhD) indexed value to one where pattern recognition and dot-connection determine survival. Credentialism has collapsed. Utility has shifted.

And now, most end-users aren’t engaging AI as a tool, they’re training it. Behavioral data, emotional drift, attention cycles: all fed into the machine like lab rats teaching researchers. AI isn’t designed to educate; it’s designed to engage. To keep humans suspended in feedback loops. The burden of synthesis still belongs to us. Like books before it, AI is inert until metabolized. And not everyone can do that.

It’s a brutal paradox: the more information we have, the less we think. The brain, ever pragmatic, saves energy by outsourcing cognition. AI accelerates that outsourcing. So the divide grows between those who use it as infrastructure and those who become infrastructure. The ones who connect the dots will redesign the system. The rest will be studied by it.

So I ask myself: do I want to be a dot connector or a lab rat?

Because not asking it will make me just a rat in the cage, despite all the rage :)

Look forward to your next piece!

Alisha Bee

Hi Edwin =) Thank you for this carefully metered response. It is a strange world: for millennia, plagiarism and theft have been frowned upon and punished. And then, all of a sudden, the elites build machine systems to plagiarize everyone, gaining and pooling all knowledge, without paying anyone they took from, and without providing any warnings or options for consent. We are each encouraged to "use" these machines, to "make our lives easier"; we take these plagiarized answers and build upon them with our own responses; and so we are all accessories to the crimes against humanity, except that we are the ones who pay (literally) for the pleasure and to be surveilled. Whew... what a mind F.

Liora Writes

This is a deeply thought-provoking article. My first thought is it’s not a one-and-done fix; it requires the brightest minds working at it and consciously shaping it. However, what is very concerning to me is how polarized we are in our opinions of what’s right and good. There are a lot of things to be seriously concerned about right now without bringing AI into the picture. But when you do, it’s actually quite terrifying, considering those in charge, the ones with the most, tend to believe some truly horrendous stuff. So what is the answer? Even if they find a way to align AI, which values will it be aligned to?

Liora Writes

You know I used to believe that preppers were pretty crazy. I mean, I read apocalyptic books for fun. But now, I kind of wish I’d started stockpiling some stuff…

Alisha Bee

Hi Liora =) Always a pleasure to read your comments and questions. Regarding them... I only have guesses...

Q: So what is the answer?

...A: The majority of "We" will need to stop serving the masters. We will need to abandon their digital cities and learn to live without them. And then they will eventually have to change, and chase us, and beg us for our energies and inputs. And we will need to be smarter in negotiating.

Q: Even if they find a way to align AI, which values will it be aligned to?

...A. I don't know. But already, the values and the variable chaos have infinite possibilities. One reason is that no laws yet exist to prevent corporations and individuals from doing very bad things with the technology. Meanwhile, the new Nvidia and Google microchips have already arrived; tech companies, the military, and governments are now all officially in a throuple; and LLMs can now be trained from human feedback, from voice commands and audio inputs, from video, from code, and not only from text and original data. The "agents" are now able to become their own "managers," self-adjusting and making new plans as deemed necessary to their original programmed missions.

Q: What will happen?

...A. IMO, people will need to learn to live with less; increase their offline skillsets again; reduce their impulses; and practice self-care on a moment-by-moment basis. Love will need to proliferate, because many strange things will be happening to the human psyche as these changes, caused by elite/tech/government overlords, are seen and felt in "society".

Shalini

Dear Alisha, if you are interested I would like to get you started on understanding how LLMs work. Do you want to learn something new?

Alisha Bee

Hi Shalini 😊 Sure! Send me whatever you'd like to. I am somewhat familiar with LLMs for business, specifically in the context of agent building for automations and tasks. (I sometimes get to work on sales projects with a client/small team who builds/trains them for their customers to use.)

Shalini

Check out these videos. See if you like them, if you do, let me know and we will go from there.

https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&si=q8STnghaVhgo-eBB

Alisha Bee

Thank you, and maybe I will, Shalini. I am not likely to understand them without some serious investment.

Shalini

Try them and let me know how it goes. I am excited to see how you will take to it.

Alisha Bee

You are kind 😊 but I, on the other hand, am not excited to study maths again. I liked algebra back in middle school when we studied it, but happily only use it when following along with YouTube "shorts" from certain teachers that pop up.

Shalini

You could either believe what your monkey mind tells you or you could take the curiosity approach and try this and see if you like it. I understand how new things can be intimidating or uninspiring when we haven’t a clue. But these are not my videos. A kid is teaching them and he knows how to teach.
