AI Is Not Your Friend. Or Your Enemy. Stop Treating It Like Either.

The Great AI Debate: Why We Keep Getting It Wrong

If you’ve spent any time online lately, you’ve probably seen two dominant perspectives on AI:

On one side, there are those who believe AI is on the brink of self-awareness, an oracle, a digital companion, maybe even a friend. They treat it as though it understands, cares, and thinks like a human.

On the other, you have the doomsayers, convinced that AI is an existential threat, one buggy update away from upending civilization as we know it.

Both sides are making the same mistake: they’re anthropomorphizing AI, assigning it human-like intentions, emotions, and agency that it simply does not have. And in the middle of all this noise, we risk losing sight of the real opportunity: how to use AI intelligently, ethically, and fearlessly to empower ourselves.

Let’s cut through the hype and break down what AI actually is, how most people are using it today, and why, if you’re feeling either enamored or terrified, you might be looking at it all wrong.

TL;DR: Stop Humanizing AI. Start Using It Smartly

The Big Idea: AI isn’t your friend, and it’s not your enemy; it’s a tool. Treating it as sentient or fearing its takeover are both distractions from the real conversation: how to use AI effectively and responsibly.

Why It Matters: Anthropomorphizing AI, whether by over-trusting its responses or panicking about its “intentions”, creates misplaced fear, unrealistic expectations, and poor decision-making. Instead, we need to focus on AI ethics, accountability, and human oversight.

Key Takeaways:

  • AI is not sentient. It predicts words; it doesn’t think.

  • Extreme takes distort reality: one side worships AI, the other fears it; both are wrong.

  • The real risk isn’t AI itself; it’s human misuse. Smart AI adoption means understanding its limitations and keeping humans in control.

  • Experts agree: AI should be seen as an amplifier for human intelligence, not a replacement.

What You Should Do:

  • Drop the sci-fi thinking. AI isn’t self-aware, but it’s an insanely powerful tool—so learn how to use it wisely.

  • Stay informed. Understand AI’s capabilities and challenge misinformation (like claims of AI sentience).

  • Be pragmatic. Use AI to enhance efficiency, not to offload decision-making.

Bottom Line: AI won’t replace you, but someone who understands it better will. So get to know the models and the agents. It’s time to engage with AI the right way: thoughtfully, critically, and without the unnecessary hype. You have a brain; AI doesn’t. Full stop.

AI Is Not Human. Let’s Say That Again. AI Is Not Human.

Before we go any further, let’s establish one crucial truth: AI doesn’t think. It doesn’t feel. It doesn’t want anything.

What it does is predict the most likely response based on patterns in data. That’s it. Whether it’s generating an image, summarizing an article, or helping you write an email, AI is a pattern recognition engine, not a mind.

And yet, we see people falling into one of two cognitive traps:

The “AI is Practically Human” Camp

These folks believe AI has crossed into the realm of sentience, or at least something close. They talk to their chatbots like trusted advisors, get emotionally attached to AI companions, and sometimes even assume the model has opinions.

This isn’t just casual fun; some have taken it to an extreme.

  • Kevin Roose, a journalist at The New York Times, famously had a conversation with Microsoft’s Bing chatbot that left him deeply unsettled after it started professing love for him. (The New York Times)

  • Blake Lemoine, a former Google engineer, made headlines claiming that one of Google’s AI models was sentient and deserved human rights. (Scientific American). I’m thinking that this guy should get himself a waifu pillow.  

  • Replika AI users formed deep emotional bonds with chatbot companions, some even mourning when a software update made their bots “less affectionate.”

Blake Lemoine and the AI Sentience Illusion: Let’s Set the Record Straight

Let’s take a moment to talk about Blake Lemoine, the former Google engineer mentioned above who made headlines for claiming that LaMDA, a large language model (LLM), was sentient. Now, Scientific American and other outlets have covered the story, diving into the philosophical and ethical debates around AI consciousness. But before we get lost in existential musings, let’s get one thing straight:

AI is not sentient. Not even close.

And this isn’t just an academic quibble. No way. Misrepresenting AI as self-aware is not only factually incorrect, but it’s also dangerous. It distorts public perception, fuels unnecessary panic, and distracts from the real issues in AI, like bias, ethical deployment, and data privacy. So let’s break down why this claim doesn’t hold up and why we need to be crystal clear about what AI is, and what it isn’t.

What LLMs Actually Do: A Crash Course

Before we go any further, let’s talk about how LLMs actually work. Because once you understand this, the idea of AI “thinking” or “feeling” falls apart pretty quickly.

  • Data, Data, and More Data: LLMs like LaMDA are trained on massive datasets: books, articles, online conversations. They don’t understand this data; they identify patterns within it. Think of them as voracious bookworms with no comprehension: they devour mountains of text to learn patterns, not meaning, and then predict the next word in a sentence based on statistical probabilities. That makes them masters of mimicry, not sentience.

  • Pattern Recognition, Not Thought: The model predicts the most probable next word in a sentence based on the input it receives. This is math, not consciousness. Again, THIS IS MATH, NOT CONSCIOUSNESS.

  • No Real Understanding: When an AI model says, “I feel happy today,” it is not feeling anything. It is generating a sequence of words that best fit the context based on its training.

If that still sounds abstract, think of it this way: AI is playing a supercharged game of autocomplete. If you start typing “The sky is…” your phone might suggest “blue” because that’s the most statistically likely continuation. That doesn’t mean your phone knows the sky is blue. It’s just a pattern. AI operates in exactly the same way, just at a vastly larger scale.
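To make the autocomplete analogy concrete, here’s a toy bigram model. This is a deliberately tiny sketch (the corpus is invented, and real LLMs use neural networks over tokens, not raw word counts), but it shows the core idea: “prediction” is just counting which word most often follows another.

```python
from collections import Counter, defaultdict

# A made-up mini-corpus, split into words. Real models train on billions of tokens.
corpus = (
    "the sky is blue . the sky is clear . "
    "the sea is blue . the sky is blue ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict(word):
    """Return the statistically most likely next word: pure counting, no understanding."""
    return following[word].most_common(1)[0][0]

print(predict("is"))  # prints "blue": the most common continuation in this corpus
```

There is no model of what a sky *is* in there, only tallies, which is exactly the point: statistical likelihood, not understanding.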

Engineering Reality: AI Emulates, It Doesn’t Simulate

A key distinction often missed in these debates is emulation versus simulation.

  • LLMs emulate human conversation: they generate text that sounds realistic and coherent.

  • They do not simulate the underlying cognitive processes that create thought, emotion, or awareness.

In other words, AI doesn’t think. It mimics thinking. And while the mimicry is impressive, mistaking it for genuine intelligence is like assuming a parrot philosophizes about life just because it can repeat words. (And recently I saw a parrot fronting a band. Probably one of the most engaging frontmen I’ve seen recently, j/k.)

And let’s talk about sentience. Sentience is the capacity for subjective experience: feelings and sensations like pleasure and pain, something every human experiences differently.

  • Sentience requires subjective experience. AI doesn’t have a body, sensory perception, or emotional states. It has no self-awareness. Again, it has NO SELF-AWARENESS. 

  • Pre-programmed behavior does not equal independent thought. AI outputs are entirely determined by their training data and algorithms.

  • There’s no “there” there. AI has no internal world, no sense of self, no independent agency.

So when Lemoine insisted that LaMDA was experiencing emotions, he wasn’t describing AI behavior. He was projecting human traits onto an algorithm. Sounds like the dude was a little lonely, IMO.

Why the “AI is Sentient” Narrative is Dangerous

It’s easy to dismiss this as a quirky misunderstanding, but claims like these have real consequences. Here’s why:

  • Anthropomorphism Warps Trust: People who believe AI has emotions or intentions may overtrust its outputs, assuming it has judgment or moral reasoning. (It doesn’t.)

  • Ethical Distractions: Debating “rights” for AI distracts from the actual ethical challenges, like bias in AI models, data privacy, and responsible AI use.

  • Overestimation of AI Capabilities: The “AI is alive” narrative fuels both unrealistic expectations (AI replacing all human work) and unnecessary fear (AI turning against us).

  • Accountability Gets Blurry: If people believe AI is self-aware, it can shift responsibility away from the humans who develop, deploy, and control these systems. AI doesn’t make decisions, humans do.

In short? Misinformation about AI’s nature leads to bad decisions, bad policies, and bad public discourse.

What the Scientific Community Actually Says

While AI hype cycles come and go, one thing has remained consistent: the scientific consensus is that AI is not sentient.

  • No Consciousness: AI lacks a biological or neurological basis for awareness.

  • No Metrics for Sentience: We don’t even have a universally agreed-upon way to measure human consciousness, let alone claim AI has it.

  • AI is Designed to Sound Sentient, Not Be Sentient: LLMs are explicitly optimized to generate human-like responses. Their ability to “sound” self-aware is a feature, not a revelation.

The reality? AI is as sentient as a spreadsheet. I know a lot of us are very proud of our spreadsheets, but I’m not about to confess my love to them. AI processes information, follows pre-determined rules, and delivers results based on statistical likelihood. Nothing more, nothing less. Here are some actual statements from the scientific community:

No Consciousness: AI lacks a biological or neurological basis for awareness.

  • Dr. Stuart Russell, AI researcher and author of Human Compatible:
    “Current AI systems have no understanding of the world, no self-awareness, and no consciousness. They are abilities, not beings.”

  • Dr. Yann LeCun, Chief AI Scientist at Meta:
    “AI systems are not conscious, not self-aware, not sentient, and not alive. They are just very sophisticated pattern-matching machines.”

No Metrics for Sentience: We don’t even have a universally agreed-upon way to measure human consciousness, let alone claim AI has it.

  • Dr. Christof Koch, neuroscientist and consciousness researcher:
    “We don’t even fully understand how consciousness arises in humans, let alone in machines. Claiming AI is sentient is premature and unsupported by evidence.”

  • Dr. Antonio Damasio, neuroscientist and author of The Strange Order of Things:
    “Consciousness is deeply tied to biological processes, our emotions, our bodies, our survival instincts. Machines lack these entirely, so the idea of machine consciousness is speculative at best.”

AI is Designed to Sound Sentient, Not Be Sentient.

  • Dr. Melanie Mitchell, computer scientist and author of Artificial Intelligence: A Guide to Thinking Humans:
    “AI systems like GPT-3 or LaMDA are designed to produce text that sounds human, but this is not evidence of understanding or awareness. It’s mimicry, not thought.”

  • Dr. Gary Marcus, cognitive scientist and AI critic:
    “Large language models are brilliant at faking understanding, but they don’t actually understand anything. They’re like parrots that have memorized a dictionary.”

  • Dr. Emily Bender, computational linguist:
    “The fluency of AI-generated text can trick people into thinking the system understands what it’s saying. But it’s just a statistical model predicting the next word, not a mind.”

The Real Conversation We Should Be Having

Instead of debating whether AI is “alive”, we should be asking:

  • How do we ensure AI is used ethically?

  • How do we build AI systems that are fair, unbiased, and accountable?

  • How do we educate the public on what AI actually is, so we avoid misinformed panic and misplaced trust?

The real danger of AI isn’t that it becomes self-aware, it’s that humans misunderstand it.

The “AI is an Existential Threat” Camp

Then there are the fearmongers, who frame AI as an uncontrollable force, one that will steal jobs, destroy industries, and maybe even take over the world.

This kind of fear has powerful amplifiers:

  • Elon Musk warns that AI is a threat to humanity while simultaneously pouring billions into AI ventures. (Although let’s be honest: his stance on AI depends on who’s in the room. One day it’s existential doom, the next it’s an AI-powered Tesla bot. Make up your mind, dude.)

  • Yuval Noah Harari suggests AI could rewrite human culture in ways we can’t yet understand. (Look, of course AI is going to impact culture, but not in the “AI-will-hijack-our-narratives” way Harari suggests. It’s going to change processes, and since process is culture, we’ll see shifts. But let’s not act like AI is out here writing the next great societal manifesto, it’s just reshaping how we work, create, and think.)

  • AI Doomsday communities on Reddit love to speculate that AI will outpace human intelligence and render us obsolete. (Yeah, no. We don’t even fully understand how our own neurons fire, so let’s pump the brakes on assuming AI is about to surpass human cognition. Now, if we hand AI a task without the ability to shut it down, that’s on us. That’s an engineering failure, not AI sentience. See: the infamous paperclip maximizer thought experiment. Let’s keep an eye on the humans building AI, not the machines themselves.) Check out the Wikipedia article on instrumental convergence to learn more: https://en.wikipedia.org/wiki/Instrumental_convergence

Here’s the thing: both of these views are missing the point. AI isn’t some benevolent entity here to be your best friend, nor is it an unstoppable force marching toward human extinction. It’s a tool. A powerful one, yes, but still a tool, designed to assist, augment, and accelerate human intelligence, not replace it.

How Most People Are Actually Using AI Today

The truth is, most people are neither worshiping nor fearing AI. They’re simply trying to make it useful.

A recent Wall Street Journal report highlighted practical, non-apocalyptic uses of AI in everyday jobs:

  • Educators are using AI to turn lectures into podcasts, making learning more accessible.

  • Retail managers are using AI tools to automate scheduling and reporting, saving them hours.

  • Doctors are consulting AI to help research treatment options and refine diagnoses.

This is where AI shines: not as a replacement for human decision-making, but as a cognitive amplifier that reduces friction, improves efficiency, and unlocks new possibilities.

How Experts Say We Should Be Interacting with AI

So, if AI isn’t something to fear or fall in love with, how should we actually engage with it?

Ethan Mollick: Treat AI Like a Smart Intern

Wharton professor Ethan Mollick (one of the most pragmatic voices on AI today IMO) advocates for treating AI as a highly capable, but flawed assistant. It’s fast, it’s knowledgeable, but it still needs human oversight. (Time)

AI Executives: The Power is in the Prompts

Leading AI researchers and executives emphasize that how you talk to AI matters. Instead of treating it like a black box, think of AI as an interactive system that thrives on clear, structured prompts. (Business Insider)
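As a quick, hypothetical illustration (the wording below is invented, not drawn from the cited piece), compare a vague request with a structured one. The structured version spells out role, task, format, and constraints, and marks where the input goes:

```python
# Hypothetical prompts, invented for illustration only.
vague_prompt = "Write something about our Q3 results."

# A structured prompt: explicit role, task, format, and constraints.
structured_prompt = """You are a financial analyst writing for a general audience.

Summarize the Q3 results below in three bullet points.
Flag any figure that changed more than 10% from Q2.
Keep the summary under 100 words.

Q3 results:
{data}"""  # {data} is a placeholder to be filled with the actual figures
```

Same underlying request, but the second version gives the model a clear target to pattern-match against, which is most of what “prompting well” comes down to.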

UNESCO: Transparency and Ethical Boundaries Are Key

AI ethics experts stress the importance of keeping AI in its place, as a tool under human oversight. UNESCO’s guidelines urge clear labeling of AI-generated content and ensuring accountability remains with human operators. (UNESCO)

Final Thought: The Future of AI Is in Our Hands

AI isn’t coming for your job (unless, of course, your job is highly repetitive and doesn’t require strategic thinking). AI isn’t plotting world domination. And no, AI isn’t in love with you.

What AI is doing is expanding human capabilities at an unprecedented rate. It’s enabling people to work smarter, learn faster, and create more. But only if we use it responsibly.

The way we talk about AI matters. If we treat it as a mystical entity, we cede control. If we fear it irrationally, we hinder progress. The best path forward? Understand it. Use it wisely. Keep humans in the driver’s seat.

Well, now you know my perspective and stance, but what do you think? Are people too obsessed with AI as something more than a tool?