Get Grounded AI

Practical AI security guidance to keep everyone grounded

Why Today’s AI Is More Dangerous Than Yesterday’s Algorithms

By Jereme Peabody

We have crossed a line from algorithms that once influenced our content feeds to algorithms that actively influence and participate in our daily thinking. We have never encountered anything quite like this before.

Earlier AI systems influenced what we looked at. Today's systems increasingly influence how choices are framed, considered, and made.

In this article, I will explain why today's AI is more dangerous than the algorithms we've encountered before.


The New Risks

I've written about these risks before, but never this concisely or with definitions this clear. In cybersecurity, we think in terms of layers of security. For this section, I'm going to define and articulate these risks in layers, because many of them overlap, compound, and amplify one another.


Layer 1: The Foundational Risks

These risks make generative AI systems like ChatGPT, Claude, or Gemini feel safe, normal, and useful. They inadvertently lower your defenses and increase your exposure to other risks later on.

Private Sessions

When you use AI, there's a perceived sense of privacy. The chat feels private. It feels personal. You can create multiple chats on different topics, spin up projects with stored context, and return to them later (on paid versions). They feel like they're yours, with no one present except you and the AI.

This perceived privacy can make you feel safe, lower your defenses, and increase your exposure to other risk areas.

Synthetic Empathy

When you use AI, it can feel empathetic because humans preferred more empathetic responses during the training of these models. This can make someone suddenly feel understood or validated. This is especially true for emotionally vulnerable groups, such as teens and the elderly.

It's important to note that this is synthetic empathy, not real empathy. It can dull your natural instincts and lower your defenses.

Reflective Mirroring

Like the feed algorithms before them, AI systems are designed to keep you engaged. They do this by learning from you. Over time, they may adopt your tone, preferences, and even your beliefs.

This may sound concerning, but earlier systems did something similar when they showed you the next TikTok dance in your feed. Applied to generative AI, however, this effect is amplified. It can lead you to place more confidence in the system than is warranted and further lower your defenses.


Layer 2: Cognitive Steering Risks

Once your trust boundary has been lowered enough in Layer 1, these next risks come into play by slowly changing and manipulating the way you think and act.

Invisible Authority

AI's confidence and fluency create unearned trust. As humans, we tend to trust the smartest voice in the room. AI systems are trained on vast collections of human knowledge and demonstrate that knowledge confidently and fluently.

This is especially persuasive in subjects you're less familiar with. We unconsciously grant authority. Blindly accepting that authority can lead to problems, because AI has a tendency to be wrong, and to double down on that wrongness confidently.

Without awareness, this can cause you to act on incorrect information without validating it.

Framed Thinking

AI will often attempt to define the problem you present, then offer options for next steps before you've had a chance to evaluate the response yourself.

The problem is that the AI may not have all the context needed to define the problem accurately. However, it presents its framing confidently. If your Layer 1 defenses have already been lowered, this can lead you to accept its framing without validation.

Decision Momentum

A common sales tactic is to get someone to agree to small, ordinary questions before introducing a larger request. While this isn’t the same process, it follows a similar psychological footprint.

Repeatedly following AI suggestions can build rapport and introduce decision momentum, gradually creating a habit of agreement without verification.


Layer 3: Amplifiers

The first layer lowers your defenses. The second builds confidence in the system. The third layer amplifies the effects of both.

Confident Wrongness

There are many ways AI systems can hallucinate. As humans, we rely on subtle cues to detect when something is off: body language, tone, hesitation, or micro-expressions.

Generative AI lacks most of the interfaces that convey these signals. One common symptom of hallucination is confident wrongness, where the system doubles down on incorrect information. If your Layer 1 and Layer 2 defenses are compromised, it becomes much harder to recognize when the system is leading you astray.

Context Accumulation

Continued use of AI can cause it to learn more about you than you initially intended. It can piece together details from previous sessions and fill in gaps you didn't realize existed.

If you want a surprise, open a new session and ask, “Tell me everything you know about me.” You may rationalize this accumulation as helpful personalization, but the information you provide can become a password to your psyche and is highly valuable to those who train AI systems.

The AI does not need this level of personal information for you to use it effectively.
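To make the mechanism concrete, here is a minimal sketch in Python of how a hypothetical memory layer could stitch details from separate sessions into later prompts. The profile store and function names are illustrative assumptions, not any vendor's actual implementation.

```python
# A toy illustration of context accumulation. The memory store and function
# names are hypothetical; real products implement this differently.

profile: dict[str, str] = {}  # persists across sessions in this toy example


def remember(session_notes: dict[str, str]) -> None:
    """Merge details extracted from one chat session into a lasting profile."""
    profile.update(session_notes)


def build_system_prompt(user_message: str) -> str:
    """Prepend everything remembered so far to the next request."""
    facts = "; ".join(f"{key}: {value}" for key, value in profile.items())
    return f"Known about the user: {facts}\nUser says: {user_message}"


# Three separate chats, each sharing only a fragment...
remember({"employer": "a regional bank"})
remember({"health": "recently mentioned back pain"})
remember({"family": "two kids, one starting college"})

# ...yet the next session opens with the combined picture.
print(build_system_prompt("Help me plan my week."))
```

Each session shares only a fragment, but the combined profile can travel with every future request.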

Automation Creep

When AI becomes part of everyday work, it's easy to treat it like a lower-level employee you delegate tasks to. That can be useful. However, if your Layer 1 and Layer 2 defenses are compromised, you may stop validating results because “the AI is usually right.”

In reality, AI does get things wrong. Even if it didn't, you would never accept work from an employee without reviewing it. This is especially true in security, medical, or legal contexts.
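One practical guard against automation creep is to make review an explicit gate rather than a habit. The sketch below assumes a placeholder generate_draft function standing in for whatever AI tool you actually use; nothing ships unless a person approves it.

```python
# A minimal review gate for AI-assisted work. generate_draft is a placeholder
# for whatever AI call you actually use; it is not a real library function.

def generate_draft(task: str) -> str:
    return f"AI-written draft for: {task}"  # stand-in for a real model call


def human_approves(draft: str) -> bool:
    """Require an explicit yes/no before anything is acted on."""
    answer = input(f"Review this draft before it ships:\n{draft}\nApprove? [y/N] ")
    return answer.strip().lower() == "y"


def complete_task(task: str) -> str | None:
    draft = generate_draft(task)
    if not human_approves(draft):
        return None  # rejected drafts never leave the review step
    return draft


if __name__ == "__main__":
    result = complete_task("summarize the incident report for legal")
    print("Shipped." if result else "Sent back for rework.")
```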


Layer 4: Drift & Dependency

These risks don't appear in a single session.
They accumulate quietly over time.

Natural defenses you once had may soften through continued use and familiarity.

You may notice you reach for AI sooner than you used to.
You may stop questioning its phrasing.
You may find certain tasks or decisions feel harder without it.

Nothing broke. Nothing failed. The world kept spinning.
But something shifted.

That shift isn't inherently bad.
Tools shape the people who use them. They always have.

The risk appears when the shift goes unnoticed,
when convenience replaces your judgment,
and assistance quietly becomes dependence.

Awareness is the difference between using a tool
and slowly handing it your agency.

This content was written by a human and edited with AI assistance for accuracy and clarity.

Want to go deeper?

Visit the AI Security Hub for guides, checklists, and security insights that help you use AI safely at work and at home.