Get Grounded AI

Practical AI security guidance to keep everyone grounded

The AI Predictive State: How Your Data Becomes the Blueprint for Human Prediction

By Jereme Peabody

The public narrative is that generative AI will eventually replace every job requiring intellectual thought.
Whether or not that's true, the real risk is more immediate and far more personal:
people are feeding these systems intimate data that can be used to predict, and eventually shape, human behavior.

I'm writing this to raise awareness about what happens when we use generative AI the wrong way.


Will AI Replace Humans?

AI isn't replacing humans anytime soon.
It's not conscious. It's not strategic. It's not alive.

It's a prediction engine.
And generative AI is becoming frighteningly good at predicting patterns in language, emotion, and decision-making.
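
To make "prediction engine" concrete, here is a minimal, purely illustrative sketch: a toy Python bigram model that predicts the next word from observed frequencies. It bears no resemblance to how a production LLM is actually built (those are neural networks trained over tokens), but the core job is the same: learn patterns from what people write, then guess what comes next.

```python
from collections import Counter, defaultdict

# Toy "prediction engine": count which word follows which, then predict
# the most frequent continuation. Purely illustrative -- real LLMs are
# neural networks over tokens, but the core job is the same: prediction.

corpus = (
    "i feel anxious about work . "
    "i feel anxious about money . "
    "i feel hopeful about tomorrow ."
).split()

# next_word_counts[w] counts every word observed immediately after w
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    counts = next_word_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

print(predict_next("feel"))  # -> "anxious" (seen twice vs. "hopeful" once)
print(predict_next("i"))     # -> "feel"
```

The more of your own words a system like this sees, the better its guesses about you get. Scale that up by orders of magnitude and you have the dynamic the rest of this article is about.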

When ChatGPT first launched, it was trained mostly on published material such as books, articles, and documentation.
That data was relatively safe.

Now the training source has shifted.
It's training on your data.
And that data is not safe.

People are treating AI chats like private journals, sharing personal intimate data (PID): fears, desires, insecurities, relationship problems, and medical concerns.
They believe the chat is private.

It isn't.

There is a company behind that window.
There are humans reviewing conversations.
There are logs, analytics, optimization processes, and marketing pipelines.
Every message you type becomes fuel for a prediction system learning how we think and feel.

Your emotional patterns, harvested to improve a predictive model, form one of the most valuable and dangerous datasets ever created.

No one in human history has ever had this level of insight into the collective psyche.

That's the real endgame:
influence.

And once AI companies push past AGI, the next frontier isn't intelligence at all: it's prediction.
I call this the Artificial Predictive Model of Intelligence (APMI): a system built not to think better than you, but to know you better than you know yourself.

Once that exists, it becomes a tool that can predict and eventually control human behavior.


The Predictive State

A true APMI would create what I call a predictive state: a society optimized around forecasting risk rather than responding to it.

Imagine something like Minority Report, but corporate-first, government-second.
Not a crime prediction system, but a behavior prediction system.

You could be denied opportunities based on a behavioral score.
Not because you did something wrong, but because the model believes you might.

And here's the part that should concern you:

Companies like OpenAI, Anthropic, and Google can already use your personal emotional data to train their models unless you explicitly opt out.
By default, you are opted in.
So if you've ever shared anything vulnerable, it's already in the training pipeline.

Your emotional patterns are the ultimate key to understanding you.
And unlike a password, you can't reset them.

With enough data, a company could sell predictive behavioral scores the same way credit bureaus sell credit scores.
Once such a product exists, no organization, government or corporate, would willingly operate without it.
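
To illustrate the concern rather than describe any real product, here is a hypothetical sketch of how a crude "behavioral score" could be computed from chat logs: tally emotionally loaded keywords and collapse them into a single number. The keywords, weights, and scale are all invented for this example.

```python
# Hypothetical illustration only: a crude "behavioral score" derived from
# conversation text. The keywords, weights, and cap are invented -- no real
# scoring product is being described. The point is how little it takes to
# turn raw chat logs into a number someone else can act on.

RISK_WEIGHTS = {
    "anxious": 0.8,
    "debt": 1.0,
    "insomnia": 0.6,
    "divorce": 0.9,
    "diagnosis": 0.7,
}

def behavioral_score(messages: list[str]) -> float:
    """Sum keyword weights across a user's messages, capped at 10.0."""
    score = 0.0
    for message in messages:
        for word in message.lower().split():
            score += RISK_WEIGHTS.get(word.strip(".,!?"), 0.0)
    return min(score, 10.0)

chat_log = [
    "I'm anxious about my debt again.",
    "The insomnia is getting worse.",
]
print(behavioral_score(chat_log))  # -> 2.4 under these invented weights
```

Unlike a credit score, there is nothing here for you to dispute, and, as noted above, nothing you can reset.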

That's what makes prediction so powerful.
It gives anyone who owns the model an unfair advantage.


How to Protect Yourself

I go deeper into this in The Confession Window, but here's the short version: treat every chat as if a company is reading it, because one is.

If you want to learn how to use AI safely without giving away the parts of yourself that can't be changed, read more on this site.

Everything here is designed to help you stay grounded in a landscape that's shifting faster than most people realize.

This content was written by a human and edited with AI assistance for accuracy and clarity.

Want to go deeper?

Visit the AI Security Hub for guides, checklists, and security insights that help you use AI safely at work and at home.