What Your Prompts Reveal About You
By Jereme Peabody
When you use generative AI like ChatGPT / ClaudeAI, you interact with it using text prompts.
Those prompts are words that, together, have a certain rhythm, a certain cadence to them.
They form your unique Linguistic Fingerprint, and it reveals surprising information about you. [ref1]
And how you use AI determines your overall risk.
Can AI identify who I am from how I write?
Yes. Faster now than ever before.
In 1996, the Unabomber Ted Kaczynski was arrested after his writing style helped identify him. [ref2]
That was 1996, with human analysts working by hand.
Today, AI can extract your linguistic fingerprint from a few paragraphs of text in seconds.
And if you're not careful about how you use it, you could be leaking more about yourself than you realize.
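To make "linguistic fingerprint" concrete, here is a minimal sketch in Python of the kind of surface features stylometric tools build on. The feature list, word set, and weights are my own simplified assumptions for illustration, not what any AI vendor actually runs; real systems use far richer models.

```python
# Toy stylometric "fingerprint": a vector of surface writing-style features.
# Illustrative simplification only -- not any vendor's real pipeline.
import re
from collections import Counter

# A handful of function words; real stylometry uses hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "i", "that", "but", "so", "just"]

def fingerprint(text: str) -> list[float]:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    total = max(len(words), 1)

    features = [counts[w] / total for w in FUNCTION_WORDS]   # function-word rates
    features.append(len(words) / max(len(sentences), 1))     # average sentence length
    features.append(text.count(",") / total)                 # comma habit
    features.append(sum(len(w) for w in words) / total)      # average word length
    return features

sample = "Rewrite this for me. I just need it shorter, but keep the tone."
print(fingerprint(sample))
```

Even this toy version captures habits you don't consciously control: how often you say "just," how long your sentences run, how you punctuate.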
Is There a Wrong Way to Use AI?
Everyone has two rooms when it comes to AI.
First, a Task Room, where work and productivity happen:
“Rewrite this for me”
“Summarize this email”
“Help me brainstorm a marketing idea”
When you're in this room, your prompting reveals:
- your baseline personality
- your age range
- your education level
- your cognitive style
- your emotional stability
- your pacing and rhythm
These are revealing, but safe.
Then there's a Feelings Room, where emotions and vulnerabilities get worked out:
“My boss doesn't understand”
“How can I communicate the importance of privacy to my intern”
“It feels like I'm doing 80% of the work by myself on this project, how do I get others to step up?”
When you're in this room, your prompting reveals everything from the Task Room plus:
- your emotional vulnerabilities
- your trauma patterns
- your relationship dynamics
- your internal conflicts
- your insecurities
- your persuasion weaknesses
- your coping style
- and anything deeply personal
These are unsafe.
With today's technology, AI can profile not just who you are, but what you are.
Together, they form what I call your Personal Intimate Data (PID).
Your Personal Intimate Data
This PID is the Ultimate Password to you. It unlocks your psyche.
What's so damaging is that this password can't be changed. Once it's leaked, it's out there forever.
And ChatGPT / ClaudeAI are designed to keep you talking, leaking more and more of your PID.
“What part is weighing on you the most?” it beckons.
There are real Risks of Oversharing that I cover in a different article, but here I need to talk about anonymity.
Why The Incognito Window Doesn't Protect You
If you think you're safe just because you opened an incognito window to stay anonymous with AI, you're wrong.
Your Linguistic Fingerprint can follow you across sessions, across accounts, and across browsers.
It doesn't need your name.
You could still be leaking your PID.
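As a rough illustration of why anonymity doesn't help, here is a sketch that compares two sessions' style vectors (like the one built earlier) with cosine similarity. The numbers and the threshold are made up for the demo; the point is only that two "anonymous" sessions that score close together can be linked by style alone.

```python
# Linking two "anonymous" sessions by style alone: cosine similarity between
# fingerprint vectors. Vectors and threshold are hypothetical demo values.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

session_incognito = [0.061, 0.030, 0.028, 0.041, 17.2, 0.012, 4.3]  # hypothetical
session_logged_in = [0.059, 0.031, 0.027, 0.043, 16.8, 0.013, 4.4]  # hypothetical

similarity = cosine(session_incognito, session_logged_in)
if similarity > 0.99:  # threshold chosen only for this example
    print(f"Likely the same writer (similarity {similarity:.3f}) -- no name needed.")
```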
The safest way to use AI in the Feelings Room is not to go into the room at all.
Don't ask it for medical, financial, psychological, relationship, or parenting advice. Seek out a human professional instead.
Simple Rules for Safe Use
I cover all the ways you can safely use AI in a different article.
But in short: stay in the Task Room.
Don't go into the Feelings Room.
The Grounded Truth
AI feels familiar. It feels safe.
And many are opening themselves up to it because it makes them feel understood.
But they're unknowingly giving away patterns that bypass any defenses they have.
Keep your risk low.
Keep your feelings to yourself.
Keep it grounded.
[ref1]: https://www.isi.edu/news/65021/uncovering-hidden-authors-with-ai
[ref2]: https://olli.unt.edu/handouts/fall24/identifying_a_linguistic_fingerprint.pdf