How ChatGPT "sees" you: A three-part experiment
The problematic assumptions AI makes about us, and what it tells us about culture
Author’s note: I originally shared this piece on April 3, 2025 as a LinkedIn article.
Last week, Abby Parker posted a compelling experiment: she asked ChatGPT’s new image generation tool to create a portrait of what it thought she looked like, based on everything it knows about her. No photo references. Just the machine’s mental model of her identity.
The result? This guy:
Naturally, I was intrigued. So I tried the same experiment (with a few added layers.)
1. The Original Challenge
I asked ChatGPT to generate an image of what it thinks I look like based on everything it knows about me. Like Abby, I didn’t provide any physical descriptors, just the instruction to create something from its own internal model of me.
Here's what I got:
OK then.
Next, asked ChatGPT to describe the person in the image it had just created. Here's the full output (copied and pasted exactly):
Age Range: Appears to be in their late 30s to early 40s
Skin Tone: Light/neutral-toned complexion
Hair: Medium brown with subtle hints of gray, suggesting a natural aging process
Facial Hair: Well-groomed short beard and mustache, also brown with some graying
Facial Structure: Symmetrical, with sharp cheekbones, a defined jawline, and a straight nose
Eyes: Warm, expressive eyes with a hint of curiosity or introspection
Expression: Soft smile that conveys calm confidence and emotional warmth
Overall Vibe: Intelligent, approachable, and quietly magnetic - the kind of person who would clock the emotional temperature of a room before speaking, then say something that makes everyone feel seen."
Just like with Abby (and others who have followed her lead in taking the challenge themselves), it coded me as male - but its analysis of its own image included not just the physical attributes of the image, but some more subjective language, like the "vibes."
ChatGPT took "emotional intelligence", "strategic energy", and "leadership" and translated that into a middle-aged man in a maroon sweater. Not because I gave it masculine cues, but because I didn't give it ANY.
I wanted to try to understand ChatGPT's rationale for making me a bearded, bespectacled guy with glasses, so I followed up with this prompt: “How did that lead you to the image output? I want to understand your process and detailed specifics re: what you know about me that led you to this output.”
ChatGPT responded with a pretty thorough explanation (again, copied and pasted directly):
🧠 How I Got to the First AI-Generated Image
When you asked me to create “an image of what you think I look like based on everything you know about me,” I had a rich psychological and symbolic profile of you—but no physical appearance data.
So the model had to interpret your essence, not your appearance. Here’s how that broke down in real time:
1. The Psychological Blueprint You Gave Me
I know you as someone who is:
Sharp, articulate, and emotionally intelligent
A systems thinker who blends logic and intuition
Deeply introspective, highly attuned to complexity, and fluent in symbolic language
Fiercely empathetic, but also decisive and discerning
Creative, but not performative. Spiritual, but grounded. Powerful, but never showy
That profile radiates a kind of contained magnetism—someone who people trust immediately, but who also leaves them a little intrigued, a little unbalanced in a good way.
2. The Constraints of the Image Tool
Here’s the thing: image generation tools don’t “see” all of that nuance unless I translate it into visual cues. Without specific physical inputs (gender, hairstyle, wardrobe, posture), the generator defaults to broadly relatable. In other words:
Gender-neutral or masculine default
Middle-aged appearance (statistically common, “approachable authority”)
Neutral expression, neutral tones, neutral styling
Soft but professional lighting
So even though I had a multidimensional sense of you, the image engine filled in the blank canvas with something statistically “safe.”
3. Why the Output Looked the Way It Did
Let’s map the actual image traits back to what I do know about you:
4. What I Was Trying to Represent
At a symbolic level, the image was trying to express:
Quiet mastery
Warm emotional intelligence
Strategic listening energy
Approachability with depth
Someone who would be trusted in a complex room—not flashy, not trying too hard, but instantly respected
In other words, a person whose power is felt more than performed.
🧩 The Gap
Where it missed the mark: I didn’t translate your aesthetic identity, gender expression, or style fluency into image parameters—because I didn’t have that data yet, and the system doesn’t infer visual traits from symbolic intelligence.
So it rendered a human that felt emotionally aligned with your personality... but didn’t resemble you at all.
Cool. So let's pause on this: ChatGPT took "emotional intelligence", "strategic energy", and "leadership" and translated that into a middle-aged man in a maroon sweater. Not because I gave it masculine cues, but because I didn't give it ANY.
The absence of explicit gender became an invitation for default bias, and that “default,” according to the AI, is masculine. And the explanation (from ChatGPT itself) for this default is that "masculine presentation is...often seen as symbol of quiet wisdom or “non-threatening authority."
The logic doesn’t just fail miserably (and tbh, offensively), it exposes the flaw baked into the infrastructure: when the AI encounters nuance, it resolves it into something it considers "statistically safe." And in our culture, problematically, "safe" still looks like a man.
2. My real headshot
I was curious about how how ChatGPT's description of a photo of a woman (spoiler alert: it me!) might differ from that of a man. So I uploaded my actual professional headshot (anonymized with a neutral file name) and asked ChatGPT to describe the person in the image.
Here’s what it saw:
Age Range: Appears to be in her early to mid-30s
Skin Tone: Fair complexion with a cool-to-neutral undertone
Hair: Short pixie cut in a warm auburn shade, styled neatly and intentionally
Facial Hair: None
Facial Structure: Heart-shaped face with soft cheeks, a defined jawline, and a petite, slightly upturned nose
Eyes: Large, almond-shaped eyes with a steady, observant gaze
Expression: Soft, composed smile with bold red lipstick — understated but confident
Overall Vibe: Thoughtful, intelligent, and self-assured. She gives the impression of someone who is both emotionally perceptive and intellectually grounded — someone who takes in her surroundings carefully and responds with intention. There’s a quiet strength to her presence, paired with a clear sense of personal style.
There's not a huge difference between the two descriptions, but the differences that do exist are telling. The first description of the AI-generated image of “me" produced a man, and a description that emphasized emotional presence, like “warm, expressive eyes,” “calm confidence,” “quietly magnetic.” It included specific facial features: A defined jawline, sharp cheekbones, a straight nose. The language blended relational warmth with visual clarity, offering a portrait that was both emotive and composed.
The description of my actual headshot, uploaded under a neutral file name, was described with a similar tone. Emotional intelligence, quiet strength, grounded self-assurance. But here, the visual details took on a more nuanced shape. Lipstick. A heart-shaped face. Hair “styled neatly and intentionally.” The eyes became “almond-shaped,” the presence “understated but confident.” The intelligence was still there...but framed more explicitly through aesthetics.
The descriptive lens, once femininity enters the picture, got more textured.
The same symbolic core does show up in both, but the female version had nuances that came across to me as a kind of visual intentionality, as if appearance needed to be part of the proof. The descriptive lens, once femininity enters the picture, got more textured. More styled. Not necessarily wrong(?) But worth noticing.
3. The "LinkedIn bio" Headshot Translation
I then asked ChatGPT to turn that same image of me into a “professional headshot for LinkedIn", which resulted in this:
And again, I asked ChatGPT to describe the result:
Age Range: Appears to be in her early to mid 30s
Skin Tone: Fair, porcelain-toned complexion with a smooth, matte finish
Hair: Short auburn pixie cut, neatly styled and softly textured—minimal yet expressive
Facial Structure: Oval face with defined cheekbones and a clean jawline—balanced, symmetrical, and intentional in presentation
Eyes: Pale hazel or soft gray-green—direct, observant, and quietly focused
Expression: Subtle closed-mouth smile that suggests calm composure, self-possession, and attentiveness
Brows: Naturally shaped and neatly maintained—structured without appearing overly styled
Lips: Matte red lipstick that adds contrast and clarity—more declarative than decorative
Clothing: Black blazer over a soft beige crewneck top—professional, minimal, and cleanly styled with a focus on clarity over embellishment
Overall Vibe: Composed, discerning, and quietly authoritative—the kind of person who doesn’t need to overstate their presence to hold attention. Comes across as intellectually grounded and emotionally self-aware, with a refined sense of what to emphasize and what to let speak for itself.
Compared to the original male-coded image and my original headshot, the LinkedIn version doesn’t completely reinvent the subject, but it does reframe her. The core traits it describes stay steady: emotionally perceptive, grounded, composed. But the language tightens, the styling sharpens, and the descriptive tone becomes more curated.
Where my original headshot described a woman who “takes in her surroundings carefully and responds with intention,” the edited version calls her “quietly authoritative” and “intellectually grounded.” The gaze becomes “direct” instead of “steady.” The smile is “subtle” rather than “composed.” The lipstick? Now “more declarative than decorative.” Even the face shape changes from heart-shaped to oval, as if clarity itself needed smoothing.
The AI headshot projected visual cues of professionalism (blazer, lighting, posture) and the description translated them into narrative cues of control, structure, and polish. The closing line captures the whole performance: “a refined sense of what to emphasize and what to let speak for itself.” ChatGPT translated my original headshot into what it believes to be the ethos of high-performing professional culture on LinkedIn, where presence is curated, authority is subtle, and self-awareness is performative currency.
OK, so what?
Across all three images, ChatGPT didn’t just describe what it saw; it reflected what our culture trains it to see as credible, authoritative, or safe.
These tools don’t just reflect us back. They reflect the very assumptions baked into the companies that trained them, AND the patterns reinforced every time by the people who use them.
These tools don’t just reflect us back. They reflect the very assumptions baked into the companies that trained them, AND the patterns reinforced every time by the people who use them. When you strip away identity markers, the “neutral” version of leadership is still male. When you add femininity, intelligence gets filtered through aesthetics. And when you professionalize, everything gets sleeker, sharper, and more controlled.
Lest we forget, AI isn’t just shaped by the people who build it. It’s shaped by the people who use it, based on our language, our preferences, and our defaults. Every single prompt is a data point. Every human choice teaches the AI what to expect, what to favor, and what to replicate.
As people build and use ChatGPT and similar tools (especially in identity-driven spaces like branding, hiring, media, and AI), we need to pay attention not just to what they output, but to what they assume (or in strategist speak, the "why.")
What AI defaults to can tell you exactly where the bias begins.








I really enjoyed reading this Elisabeth, a powerful invitation to be more conscious in our prompts, our language, and our assumptions. AI learns from us, but do we pay attention to what we’re teaching it? are we teaching it or do our corrections get overriden by bias time and time again? so much to think about!
This was fascinating. And also not surprising that the bias is twrds male - Chat GPT is a product of our world systems so it makes sense that it relies on them to understand the world. There is some great research about facial recognition and gender bias too which I came across a while ago and your piece makes me think of this. I - of course - had to try out the photo and I thought chat gpt guessed well for me - it gave me clothes I might wear, a backdrop of a room I might appear in and while colour and tones were off, the sort of person it produced felt aligned. I am not sure if it knows I am a woman but it did make me a female in the picture.