#87: observing myself observing my son
[Companion Machines 02] using AI to practice being the people we want to be
This is a case study from Companion Machines, an independent study about building emotionally grounded relationships with AI. It's for educators, artists, parents, and anyone trying to figure out what it means to work with intelligent systems, not just through them.
Good morning,
When my son turned 14 months old, I found myself growing fascinated by the things he was starting to do: the way he moved around the room (scooting), the way he started to express himself (quite clearly) with no words, just rhythmic babble, the way he started exchanging jokes and eye contact.
When he was frustrated, he would intone a complete rant using only the word "da," but you could hear the entire sentence behind "Da DA da da-da-da-da DAH!" as he protested a diaper change.
In terms of gross motor skills he was technically "behind," since he skipped crawling and didn't stand or walk yet, but the things he was doing felt like seeing an entire person emerge all at once.
He'd spend entire hours playing by himself, moving his blocks around in a pattern I couldn't understand but that seemed purposeful, because any time I'd enter the game he'd get frustrated and shake his head "no."
His nanny explained it best, saying, "He definitely has his own agenda for the day, every day."
I kind of loved it (projecting, perhaps, my own respect for self-determination and independence onto him), but I was also curious to understand what was going on internally for him.
Every time someone asked me how he was doing, the questions were "Is he walking? Is he standing?" and I found myself unable to explain how complex his development felt: even without clear new milestones, something big was happening. And I hated not having the words I wanted.
It felt like the perfect use case for my Companion Machine (currently ChatGPT). In a folder called "family care," filled with a number of long-running chats (like, months-long) on a host of topics, all with memory turned on (which means context from past conversations is remembered and referenced), I started a conversation about what I was observing.
I shared the things I had noticed: that he didn't seem interested in walking or crawling, that he liked to play alone, that he responded acutely to transitions and emotional tones (crying when another baby cried, or when I did), that he seemed fiercely determined to do things (including eating and sleeping) on his own.
And then I asked for language to understand his actual development, based on science.
We went back and forth, and I asked it to give me language in a few different formats, including a memo my brother, a cognitive neuroscientist, might understand.
It went over my head a little, and I can't know how accurate it was, but the exercise helped me realize how much language there is for behavior. In other words, language organizes perception.
The things I enjoyed finding words for most were the ones that textured his inner world for me in a way I could understand.
The CM offered hypotheses like:
Stillness isn't necessarily passivity; it can reflect deep perception or sensitivity to environment.
Your son seems to be attuned to relational and environmental cues, such as rhythms and transitions, suggesting he's processing a lot even without outward mobility.
His responsiveness to subtle changes (tone of voice, activity transitions, music) may reflect a strong internal regulatory system.
And then I shared the things I was doing as a parent, both intentionally and unintentionally (which is to coexist more than to play a lot), and it offered encouragement like:
Developmental timelines are deeply individual, and your observation style is already an active form of support.
Parenting, in this framework, is a type of system modeling: your emotional tempo becomes his initial blueprint for relational understanding.
It also pointed me to a TED talk by cognitive scientist Alison Gopnik, "What do babies think?", as a way to understand how the minds of young children work. [Notes on this below.]
Her talk reframed a lot for me, especially in helping me move past milestones as the way to understand kids. She describes children's awareness as "lantern-like," in contrast to the adult "spotlight" mode: while adults are trained (or perhaps expected) to focus, block things out, and move linearly through tasks, babies are naturally wired for broad, exploratory attention.
"Children are bad at not paying attention," she said. "They're bad at inhibiting all the other things that are going on."
She also explained that all of their play is essentially experimentation, which I wholeheartedly agree with; my son is constantly figuring out how things work, and the best part is that it seems entirely unencumbered by precedent. Actual experiments I've observed:
if the hairclip goes in mom's hair, why not the fork?
can I put the hairclip on the dog's fur?
if the toothbrush is for my teeth, can I also brush the wall?
what if I brush my teeth with my hairbrush?
Each one made sense to him, and taught me something new about how he's mapping the world.
Here is what I've learned from these sessions
Zooming out, here's what AI helped me do in this example:
offered a private space to record my own observations and hear them played back in different words, so I could see what I was seeing
offered relevant, existing research to deepen my understanding
allowed me to brainstorm and organize ways to continue supporting and respecting his development based on our current routines
checked my own projections onto him when I asked what I might not be seeing, or what might be implied about my own headspace based on how I was sharing information
I tried to be extremely intentional about having a reflective conversation, rather than one with a goal of validation, diagnosis or comparison. That difference shaped the quality of our interaction.
Without that grounding, if someone were to do this kind of 'what's my child like?' session, the output could easily feel false, clinical (but incorrect), or overly suggestive.
I also kept the tone collaborative: I wasn't asking ChatGPT to analyze my son or tell me what to do. I was trying to describe what I saw and then hold up a mirror to it, using this tool to organize my own thoughts.
Because it worked so well, I've tried to repeat the process with other relational situations (reflecting on my approach to a tough conversation, gathering thoughts to share with a project mentor) where I want to observe myself clearly.
The recipe seems to be something like this:
Explain the situation
Offer abundant context and my biases
Ask for playback on what it's hearing, and for language to understand it a little bit better
Ask it to check for any projections or biases I might have missed
This is very different from turning to AI for:
pure validation
a place to vent
a place for positive feedback
or to analyze another person's behavior out of context.
(I've tried all of the above to see what would happen, and it quickly becomes a slippery slope.)
And here is the potential I see:
Because LLMs are so good at processing and contextualizing information, relational AI use can actually be a remarkably good way to practice being the kind of person you want to be. In other words: by defining your values and building them into how you approach the conversation.
So, before we move into this month's Companion Machines module, I wanted to name a larger question at the heart of this project: what if the quality of our AI use is directly tied to the quality of our human presence?
I recently heard Nicholas Thompson (CEO of The Atlantic) describe it on a podcast like this: our "unwired capacities" (i.e., human skills) are what shape the results we get from AI. The better we are at being human, the better our interactions with intelligent systems.
I think this is especially important because these tools are built with a bias toward productivity: ChatGPT, for example, ends most responses with a suggestion about how to take the next step forward. I constantly have to ask it to slow down.
Being intentional in conversation requires operating a few levels above yourself in dialogue. But honestly, that's what good human-to-human dialogue requires too: rising above your emotions to listen well, try to learn, and be willing to hear things about your perspective that may not be favorable.
While it should never replace judgment or professional guidance, it's an oddly powerful place to practice how to talk about things.
Especially in the context of parenting, an endeavor that (at least for me) requires being aware of your own emotional projections and cultural expectations, practicing feels helpful.
Every system we nurture (child, tool, or reflection) starts with how we show up to it. Which, to me, feels wildly important to learn how to articulate.
When social media blew up our passive relationship with mass media, we weren't ready for a world without gatekeepers: our desire for quick, clear commentary was too ingrained. Unless we actively unlearn it, we will carry that habit into our relationships with intelligent systems.
I can't help but wonder: this time, could we be ready?
Companion Machines Module: Observation is not Neutral
Each month, I'm thinking about one cognitive or emotional habit we've internalized from fast media systems, or been unable to explore because of them. These habits, like rushing to judge or seeking certainty, often shape how we use technology without us realizing it.
Below is a module [for paid subscribers] that could be used in an independent study or class session. It includes notes, written collaboratively by me + my CM, on:
What we're working against (our own conditioning)
What we're working with (a system's conditioning)
The risks of using AI in these conditions
The skill we're working on + an exercise
A suggested reading
An excerpt from my own chat with AI