Good morning,
My mind is on AI this week because on November 7 I’ll be moderating a panel on AI and the future of work/education with a stellar line-up: tech journalists Kevin Roose & Casey Newton from the NYT Hard Fork podcast and data scientist Hilary Mason of Hidden Door.
There are tons of interesting developments and questions to discuss, but as I’ve been thinking through my own experience as a witness to the advent of AI (full disclosure: I live in SF and my spouse works in the belly of the beast), I can’t help but start with the feelings I’ve had about it all.
The way we process new information is inevitably colored by our illogical, under-informed, self-protective mechanisms because we are and have always been… creatures hoping to survive. For this reason, I think addressing emotions first is the only way to make sure emotions aren’t ruling decision-making or learning processes. (related from the archive: bodies + information)
So let’s talk about the feelings of it all.
Step 1: Unpacking Emotions
When you see something big coming that you can’t quite grasp the magnitude or repercussions of, the reaction can range from thrilling (for those of us who live for risk and novelty and speed) to terrifying (for the risk-averse turtles among us; I am one).
In the case of AI, I’ve experienced more of a wild mix of multitudes than anything else. In an attempt to unpack them in an organized way, let’s consider 6 core emotions, which I often find to be the best starting place whenever I’m trying to unpack what I feel. (I have no background in psychology but I like these 6 to keep it simple.)
Literally, I list them all then ask if they are relevant and why.
Anger: This is an innovation that just feels too big to fathom. I don’t like that feeling. It actually makes me mad and irritable. I like to understand what’s coming at me, because that’s how I know to keep myself safe. I also sometimes hate that something so impactful is in the hands of people who want to make money. I get that capitalism drives innovation and I’m living a very comfortable life full of products and services that have exponentially improved my days. But I also know that there are serious side effects to those very same innovations that are not good for my health or for whole groups of people. At scale, it feels like I need to trust God or humans with God-like power on this one, and I’m not down. Is there another option?
Disgust: When I see news like this heartbreaking story, I just feel disgust. Why do we need chatbots for companionship? Who thought that was a good idea? I also feel disgust in tiny moments, like when someone says “let’s ask ChatGPT that” in the middle of a conversation when we would normally Google. Google never made me feel insecure, competitive or threatened. It was a tool at my disposal that, if I got good at using it, empowered me. I don’t know why that made me happy. But it did. Maybe because it was very clear how it worked? ChatGPT feels like a smarter entity than me and I don’t know how it’s thinking or when it’s wrong. It’s a threat to the process I love most: information gathering and synthesis. I loathe threats.
Fear: I have two. 1) If I don’t keep up with these advancements, will they just happen TO me? Is there a way to exert intention and control toward a technology that has shown up in the bottom right-hand corner of every internet tool I currently use, offering magic? Can I overcome my desire for magic or do I just soak it up? 2) What if a loved one, what if my child, develops a very wrong, scary and lonely worldview because they engage with intelligence that isn’t rooted in a body? Does intelligence outside the body have any warning systems that come from a desire to be social and live like we do? As faulty and divisive as those systems can make us, I trust beings that are just trying to live and just trying to live with their people.
Happiness: There are a lot of problems that can and will be solved quickly with the help of AI. It’s very exciting. I’m grateful to witness these changes, to see science advance, and to see people who haven’t had the time to invest in certain skills finally get to level the playing field a bit and show up in rooms previously locked by access to education, thanks to the help of smart tools. I am happy that we are seeing progress like never before in so many fields, especially health and science. That kind of progress gets obscured by the news cycle, but from a historical point of view, it is truly encouraging.
Sadness: A friend once made a comment to me about self-driving cars (I live in a city full of them) that has stayed with me. “It just feels so off that we will reach a point where we don’t have to actively make the choice to keep other people on the road safe on a moment-to-moment basis.” (Objectively speaking, it’s likely that self-driving cars will actually be safer than distracted human drivers.) But still, the farther you feel from people, the less responsible you feel for them. The more automated our protective mechanisms become… I worry that something will get lost in our humanity. It truly makes me sad.
Surprise: That magic moment when the AI feature of a creative platform serves you exactly what you need? When the brainstorming session is complete in seconds? When the thing you were trying to articulate but couldn’t quite do is offered to you on an eloquent platter? It’s amazing. It’s thrilling. I love it. It feels like literal magic. It pulls me onto a long, winding, giddy road of “Oooh, what else can we try to do?!”
As you can see, SO MANY FEELINGS. All at once.
But I think it’s important to honor them and then let those feelings get out of the way in order to actually start to understand what’s going on from a rational perspective.
Because even when rational, the hugeness can limit your view, as Michell Zappa describes well in his recent notes from TedAI Vienna:
Imagine a group of blind people encountering an elephant.
Each person reaches a different part of the animal and describes what they are feeling. The one holding the leg senses something like a tree. The one grabbing the trunk feels something like a snake, and so on.
Depending on who you ask, you will receive a different response about what they are witnessing. And despite their distinct accounts, they are each correct in their individual assessment of the situation.
How do you know who to listen to?
This metaphor helps illustrate what it is like trying to wrap your head around AI.
Developing at the intersection of numerous fields, AI is the culmination of computer science, mathematics and statistics, but also linguistics, ethics, psychology and many more areas of research. Countless experts are collaborating on furthering AI, and we can safely claim that nobody understands the whole thing.
So, for a non-expert not working in the field, feelings worked out, where do we start?
Step 2: Clarifying Intent
For me, rationality is underpinned by intent. And my ultimate intention is to care well for myself and others. So, from that lens, where do I start digging in?
Here are some of my guiding questions, à la journalism as a personal practice:
Do I get it?
Or am I amassing assumptions because I don’t actually get it? If so, what do I want to understand?
Am I clear about what categories of things we can currently do with AI? Not just what random people in my filter bubble are saying but like, what are the broad categories?
Do I have a sense of what advancements are on the horizon? And do I know where to follow them?
Do I know where the big money is going and why? Do I get how that will accelerate certain timelines over others?
Do I understand the risks at hand? Not based on one-off stories that make headlines, but actually: what are the smartest people spending their time mitigating?
Do I understand the potential at hand? The news is so much about what’s been done, what’s already happened. But what is about to happen that the public can’t even fathom yet? What ailments that we’ve suffered from for so long are about to disappear?
What’s coming for me and how do I prepare?
Questions about the future untethered to my daily life can be too overwhelming and existential and I honestly don’t have capacity to think that big. Knowing that is important in a world of takes on AI and everything else. Tied to daily life, how can I explore things like:
What types of invisible labor can I totally get rid of with the help of tools to free up my time for other things?
How can I augment my creative and work process?
How should I introduce AI to my child? Where are the kids’ resources?
How can I refresh my media diet so that I’m better informed about my questions above and am not just chatting about the perils or drama of the latest news story?
At the moment, I very much don’t get it but I do have the following principles for inquiry that I am trying to live by, in order to not get swept up by the magic or hyperbole.
Journey notes to self
Principle 1: Enjoy magic but don’t blindly trust it or pass it along.
The other day in the car, we drove by a building that shared a name with several other institutions in my city—Koret. I asked my husband if he knew who Koret was. My husband asked ChatGPT. We chatted back and forth with ChatGPT as it walked us through the Koret family, its history, business, fortune and philanthropy. It felt too easy. We stayed engaged with the topic much longer than we would have with Google. But did I really know if all the facts were correct? No. Did it mostly sound right? Yes. Will I pass along the information I learned blindly? No.
This one was filed under “interesting but not fact-checked, so not going to make it a party fact.” If I were asking about something more consequential or for distribution, I would fact-check. Period.
Principle 2: For every encounter with cognitive power that feels bigger than mine, think about an aspect of my own humanity that I want to hold onto.
Have you ever been in a room with someone who just feels so much smarter than you that you kind of can’t follow what they are saying or how fast they are saying it? Like, how do they think so fast or retain so much?! It’s irritating and also kind of inspiring.
Instead of shrinking or judging, I’ve learned to engage with those moments by remembering what I can offer that’s uniquely… me. Often, that’s curiosity. I see it as my superpower because I never get bored of asking more questions. What’s your superpower? Hold onto it.
Principle 3: Get your hands dirty, but not just in “cheat” moments.
If there is one thing I’ve learned about AI so far, it’s that it’s a very nascent technology: there is a lot of jargon being thrown around, and there are huge divisions among the experts themselves about how to evaluate and assess what’s happening. Things also move fast. What was new or true two months ago isn’t necessarily the case anymore.
For those reasons, rather than trying to take in an abstract, evolving landscape all at once, it’s better to play with actual tools to assess for myself what I think is happening. I’ve loved playing with AI for image generation, writing, decision-making, genre translation, brainstorming, imaginative play, quick research and synthesis. I haven’t loved it for fact-finding or engagement. There’s so much more to explore.
I also think it’s important to explore for exploration’s sake, not just when you have something due and you want to see if ChatGPT can help you real quick.
Principle 4: Know/serve yourself better than a tool can claim to know/serve you.
I think there’s a lot we can learn from the food industry on this one—either you’re an easy marketing target for health products or you’re a person who understands the basics of biology, digestion, metabolism and your own needs and can therefore make educated choices. The latter is what I hope my kids will learn to do well when it comes to both food and media/technology.
All that said, I’m trying to explore a lot of the above in a constellation, learning from people who have a broader vantage point than mine. That, and playing! After the event I’ll also report back on what stands out.
In the meantime I’ll leave you with this bit from a recent Hard Fork episode in which Kevin and Casey discuss a point in an essay by Anthropic’s chief executive, Dario Amodei, about the importance of both pessimism and optimism toward AI. They say:
It’s important to have both. You can’t just be going around warning about the doom all the time, you also have to have a positive vision for the future of AI because not only is that what inspires and motivates people—it matters what we do. I thought that was actually the most important thing he did in this essay. He basically said, look, this could go well or this could go badly. And whether it goes well or badly is up to us. It’s not some inevitable force. Sometimes people in the AI industry have this habit of talking about AI as this disembodied force, that it’s just going to happen to us, inevitably. And we either have to get on the train or get run over by the train.
They are referring to AI-CEO-speak, but I think the same idea applies to us as users. It matters how we engage. It matters how [emotionally] sober we are when we do so. And it starts by rooting ourselves in a little bit of self-clarity and rational curiosity.
Here are tickets for those who are local. (Plus it’s at my alma mater, which has a very lofty mission, so it’ll be a special conversation!)
Happy Monday,
Jihii
P.S. In the spirit of play: my partner put this newsletter into NotebookLM and turned it into an AI-generated podcast of two people talking through the main points (extremely strange experience but they weirdly got the key points mostly right). Listen here: