Human Centered

Developing AI Like Raising Kids - Alison Gopnik & Ted Chiang

Episode Summary

Should we care for machines the way we do for children? The question helps animate this fascinating conversation between renowned psychologist Alison Gopnik, a former CASBS fellow and current leader of a CASBS project on "The Social Science of Caregiving," and acclaimed science fiction author Ted Chiang.

Episode Notes

This episode is produced in association with the CASBS project "The Social Science of Caregiving," and draws further inspiration from the CASBS project "Imagining Adaptive Societies." Learn more about both:
https://casbs.stanford.edu/programs/projects/social-science-caregiving
https://casbs.stanford.edu/programs/projects/imagining-adaptive-societies

CASBS program director Zachary Ugolnik served as co-producer of this episode.

Ted Chiang on Wikipedia: https://en.wikipedia.org/wiki/Ted_Chiang

Ted Chiang in The New Yorker
"Why Computers Won't Make Themselves Smarter"  https://www.newyorker.com/culture/annals-of-inquiry/why-computers-wont-make-themselves-smarter
"ChatGPT is a Blurry JPEG of the Web" https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
"Will A.I. Become the New McKinsey?" https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey
"Ted Chiang's Soulful Science Fiction"  https://www.newyorker.com/culture/persons-of-interest/ted-chiangs-soulful-science-fiction

Explore the work of Alison Gopnik
http://alisongopnik.com/
http://www.gopniklab.berkeley.edu/alison
https://en.wikipedia.org/wiki/Alison_Gopnik
https://www.ted.com/talks/alison_gopnik_what_do_babies_think
https://www.nytimes.com/2021/04/16/podcasts/ezra-klein-podcast-alison-gopnik-transcript.html

Learn about CASBS

website | Twitter | YouTube | LinkedIn | podcast | latest newsletter | signup | outreach

Follow the CASBS webcast series, Social Science for a World in Crisis

Episode Transcription

Narrator: From the Center for Advanced Study in the Behavioral Sciences at Stanford University, this is Human Centered.

Narrator: Cognitive psychologist Alison Gopnik has argued that we need to develop artificial intelligences the way we raise our children, by reinforcing our values during their development. Award-winning science fiction author Ted Chiang agrees. His novella, The Lifecycle of Software Objects, considers what it might look like to care for AIs as though they were growing sentient beings, and what it could mean to be a parent to them.

In this episode of Human Centered, we bring these two minds together for a conversation exploring how we care for each other, and what that might teach us about how we care for thinking machines. Ted Chiang is one of America's most celebrated contemporary science fiction authors, known for works such as Tower of Babylon, Hell is the Absence of God, The Merchant and the Alchemist's Gate, Exhalation, and Story of Your Life, which served as the basis for the 2016 film Arrival. His work has earned numerous literary honors, including four Nebula Awards, four Hugo Awards, and six Locus Awards.

In recent years, Chiang has garnered attention for non-fiction writing as well, perhaps most notably in The New Yorker with his essays Why Computers Won't Make Themselves Smarter, ChatGPT is a Blurry JPEG of the Web, and Will AI Become the New McKinsey? We'll drop links to those in the episode notes. Alison Gopnik is a distinguished Professor of Psychology and Affiliate Professor of Philosophy at the University of California, Berkeley.

She currently serves as President of the Association for Psychological Science and is the author of acclaimed books, including The Gardener and the Carpenter: What the New Science of Child Development Tells Us About the Relationship Between Parents and Children; The Philosophical Baby: What Children's Minds Tell Us About Truth, Love, and the Meaning of Life; and The Scientist in the Crib: What Early Learning Tells Us About the Mind. Gopnik is very much a public intellectual as well, from TED Talks to appearances on popular podcasts such as Hidden Brain and The Ezra Klein Show, and writing for The New York Times, The Atlantic, Slate, and, for several years, the Mind and Matter column in The Wall Street Journal. We'll drop links to those in the episode notes as well.

She was a 2003-04 CASBS fellow and remains connected with the Center through one of two CASBS projects motivating her pairing with Ted Chiang today. She's the leader of the Center's project on The Social Science of Caregiving, which interrogates how we align our familial values with an extended community, as well as with the design of new technologies, including AI. The other CASBS project inspiring today's conversation is called Imagining Adaptive Societies, which uses speculative fiction to help us imagine societies capable of responding to the major challenges of our age.

We'll link to both CASBS projects in the episode notes if you want to learn more. But for now, let's get to the conversation between Alison Gopnik and Ted Chiang.

Alison Gopnik: Well, I am so delighted to be part of this conversation with Ted Chiang. A little of the background is that when I started doing work looking at AI and trying to use children's learning as a model for AI, I had about five different people who sent me copies of Ted's amazing novella, The Lifecycle of Software Objects, including Ezra Klein when I was on his show. And when I read it, I was so blown away by the fact that not only was this a wonderful novella about AI, but it was also the best account of being a parent and a caregiver, or the best literary account of that, that I had ever read.

It's interesting that there aren't more conversations about caregiving in the literary tradition, and we could think about some of those reasons, but this seemed to me to be the most profound one and the one that captured caregiving best. And it may just be that that's part of speculative fiction, is that by imagining an alternative context in which you had to take care of sentient beings that depended on you, you could understand and think about it more deeply than we do when we're in the trenches of actually taking care of real children who are the ones that we are really taking care of and really raising. And in fact, I spontaneously did something I had never done before in my entire life, which was write a fan letter to Ted about how impressed I was and how much I love the stories and the work.

And in general, Ted's work has been very interactive with the kinds of ideas that we have at CASBS. It's a wonderful example of science and humanities speaking to one another in this literary way. And as it turns out, there's actually two projects at CASBS happening right at this moment that Ted's work speaks to really beautifully, one of which is the Imagining Adaptive Societies project, which is taking speculative fiction and the strengths of speculative fiction and connecting it to the kind of speculative fiction that comes when we're thinking about how to make a better society or a better world.

And the other is the project that I've been working on on the social science of caregiving, trying to think about caregiving, not just in the narrow context of the parenting blogs, but trying to think about caregiving much more generally as a really essential part of human nature, as involving not just children, but the ill, the elderly, and also not just being about humans, but involving the way that we feel about the natural world, the way that we feel about artificial intelligences. I think the case of artificial intelligence has made the significance and importance of caregiving particularly vivid, and the life cycle of software objects is the best example I know of why caregiving should be so important if we were going to actually have genuine artificial intelligence. Let me start out with the caregiving part about caring for machines and caring for humans.

Lifecycle is such a wonderful story because it raises the issue of how do you put yourself at the service of another being and also allow that being to have autonomy. I think if we're ever going to have artificially intelligent systems, that's going to be a really basic problem that we have to solve. It's a problem we also have to solve every time we have a new generation of humans.

You articulate that really beautifully in that story, and I was wondering if you have sort of further thoughts as you've watched what's happened in AI, and about care in general.

Ted Chiang: I guess I always feel like I need to preface any conversation about AI with a clarification about what exactly we're talking about. Because the phrase artificial intelligence is used to refer to widely disparate things, because sometimes it's used to refer to the hypothetical thinking machines and sometimes it's used to refer to basically applied statistics. So there's this unfortunate tendency to sort of conflate the two.

And yes, I always want to make sure that we are clear about what we are talking about. In terms of the machine learning programs or robots that we have now, I basically think of them as being comparable to thermostats. A thermostat can be said to have a goal, but I don't think it would be fair to say that it has any preferences.

It has no subjective experience. And you can imagine a machine learning program that you have to train to maintain the temperature of a house. In a certain sense, yeah, you are training this program, but you are basically interacting with a thermostat.

And that is, I think, the situation that we are in with the existing technology.

Alison Gopnik: I completely agree. I mean, one of the things that I've always said is if we said what we're studying is extraction of statistical patterns from large amounts of data instead of artificial intelligence, we would be describing what we were doing much more accurately, but we would be much less likely to have a broad public relations reach.

Ted Chiang: Yes, and of course, companies benefit from this conflation. They always use the phrase artificial intelligence because they want to imply that there's some great thinking machine at work when the product they're selling is just applied statistics. But then if we're talking about this more hypothetical idea of machines that have subjective experience.

So let's imagine that we had a machine that had the same or comparable level of subjective experience to say a dog. And you can train a dog to do useful work and you can train it using punishments or rewards. And I think the evidence suggests that you will get better results if you use rewards to train it.

So that is a purely sort of pragmatic reason to treat your working dog well. But there are also, of course, ethical reasons for treating your dog well, because your dog has a subjective experience. And if we posit that at some point we will have machines that have subjective experience, then I think we will have the same ethical imperative to treat them well.

And in that context, if we extrapolate further and imagine machines whose subjective experience is getting closer to that of human beings, then all of the ethical dimensions become more complicated. Because one of the differences between a child and a dog is that a dog will never become a person, whereas a child eventually will become an adult and will have enormous autonomy and responsibilities and an entirely different level of agency than a dog ever will. And in that scenario, if you are raising or training this machine that might eventually become an autonomous moral agent, then, yes, you have pretty much the same obligations.

You have to wrestle with the same questions that all parents do. One of the guiding questions for me when I was writing Lifecycle of Software Objects was the question of how do you make a person? At some level, it seems like a kind of a simple thing, but the more you think about it, you realize it is the hardest job in the world.

It is in some ways maybe the job that requires the most wrestling with the most difficult ethical questions. The fact that so many people do that, they raise children, it makes it very easy to devalue that. We tend to congratulate people who have written a novel or something like that because relatively few people write novels.

A lot of people have children, a lot of people raise children to adulthood, and what they have accomplished is something incredible.

Alison Gopnik: I mean, I think just in terms of the sort of cognitive difficulty level, right, I mean, one of the things that we've been thinking about in the context of the social science group is that the very structure of what it means to raise a person, right, is so different from the structure of almost everything else that we do, right? So usually what we do is we have some set of goals, we produce a bunch of actions, insofar as our actions lead to our goals, we think that we've been successful, and insofar as they don't, we don't. But of course, if you're trying to create a person, exactly the point is that you're not trying to achieve your goals, you're trying to give them autonomy and resources that will let them achieve their own goals and even let them formulate their own goals.

I mean, one way you can think about it is if you think about the kind of classic structure of economics or utility theory, which is, you're an agent and you're trying to accomplish your goals, and here's another agent who's trying to accomplish his goals, and you have a social contract where you exchange what you do, that's totally different from what happens in a caregiving situation. A caregiving situation is one where you have one agent who has power and authority and resources, and instead of pursuing their own goals, they pursue the goals of this other agent, and even more than that, they let that agent formulate their own goals, figure out what it is that they want to do themselves, as indeed the digients end up having to do in the story. And I don't think we have a very good sense in politics or economics or psychology of how it's possible to do that or how we actually do do that.

Ted Chiang: Well, okay, so I guess there are a couple of things that I think of with regard to this. One is there are parents who have very specific goals in mind for their children. They want their children to turn out a certain way, and they will do everything they can to ensure that their children turn out that way.

And they believe that they are doing what is best for their children, but they are in a lot of ways robbing their children of autonomy. And it's a very difficult thing for parents to let go of that because they firmly believe that they are doing what is best for their children. And I guess I also have to acknowledge that, of course, I'm talking about a sort of contemporary view of parenting, and I think for much of history, there were different standards for what it meant to be a good parent.

And nowadays, we have this view that being a good parent is letting your child become what they want to become and helping them become what they want to become rather than bending them to what you want them to become. And in a way, I feel like this is kind of analogous to, I guess, I think what Kant said about treating people as either means or as ends. It's not exactly the same, but there's something analogous there where if you think of your child as perhaps a means toward your goals versus thinking of them as an end, which they are, you know, they are their own subject, and yet they're not there to help you.

You know, that's a struggle. That's something that people sort of have to wrestle with as part of existing in society. But the tension is, I think, much more emotional when it comes to the parent-child relationship.

Alison Gopnik: See, I think part of it is that in the kind of classic, in the Kantian or utilitarian or most views, the picture is, all right, you've got these two autonomous agencies, these two beings, both of whom can be out in the world doing things. And the question is, how do they negotiate their relationships? That's sort of the picture.

But the thing about caregiving, and again, the digients are such a lovely example of that, is this incredible asymmetry between the abilities and resources that the carer has versus, it's an interesting paradox, right? There's one asymmetry, which is the carer has much more power than the creature that's cared for. So in Lifecycle, the humans could just end the program at any moment, right?

There's nothing to stop them from doing this. They're the ones who actually have the power. And yet, in caregiving relationships, that powerlessness of the cared for is exactly the thing that motivates the carers to make these remarkable investments, make these remarkable altruistic sacrifices.

Again, as the human parents do in the story, precisely because the other agent is powerless, precisely because the other agent needs them, needs their resources. And I think that's a really interesting and very human set of relationships that we haven't thought about as much. In terms of if we had something that was real artificial intelligence, not a statistical pattern extraction from large data sets, even a simple system, and you mentioned this in one of your questions, you'd sort of have to say, well, look, if it was actually going to be intelligent, one of the signs of that would be to be able to formulate new goals, formulate new intentions that aren't the ones that were just programmed into it.

And it seems to me like as soon as you have a system that's like that, this issue about how do you balance means and ends and how do you balance autonomy and care is going to be relevant, it's going to come up.

Ted Chiang: You were talking about the asymmetry of the parental relationship with children. And that is something that I find philosophically really, really interesting because most of our other relationships, with say our spouse or our friends or even our siblings and certainly anything like coworkers or people who we have economic interactions with, there is a much higher degree of symmetry in many other relationships. There is this assumption that you are free to leave, that we are both participating in this relationship of our own volition, but we can end it if we so choose.

And none of those things are true of the parent-child relationship. And it's also like this interesting question, like who holds the power in the parent-child relationship? If you ask either one of them, they will probably say the other party holds all the power.

And the child is incapable of leaving the relationship. If a parent voluntarily leaves the relationship, we pretty much think that's...

Alison Gopnik: About as bad as anything could be, right?

Ted Chiang: Yes, yes. So yeah, so they are stuck with each other. And yeah, a relationship of this type, you would never see this among, say, two autonomous adults. If two autonomous adults voluntarily entered a relationship like this, you would think they were insane.

Alison Gopnik: Although I think it's interesting that part of what happens with sort of committed relationships of all sorts, and this is interesting from the other end of the spectrum, like trying to take care of elderly parents. But I think it also comes up in relationships between spouses, is that there's this interesting assumption, which is if it's really a committed relationship, part of the sign of that is that if the asymmetry developed, you would be committed to taking care of that person. So my husband had open heart surgery this year.

And one of the things that was really striking to me is this extremely dynamic independent person now is lying in a hospital bed completely helpless. And the effect that it had on me was, oh, okay, I'm really committed now, right? Like this is when love and commitment and loyalty are really showing up in their fullest form, is when you do see this asymmetry between the two partners.

But again, I think it's sort of invisible in the politics and economics and psychology and philosophy literature that those very strange relationships are so important and significant and play such a big role in our moral lives.

Ted Chiang: And they are only strange because we have sort of normalized this economic model of interaction.

Alison Gopnik: Yeah, I think that's another reason, I mean, that's another motivation for thinking about the caregiving project at CASBS, or thinking about it for me as a researcher, which is that it's odd, because on the one hand, everyone just in their everyday life will recognize, you know, you ask someone, what's the most important thing in your life? What's the hardest moral decision that you have to make? What's the place where your deepest emotions were engaged?

They'll tell you something about close relationships of care. And yet I think exactly because they're associated with emotion and feeling and women, they haven't had the sort of theoretical impact that you might imagine, right? I mean, it's interesting.

One of the things that we've talked about in the Center is there's this tradition, this Asian tradition with philosophers like Mencius, where the idea is supposed to be, well, politics should really start in those close personal relationships. And then the task for an ethics or the task for politics is how can you scale up those close personal relationships of care to the level of a state or the level of a country? And I think you can make an argument that one of the things that the Enlightenment did, for example, was to take the contractual relationships, you know, the sort of I'm an autonomous person, you are, we're reciprocally negotiating, and find a kind of software in markets and democracies for scaling that up to the scale of a country or the scale of the planet or the scale of a world.

And that's been, I think, a very successful enterprise, but it leaves the care relationships just at the level of, okay, that's something that's private and personal, and not part of the broader ethical world or the broader political world.

Ted Chiang: Yeah, it's not the responsibility of the state to engage in care.

Alison Gopnik: Right. And it's not quite clear whose responsibility it is, aside from just the responsibility of the individual person. So it's strange that this thing that's so important to us on an individual basis ends up being, I mean, literally it's invisible in the GDP.

It doesn't show up in any of the measures of labor or markets. It's, you know, if someone's doing it, if someone's taking care in this personal, private way, it isn't in the economy. It's just this strange kind of moral dark matter in our politics.

And I can say, as one who got her first degree and first training in philosophy, that it definitely doesn't show up in the philosophical traditions, or at least in the sort of dominant Western philosophical traditions.

Ted Chiang: And from just as a sort of sideline on the economic utility question, I feel like conventional economics, one of the ways that it sort of ignores care is that every employee that you hire, there was an incredible amount of labor that went into that employee just by virtue of that. That's a person. Again, how do you make a person?

Well, for one thing, you need several hundred thousand hours of effort to make a person. And every employee that any company hires is the product of hundreds of thousands of hours of effort, which companies, they don't have to pay for that.

Alison Gopnik: Yeah, that's an interesting externality, right? I like that. It's like…

Ted Chiang: Yeah, they are reaping the benefits of an incredible amount of labor. And if you imagine in some weird kind of theoretical sense, if you had to actually pay for the raising of everyone that you would eventually employ, what would that look like?

Yeah, and to sort of bring this back to the artificial intelligence question, the thinking machine question. There's a science fiction writer named Greg Egan, and there's a line from one of his novels that I always like to quote on the topic of artificial intelligence. He says, if you want something that does what you tell it, use ordinary software.

If you want consciousness, people are cheaper. And I think that's very true. People are cheaper because all the costs of creating people are externalized.

They are born by someone else. And one way to think about the project of artificial intelligence is, can you create a person cheaply? Can you create something that is the functional equivalent of a person, but that doesn't require decades of labor?

Because if you can do that, then you have saved yourself so, so much. Because in a sense, the promise of artificial intelligence is a labor force which is perhaps infinitely reproducible, and which you owe nothing to. And this ties in with the fact that because human beings are the product of decades of life experience and social relationships, one of the things that comes from that is that we recognize that people have, you know, they are owed things.

They deserve to be treated well. I mean, this is why I'm super skeptical of current approaches to artificial intelligence, which seem to imagine that if you just cobble together enough thermostats, which you pull in, you know, sort of different directions, like, well, okay, it'll try to maximize this and minimize that, and well, then you'll get a person. I think you might be able to get some fairly useful tools with that, but I guess I don't think it will...

Alison Gopnik: Just as an empirical causal fact, produce a person.

Ted Chiang: Yeah, I don't think it will produce anything, you know, that does what people do.

Alison Gopnik: Well, you know, even when you're just doing this comparison, so this big project that I'm involved with, with my colleagues at BAIR, the Berkeley AI Research lab, the idea is what kinds of things could we learn from just empirically looking at how children learn as much as they do that you could imagine implementing to try and design artificial systems. So this is not trying to create a person. It's just, are there things that we can learn even for relatively simple problems, like getting a robot that could, you know, sort nails into different containers.

And the two things that I think come up again and again are not how much data you have or how much compute you have, but when you look at children, they're engaged with the external world. So they're doing a lot of exploration. They're doing a lot of experimentation and they're doing it in a remarkably effective way without having a lot of sort of self-conscious knowledge about it.

We call it getting into everything when a two-year-old is out in the world and exploring. But empirically, when we look at it as developmental psychologists, what we see is that they're actually performing just the right actions that they need to perform to get the data that they need to make the next discovery, to figure out the next thing about how the world works. And when you look at robotics, even just being able to get a robot that can do the very simplest things is far, far, far beyond what we can do at the moment.

And even when you see the films of the robots, they always are turning up the speed. So if you look carefully in the corner, you'll see that it says, this is actually 10 times, because the actual robots are taking so long to be able to do just the simplest things. But then the other thing is that children are learning by being in social relationships.

And again, we can show that empirically that children are incredibly good and sensitive to, is the person who's teaching me this knowledgeable or not? Or is the person who's teaching me this someone who I think is trustworthy or not? Or is the person who's teaching me this, teaching it to me in a way that will actually make contact with my own knowledge?

A project we just started working on, for instance, is trying to see if children could design their own curriculum in an informal way. Do they recognize that you have to do something simple before you can do something more complicated, for instance? It just seems like a very simple, obvious thing that kids do when they're learning.

But it's really hard to build into even a very, very simple artificial system. So I think the combination of actually being out in the real world, getting data from the real world, knowing how to get data from the real world, revising what you do in the light of what happens out in the world, and coming back to that, that's a lifetime's worth of experience. And it means interacting with something outside of yourself.

And then the same thing is true about your interactions with other people. And those interactions with other people, learning from other people, couldn't happen unless you were in a social setting in which people cared for each other.

Ted Chiang: Yes, I do completely agree with you. And just to, I guess, expand on that. Well, OK, so with regard to, say, physicality, I have long been a subscriber to the idea that any intelligence needs to be embodied and situated.

And I think that the very first problems a baby has to solve are, you know, how do I move my body? How do I move in the world? And I think those are the same types of problem solving that form the basis of every other type of problem solving babies go on to do.

And I think that that is, you know, that is entirely missing from existing large language models. And, you know, as many people noted, you know, in a lot of ways, just the phrase large language model is a misnomer because they're not engaging in language. They're not using language.

They're using text. They're just, you know, they're just looking at a stream of tokens. But, you know, language refers to things.

It refers to physical objects and it refers to, you know, other people. And so, you know, because these large language models, because they don't have access to the real world, they have no experience, they have no interaction of any meaningful sort during their training, they are not using language in the way that any linguist would use the word language. You know, they're just processing text tokens, which is an entirely different thing.

Alison Gopnik: Yeah, well, I think there's a version you could think of, which is that the large language models are sort of the postmodernist, Derridean picture of intelligence come true. And when you see that, you think, you realize, oh no, that was wrong all along, right? Like, you can't be intelligent if you're just within the text.

And I think even if you think about it from an evolutionary perspective, I think there's a pretty good argument that you start to see brains in the Cambrian explosion. And what happens in that explosion is that you start getting eyes and limbs. You start to be able to move and you start to be able to see.

And as the great psychologist James Gibson put it, you see in order to move and you move in order to see. That those two things are interacting all the time. So when you start getting creatures that have perception and have action, that's when you start getting a brain.

That's when you start getting something that looks like real intelligence. And that's all predicated on the fact that you're in a real world. You're in a real world that's external to you.

You're not in this kind of postmodernist text world. And you're always finding out new things about the world. And you're always getting surprised by the things that you find out about in the world.

And you're always changing what you think and changing your representations based on those things that you find out about the world. And you can kind of see that. I think there's a kind of continuity between the simplest organism that has eyes and claws and what we do in science, right?

Where we use our intelligence and we use our ability to do experiments to figure out brand new things about the world that we couldn't have figured out beforehand. And that ability, one of the ways that people in computer science, in vision, describe it is as the inverse problem. I really like that phrase.

So the inverse problem is there's a world that's outside of you, it's giving you data, you're finding out things about it, but you're never really going to completely know everything about what that world outside of you is like. And the great problem of intelligence, the great problem that brains are designed to solve is here's a bunch of photons hitting the back of my retina and bits of disturbances at my eardrums. And somehow from that, I'm reconstructing, there's a chair, there's a person, there's a microphone, there's quarks and leptons and distant black holes.

And that's exactly what human intelligence enables us to do. And again, that's very different from just taking a bunch of data, even a very large amount of data, and pulling out what the statistical patterns are that are in that data.

Ted Chiang: And I do just want to say something about, because I have seen people defending large language models say that critics are placing too much importance on, say, sight and sound as modalities. And I just want to make clear, I'm not demanding that the modalities specifically be sight and sound. So do you think we need to kind of follow the rough pattern of biological evolution if we want to create something comparable to hominid intelligence?

Do you think we will be going through a step of something like, something as smart as a beetle, and then somewhere later something as smart as a mouse, and then somewhere later something as smart as a dog, and so forth? Do you think that it would follow that pattern?

Alison Gopnik: Well, I think if you think about evolution, I think there's a common impulse that we have, which is to think about what they used to call the scala naturae, as if there's this kind of scale across nature of things that are more complex or more human-like or whatever the measure is. And of course, if you're thinking about biological evolution, what matters is what your ecology is, what the environment is that you're adapted to. So, you know, the kind of computations that bees can do are amazingly complex, and beautifully adapted to the environment in which they find themselves.

And as everyone knows, you know, if you think about ants as being super-individual intelligences, an anthill is doing an incredible amount of computation relevant to its particular environment. But I do think that you could say, look, what are the dimensions that are the relevant dimensions? And this dimension of being able to deal with variability and unpredictability is definitely a dimension that is relevant to us, very relevant to our human intelligence.

You could say that our ecological niche is the unknown unknowns. That's what we're adapted for, is being in an environment where we're not quite sure what's going to happen. We're not quite sure what's going to happen next, and we don't even have certainty about how certain we are about what's going to happen next.

And for that environment, which is perhaps anthropocentrically, what we think of as being the environment that really intelligent creatures are in, you really need to have a different kind of process than the kind of process that we typically see in AI. So I think a way of thinking about it is we could, I think it would be easier maybe to design an intelligence that was adapted to an insect environment than to design an intelligence that was adapted to the kind of unpredictable variable environment that human beings are in. But I think a way of thinking about it is what are the tasks, what's the environment that you think that this intelligence is going to be functioning in and adapted to, and then think about how you could design it in those terms.

But one thing that we know is those intelligences that are really good at dealing with a lot of variation and unpredictability do have this characteristic that it takes them a long time. It takes them a long time in the sense that they have a long childhood, but also for humans it takes a long time in the sense that we get a lot of our adaptation across generations. As we were saying before, we have cultural evolution, so we have change taking place both in individuals and over time.

I think it's kind of unlikely that you could just get a static, a single static machine or computer doing particular computations that could have those kinds of characteristics.

Ted Chiang: So this leads to this question that I had about the length of generations if you were iterating. Hypothetically, if you had some digital organism or thinking machine which was, we'll say, comparable to a chimpanzee, and your goal is to get a digital organism that is comparable to a human. You can make whatever modifications you want to the genome of this organism, but you can only tell what the effects are by actually watching it navigate its environment.

One question I have is, how long do you have to let each generation run before you can tell whether your changes are moving you in the right direction? Because if we're going with a very biological analogy, you can tell, say, a chimpanzee baby is different from a human baby within a year, probably. But that's still a year per generation. I don't know how quickly you can tell.

Alison Gopnik: And of course, the interesting thing is that if you were comparing a chimp baby to a human baby, what you'd say is the chimp baby is much more competent. So if you were looking at the beginning, and this is one of the interesting facts about development in general, a chimp by the time the chimp is seven is producing as much food as it's consuming. It's really competent.

And if you were to compare human infants, for example, with the infants of most other species, you would say, boy, that is not a good bet, right? Like, I mean, this thing, like it can't even feed itself. It can't move.

And look, here's this baby horse that's getting up and moving and doing all the things that it's supposed to be doing as a horse or even better. Here's this baby chicken. That's the kind of classic example in evolutionary biology.

Here's this baby chicken, and by the time that baby chicken is a couple weeks old, it's basically just as competent as an adult chicken. You compare that to this completely useless, helpless, week-old baby. So you really wouldn't know until some fairly long period of learning and development had taken place.

And of course, if you're thinking about humans, you could argue it's still sort of up for grabs, right? Because we have this cultural evolution piece as well, which means that our capacity to learn and understand is taking place over generations. So you might very well, I think this is kind of interesting, you might ask the question right now, oh, is this going to turn out to be an intelligent species or is this going to turn out to be a really badly adapted species that disappears, becomes extinct after a short period because it turns out that its cognitive capacities were not actually well tuned to its environment.

So I thought that was a great question. But I think it's a question that you could ask about. Well, one answer is I think it will take you longer rather than shorter, that it's going to be, if there's a creature, if there's an intelligence or system or a computational system that you could tell right away what its computational capacities are, that's not going to turn out to be the really intelligent system, that what the sign of an intelligent system is going to be that you're going to have to see how it develops, not just how it develops over time, but how it develops over time in interaction with an environment and in interaction with other agents.

But I think it's also an interesting question about would you end up concluding, you might not have a good take until really late in the process about how well adapted this intelligence was.

Ted Chiang: So it seems to me that if you're trying to create an intelligent organism, you need a very long developmental period in order to achieve a high degree of confidence farther down the line. Then each iteration will have to maybe run for many years. You will not be able to iterate quickly, because it will require interaction.

You can't automate it. You can't let them run by themselves. Someone is going to have to interact with this thing for years before you get a really good sense of this is much smarter than the last batch.

Alison Gopnik: That actually raises something else I wanted to ask you about with something that comes up in both Lifecycle and in some of the other stories, which is time, thinking about the way that human beings are living in time. I think it comes up in the caring context. It comes up in some of the things that you've described, where you don't know what the outcome is going to be.

You don't know what the possibilities are. I think the fact that humans live in a world where time isn't, say, cyclical, or time isn't completely symmetrical, the way the aliens in Story of Your Life lived their life, that we have this sense of a single vector of time going forward into the future, really changes the way that we function. Part of what you've done, I think, in your work is think about, what would it be like if our relationship to time was different?

Ted Chiang: Science fiction as a genre is about change. I usually describe science fiction as a post-industrial revolution form of literature, because science fiction stories tell a kind of story that was inconceivable for much of human history, because for much of human history, you could safely assume that the future would be like the past. Your grandchildren's lives would be fairly similar to your grandparents' lives.

It might be a different king, but there would still be a king. And only, you know, roughly speaking, after the Industrial Revolution, or arguably also the French Revolution, was there this idea that the future would be different than the past, maybe very different than the past. And I think science fiction comes out of that realization.

Our conception of time in the modern era is in many ways a product of the Industrial Revolution and just the pace of change that we have now become used to. So then, from a different standpoint, just about our perception of time, there's the idea of time as, say, linear, the idea that maybe the future lies ahead of us and that we are moving toward the future. There are also these, I think, fairly well-discussed anthropological findings about how different cultures perceive time because of the way their language works and the way that their culture conceptualizes time. And so…

Alison Gopnik: Well, in Story of Your Life, were you thinking about physics or were you thinking about this cultural relativity? Because it's such an interesting idea to try and think about what would it be like if you were interacting with aliens who really had a different relationship to time than you did? What sense could you make of their language?

Ted Chiang: So the inspiration for that story actually did not really come out of linguistic relativity or cultural relativity. For that story, I was interested in the idea of inevitable loss, the inevitability of loss. One of the things that inspired me, I was watching this one man show where this performer, he was talking about his wife dying of cancer.

And there was a point when they knew how it was going to end. And I was very moved and affected by that. And the more I thought about it, it seemed like, yeah, that is one of the things that is part of being human, that we can conceptualize the future in a way that I think dogs cannot.

And we know that we will suffer losses in the future. I think one of the things that marks a passage from being a child to being an adult is recognizing the fact that you will suffer losses in the future. And how do you cope with that?

How do you move forward with that? For Story of Your Life, what I was interested in there was telling a story about someone who has this very specific, this really explicit knowledge of a loss that lies in her future, and how can she live with that?

Alison Gopnik: Okay, so Ted, is there a piece of scholarship or actually a piece of philosophy or more abstract thinking that's especially helped you think through some of the themes that we've addressed or that you found particularly inspiring for your writing or maybe more than one?

Ted Chiang: It's not exactly a piece of scholarship, but one of the things that I think had a big impact on my thinking when I was working on The Lifecycle of Software Objects was this book called Creation by Steve Grand. He's a computer programmer, and I guess his specialty is artificial life, you know, creating digital organisms that are fairly simple. But in this book, he talks about sort of his theories about the relationship between artificial intelligence and artificial life.

And he makes the argument that you can't really abstract out artificial intelligence from artificial life. And so he's not just talking about, say, embodiment and situatedness. He's going, you know, further into things like, you know, his digital organisms having kind of like the equivalent of hormones, which, you know, govern their drives.

And, you know, he actually makes the argument that something cannot be intelligent unless it is in some sense alive, that there's maybe something fundamental about the kind of metabolic interaction between an organism and its environment that is maybe essential to intelligence, which I find very interesting. I don't know. I can't say that I'm completely convinced, but I think he made a very interesting case.

You know, I do wonder, because if he turns out to be correct, I feel like that might have a lot of interesting implications. Maybe you couldn't have a real artificial intelligence unless it actually, like, needed to eat, you know, some analog to eating. And then, you know, there might be other implications, because it would be very easy to think that, oh, an artificial intelligence would be immortal.

But actually, maybe if it has kind of a metabolic component to it, maybe you could make it immortal, but the interventions you'd have to make would be very comparable to the interventions you would have to make to make a human being immortal. It would not be sort of immortal by default. It might actually be so closely tied to these processes that keeping an artificial intelligence alive forever is a comparable problem to keeping a human being alive forever.

And so I'm not going to say I'm 100% on board with everything he has concluded. But his book was definitely in my thoughts a lot when I was conceptualizing The Lifecycle of Software Objects. Alison, I guess I should ask you, in return, can you recommend a work of fiction that has helped you think through some of the themes that you address in your research, or some work that has otherwise inspired you in the scholarship that you do?

Alison Gopnik: Yeah, so I was thinking about this, and there's a certain category of children's books, since I'm studying children, that have the character of giving you a kind of picture of what it would be like to have a mind that was really different from the sort of typical adult mind. Books like Alice in Wonderland, which everyone knows about, but I think even more, and more obscurely, Mary Poppins is a really nice example of a work of fiction that gives you a sense of the sort of strangeness of perspective that you would have, and that you do have, when you're a child. Unfortunately, the movies have sort of messed up Mary Poppins for the popular imagination, so you just think of it as being like Julie Andrews.

But in the books, what happens is that there's this very banal, ordinary existence of these children growing up in a small portion of London, in Kensington, in the 1920s. What the relationship to the magical nanny does is just give you a sense of how strange and bizarre just going to the park or going to the grocery store is, and the fictional device of having the magic be just a tiny bit different captures the perspective of the children as they follow this rather acerbic nanny around through their everyday existence. And that's a nice example where just a little bit of fantasy in the fiction gives you a sense of just how fantastic the actual reality is, especially for a young child.

Ted Chiang: That’s really interesting. That is, I would not have expected that. But the way you describe it, yeah, that makes sense. I can see that.

Alison Gopnik: And I think fantastic fiction often has that sort of numinous quality or captures that sort of numinous quality better than realistic fiction does. But that, even though it's fantastic, it's actually a really good realistic depiction of what a lot of our real experience of life is like.

Well, this has been such a Center for Advanced Study in the Behavioral Sciences conversation. It's exactly the sort of conversation that makes my heart soar every time I walk up the hill into CASBS: that you're going to have conversations that cross science and social science and the humanities and the physical sciences, all wrapped up with one bow. So I think that was a really beautiful example of exactly the sort of conversation that you can have sitting in this local place at this time. So thank you so much, Ted, for climbing up the hill with us and being part of it.

Ted Chiang: Well, and thank you, Alison, for having this conversation with me. This was a really great conversation. I'm glad I was able to come visit the center. And if this is the type of conversation that happens regularly here, it's like, well, then, yeah, I would love to come back.

Narrator: That was Alison Gopnik in conversation with Ted Chiang. A quick reminder, we've got a lot of great links in the episode notes today. You'll definitely want to check those out.

And as always, you can follow us online or in your podcast app of choice. And if you're interested in learning more about the center's people, projects and rich history, you can visit our website at casbs.stanford.edu. Until next time, from everyone at CASBS and the Human Centered team, thanks for listening.