Human Centered

Tech Innovation Needs Social Science - Arati Prabhakar

Episode Summary

Host John Markoff spoke with 2017-18 CASBS fellow Arati Prabhakar, former director of DARPA and NIST, and now founder and CEO of Actuate. They discussed the state of R&D in the US, Silicon Valley, data privacy, autonomous weapons, and more.

Episode Notes

Arati Prabhakar

National Institute of Standards and Technology

Defense Advanced Research Projects Agency

Center for Advanced Study in the Behavioral Sciences

@casbsstanford on Twitter

Episode Transcription

John Markoff: From the Center for Advanced Study in the Behavioral Sciences at Stanford University, this is Human Centered. I'm John Markoff. Arati Prabhakar is a former CASBS fellow. A trained physicist, she has been director of both the National Institute of Standards and Technology and the Defense Advanced Research Projects Agency, DARPA. A veteran Silicon Valley venture capitalist, this year she created Actuate, a nonprofit organization designed to model creative approaches to societal challenges. We spoke about managing large research organizations and the gaps in our national research and development infrastructure she is trying to fill. If I could just ask you to briefly introduce yourself and maybe describe your current interests, but only in a one-sentence way, and then we'll go more deeply into it.

Arati Prabhakar: I'm Arati Prabhakar. My current interest is how we expand innovation in the United States so that we're ready for the problems that we face for the future.

John Markoff: And describe your background. What were you before?

Arati Prabhakar: I started off as an engineer. I got a PhD in applied physics at Caltech. Did not want to be in a lab doing narrow research. Stumbled into a congressional fellowship in Washington, and that opened the door to going to DARPA as a young program manager in the mid-'80s. Loved that, loved the idea that there was a place where you could stand and have a big lever to move technology and make a difference in the world. I was at DARPA 7 years and then director of NIST, the National Institute of Standards and Technology, in the Clinton administration. Then I spent 15 years in Silicon Valley; most of that was a decade in early-stage venture capital. Then in 2012, I got a call asking if I'd go back and run DARPA, which I did for 4.5 years. In 2017, we were coming home to Palo Alto. Like an apple fell on my head, I got the chance to come to CASBS, which was fantastic. That's the story.

John Markoff: First, I wanted to ask about what you expected on your way into CASBS and what you found. Most of the people who come to CASBS are active academics, and you were not. You came from a different viewpoint, I think.

Arati Prabhakar: Yes, absolutely. I'm not an academic. I'm not a social scientist. I was definitely one of the handful of us who were "other" that got mixed in. The one thing that was in my mind coming in the door was that in my 4.5 years at DARPA, I had gotten extremely interested in how the social sciences were changing. Stepping back from all the stuff that we did in the time I was at DARPA, the big pattern about what's changing in research and technology today, in my view, is that our ability to understand and influence very complex systems is accelerating a little bit. I mean, this is the human journey, right? For as long as we've been human beings, we've been trying to figure out how to wrestle with complexity. And we're nowhere near figuring it out completely, but we're getting a little bit better in a very interesting way. I think that applies to all areas of complexity, but most interesting to me was that the way we understand humans and behavior and societal interactions is changing. That seemed to me to be very powerful, but I didn't know anything about social science, and that's why I came. There's not a DARPA social science bucket as such. We started some programs while I was there. We hired our first anthropology PhD as a program manager. And we were starting to figure out what was even the question. If you want a little DARPA history: back in the 1990s, I started the first microelectronics office. It grew out of an office at DARPA, the Defense Sciences Office, that looks at all areas of research. It's sort of the bubbling pot of research areas. About every 20 years, something emerges from it that's big enough that we say, "We're going to make a big dedicated long-term push in this area." In the '90s, it was microsystems. In my time, we started a biology office for the first time at DARPA, the Biological Technologies Office. What we talked about was that maybe in the coming years, the social sciences would become a big enough DARPA-like thing that we might start an office there.

John Markoff: You come from this relatively massive agency to all of a sudden being in a study by yourself. Was that—

Arati Prabhakar: Yeah, that was different.

John Markoff: Alone, was that a culture shock?

Arati Prabhakar: Yeah, it was. It took me a while to learn. I had a phenomenal team at DARPA. Everything from just the scale of the responsibility and what we were trying to do. If I had a brainstorm on a weekend, I'd come in on Monday morning, and by Tuesday something interesting was happening. I could go on and do my next thing, and this huge thing was getting mobilized and people were doing stuff. You don't have that when you're out of that position. Also things as mundane as booking travel. Now I can do all of those things, but initially it was definitely a shock. This is a great transition place. I think that's one of the marvels of CASBS, actually.

John Markoff: Traditionally, people have come here to write books. Did you have that in the back of your head? You came here, and out of it you built an organization. Which way were you going when you came in?

Arati Prabhakar: I didn't think about writing a book. Again, I'm not an academic, so that wasn't in my head. The organization that I'm now starting was not clear to me when I came to CASBS. The itch that I had, the one that ultimately became this organization, came from having seen R&D and innovation from a lot of very different vantage points. I was really interested in this question— my view was that we were doing great on the agenda that Vannevar Bush laid out for us in 1945. But, you know, after a mere 75 years, you might want to think about what the agenda needs to be for the future. And I was pretty concerned, and I remain quite concerned, that we're not innovating in the ways that will serve our societal needs for the future. So that was the itch, and CASBS was a great place to think and interact with people who think about society's challenges a lot.

John Markoff: Was there a particular sort of, you know, single moment where you just— the idea sort of formulated itself for you, or did it evolve over—

Arati Prabhakar: It really evolved. There wasn't an aha moment, and it morphed many times. And we barely exist; we're gonna continue to morph. But the threads of it are all the things that we're talking about: big problems, and this whole engine of innovation, half a trillion dollars a year in the economy, that's still going in the direction that we launched it in, in 1945, and I think quite significant gaps in our innovation capacity. That's coupled with these new tools and capabilities to wrestle complexity in different ways. That was sort of the problem and the opportunity. That's what's driving the whole thing. We're going to continue to iterate our way to something that works.

John Markoff: Starting that against the background of where we are as a country now, if I asked you the question, is the US R&D infrastructure healthy? What would you say?

Arati Prabhakar: I think it's extremely healthy in certain areas, and my concern is how do we make it healthy in other areas? You know, you can't live in this area and not see all the fruits of innovation. It's defined Silicon Valley in its history. So clearly a lot of it is working. But by the way, coming back to Palo Alto after being gone for 5 years, you also see what isn't working. You see the Maseratis and the homeless people side by side in downtown. It's a snapshot of what is working in innovation and what isn't working.

John Markoff: It was interesting that you started in '45 with Bush, and that sort of put a stake in the ground there. But there was this second point, which led to DARPA, the Sputnik moment that the nation had. I often try to understand how that could have been that catalyzing. God knows we've had other crises out of which not as much has come.

Arati Prabhakar: That's a great question, isn't it? Yeah, I don't know. My sense of that moment was that Sputnik is the symbol, but the backdrop is the Cold War. This idea of competition with the Soviet Union and the threat of nuclear annihilation was palpable. It didn't feel theoretical. Climate today is a phenomenally important challenge, a very dangerous situation. But I try to think about what 1957 or 1958 felt like when Sputnik happened. My speculation is that the memory of the Second World War was so fresh, and people knew what it meant to have the world lose millions and millions, tens of millions of people through war, and to see that happen very abruptly. I think it was very real in a way that climate isn't today. If I contrast it to our actions and our readiness to deal with climate, I don't think climate challenges or threats feel as tangible.

John Markoff: Actuate, what's your statement of purpose for Actuate?

Arati Prabhakar: My passion, the thing that gets me out of bed every morning, is that I want to see a generational shift in our country's ability to innovate for these classes of major societal challenges that we face. You know, 2019 forward, not 1945 to 2019. And by the way, none of the things that we've done for the last 75 years go away: we still need to worry about national security, we still need to worry about health, and we still need to make sure we have a vibrant basic research foundation. But we also now are facing extreme problems of inequality, income inequality and the lack of access to economic opportunity, and staggeringly bad health outcomes for a country that spends more than anyone on the planet on healthcare, and a lack of trust in data and information, even though we're so reliant on it in the information age, and then climate, which we've been talking about. What I wanna see is a generational shift in our country's ability to innovate for those problems. Not because innovation is the only thing that's going to solve them, but because we don't know how we're going to solve them, and that demands innovation as part of how we get there. And what Actuate aims to do— I mean, we're a brand new tiny organization, so how can we contribute to this? Our thesis is that we can contribute by running R&D programs in these areas. First, to generate some interesting new solutions and show what might be possible in some of these areas. But also, in doing that, we want to model what innovation looks like in these areas. Because it's not going to look like defense innovation. It's not going to look exactly like a DARPA or an ONR or a National Institutes of Health or NSF. It's going to look different. And I think we can show that not by being a think tank, but by going and doing R&D programs.

John Markoff: So if you're going to put yourself close to something, it sounds like you're close to an incubator.

Arati Prabhakar: So what I would say— what we're not going to do is start companies. We are in Silicon Valley, and we're not starting companies. You know, like, radical thought. The piece of the puzzle that we think we can contribute, and that I think has enormous leverage, is the kind of R&D that lives between basic research, where you just write papers and you're done, and the kind of R&D that happens in companies when you know what the product and the market is. I think that's actually a structural gap, and a good place for us to make a contribution.

John Markoff: I mean, that's technology transfer in a sense, is it, or no?

Arati Prabhakar: Well, what is it? People characterize it in lots of ways, but it usually has the characteristic that you're starting with a systems view. There's not one particular piece of research that's the magic solution; it usually takes the integration of different kinds of research and technology. It also usually has the characteristic that, in addition to advancing research, you have to demonstrate something more applied, right? So it's in this in-between space. A lot of that is what DARPA does, and it's why I think we're able to bring a DARPA mindset to these problems. Not a recipe from DARPA, right? But more of a mindset.

John Markoff: But then you're one step off— there's an organization at Berkeley called Cyclotron Road. They're sort of in your space, but they're not in your space, because they really want to commercialize technologies.

Arati Prabhakar: Yeah. What I think they are great at, and they live in this bridge land—

John Markoff: And they're energy-focused, too.

Arati Prabhakar: Yeah, I mean, that's where they started: energy-focused. But what we very much share is that we see lots of great basic research, we see a market that's the most powerful and vibrant market on the planet, and we see a gap between those two. The particular way that they're going about it is trying to get scientists who think they might be able to build companies. I think of Cyclotron Road as helping scientists and researchers become bilingual, right? You can speak deep technical things, but also understand the business world. And I think that's a vital piece of it. Our approach is completely different, but I think very complementary. Yeah.

John Markoff: And will you start in specific areas, or with a specific project? How?

Arati Prabhakar: Yeah. So we have these 4 very broad areas of societal need. Each of those is an ocean to boil, right? Just income inequality is an ocean to boil. And then we have a few other things that we think are problems. So there's no dearth of problems. We're not gonna try to boil all of those oceans; I view each of those 4 areas as areas in which we're gonna search for very powerful opportunities. Specifically, we've defined our first 2 programs, and we're just at the beginning. We've just gotten a commitment for design grant support to design a program in one area, and an invitation to make a proposal to do a design grant in a second. But maybe the first two are illustrative. For healthcare costs, we got very interested in the question of how we could scale the prevention of chronic disease. 84 million Americans today have prediabetes; they're at risk of diabetes. The CDC estimates that only 10% even know that they are at risk. The American Medical Association thinks that there are about 500,000 out of that 84 million who are actually doing something in terms of lifestyle intervention: diet, exercise, smoking cessation. So we're sort of nowhere, and it's an obesity epidemic that's leading now to a diabetes epidemic. The research question in that program is: can we take what we are learning about behavior monitoring, behavior modification, incentives, and coaching from the world of advertising, marketing, gaming, and a lot of very interesting research, and apply it to this incredibly hard problem of helping people achieve and then maintain healthy habits? So, you know, tough research question. There's research to build on, but more research that needs to be done. And then ultimately you have to demonstrate that it could actually work with real human beings. Yeah.

John Markoff: And so, on both sides of that equation, will you work with academics or corporate people? Who does it?

Arati Prabhakar: Both. We're looking for the best research talent. Inevitably, those are gonna be people who live in universities and in companies and sometimes nonprofits. What we want is to fund the people who do the individual pieces, fund an integrator who'll build a prototype, maybe an app of some sort that you can actually test with human beings, and do the tests and the trials. Then ultimately we want to show that it actually can work and be very, very economic and scalable. And then what we really want is for the people who develop that IP to own it and commercialize it.

John Markoff: You know, I was thinking about it in the context of Silicon Valley and the fact that you've touched all the bases. Usually people are in one camp, and you've been in all of the camps.

Arati Prabhakar: —Some of the camps.

John Markoff: —Well, I think pretty many. But on the venture side— yeah, I guess you haven't actually done a startup yet. This is your first startup.

Arati Prabhakar: —Right. This is my first startup. And there's a lot that goes on in technology and in the economy that's not like Silicon Valley, right? I mean, it's bigger companies and different industries. So there's a lot more. But I got to touch a lot of things that I loved. Yeah.

John Markoff: But do you have your own theory? It's a favorite discussion point: why did Silicon Valley happen? And is it unique? I was wondering if your particular set of viewpoints gives you a particular perspective on that question.

Arati Prabhakar: I don't think I have anything new. I mean, you know, you and many other people are so smart about this and have written and thought about this.

John Markoff: I'm obsessed about it, but I see that it's either, you know, Rashomon or the blind men and the elephant; everybody has their own version. And Margaret O'Mara's book just came out on it. She sort of dialed the explanation toward the government, which I think has been very healthy, because there's been a push in the other direction. But it is this wonderful mystery, in the sense of your question of how you do it writ large. I mean, if this was the particular point at which innovation happened for particular reasons, how do you grow that and spread it all over the world?

Arati Prabhakar: Yeah, whenever someone has a simple explanation for a complex phenomenon, it pegs my BS meter. It's an ecology, right? It has to have a lot of different factors that came into balance and then allowed it to surge. And people have written about all the different pieces. I try to avoid being in, but I end up in, a lot of vacuous conversations about whether it was private entrepreneurship in the American way or it was government. Almost every interesting thing that happened in technology happened 'cause all of those pieces came up together, right? And it's their interplay that's super interesting.

John Markoff: Well, you know, there's also this discussion about life in the universe, right? Like, what if we're the only one? So maybe that's what Silicon Valley is: this set of forces came together, and they won't happen anywhere else.

Arati Prabhakar: Well, one interesting thing is that Silicon Valley itself isn't a static thing, and it's a continuing mystery what the next generation of Silicon Valley looks like. Every time there's a crash, of course, everyone's like, "Well, we're all done now." And that's never happened yet. And when it's surging, everyone tries to figure out what the next dip looks like. And it's really hard to visualize.

John Markoff: I've been trying to call the top for the last 3 years. I fail completely every time.

Arati Prabhakar: I know, which then everyone's like, "Well, there is no top." And we know that's probably not true.

John Markoff: There was this moment, I think it was in 2015 or 2016, at a Salesforce conference where they take over that whole street in front of Moscone. And I saw these guys who were firing $2 bills out of a gun, and I thought, this has to be a bubble top. How do you get attention, right? That was the problem. That's just tacky. But that was a sign; that kind of stuff happened in the dot-com era. I'd seen this stuff before, right? The irrational-exuberance kind of stuff. And I thought it was happening again, but it just went right on through it.

Arati Prabhakar: Yeah, so in some number of years, we'll be able to look back and see how this endless expansion ends. There are some structural things happening in the world that make it hard to see it going on forever.

John Markoff: Well, competition with China is an interesting one that's now on the radar. There is now alarm about China as a potential technological competitor. Do you share that? In the AI community about 3 years ago, there was this debate about China and how quickly they were coming on in AI. And that was really before we knew that they were deadly serious about spending money. But there was this huge uptick in papers. And initially, people were saying, well, the quality of the papers is not bad, you've got lots of quantity, but they're not innovative. Now people seem to think that they might be a real force in AI. Where are you on China?

Arati Prabhakar: What do I think about that? First of all, part of this conversation reminds me: you could have said verbatim everything you just said, go back 20 years, about Japan. And I think that we have this sort of classic reaction in the US that starts with, "Oh yeah, that's good work, but it's so practical. It's not really basic research, and therefore it's not as good." But in the Japanese case, that's what made them very good at manufacturing. And in the Chinese case, it's what makes them, I think, extraordinary at using AI. While we're all arguing about how good their basic research is, they are doing practical things that are powerful and astonishing, and alarming in some cases. I guess I'm actually much more intrigued by that question of how different societies are going to choose to use the technology, because it's just an expression of the values of the society. I mean, there are many things we're doing more by default, by not paying attention, that I find concerning and invasive and troubling in terms of personal autonomy and privacy. But in China, it's being done overtly and at scale. And my sense is, and there are people who know so much more than I do, but it's not like the government's doing it to the people. I think there's broad acceptance that that's a good thing to do.

John Markoff: Absolutely. I spent time in China at the end of 2016, and I spent time in the Third Ring. I was struck both times by how similar Chinese tech culture is to Silicon Valley. I was completely at home, including the fact that most of them spoke English. You can walk into the Third Ring, which is their technology center in Beijing, and be right at home. Except they don't have this democracy bug. You think it's a bug or a feature?

Arati Prabhakar: That's a good question.

John Markoff: Absolutely. I didn't want to go there. Well, I do want to go there.

Arati Prabhakar: I think that's the most interesting question: what are the conditions under which people in a society thrive? I got an opinion on that one.

John Markoff: You know, at the moment, I'm running this program that transcribes— we're recording your conversation, but we're also transcribing it. And it's a company that's in Mountain View.

Arati Prabhakar: How good is it?

John Markoff: It's unbelievable. It's great.

Arati Prabhakar: Oh, that's great.

John Markoff: What's interesting to me, sort of where I'm going with this, is that this is a Chinese guy who got a degree at Stanford, and then he went to work for Google, and he was involved in Google's location work. And then he started a couple of companies, and he started this one. And it's homegrown. I mean, it's done by a Chinese national who moved to America, and it's being done in the heart of Silicon Valley. To me, that's kind of the golden goose. One of the things that's driven Silicon Valley is that it was a magnet for the best and the brightest in the world. Whatever else you say about the immigration question, it's pretty clear that it's part of the chemistry of Silicon Valley. It also seems like it's something you could kill. If the best and the brightest don't come here, they'll do their work somewhere else in the world. I'm making a statement; it's not a question.

Arati Prabhakar: No, I think you're completely right. That's true of America in general, and it's true of Silicon Valley in particular. This is one of so many areas where all this thrashing around is the United States trying to figure out how to play its role in a changed world. We knew who we were in the Cold War and in the world order that existed then. We're still this phenomenally powerful country, but we haven't figured out how to play our role in the way the world is today. And it's toxic, bad in so many areas, immigration just being one example.

John Markoff: Since you left DARPA, you've come back to the Valley. In the last year, there have been some really bitter fights about the role of high tech and its relationship to the military and to the Pentagon. I was wondering, standing where you stand, when the Google researchers bridled, and when the Microsoft researchers did, how did you feel? Did you have a sense they were making a mistake, or is there a middle road anywhere?

Arati Prabhakar: This to me is related to the broader question of how technical people today think about ethics and about their role in society. And so, a couple of things. One is, at the top level, when I see that, my first reaction is that I'm really glad to see technologists who are asking those questions. When I was in my 20s and 30s, certainly when I was in graduate school, I never had the sense that that was much on people's minds. So I think that's really good. Specifically with respect to the questions about national security: there are times when I see the reaction of the employees in a company, and I think I know more than they do about how the technology is being used and for what purposes. And there are times when I think, "Oh, they really got that right. I'm so glad they're protesting. I'm very concerned about how it's being used." And there are other times when I think what they're saying is that they don't actually think we should have a military capability or worry about national security, and I think they're being quite naïve. Now, I'm actually totally fine with it if you've thought deeply, and gone and dug into it, and thought about world affairs and the role of our country in the world, and thought about military technology, and come to the conclusion that we should not be going in a particular direction or having a particular military capability. I've thought a lot about those things and concluded that, while I don't always agree with everything that we do, I actually do think we need to have a strong national security capability and a strong military. And I want us to aggressively use advanced technology for that, but very thoughtfully and carefully. So I'm fine if you've done the work and we disagree. I honor that.

John Markoff: Sometimes I think it's a little naive, because I think people are just like, "I just don't like that, because it makes me uncomfortable." One of the things I think we debated a little bit when you were here was where you draw the line on weapons autonomy. It's a very difficult and increasingly murky line.

Arati Prabhakar: So this issue on weapons autonomy, it's— I think this is an issue that we keep circling. When I think about my interactions with the military and national security community in general, thinking about the FBI and others as well, there are things that society in general is worrying about that I am not worrying about, and then there are things that I don't think we're worrying about enough that I am worrying about. The thing that I'm not worrying about is the machines turning themselves on, declaring war, and going out and killing people. And the reason I'm not that worried about that is that when you meet military officers in the United States military, what you quickly realize is that, in contrast to those of us who do science and engineering, they actually have ethics at the core, integral to their training and to the way that they are groomed. It's part of their curricula when they go through school. And the reason for that isn't that complicated. They are the people to whom we as citizens have said: when we as a country are facing harm, you are the person I am giving the authority to, to make life and death decisions, right? So it's really good that they think about these ethics issues. The upshot is that those are the people I think are the least likely ever to relinquish control to a machine. They take that responsibility so completely seriously. The thing I'm very worried about, that I don't think we think about enough (I think we're getting more attention on it, but I still think it's much more dangerous), is companies who, yes, are trying to do good things for society, but the organism is a business, and it is driven to make a profit. And that colors what you think is good for society. Similarly, if you are a border patrol agent, or you're in the military, or you're in the FBI, you are in the business of seeing threats and dangers. We pay them to do that so that I can sleep at night, but they end up living in a very paranoid world. It's so hard to really remember that civil liberties matter, or that individual privacy matters, when you're the one who's on the line to make sure that a terrorist doesn't blow up Times Square. And it's these inadvertent biases that I think are so dangerous, because they make you blind to things that really matter for society ultimately.

John Markoff: To your previous point, I mean, I struggle a lot with this. I think I'm going to come back.

Arati Prabhakar: You hit all my hot buttons, John, and then I'm just going here.

John Markoff: It's an extremely difficult ethical argument. I think this was two books ago, when I was writing about the Valley in Dormouse. I think it was Jacques Vallée that I interviewed, who as a young engineer at SRI designed the first smart bombs. It's the same debate. His argument in justifying what he did as an engineer was this: he grew up in France, in a village that had a bridge, and both the Allies and the Germans spent the entire war trying to destroy that bridge. They hit everything in the village except for the bridge.

Arati Prabhakar: Right. Because the bombing accuracy was the size of half a city, right? You dropped it and you had no idea where it was going. Yeah, yeah.

John Markoff: And so I understand that. And now people like Jerry Kaplan, who defend the development of AI technologies for weapons systems, are using the same argument. And to me, it seems like a valid argument. And yet the mechanism moves in a direction where you get yourself into really bad places, potentially.

Arati Prabhakar: I think both of those are true. That's the biggest strategic shift that's happened in military technological capability over the last half of the last century, right: the move from mass to precision. World War II was just, can you build bigger bombs? Basically, post-Vietnam, we went in the direction of, can you deliver the weapon on target much more precisely? Just to be very, very quantitative about it, the number of people who died during any of the recent high-tech conflicts is a microscopic fraction of the kind of civilian casualties that we had in Vietnam or before. So it's hard to argue that that's bad. On the other hand, and there's always another hand, right? I think it's also pretty easy to say that that has drawn senior military leaders and senior political leaders into conflict and killing in circumstances where we wouldn't have gone before. I look at that, and I would not turn the clock back to bigger and bigger weapons that obliterate cities. I'm glad we shifted. But it's never a pure thing, right?

John Markoff: There were two other technologies that were pioneered at DARPA under your directorship that I'm fascinated by. One, I don't know if you even noticed this, but about a month ago, Elon Musk finally announced Neuralink. And there's a couple of thoughts. One is, and this didn't get into my article (I actually wrote something for the Times about it), the two teams that he acquired to build that company are DARPA-funded teams. They came from UCSF and Berkeley, and that was the technology. I know you guys were funding a lot of brain-computer interface work, but he picked those 2 in particular. It didn't get a lot of coverage, but his is one of a number of companies pursuing that. Now, to your point about not being worried at night about the machines taking over: Elon, if you push him just a little bit, will say, well, I'm doing this because I think the machines are going to take over, and the only way we can save ourselves is by coupling ourselves tightly to them, which to me is an argument that's just bizarre even by science fiction standards. Did he not see Star Trek? Does he not know about the Borg? I mean, it's just like, what could possibly go wrong?

Arati Prabhakar: I mean, that's my favorite question: what could possibly go wrong? It's always a good question. Well, I don't agree with Elon. I just think people are more interesting and more fantastic and more worrisome than any machine, so people are what I'm always more interested in. I think people are the danger; technology just makes them more dangerous, basically. But it's a very human thing we're talking about. It's not a machine thing.

John Markoff: But there is also this: what you guys demonstrated in brain control of stuff in the environment, the medical and assistive applications, I mean, you can take it that far without bothering to go into this other world, and it justifies—

Arati Prabhakar: You can, but where do you draw that line? Because that line is invisible. Of all the things that we did at DARPA, that is not the area that I think is going to have the most immediate societal effect, simply because it's about the human body and the human brain. It's going to be medical to begin with, and for good reasons we have lots of things to slow that down and make sure it's safe before we go too far, right? But it was the area at DARPA that I found literally the most mind-blowing to think about. Because when you start thinking about changing how the human brain connects to the world, and releasing it from the constraint that it's going to connect to the rest of the world through the body that we have, it gets very interesting very quickly. One of my most fun moments at DARPA was an offsite. DARPA is a place where you're supposed to think outside the box, but of course, you actually build your own box as you go. We had this offsite that was me and my technical leadership team, and we wanted to challenge ourselves to get outside of the boxes that we had already built. You know who we brought in to talk with us? That was really fun. We brought in Alta Charo, who's going to be a CASBS fellow this year, who's an amazing bioethicist and a really creative person. And second, a science fiction writer, a really interesting philosophical guy. But I'll tell you, it didn't take much to get people going. People wanted to get out of the box. We did all the offsite stuff: you break into teams and brainstorm. And the teams came back, and what every team had come up with, one way or another, was about, number one, how we were going to meld with our machines. And number two, that the most interesting thing about that was going to be our ability to meld with each other, mediated by these connections to our machines. People have cutesy names for it, like the hive or whatever. But imagine if I could have in my brain the context that you've learned from all the work that you've ever done. Or I'm negotiating with you about some terrible crisis around the world, but I understand the problems that you have feeding people in Ethiopia, and I understand that you lost your mother to starvation, and I have that empathy for you. What if we could understand each other in ways that were that deep? Could we go after some of the problems that seem completely impossible today? So, like, that sounds groovy. And then you start thinking about, well, what does autonomy now mean, right? And privacy. I'm worried about privacy now; what does privacy even mean in that future? I think it's super interesting to think about all of those things.

John Markoff: The other technology I was thinking about— I mean, I know that you as director actually did stuff at DARPA about thinking about privacy, and trying to advance privacy against this technology, or in the context of the technology. But was Gorgon Stare before your tenure? Did that show up on your—

Arati Prabhakar: Yeah, I don't think it was in my time.

John Markoff: They put it both on drones and in balloons, to watch whole areas. It's a super high-resolution sensor system. And what I just noticed, and it hasn't really become a controversy yet, is that Sierra Nevada Corporation, which was the contractor that did the original Gorgon Stare, is now doing similar work, with probably related technology, basically over wide swaths of the US. So it's come home, and from a law enforcement point of view, this is an unbelievable tool. This is the cameras on the street corners of London on steroids, because you can use it as a DVR: if anything happens at any point in the area that it's looking at, you can wind the recorder backwards. It's another one of those lines. It's also an incredible civil liberties problem, potentially.

Arati Prabhakar: Right. Developed in wartime, when that's how we dealt with al-Qaeda, that's how we dealt with ISIS. It all makes sense at the beginning, and then it's so logical to extend its use. But back to privacy. The DARPA program that I think you were talking about was called Brandeis, and it was about privacy-protecting technologies: differential privacy and encryption. I got this bug back then about what could happen if you could really unlock the value of data, and now I'm thinking about administrative data and healthcare data and educational data, really accentuated by the conversations I had at CASBS. It seemed like that's where all the really exciting big opportunities were. The data is so valuable, and we're just starting to figure out what to do with this fountain of correlations, right, that it's throwing out. But every single interesting story would run aground because of privacy concerns. And so that itch turned into what's going to be our second program at Actuate. It's a data and privacy program, and it's building directly on those research efforts from DARPA and other places.

John Markoff: And trying to put them places?

Arati Prabhakar: We want to demonstrate an end-to-end architecture for managing data and privacy together, and then do demonstrations where you get administrative data from different sources, and you clean it up and link it, and you start using it, but all while it's rigorously encrypted and protected.
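For readers who want a concrete picture of the privacy-protecting techniques mentioned above, here is a minimal sketch of differential privacy's Laplace mechanism in Python. It is purely illustrative, not drawn from the Brandeis program or from Actuate's architecture; the count and parameter values are hypothetical. The core idea is to answer an aggregate query with calibrated noise, so the statistic stays useful while any one person's contribution is masked.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    # Laplace mechanism from differential privacy: add noise with scale
    # sensitivity / epsilon. Smaller epsilon = stronger privacy, noisier answer.
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical example: release a count from linked administrative records.
# A counting query has sensitivity 1, since adding or removing one person's
# record changes the count by at most 1.
true_count = 1234  # made-up aggregate, for illustration only
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"Privately released count: {noisy_count:.0f}")
```

In the kind of end-to-end demonstration described above, a mechanism like this would sit alongside encryption, so that the linked records themselves are never exposed in the clear.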

John Markoff: I know this company in Menlo Park has a real problem. They should fund you.

Arati Prabhakar: Oh yeah, well, yeah, so did you talk to Nate about Social Science One? Yeah, great, I can't wait to hear that interview. Yeah, so Gary King, who's on the board at CASBS and is Nate's co-founder, I think, for Social Science One, yeah, we've been talking about exactly that.

John Markoff: So, in both of your first two projects, I actually see parallels to things that are happening at CASBS. Well, for sure on data and privacy. The Moral Economy Project seems to map to some of the stuff.

Arati Prabhakar: Yeah, I mean, philosophically, absolutely. For me, CASBS was a year not only to hang out with social scientists and sort of figure out how they approach the world, but I'd never been surrounded by 3 dozen people who just thought about the problems of our society all the time, right? That's actually not the frame that scientists and engineers live in. Social scientists are working on far more interesting problems, but I think there's not as much of a sense of agency. Scientists and engineers have a sense that they can just go do stuff and it's going to get better. I'd love to have more of that in social science. That would be great.

John Markoff: Thanks for spending time with us.

Arati Prabhakar: Oh my gosh, what fun.

John Markoff: To learn more about the topics in this episode, check out the show notes. There you'll find links to works by our guest and relevant articles. Thanks for listening.