Pulitzer Prize-winning tech journalist & 2017-18 CASBS fellow John Markoff chats with 2022-23 CASBS fellow Rebecca Slayton on how the field of computing expertise evolved, eventually giving rise to the niche of professionals who protect systems from cyber-attacks. Slayton's forthcoming book explores the governance & risk implications emerging from the fact that cybersecurity experts must establish their authority by paradoxically revealing vulnerabilities and insecurities of that which they seek to protect.
REBECCA SLAYTON
Cornell University faculty page | CASBS page
Slayton's book Arguments that Count: Physics, Computing, and Missile Defense, 1949-2012 (MIT Press)
Slayton's article "What is the Cyber Offense-Defense Balance?," International Security
Video: Talk on "Shadowing Cybersecurity: Expertise, Transnationalism, and the Politics of Uncertainty" at Stanford Univ.
JOHN MARKOFF
Markoff's latest book, Whole Earth: The Many Lives of Stewart Brand (Penguin Random House, 2022)
Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University
75 Alta Road | Stanford, CA 94305
CASBS: website | Twitter | YouTube | LinkedIn | podcast | latest newsletter | signup | outreach
View the Fall 2023 CASBS Newsletter
Narrator: From the Center for Advanced Study in the Behavioral Sciences at Stanford University, this is Human Centered.
How did the field of computer security emerge, and over time, evolve a niche of professionals who protect systems from cyber attacks? What are the implications given that these experts must demonstrate their skill by paradoxically revealing vulnerabilities and insecurities? In this episode of Human Centered, a conversation with 2022-23 CASBS fellow Rebecca Slayton.
Slayton is an Associate Professor of Science and Technology Studies at Cornell University. Her research examines the relationships between risk, governance, and expertise, with a focus on international security and cooperation since World War II. Her first book, Arguments That Count, shows how the rise of computing reshaped perceptions of the promise and risks of missile defense.
And the book won the 2015 Computer History Museum Prize. Her CASBS project, soon to be released in book form, examines the emergence of cybersecurity expertise through the interplay of innovation and repair. Listeners may remember hearing Rebecca in the role of interviewer in a previous episode, featuring CASBS legend Robert Keohane.
Well, this time she's on the other side of the table, taking questions from someone who expertly covered computing and cyber history for three decades, former New York Times tech journalist and Pulitzer Prize winner John Markoff. Markoff himself was a CASBS fellow in 2017-18, and listeners will remember him as a frequent early host of Human Centered. John is back in the CASBS studio and, given his knowledge and perspective, the perfect conversation partner to engage Slayton on the topics and issues at hand, while sprinkling in a few classic anecdotes of his own from his years in the tech reporting trenches.
Let's listen in.
John Markoff: Could we begin? I wanted to ask about your path to cybersecurity. I noticed that you have this sort of broader, you have broad interests in science policy and technology policy, but you've recently focused on cybersecurity. Was there an event or a series of events? What pulled you towards cyber?
Rebecca Slayton: So there was a very long transition from my original academic training in physical chemistry into science and technology studies. And then unexpectedly, I became interested in the history of computing. Should I go into all of that?
John Markoff: Yeah! Oh, absolutely.
Rebecca Slayton: Okay. So as a physical chemist, I was working in a laser lab. We were doing ultrafast laser spectroscopy. And when my friends would come to see the lab, this was the time of Austin Powers and Dr. Evil. And they all wanted to see my laser, meaning the weapon that was going to destroy the earth, which struck me as very funny because lasers are terrible weapons. They're extremely inefficient. They're flaky. Most of my life as a physical chemist was dealing with equipment, trying to get it to work properly. And so I became interested in this question.
Why is there this popular image of the laser as something that is very powerful and a wonderful weapon? And as I decided that I was really interested in the history of science and technology, I decided to propose a postdoctoral project that would be a retraining project in science and technology studies, where I would look at the Strategic Defense Initiative, often popularly called Star Wars, which was a proposal by President Ronald Reagan to make nuclear weapons, quote unquote, impotent and obsolete through the use of advanced technologies.
And lasers and directed energy weapons were just one part of that program, but a very sexy part. You know, he gave this speech shortly after the original Star Wars trilogy had come out. And so I decided to study that, and I thought, you know, well, I have this background, this technical background, in laser technology.
Also, I was very interested in the public authority of scientists, how it is that scientists make their claims persuasive in the policy arena, and how arguments that are persuasive to one audience may be not persuasive to another. So I was interested in that process of generating authority through rhetoric and various kinds of arguments. So as I started getting into the project, I thought I was going to focus on the arguments of the physicists, but then I noticed there was this really interesting set of arguments being made by computer scientists.
And they had a very different structure of reasoning than those of the physicists. So physicists would talk about laws of nature. Here is the best you could do with your directed energy weapons. And then anything more than that is limited by the laws of physics. Look, we've got geometry, you've got horizons. This is what you can do to stop nuclear weapons with laser technology.
The computer scientists, they didn't deal with laws of nature. They dealt with fallible technology. And the physicists would say, you know, assume all the technology works perfectly. You can't do better than this. And the computer scientists were saying, look, what we know is that technology never works perfectly the first time you field it. You have to go through extensive testing and you don't really get your software debugged until you've used it realistically.
And so then the question became, well, how would you use software realistically to test a nuclear missile defense system? We're not going to have trial nuclear wars. Of course, you can imagine similar issues, of course, have come up over the whole history of nuclear testing. How do you know that your nuclear missiles are going to hit their target properly? And that's a very contested area and has been. And it's one reason that we have way more nuclear weapons on the planet than we should because, well, if we're not sure it's going to hit accurately, we'll just build more nuclear weapons.
But basically when I was looking at the way the authority of the physicists and the authority of the computer scientists worked, it was very different. Physicists appealed to the laws of nature. Computer scientists talked about the limits of engineering. And so because they were talking about the limits of engineering, they could always be accused of just being bad engineers. Well, look, roll up your sleeves, be a better engineer, design the technology better, and then you'll be able to make it work. And so they were limited in a different way.
On the other hand, their arguments had a very common sense appeal. So they would say, look, it's Murphy's Law. Nobody necessarily understands the laws of physics, but they understand Murphy's Law.
If something can go wrong, it will. So I became interested in these very different kinds of arguments, the different kinds of authority that were being generated. That became the focus of my first book, because I decided to go back in time and look at the whole history of missile defense, from really even before we developed the hydrogen bomb; it started with air defense.
And actually a lot of really interesting original computing work came out of those first air defense systems, nuclear defense systems: the first real-time computing and the first major networked computing came out of that era. And the question the book sought to answer was why the arguments of the computer scientists were so marginalized relative to those of the physicists.
John Markoff: Let me ask you about the cultures of those two communities, because the physicists had this rich history of dealing with the consequences of their inventions. Out of World War II and nuclear weapons, there were these organizations, atomic scientists, various organizations. I think this was super early when you wrote your book. Computer Professionals for Social Responsibility, did they exist?
Rebecca Slayton: Oh yes, they formed shortly before the Star Wars initiative.
John Markoff: I have one side story to tell you. I believe I'm responsible for the term Star Wars with respect to SDI. So I was at Pacific News Service and I wrote an early piece, this was probably I don't know when it was, I can't remember, but anyway, the headline writer for the packet that went out used the term Star Wars.
It wasn't in my article anywhere, but it was in the headline. And then it caught on. Pacific News Service got picked up by, it was syndicated, so hundreds of newspapers around the country picked it up and Star Wars became part of the lexicon. I think that my article was the first one, I don't have absolute proof, but anyway.
Rebecca Slayton: That would have been 1983.
John Markoff: Yeah, it could have been then. So anyway, apropos of nothing, but we were talking about these communities, and the computer scientists were behind the physicists, but they were already active in thinking about the consequences of their technological development, and you took advantage of that. Was it that community?
I mean, CPSR grew out of an organization that was just a couple of miles from here. Well, I was at the first meeting. It was at Stanford and it was some people at Xerox PARC who were thinking about those issues and put this national organization together.
Rebecca Slayton: Yeah, so CPSR, Computer Professionals for Social Responsibility, initially grew out of the nuclear freeze movement around 1982, which was the height of the nuclear freeze movement, when you had, out in Central Park in New York, thousands and thousands of people protesting and saying, we need to do something about nuclear weapons. The Reagan administration was talking about winnable nuclear wars, and the Computer Professionals for Social Responsibility also asked Xerox PARC to do something to support this, and they did, actually.
The company did actually put some money into, I mean, they tried, of course, to be apolitical, but into public education about these issues. Now, Star Wars, I'll call it the Strategic Defense Initiative since that was the official name. I mean, Reagan's speech in 1983 said nothing about lasers, right?
But it quickly became called that. That kind of defanged the nuclear freeze movement, because all of a sudden Reagan said, well, yes, we understand everybody's concerned about these terrible weapons, but we're going to do something about that. We're going to use high technology to eliminate the threat of nuclear weapons. And wouldn't it be better to think about defenses than offenses, after his administration had initially been very offensive and aggressive in its rhetoric? So it kind of took the edge off of the nuclear freeze movement; even though it didn't address their fundamental concerns, the rhetoric of it took the edge off.
John Markoff: So you came out of a pure science background. Did you face, I don't know to what extent you were actually working with lasers, but did you face some of the ethical dilemmas that some of these communities faced in terms of DOD funding as a researcher and as a...
Rebecca Slayton: I have never accepted DOD funding. That's a great question. All of my work as a physical scientist was National Science Foundation.
Now there were people in my lab that were funded by the Defense Department to work on quote-unquote energetic materials, aka explosives. But what we were doing was so pure science that... and I used that term with full knowledge that there's no such thing as completely pure science. It was highly abstract and theoretical. It did not have any near-term application. We never… by the time I was doing my PhD, SDI had sort of just been absorbed into the rest of the missile defense program and was nothing really unique.
John Markoff: Have you continued... I mean, in the interim between the time your book was published and now, the physicists seem to have lost their primacy in terms of what's going on in US science policy. Although, you know, it's interesting, let's bring CASBS into this. Arati Prabhakar, who is now the president's science advisor, is a physicist. Has it always been true that physicists are picked to be science advisors? Arati was a CASBS fellow.
Rebecca Slayton: Yeah, it's shifted over time. So that's part of the historical story that I found so interesting: coming out of World War II, the physicists were sort of the preeminent scientists. They were given credit or blame, depending on your perspective, for the development of atomic weapons, which were commonly credited with ending World War II.
They did end the war. They didn't win the war. And so everybody wanted to have a physicist at their party, that kind of thing, to explain quantum mechanics and nuclear technology at that time. And that is part of why the challenges of computer technology were just not central then. Today, you know, big tech and information technology, we see that as a driver of innovation. All of the really big, wealthy companies are, you know, in information technology of some form. That was not the world then. And so, yeah, physicists were the core of the President's Science Advisory Committee.
But that has shifted over time. The President's Science Advisory Committee has diversified, has had a lot more biologists, particularly as biotech got geared up and I think more information technology over time as well.
John Markoff: I also wanted to ask you, in sort of the arc of your career, so you've become a historian, you've become a social scientist, sort of. Talk about how your methodology has changed. I mean, your methodology in the lab was very circumscribed. How do you think about it now?
Rebecca Slayton: That's a great question. So part of why I shifted was I found myself very limited in the questions I could ask by my PhD training. My advisor was somebody who had a hammer in search of a nail, and he found a lot of good nails. But I didn't want to have to be constrained in that way.
So the social sciences and history felt a lot more flexible to me. Now, of course, as I went into it, I realized that actually there's a heck of a lot of intellectual investment in learning any kind of new discipline or method. When I first went into it, I think I was very heavily oriented towards quantification, wanting to find some way to quantify things, but also very skeptical of quantification. And so I gravitated towards more qualitative studies as time went on, really looking at history and historical narratives.
Because part of what drew me into science and technology studies was recognizing that the stories we tell about our past have a powerful impact on policy and how we make sense of the world and how we imagine our future.
John Markoff: Let me sort of push a little farther on that methodology question. So you have moved to doing research in a world where there are challenges, I would think, or I want to ask you about this, of doing policy research in a world that is tightly bound by corporate, military, and intelligence agency secrecy. How have you found that world? Are there strings to pull on?
Rebecca Slayton: I mean, is the principal challenge that a lot of it happens in the dark? You know, more of it is public than people realize. Part of how I get around that is that I'm very interested in public perceptions to start with, and public policy making. Because I'm a historian, and because a lot of times I'm dealing with government policy, a lot of documents have been declassified or released or leaked from the past.
I think it's actually much harder to study, and this has been a challenge for me as I've moved more into cyber security, history of corporations, because there aren't good records and there are no laws that require corporations to let you see how decisions were made behind the scenes. It's actually been easier to study the history of Defense Department work or National Security Agency work, despite all of the secrecy, because there's been a lot of Freedom of Information Act releases, and you can always pursue that.
John Markoff: Yeah, so documents are a big part of the...
Rebecca Slayton: Yeah.
John Markoff: In 2016, I think, that's sort of the first thing, I mean, maybe you had done other earlier things, but there was a piece you wrote about the offense-defense balance, and I was wondering what pulled you to that particular question.
Rebecca Slayton: What struck me was that people were talking about offense-defense balance in completely different ways. So what it meant for offense to have the advantage in cyberspace meant different things to different people.
For some people, it meant economic advantage, right? That it was cheaper for the offense to get in, and that's been, historically and academically, the definition of offensive advantage: that it's more expensive to defend than to attack. But when I talked to computer scientists, they said, well, no, we just mean that if the offense tries hard enough, it'll win. If somebody really wants to hack you, they will. But that's different than saying it's cheaper.
And then some people have even a third conception, which is a first-mover advantage. Because the offense can go first, it has advantages that the second mover, the defense, doesn't, right? So there are these different conceptions, and I felt like people were talking past each other.
And so I tried to, in that paper, really make it more rigorous: if we define very clearly what offense advantage means, how would you spell that out? How would you operationalize that? And what I found, I kind of started it as a farce, actually, because I thought, well, part of what's interesting is that when people talked about offense-defense balance in the nuclear weapons arena, they were talking about hardware.
And with hardware, you're in a particular kind of economic arena where most of the costs are actually production costs, just replicating missiles or defensive missiles, radars, all of those things. But there's a small upfront cost of research and development, whereas with computer technology and software, it's all research and development, which really changes how we think about the economics of offense and defense. And so I thought, you know, almost as a joke, if we look at Stuxnet, which was US, Israel, a bunch of allies trying to basically undermine Iran's nuclear enrichment facility at Natanz, can we estimate how much work went into, how much money went into launching that attack and defending against it?
And I thought just going through that exercise will illustrate how difficult it is to come up with any kind of standard cost measurement, because you're talking about human labor, you're not talking about rolling missiles off the production line. And I kind of did it as a joke and just a thought experiment, but what I found was that it seems likely that actually the offense spent a lot more than the defense, which is the opposite of what people mean when they say offense has an advantage, right? And then the other thing we had to think about was what are the benefits, right?
And what I found there was that if you look at the benefits, just try to estimate the benefits in terms of how a nation values that attack. Look at the cost of sanctions on Iran due to their nuclear program, for example, and it's an order of magnitude more than the cost of the offense. So you can say, well, yeah, the offense is actually more expensive than the defense, but who cares, because the perceived advantages of that offense were so high.
Yeah, it'll cost us more than it'll cost them to clean up, but so what? Our advantages are so valuable. So trying to break down, I sort of started it as a joke almost and as a thought experiment, but then I realized actually there's something interesting here to be gained, even though I think the numbers are very, very fuzzy, and I hope people don't take those too seriously.
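To make the arithmetic concrete, here is a minimal Python sketch of the cost-based framing Slayton describes. Every dollar figure below is a purely hypothetical placeholder, not an estimate from her paper:

# Toy model of a cost-based offense-defense balance, in the spirit of
# Slayton's Stuxnet exercise. All numbers are illustrative placeholders.

def offense_defense_ratio(offense_cost, defense_cost):
    # A ratio above 1 means the attack cost more than the defense --
    # the opposite of a cost-based "offensive advantage."
    return offense_cost / defense_cost

offense_cost = 300e6       # hypothetical: development, intelligence, testing
defense_cost = 20e6        # hypothetical: cleanup, hardening, replacements
perceived_benefit = 3e9    # hypothetical: value the attacker places on the
                           # result, e.g. measured against sanctions costs

print(f"offense/defense cost ratio: {offense_defense_ratio(offense_cost, defense_cost):.0f}")
print(f"benefit/offense-cost ratio: {perceived_benefit / offense_cost:.0f}")
# Even when offense costs more than defense, a large enough perceived
# benefit can still make the attack "worth it" -- the point Slayton draws.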
John Markoff: It gets complicated. Well, this is a general question about that world that you're looking at. You know, Stuxnet struck right at the heart. Well, maybe it's over. It's on the cyberweapon side of things, but there's this continuum between spying and offensive weaponry that's particularly blurred in this cyber world, and I don't even know if you can pick it apart, but I'm thinking about it in the question of stabilization, destabilization. Have you explored that?
Rebecca Slayton: Right, so one of the things people are very concerned about with cyber is that it can be very difficult, because launching a cyber attack always starts with intelligence, and the nation that is the target of that intelligence can't always tell whether it's just intelligence or something more, and so that can be very destabilizing. You might get a strong reaction to the intelligence, because that actor is assuming that actually you're preparing for an attack. We haven't seen a whole lot of...
I mean, it's hard to know what goes on there, and that's where secrecy becomes an issue, because it's hard to know what perceptions were behind the scenes. There's not a lot of evidence that it has been super destabilizing in that sense. I think the main instability that we see is that anytime you start entering a network, you risk having unintended consequences because these systems are so unpredictable and complex and poorly understood that even a very careful operation may have unintended consequences.
So there was one, for example, US-led operation that ended up knocking out a server farm in Texas that affected a bunch of civilians in the United States because they were trying to disrupt something abroad, and that kind of thing happens all the time. So there's that kind of instability. So far, we haven't seen so much instability that we know of, and there seems to be some evidence from war gaming, and this is not my field, so I say this very carefully, that because officials recognize that this is a new area that they don't understand very well, they're just more hesitant to be super reactive.
John Markoff: I guess I'm asking this in the context of things like tools that are known as zero days in the cyber world, where nation states now stockpile them, there's a black market business in selling them. The companies are in this interesting position where they're charged with providing security for their customers, and at the same time, they have some sort of murky relationship with the government, which may have its own priorities. I mean, it's just this weird world that's hard to sort out, I guess.
Rebecca Slayton: It is, yeah. So the whole vulnerabilities market is fascinating to me. And that market itself can potentially create vulnerabilities. So my colleague Ryan Ellis at Northeastern has written about how it is that the very fact that a company is now stockpiling or storing vulnerabilities makes it a target for hackers who want to get those vulnerabilities. So people think about vulnerabilities markets as potentially reducing vulnerabilities by giving hackers incentives to find things and responsibly disclose them rather than exploit them. But they also create new targets for exploitation.
John Markoff: And that's a tension that's not gone away. I attended your CASBS talk and you focused on the Morris worm. My role in that was reporting that it was Robert Tappan Morris who was the instigator of that attack that brought a very young internet to its knees for a day.
I always thought that that was a significant event because it was the first time that the American public writ large had any sense of the power of networks for good and bad. I mean, networks were really not on the policy or national agenda until the Morris worm. And then it sort of redefined the way things were. You were looking at it in a sort of formal sense of what it did to the security community in the world. I mean, was that the focus of your?
Rebecca Slayton: Yeah. So the Morris worm, November 2, 1988, was largely believed to be an accident, launched by a Cornell graduate student who was trying to run a little experiment to show how open and vulnerable the Internet was. And the Internet was at that time mostly a research network with a bunch of academics on it.
It wasn't what we think of today as the Internet. There weren't a lot of people on the Internet. But he made a mistake in his programming that caused the worm to go out of control and basically shut down the Internet by tying up servers, making them too busy.
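As documented in Spafford's post-mortem, the fatal flaw was a reinfection check: the worm asked whether a copy was already running but, reportedly to defeat spoofed "already infected" replies, reinstalled itself anyway about one time in seven. A toy Python simulation, with illustrative parameters, shows how even that small probability piles copies onto every machine:

import random

# Toy simulation of the Morris worm's reinfection flaw. Parameters are
# illustrative, not measurements of the 1988 event.
REINFECT_PROB = 1 / 7    # reported chance of reinstalling despite a copy
HOSTS = 100
ROUNDS = 50
ATTEMPTS_PER_ROUND = 3   # hypothetical infection attempts reaching each host

random.seed(1)
copies = [0] * HOSTS     # worm processes running on each host

for _ in range(ROUNDS):
    for h in range(HOSTS):
        for _ in range(ATTEMPTS_PER_ROUND):
            if copies[h] == 0 or random.random() < REINFECT_PROB:
                copies[h] += 1   # another copy starts, eating CPU and memory

print(f"average worm copies per host: {sum(copies) / HOSTS:.0f}")
# Copies accumulate without bound, so load grows until machines stall --
# the "too busy" servers that effectively shut down the 1988 Internet.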
And that was a moment of reckoning for the Internet community, which was then primarily an academic community. Gene Spafford at Purdue University, a computer scientist who was involved in helping clean up, called it an attack from within, because computer scientists had kind of trusted each other to just not take advantage of these vulnerabilities. And they realized how easily this could happen.
So Robert Tappan Morris was eventually found out. His father was at the National Security Agency, and he was the first person convicted under the Computer Security Act.
John Markoff: Computer Fraud and Abuse Act.
Rebecca Slayton: Thank you. The Computer Fraud and Abuse Act of 1986. And fortunately for him, he didn't serve prison time. But one of the things that I find interesting about that act is that it criminalized the act of hacking, but there have never been any penalties for the suppliers of computers and software for failing to provide better security.
John Markoff: I noticed in your 2016 piece on the offense-defense balance, early on you cite Joseph Nye on the design of the Internet. The Internet, I think he argued, was never designed to be secure. It was designed to share information. It wasn't designed to make information private or secure. And so they were always catching up with the problem, since they didn't foresee it.
Rebecca Slayton: Well, I think they did foresee it, and I think it's a common misconception that it was just an accident that the Internet was not secure. Actually, particularly after the Morris worm, there was an explicit choice made, where the managers of Internet development at DARPA, the Defense Advanced Research Projects Agency, and the Defense Department said, you know, there are trade-offs, and what we want to optimize with this system is innovation, evolution, evolvability. We want to be able to continue to evolve the system.
If we want a really secure system, we've got to lock it down. We've got to limit access. We've got to impose strict rules, and we don't want to do that. And so, yes, the system will be vulnerable. And what we're going to do instead of imposing those strict rules, despite the fact that we realize they've created vulnerabilities that allowed this outage to happen, is we're going to continue to do research to improve security, but we'll also start what are called computer security incident response teams, or computer emergency response teams. So the focus came to be on response.
We expect that there will be hacks and outages and problems. And so we'll have a network of responders, a decentralized network of responders to clean up and to help hopefully mitigate and make it less likely that these things happen. And so that really started a new institution at that moment after the Morris worm.
John Markoff: That perspective is really interesting to me. There is this, I'm thinking about the question of security. You can make something perfectly secure. You can put it in a box and put a lock around it. They have networks that do that. They're physically air gapped and stuff like that. But that makes them very hard to use. And there's this continuum of ease of use. And, you know, these things are meant for laymen to use. And so you don't want them to be hard to use. And so that's a trade off that you have to live within when you're doing security. So is that sort of what you're getting at in this design question?
Rebecca Slayton: Absolutely, absolutely. So there's always a tension between usability and security. And there's a whole field of research that's grown up around usable security. How can we make systems that are both secure and easy to use?
John Markoff: I noticed there was a phrase you used that I hadn't seen before. You talk at one point about something I didn't recognize, the anti-security movement. Is there an anti-security movement? And what were you referring to?
Rebecca Slayton: Yes, so the person who's written about this much more than I have, and who really tipped me off to this, is Matt Goerzen, who's at Harvard. He's a Ph.D. student in their History of Science department.
But yeah, there was an anti-security movement; it really took off around the turn of the millennium. And it was partly triggered by the fact that the security industry started really going mainstream as the internet went online and governments and corporations started to invest in security. A lot of the hackers sort of started leaving the community and going to work for companies.
And the anti-security movement opposed the security industry, and the quote unquote "security industry" was sort of a meme. It was not always clear what that meant, the security industry, except that it was bad. And the reason that many underground hackers saw it as bad: some of them complained that it was actually undermining security by taking advantage of unpaid work by hackers, creating fear, uncertainty, and doubt, and then getting lots of money for it, potentially disclosing vulnerabilities that could be used for hacking. But there was also some sense that they just didn't want to lose all of their secret knowledge that allowed them to get into systems and have fun.
John Markoff: That's really interesting. So this is sort of, I was involved in covering this at that point, and I was in contact with a lot of those communities. I covered the rise of that security industry. There was an RSA conference that sort of defined it. And one of the reasons I walked away from the field in 2011 is I would go to these conferences and they got bigger and bigger and bigger every year, more commercial activity. And at the same time, security in a quantifiable way became demonstrably worse every year. There was this huge disconnect. And I have a very clear memory of a dinner I attended with the CTOs of a bunch of cyber companies. And there was this one weird moment. They were joking about what a wonderful industry this was because they could kind of dial up the level of business by just having an incident happen. This was hypothetically. And I was watching these guys and I saw the eyes going like this. Nobody said anything, but I got this sense that that was really happening. I have no idea. I was never able to report on anything. But it seemed like a very strange industry.
Rebecca Slayton: Yeah, no, it's quite ironic. There's a way in which the more the industry fails, the more it profits. Right? Because the more security breaches there are, the more people want to pay to prevent them. And this paradox actually is part of what got me into my current project: this idea of how it is that you can demonstrate security. Security is this abstract thing. You can't prove that you have it. The only thing that these experts will promise is that if somebody wants to break into your system bad enough, they will. So how do we trust these people? How do they gain any kind of credibility?
John Markoff: And describe your current project. Is this what you came to CASBS this year to work on? And it's a book on the history of cyber?
Rebecca Slayton: Yeah, a history of cybersecurity expertise, looking at how we deal with the fact that... how do you demonstrate that you have actually secured a system? And what drew me to it is the fact that, first of all, we can't ever prove it. And that a lot of times, oddly, it seems like security experts prove how good they are by breaking things, which is weird, right?
I mean, if you went to a doctor, they wouldn't say, let me prove that I'm a really good doctor. I'm going to break your arm. But that is what a lot of hackers do. Look, I can break into your system, so therefore, you better pay me to fix it. Because if I can do it, somebody else can.
John Markoff: Well, that goes back. I mean, you know, I got to know Robert Morris Senior well and Robert Tappan Morris just a little bit. But the relationship between father and son completely defines what you're talking about.
I mean, Robert Morris Senior spent a lot of time teaching Robert about the field of cybersecurity. And there was this culture of testing security, proving security by attacking security. And his son was tutored. I mean, that was the culture at the time; it was very much alive and well. And, you know, they saw themselves as white hat hackers early on. I mean, white, gray, black. And Robert grew up in that world very clearly. And I was at Robert Morris Senior's home once, and he made this weird aside that I never figured out.
He picked up a disk and said something about what Robert had done. And it's like, if he had just sort of known about what was here, this wouldn't have happened. I always wondered whether the father was more involved than was publicly known.
Rebecca Slayton: Possibly. Well, clearly he trained his son a little too well.
John Markoff: Yeah, we'll have to wait for Robert Tappan Morris's memoir to answer that question. So where are you in your book? I'm sorry. That's a terrible question. But do you have any sort of preliminary thoughts?
Rebecca Slayton: So I came here to work on the book, and instead I got focused on some related questions that really grabbed me, like trying to think about what we mean by vulnerability. What is a vulnerability? How do we think about that across different disciplines?
But I have worked on the book, and one of the things that I've done through conversations with colleagues is think about how to organize it around key moments where the political dynamics around cybersecurity change, shift. One of the things I find fascinating about the rise of this field is that it's not a classic story of professionalization that you would expect from the sociology of the professions. You do have a complete proliferation of all of these different ways of trying to generate credibility.
So, lots of certifications, professional certifications. But I really don't see that as a professionalization process. It's much more a process of learning to make a lot of money off of certifications. It's a huge business. So, yeah.
John Markoff: So, encryption is part of this overall computer security debate. And I was wondering, in terms of your offense-defense framework for analysis, the debate has been about backdoors. That's the policy debate.
It's still going on. Still not settled. But all of a sudden, our mutual friend Whitfield Diffie, who pioneered public key cryptography, is now going around and speaking on the issue of quantum computers and the threat they pose to encryption, which is a classic resource question. Have you had a chance to look at that at all?
Rebecca Slayton: Not as closely as I would like. So I was involved with editing a volume in honor of Whitfield Diffie and his co-inventor of public key cryptography, Martin Hellman. And they developed a very robust set of systems and technologies for securing communications, secret communications, across public lines.
But what they've always known is that if you throw enough computing power at an encryption algorithm, you can break it. The threat with quantum computing is that, if it can amplify your computing power because it can operate on shorter time scales than our current processors, it could suddenly make a whole bunch of encryption technologies that we think of as secure today no longer secure, because today's processors can't keep up. So there's always this sort of cat and mouse game with encryption.
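As a rough illustration of that cat-and-mouse scaling (my gloss, not a claim from the conversation): Shor's algorithm would break public-key schemes like RSA outright, while Grover's algorithm "only" square-roots the work of brute-forcing a symmetric key, effectively halving its bit strength. A small Python sketch:

# Illustrative scaling of brute-force work against an n-bit symmetric key.
# Classical search tries ~2^n keys; Grover's quantum search needs ~2^(n/2)
# iterations. (Shor's algorithm attacks RSA/ECC differently, breaking them
# outright at practical key sizes.)

def effective_bits(n_bits, quantum=False):
    return n_bits / 2 if quantum else n_bits

for n in (56, 128, 256):
    print(f"{n}-bit key: classical ~2^{effective_bits(n):.0f} ops, "
          f"Grover ~2^{effective_bits(n, quantum=True):.0f} ops")
# A 128-bit key degrades to 64-bit strength against Grover, which is why
# post-quantum guidance favors 256-bit symmetric keys and, for public-key
# cryptography, entirely new algorithms.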
John Markoff: I realize that in the intelligence community, historically, long before quantum computers were on the horizon, they were always storing old encrypted messages in the hope that in the future they would... In fact, they have historically broken things later, and they've even arrested people based on... Spies have been found out in organizations. So I think about the challenge now, because there's so much encrypted information to store. You have to store a lot of stuff. I guess you can store the stuff you're interested in.
Rebecca Slayton: Yeah, and one of the really influential things that Whit and Marty did was to criticize what the National Bureau of Standards, now the National Institute of Standards and Technology, put forward as the Data Encryption Standard in the 1970s, by showing that within five to ten years, processors were probably going to be able to break it pretty easily, and they were right. Now, the National Security Agency had some hand in trying to make sure that the encryption was not too good. But even within the National Security Agency, there was some debate about whether they did make it too good, and it actually slowed them down.
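Their critique was essentially arithmetic: DES's 56-bit key gives 2^56, about 7.2 x 10^16, possible keys, and machine speeds were improving fast enough to make an exhaustive sweep tractable. A back-of-envelope Python version, using hypothetical round-number search rates rather than Diffie and Hellman's published figures:

# Back-of-envelope exhaustive search against DES's 56-bit keyspace.
# Search rates are hypothetical round numbers.

KEYSPACE = 2 ** 56                        # ~7.2e16 possible DES keys
SECONDS_PER_YEAR = 365 * 24 * 3600

def sweep_time(keys_per_sec):
    seconds = KEYSPACE / keys_per_sec
    if seconds >= SECONDS_PER_YEAR:
        return f"{seconds / SECONDS_PER_YEAR:,.0f} years"
    return f"{seconds / 3600:,.0f} hours"

for rate in (1e6, 1e9, 1e12):             # one chip, then massive parallelism
    print(f"at {rate:.0e} keys/sec: ~{sweep_time(rate)} to sweep the keyspace")
# 1e6 keys/sec:  ~2,285 years -- hopeless for a single 1970s processor
# 1e9 keys/sec:  ~2 years     -- plausible for a dedicated machine
# 1e12 keys/sec: ~20 hours    -- a million chips in parallel; roughly what
#                                the EFF's Deep Crack machine showed in 1998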
But I'm glad we got to squeeze this in under the wire.
John Markoff: Yeah! Thank you. This is fun.
Rebecca Slayton: Yeah, likewise.
Narrator: That was Rebecca Slayton in conversation with John Markoff. As always, you can follow us online or in your podcast app of choice. And if you're interested in learning more about the Center's people, projects and rich history, you can visit our website at casbs.stanford.edu.
Until next time, from everyone at CASBS and the Human Centered team, thanks for listening.