Human Centered

Bridging Adaptive Algorithms and the Public Good

Episode Summary

Pulitzer Prize-winning tech journalist John Markoff chats with 2022-23 CASBS fellow Nathan Matias about often-overlooked public interest questions and concerns regarding the deployment of tech platform algorithms and AI models. Specifically, Matias is a player in filling the two-way knowledge gaps between civil society and tech firms with an eye on governance, safety, accountability, and advancing the science — including the social science — of human-algorithm behavior.

Episode Notes

Pulitzer Prize-winning tech journalist John Markoff chats with 2022-23 CASBS fellow Nathan Matias about often-overlooked public interest questions and concerns regarding the deployment of tech platform algorithms and AI models. Specifically, Matias is a player in filling the two-way knowledge gaps between civil society and tech firms with an eye on governance, safety, accountability, and advancing the science — including the social science — of human-algorithm behavior.
 

Nathan Matias: Cornell University faculty page | CASBS bio | Personal website |

Citizens & Technology Lab

Coalition for Independent Technology Research

Select Matias publications

"Humans and Algorithms Work Together — So Study Them Together" Nature (2023)

"Impact Assessment of Human-Algorithm Feedback Loops" Just Tech, SSRC (2022)

"The Tragedy of the Digital Commons" The Atlantic (2015)

"To Hold Tech Accountable, Look to Public Health" Wired (2023)

Link to more Nathan Matias public writing | Matias on Medium | on LinkedIn |

------
Read John Markoff's latest book, Whole Earth: The Many Lives of Stewart Brand (Penguin Random House, 2022)
 

Episode Transcription

Narrator: From the Center for Advanced Study in the Behavioral Sciences at Stanford University, this is Human Centered.

Adaptive algorithms interact with human behavior in ways we cannot yet reliably predict or evaluate. But while we're mostly not allowed to examine what's happening inside the systems of private tech companies, neither can those companies really see the full extent of their sometimes harmful discriminatory impacts on people's lives. How do we even study, much less govern, these bidirectional relationships and knowledge gaps to advance safety, accountability, and the common good?

Today on Human Centered, a conversation with 2022-23 CASBS fellow Nathan Matias, an Assistant Professor of Information Science and Communication at Cornell University. Matias leads the Citizens and Technology Lab at Cornell, a research group independent of industry that organizes community and citizen behavioral science and consumer protection research for digital life. He works alongside affected communities to advance the science of human-algorithm interaction by testing pragmatic interventions for a fairer, safer, more understanding Internet.

Engaging Matias is former New York Times tech journalist and Pulitzer Prize winner John Markoff, a close friend of our podcast. Markoff himself was a CASBS fellow in 2017-18 and is the author most recently of Whole Earth: The Many Lives of Stewart Brand. Public interest questions and concerns often get left out of the equation when it comes to the deployment of algorithms and artificial intelligence.

But as you'll hear, Nate Matias foregrounds these concerns, revealing them in contexts such as legal liability and immunity for platforms, adaptive system design and testing, open source algorithm deployment, content moderation, prevention versus remediation, and more. Should tech platforms be subject to democratic processes? Are there practices or tools of recourse for individuals when tech companies won't cooperate?

If not, how do we develop such practices and tools? All roads point to more citizen science, including citizen social science. Nate kicks off the discussion by describing an essay he published in the journal Nature in 2023, which we'll link to in the episode notes.

And a quick heads up, later on in the episode, you'll hear my voice again asking Nate a few questions. I just wanted to let you all know now so that when the time comes, you won't think your podcast is broken and starting over from the beginning. So with that out of the way, on with the show.

John Markoff: Let’s start with your recent Nature article because it'll lead us in lots of different directions. So let me ask you to just outline the kind of points you were trying to make and for the audience, describe the sort of thesis of the article.

Nathan Matias: The article has two points really at the heart of it. The first is a point about the political and societal implications of our inability to predict and manage adaptive algorithms. And the second point tries to outline how we might try to fill those gaps in science.

I start by telling the story of the Supreme Court case, Gonzalez v. Google, where there are allegations that ISIS, which did use YouTube to recruit people, had its videos promoted by the platform's algorithms in ways that made terrorist attacks more likely. And in fact, the family of Nohemi Gonzalez, who died in the Paris terror attacks in 2015, made that allegation.

And the tricky thing for science is that if you talk to computer scientists or social scientists, and ask them, could YouTube do anything to actually change their algorithms or anticipate or prevent tragedies like this? The answer is, we don't really have the science to do that. Whether you're worried about stock market flash crashes or Google Maps algorithms sending people into forest fires or other patterns that result from algorithms reacting to humans and humans reacting to algorithms, that's a dynamic that scientists are really struggling to reliably understand.

John Markoff: I was thinking about the outcome of this case and how it might interplay with whatever might or might not be happening with Section 230 of the Communications Decency Act. Could this be a placeholder? The courts step in and they say that these companies have responsibility. I mean, that's sort of what's at play in 230, isn't it?

Nathan Matias: 230 has a complicated relationship with these questions about the behavior of algorithms because there's an open question about who bears responsibility for the behaviors of some kind of allegedly autonomous system. And one crowd says, well, ultimately these things are owned by companies and they should bear responsibility. That just as car manufacturers have responsibility, to some degree, for crashes that are linked to design, so should algorithm makers.

But Google, in this particular case, argued that whatever its algorithms did, whether or not its algorithms were responsible for recruiting or directing terrorist activity, they should still have immunity from any liability due to Section 230, and that, they argue, is really the only way to have a viable tech industry. I think that there has to be some path forward that identifies actors who should be expected to take reasonable actions to prevent these kinds of problems. But I do think the sides in the legal debates have been drawn so clearly, each trying to preserve its own position, that it often leaves the really important scientific and public interest questions out.

John Markoff: What did the court rule?

Nathan Matias: The court ruled that, I want to get the language right, the lawyers for the Gonzalez family had not made a strong enough case for them to consider it in greater depth, so they basically punted on the question. Maybe some future case will raise the issue again, maybe we will get regulation, but they were very circumspect about any of the merits of the arguments made. One of the gaps we face is that the statisticians, the engineers, the social scientists who work on these questions don't really have a good bridge to the legal minds who are working on these issues. And one project that I developed while here at CASBS was actually a workshop and series of conversations we're going to be having later this year to try to build bridges so that people who understand those legal questions can dive into the science and those of us who are developing the science can better understand how the research we create can meaningfully inform policy on this issue, which is so fraught and so important.

John Markoff: You have a couple of sort of society and technology laboratories and what is it, the Center for Independent Technology Research.

Nathan Matias: Yeah, I think I need a rule that I should start no more than one organization per year. The organization that I helped start this year is the Coalition for Independent Technology Research. This is a non-profit that works to support and defend the right to do independent research on technology and society, and it's independent from my research lab.

That's really focused on protecting researchers when they come under threat from companies that are uncomfortable with the consequences of research that people are doing. The lab that I lead at Cornell, the Citizens and Technology Lab, is a community science, a citizen science lab that works directly with the public to study the impacts of technology on society and also test ideas for change. And that's the lab that is organizing this series of workshops with legal scholars and scientists and communities because people come to us because they have real concerns and problems and hopes for their digital lives.

And sometimes science is the answer, sometimes policy and law are the answers as well. And if we can bring those forms of expertise together, then the communities who work with us can make the most out of the research they're doing.

John Markoff: Have you looked at the regulatory process in Europe in the context of AI regulations, and does it play into this in any meaningful way? The situation that I see is that there's been a lot of discussion about the open source deployment of these models, and they're out there, anybody can play with them now, they increasingly run on individuals' machines, and to me that seems like it might make the regulatory challenge more difficult.

Nathan Matias: The most consequential policy changes we've seen have come from Europe and where I've been asked to speak to folks in the US government, it has been around helping them understand how changes in Europe are going to affect what kind of research and what kind of policy happens in the United States. Beyond that, I think there are a number of agencies that are circling around the question of whether an existing agency within the US government is able to regulate AI, you know, the FCC has been doing some work, the NTIA is another agency that just requested public comments, but I think it remains to be seen how and what the US government will be doing at a national level. I do think that at the current moment, with so few ways to actually test things and their impacts, it's very dangerous to put these things out so widely.

I got to visit the site of one of the largest oil spills in American history, which happened in Kern County. In a moment where there wasn't a lot of regulation, people had the ability to drill wells, but they didn't yet have the ability to prevent spills. It was an open field day, and so they drilled a lot of wells.

They said, let's just try it and see what happens, and we'll find the problems. A hundred years later, the ground by that old well is still black with oil that hasn't fully been remediated. I think, given our past history with putting things out into the world that we don't yet know how to control, I personally, as a scientist, think that we really desperately need to develop the science to do this testing rather than just unleash these things willy-nilly.

John Markoff: A lot of this is playing out in interesting and sometimes, to me, seemingly depressing ways on Twitter right now. Is anybody doing research on this current iteration of Twitter and what its impact might be?

Nathan Matias: A great number of researchers have continued to study behavior on Twitter and the conduct of the company since its acquisition by Elon Musk. I think, for example, of Jonathan Mayer at Princeton, whose team has studied security vulnerabilities on Twitter. Other folks have studied the spread of misinformation and also the rises in hate speech and harassment on the platform.

There's been a concerted effort to keep research happening on the platform because the company has been actively working to cut off researcher access and just in the last few weeks is threatening to take legal action against researchers who are studying the platform.

John Markoff: Are there special APIs that researchers can use?

Nathan Matias: There used to be special APIs that researchers could use. They are being cut off by the company and the company is also starting to demand exorbitant fees for continued access, which is their ostensible reason for the threats of legal action. So, watching that unfold has validated our hypotheses for building this coalition in the first place, that when times are good and companies are looking to hire lots of researchers and they want the prestige of being associated with academia, then you get this really friendly relationship between academics and companies that has accomplished quite a lot over the last couple of decades.

But when you face hard economic times, when companies are distrusted by the public, when the public is less comfortable with research funded by industry, and when there's the threat of regulation, then companies will go after researchers. And we need institutions that will protect us to work in the public interest at those times.

John Markoff: What's the current state of play? Is everything over between Facebook and Social Science One? There's no active...

Nathan Matias: I'm aware that there are some boards that exist. I don't know if Social Science One as an entity is continuing. Here at CASBS, Neil Malhotra is involved in this large 2020 election study, which I believe people are hoping to report results from sometime this year.

So there are some collaborations that the company has, but at the moment, the picture for Facebook is one of dramatically declining researcher access. They've gone after researchers who were investigating the validity of their claims about January 6th and the insurrection at the Capitol. They have also made moves that appear to be undermining a number of the platforms through which they gave data access to journalists and academics, and the Oversight Board still doesn't have access to the kind of data they need in order to evaluate whether the company is actually implementing any of the recommendations that the Meta Oversight Board gives them.

John Markoff: Let's talk about algorithm design and the intentional and unintentional consequences of these designs. I mean, so I assume that on these platforms, algorithm design is probably intentionally focused on advertising to one degree or another. And what's known?

And then there's this notion, I think you mentioned it, about the tendency of these algorithms to elicit emotional kinds of behavior, and that tends to create more engagement, I guess, and that's what they want. They want you to stay engaged. What's known about the design of these algorithms?

Nathan Matias: So computer scientists have been writing about and creating adaptive algorithms for almost 30 years at this point. One of the earliest ones from the 90s was a music recommendation algorithm developed by some students at the MIT Media Lab. And you can think of the Google search engine as another example.

And so there is a substantial literature that describes how people thought about and created these things. Within a company, what typically happens is that you have a set of business priorities that are turned into metrics. And a single company might have thousands of metrics that they monitor at any given time.

But they'll probably pick a smaller set to prioritize for a given system. It might be ad revenue, it might be new users, it might be the retention of users. They pick a few, and they then set up the adaptive system so that it will basically try different things and, on the basis of which behaviors collectively optimize those outputs, will choose to repeat those kinds of patterns. It's actually not that different from the mathematics of oil drilling.

There's this exploration-exploitation pattern, where you spend a certain number of actions on exploring what might yield the biggest payoff. And then once you find what yields the biggest payoff, you exploit it. And that's actually how we teach it in universities, through exploration, exploitation, and oil drilling.
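To make the exploration-exploitation pattern described here concrete, the following is a minimal, hypothetical sketch of an epsilon-greedy loop optimizing a single engagement metric (clicks). The item names, click rates, and the simulate_click function are invented for illustration and are not any platform's actual system.

    import random

    CANDIDATE_ITEMS = ["video_a", "video_b", "video_c"]   # things the system could recommend
    EPSILON = 0.1                                          # fraction of actions spent exploring

    clicks = {item: 0 for item in CANDIDATE_ITEMS}         # observed reward (the chosen metric)
    shows = {item: 0 for item in CANDIDATE_ITEMS}          # how often each item was recommended

    def simulate_click(item):
        # Hypothetical stand-in for a real user's reaction.
        base_rates = {"video_a": 0.05, "video_b": 0.12, "video_c": 0.08}
        return random.random() < base_rates[item]

    def choose_item():
        # Explore: occasionally try something at random.
        if random.random() < EPSILON:
            return random.choice(CANDIDATE_ITEMS)
        # Exploit: otherwise repeat whatever has optimized the metric so far.
        return max(CANDIDATE_ITEMS,
                   key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)

    for _ in range(10_000):
        item = choose_item()
        shows[item] += 1
        if simulate_click(item):
            clicks[item] += 1

    print({i: round(clicks[i] / shows[i], 3) if shows[i] else 0.0 for i in CANDIDATE_ITEMS})

In this toy version, a small share of recommendations explores at random and the rest exploit whichever item has earned the most clicks per impression so far; societal costs that are not part of the metric never enter the loop.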

John Markoff: And then of course, there's this societal or marketing industry parallel where, you're suggesting, they're doing exactly the same thing, or they're doing something in parallel.

Nathan Matias: That's right. And where it becomes a problem is that they might not know how to account for, or even be thinking about, the societal costs, right? What is the impact on people's mental health? That's harder to measure than clicks, for example.

Producer Joe: Could you say a little bit more about, we talked about how access to the information from these companies is shutting off. And I guess even when it was there, we had reports that they weren't really sharing all the stuff. But how do we understand these algorithms and their effects? And how do you, I guess, triangulate or reverse engineer these things without access to the data, without access to what they're doing behind the scenes?

Nathan Matias: I love that question. People often assume that you need to understand the inner workings of something to understand its impacts. And one productive path for studying the relationship between algorithms and society is to get into the internals of these things, look at the data they're producing, understand the code.

But even with that information, the researchers and engineers at tech companies still lack everything they need to understand the problem. And that's why it's so important to create other forms of data collection outside of industry to fill that gap. So one example, which I wrote about in the article, involves inviting people to monitor and log what they experience, so that you can see what the algorithm is doing and you can see how people are reacting.

It's a big reason to do citizen science, because while companies can see what's happening inside of their system, they can't necessarily see what's happening in people's lives. So when people come to us with their stories, when people collect data themselves, they bring a profoundly important part of the picture that might not be visible to companies or regulators. In fact, there are a lot of things that we can do without access to those internals.

One project I've been working on here at CASBS is a project to forecast online harassment by just observing the behavior of an algorithm from the outside. So you look at, for example, the recommender system on Reddit, what it's promoting and what it's suggesting, and we found a correlation between a discussion being promoted by Reddit and the rate of harassment that occurs. So even without having access to how the algorithm works or any of the data internal to the company, we're now working on a model that can give people early warning before harassment happens, just because we're able to monitor the external behaviors of that particular algorithm.

So we may discover that while the full scientific picture is going to require a lot of internal information, we can still do a lot of science that helps inform governance and safety even without access to that information.
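As a rough illustration of observing an algorithm purely from the outside, the sketch below uses hypothetical daily logs of how prominently a recommender promoted a discussion and how many harassment reports followed, then checks whether promotion predicts harassment and flags high-promotion days for early warning. The numbers and the threshold are invented placeholders, not CAT Lab's data or model.

    from math import sqrt

    # Hypothetical observations collected without any internal platform access:
    # how prominently a discussion was promoted each day, and harassment reports the next day.
    promotion_scores = [0.2, 0.9, 0.4, 0.8, 0.1, 0.7, 0.95, 0.3]
    harassment_reports = [3, 14, 6, 11, 2, 9, 17, 4]

    def pearson(xs, ys):
        # Pearson correlation between two equal-length series.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    print(f"correlation between promotion and harassment: {pearson(promotion_scores, harassment_reports):.2f}")

    # A crude early-warning rule: flag days when promotion exceeds a threshold,
    # so moderators can prepare before a harassment spike arrives.
    THRESHOLD = 0.75
    for day, score in enumerate(promotion_scores):
        if score > THRESHOLD:
            print(f"day {day}: promotion {score:.2f} above threshold, issue early warning")

A real forecasting model would be far more careful about confounding and timing, but the point stands: every input here is something an outside observer can collect.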

Producer Joe: That makes me wonder, can individuals or I guess a broader community start to develop a suite of practices or counter algorithmic programming that could be used to protect or intervene in these situations?

Nathan Matias: Yeah, that idea of knowledge and practices and technologies that help people survive and manage life in the context of algorithms they cannot control is one of the core remits of the project I've been working on here at CASBS. Just like people living along a river, they might not be able to control the pollution that goes into it, but they can monitor the river and they can warn each other when there is dangerous bacteria or there are other kinds of pollution. We can do something similar online.

And so we've developed projects that can help reduce the spread of misinformation by algorithms even when people can't control the algorithm, and we're working on this harassment problem. And there are any number of other issues where you might not actually need access to the internals to be able to create reliable knowledge that helps people manage a problem in context. And so if one outcome of our work is this suite of techniques, that would be very valuable because, let's be honest, let's say the US government passes regulation and requires certain kinds of transparency and accountability, we don't even have a tenth of the staff who would be able to investigate and monitor the vast amount of adaptive software that is in play today.

And in addition to that, there will be countless companies that don't comply with transparency and accountability. And so if we're able to create tools of science that work for people, even when companies don't play ball, those tools end up being some of the most valuable of all.

John Markoff: So what are the other things that play besides algorithms that are fostering this?

Nathan Matias: That's the right question because you can think of the Facebook newsfeed as a mirror. It's reacting in a particular way. It's not an unbiased mirror.

It's designed in a particular way to achieve certain ends for its creators. It's reacting to individual psychology. I see something on the Internet that makes me upset and I post it online and then my friends give me validation and I feel good about my morals because I get that validation and all those clicks and that's work that psychologists have documented.

But also there are institutions at work. Yochai Benkler has documented really beautifully how political actors with a lot of money, who control various broadcast media outlets, are also driving the supply of content, which then shapes what people share and what algorithms are able to distribute. And so whenever we think about these human-algorithm behavior questions, it's not enough just to think like a computer scientist and say, well, what is the algorithm going to do with the supply it has?

It's not enough just to think like a psychologist and say, how are people going to react to the stimuli? You also need to think at the level of sociology and organizations to ask, well, how are institutions acting? How are they responding to these dynamics? And how does this drive the complex system?

John Markoff: So where does then content moderation fit into that?

Nathan Matias: Content moderation is basically the tool we have to staunch the wound of all of these problems. When people can't measure, identify, forecast or prevent problems, what we're left with is response.

And a Meta executive, Nick Clegg, wrote this really telling article a couple of years ago called It Takes Two to Tango, where he talked about the relationship between human behavior and algorithm behavior. And where that article lands is the note that although Meta can't necessarily make changes to their algorithms that prevent some of these crises, they do spend just unimaginable amounts of money doing content moderation, basically surveilling human behavior and policing us in order to protect their algorithms from us. The idea is that if someone is saying something hateful or encouraging violence or doing something discriminatory or spreading misinformation, the company's algorithms could amplify and spread that.

Without the ability to prevent that otherwise, you find yourself monitoring everyone, trying to remove things and scrub what the algorithm sees. That's happening at companies like Meta. It's this vast operation also being conducted by some of the more responsible AI firms as they try to protect their own products and the public from the worst excesses of what their algorithms are doing. And if we don't figure out this science, we're just going to see that surveillance and policing expand more and more in human society.

John Markoff: It occurs to me, as these companies try to minimize cost, that language models are probably a tool that they will add to their armament in terms of, I mean...

Nathan Matias: It’s already being done.

John Markoff: And do you have any sense of how, are there any reports or has anybody looked at the consequences?

Nathan Matias: So companies do not in any way submit to any kind of third-party evaluation of the automated tools they use for content moderation. Where publicly available systems have been tested by researchers, we've seen that these systems exhibit some of the exact same biases and inequalities that we've seen in other systems. So a classic example would be a system called Perspective developed by Jigsaw, which is a think tank within Alphabet, the owner of Google.

And researchers did an audit study of this content moderation filtering algorithm. And they found that not only did it not catch racist comments as much as one would hope, but it automatically labeled the ordinary speech of black Americans as racist. And it just wasn't able to handle differences of context.

And so when you think about it, in theory machine learning could be a really powerful tool, and sometimes is a really helpful one, to help with content moderation, especially as a way to reduce the human impact of requiring people to look at horrible things in order to decide what should stay online. It's a gargantuan and almost quixotic task to take all of human culture as it evolves in real time and not only organize a labor force to monitor and police what we say and do to each other, but then also to reliably train automated systems to do that at scale.
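The audit of the Perspective system described above can be thought of as comparing classifier scores across groups of comments. Here is a minimal sketch of that audit logic; score_toxicity is a toy stand-in for whatever model is under test, and the comments are invented examples rather than research data.

    def score_toxicity(comment: str) -> float:
        # Placeholder classifier; a real audit would call the model being evaluated.
        flagged_words = {"stupid", "hate"}
        words = comment.lower().split()
        return sum(w.strip(".,!") in flagged_words for w in words) / max(len(words), 1)

    # In the actual audit, the groups were comments from different dialect communities;
    # here they are just two invented sets of comments.
    comments_group_a = ["I hate how this thread keeps getting derailed.",
                        "That take is just stupid, read the article."]
    comments_group_b = ["Appreciate everyone sharing sources here.",
                        "This community has been really welcoming."]

    def mean_score(comments):
        return sum(score_toxicity(c) for c in comments) / len(comments)

    print(f"group A mean toxicity: {mean_score(comments_group_a):.2f}")
    print(f"group B mean toxicity: {mean_score(comments_group_b):.2f}")

An audit looks for systematic gaps like this across dialects or identity groups, then examines which comments are misclassified and why.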

John Markoff: Early on, there was this period where there was a community of people who were early internet users who were very optimistic about the notion of online and virtual communities. I mean, John Perry Barlow was probably the extreme. There was going to be this Socratic abode that was going to be separate from the real world, but you know, the Howard Rheingold view that these online communities would be very positive.

There was a professor, a political scientist at Stanford, I'm blanking on his name now, who was studying the decline of face-to-face interaction in society. And you know, he had been looking at television viewership, and then he extended his research to online interaction. And he was making the argument that this was a form of isolation, which touched off a backlash; the online community hated this as an idea, because they believed that we were being brought closer together.

He was saying we were actually becoming more isolated. In terms of your work, does that dichotomy, is it of interest? And what's your view on what it's done to the traditional communities?

Nathan Matias: Here's how I think about it. People want to be able to make broad proclamations: technology is X, X is good for society, it's bringing this benefit, it's bringing this detriment. But it's clear that these things have mixed outcomes.

And as someone who does community science, I work with people who face the consequences in their daily lives. So, for example, I work with communities of color who face racism and harassment and discrimination online, and who come to a conversation where someone says, look at how helpful the Internet has been for people on average.

And if they're part of a marginalized community, you can have something that's great for people on average, but causes real harm for others. And so this is one of the reasons why community science matters. That when people try to take the big picture, we lose the detail of how something is actually affecting people's lives.

You have this big debate about mental health and social media. And one side says it's been really beneficial to people. And another side says, but look at all of these problems that it creates for some.

And it becomes really a debate about whether you care about the average effects and are willing to leave behind people who are harmed by a thing, or whether you care about making sure that a system or technology or governance system actually works for everyone. And so that's why we start with the small scale and we study things in context in people's lives and then we build up from there, rather than starting with the large scale, which often leads us to miss both the harms and some of the beautiful transformative gains that people make in their lives.

Producer Joe: How can people who are interested in participating get involved with projects like the Citizens and Technology Lab or things like that? I don't know if there are other websites or browser extensions or apps or what. How do people engage with this?

Nathan Matias: One way is to go to our website, citizensandtech.org, and follow our work. There are other organizations as well, including Consumer Reports, which has a new digital lab that's doing very interesting work in this space. And The Markup, which is a relatively recent journalistic outlet, is very dedicated to involving the public in these kinds of investigations.

One of my hopes for the coming couple of years is that we'll be able to pool our resources so that we can provide multiple options for people who are interested to get involved in studying these things, and can do so through whichever organization is involved.

Producer Joe: What's the current funding situation like for independent research in this space?

Nathan Matias: A lot of research on technology and society has understandably been funded by the tech industry, and it's been a really beneficial and fruitful arrangement. Companies have funded generations of scholars who've gone on to do lots of great work, and some have gone on to work in industry, and that's been a mutually reinforcing cycle that's done a lot for science over the last few decades. But as the public trusts industry-funded research less and less, and there's data from Pew to show that this is happening, funding from industry has become more complicated for scholars.

We're worried not just about the substantive influence on what we can do; we're worried about the perception of influence and what that does to public trust in our findings, especially on issues where the public trusts companies less. At the moment, there are really only a few sources of funding for research on technology and society that aren't the tech industry. There's a certain amount from the National Science Foundation, but they've been trying to stretch that further by mixing it with tech industry money, reducing the pool that researchers like me can draw from.

And there are a small number of foundations who've been interested in supporting tech accountability research. You know, CAT Lab is funded by Ford, MacArthur, the Knight Foundation, and a few others. But I do think that for the social sciences, and science more generally, to play a trusted role in the governance of the tech industry, we're going to need to see substantial increases in the funding that goes to independent research, whether that comes from the government or other sources.

And that's one of the things that the coalition that I've been working on here at CASBS is committed to advocate for, not just for any individual lab, but for the science as a whole.

Producer Joe: And that makes sense because if you want something to be more democratic, part of democracy is not just openness of control, but also transparency of things like funding.

Nathan Matias: That's right. And we have that pattern in other areas of science. For example, in the late 19th century, agricultural safety testing like food safety testing was partly underwritten by the land-grant system that supported the state universities of this country.

Similarly, environmental science research has over time cultivated sources of funding that are independent from the fossil fuel industry. And so I think we're going to need that and we'll get there, but it's going to take both concerted effort across the sciences to collectively grow the resources as well as pioneering donors who are excited to help meet this democratic need.

Producer Joe: So other than some of the things you've been involved with and you've written, I'm wondering if there is a paper or a book or article for someone who was an everyday user of these technologies but wanted to learn more and understand them better. Is there something you think really gets at the current landscape and scope of these issues?

Nathan Matias: I think two books come to mind. The most accessible introduction to artificial intelligence and adaptive algorithms, I think, is Meredith Broussard's book Artificial Unintelligence. She's a journalist.

She's written this very clear and compelling book that walks people through algorithms that they might encounter in their everyday life, and then explains how they fit in a technical and policy environment. So she has this great chapter that looks at algorithms used by schools, and then looks at where those algorithms fit into broader education policy, for example. Another book is David Robinson's book Voices in the Code.

This is the only book about multivariate optimization that brought me to tears, and it's a book about the democratic process of design and oversight of the kidney allocation system, which is this complex algorithm born out of the fact that there aren't enough kidneys to go around. And American society has decided that we are going to have a common algorithm for deciding who gets them, and that's a really, really hard problem. And what David has done has been to tell the story of the people who helped bring that about, and the story of the really difficult debates that led to the current system with lessons for how to go about governing algorithms more broadly as a democracy.

It's a beautiful book. Yeah, I think we have a choice, which is whether we want the function of these systems to be subject to democracy. Do we see these as public goods, things that matter to all of us and consequently people should have a say in?

And when that's possible, then you get these beautiful stories of people having the hard conversations with each other about what we collectively value. And sometimes you win, sometimes you lose, but you have trust in a system that you have a voice in. And when the decisions are being made behind closed doors by people who want to keep it secret, you don't even have the chance for those beautiful stories to happen.

John Markoff: Finally, post CASBS, do you have a principal research project that you're excited to pursue? What's next?

Nathan Matias: So the project we've been talking about is probably a 10 to 20 year project. So up next for me is to continue to coordinate scholars on this issue, build more relationships with communities who are affected by algorithms and excited to study them, and also develop the new kinds of scientific instruments and computing innovations on privacy, ethics and statistics that we need to actually get to the point where someone could say, I think this policy could help people rather than hurt people. I think this particular algorithm is going to reduce a problem.

And I think that we can reliably forecast or intervene in a way to keep people safer when they face risks from these complex problems.

John Markoff: Thank you.

Nathan Matias: Thank you.

Narrator: That was Nathan Matias in conversation with John Markoff, discussing adaptive algorithms and the public good. As always, you can follow us online or in your podcast app of choice. And if you're interested in learning more about the Center's people, projects and rich history, you can visit our website at casbs.stanford.edu.

Until next time, from everyone at CASBS and the Human Centered team, thanks for listening.