Human Centered

Demystifying the Disinformation Marketplace

Episode Summary

There will never be enough independent fact-checking of online political advertising and its ecosystems. Can we develop methods and tools to demonetize, or at least disincentivize, the behaviors of disinformation producers as well as the ad firms and content providers in business with them? 2023-24 CASBS fellow Ceren Budak navigates the disinformation marketplace and illuminates pathways for better design of online communities and platforms in conversation with Pulitzer Prize-winning tech journalist and former CASBS fellow John Markoff.

Episode Notes

There will never be enough independent fact-checking of online political advertising and its ecosystems. Can we develop methods and tools to demonetize, or at least disincentivize, the behaviors of disinformation producers as well as the ad firms and content providers in business with them? 2023-24 CASBS fellow Ceren Budak navigates the disinformation marketplace and illuminates pathways for better design of online communities and platforms in conversation with Pulitzer Prize-winning tech journalist and former CASBS fellow John Markoff.
 

CEREN BUDAK: Faculty webpage | Personal website

Referenced in this episode:

"Misunderstanding the harms of online misinformation." Nature 630, 45–53 (2024)

The Prosocial Ranking Challenge (Center for Human-Compatible Artificial Intelligence)

"Intermedia agenda setting during the 2016 and 2020 U.S. presidential elections." Proceedings of the International AAAI Conference on Web and Social Media, 18(1), 254-275. 

Lawrence Lessig's Pathetic Dot Theory (Wikipedia)

----

Read John Markoff's latest book, Whole Earth: The Many Lives of Stewart Brand  (Penguin Random House, 2022)

 

Episode Transcription

Narrator: From the Center for Advanced Study in the Behavioral Sciences at Stanford University, this is Human Centered.

In today's media landscape, we see social media companies reducing or altogether eliminating independent fact-checking. Other internet platforms publish third-party content but run from the responsibility of acting as, or even being perceived as, so-called arbiters of truth. It's a troubling trend that's only intensifying.

And amid this, consider the role of display advertising in our online information and disinformation ecosystems. Ad firms are the middlemen between content providers and brands. What kinds of ads and ad firms support different kinds of misinformation and disinformation producers?

What are the disinformation monetization channels? Is demonetizing the disinformation marketplace at all possible? Or can we otherwise induce disinformation producers into changing their behaviors?

How do researchers determine what counts as high-credibility versus low-credibility media without being seen as political actors themselves? On the consumer end of the equation, is this a story about unsuspecting exposure to algorithmic influence, like we're all told? Or is it more of a demand-side issue relating to human behavior?

And sure, we're always going to have portions of the public that are uninformed and misinformed, but what can we learn that informs the design and architecture of better, or at least better regulated, online communities and platforms? Today on Human Centered, a conversation with 2023-24 CASBS fellow Ceren Budak, an associate professor at the University of Michigan School of Information. Budak utilizes network science, machine learning and crowdsourcing methods, drawing on scientific knowledge across multiple social science communities to contribute computational models to the field of political communication.

Recently, her work has focused on topics related to news production and consumption, election campaigns, and online social movements. Some of this work quantifies the degree to which different retailers and ad firms support misinformation and disinformation through ad placements. To tackle these thorny questions, Budak tours the disinformation marketplace with former New York Times tech journalist and Pulitzer Prize winner John Markoff, a 2017-18 CASBS fellow and a familiar voice occasionally heard here on our podcast.

As you'll hear, John draws upon his superb reporting from the birth of the Internet to illuminate some of the design decisions that haunt today's disinformation landscape. Along the way, the two spend time talking about a high-profile 2024 perspective piece Budak co-authored in Nature titled "Misunderstanding the Harms of Online Misinformation." We'll link to that article in the episode notes, along with the Prosocial Ranking Challenge she discusses and other material relevant to today's discussion.

But now, buckle up and get ready to take a trip through the disinformation marketplace. Let's listen.

John Markoff: Let's begin. What was your focus this year at CASBS? Where did your research take you?

Ceren Budak: Yes, so my proposed project was on the role of advertising in the misinformation ecosystem. Particularly, looking at display advertising and determining what kinds of brands and ad firms are supporting what kinds of misinformation producers, and building tools based on this information to bring more transparency into this space: inform consumers, inform journalists who might want to write pieces about this, and hopefully nudge these brands and ad firms to change their behaviors. I've spent some time here focused on that, but my year here also took me in some other related directions as well.

One of those was really more existential, I guess, in thinking about: what is misinformation? What are these low-credibility news producers that I'm trying to stop with my work, and how are they indeed different from traditional news organizations? And I also spent some more time thinking about other monetization channels.

So the goal was initially to look at display advertising, as I said, but I spent some of my year here thinking about how these producers are supported through donations as well, and how presumably well-established, credible nonprofits might be funneling money to misinformation sites that are also registered as nonprofits.

John Markoff: So at this point, based on the research you've done, and having read through it, I'd like to ask you to summarize some of the general takeaways: what distinguishes low-quality from high-quality sources, and what did you find in terms of the influence they have?

Ceren Budak: Yeah. So we had two studies. The first study was focused on the role of the ad firms.

So these are basically, if you will, the middlemen that sit between the content providers (in this case, low-credibility or high-credibility news outlets, which we can also refer to as publishers) and the brands that are advertising on them.

So these ad firms are basically running these real time auctions to determine which kinds of ads are going to be seen when you load a page. So that was the focus of the first study. And the reason that we focused on those players first is that our assumption going into this, or hypothesis going into this, was that a small number of ad firms really are sort of dominant in the space.

And therefore, if they were to change their behavior, the change in this ecosystem would be huge. So really, we're looking for solutions where we can go as researchers, present our data, and convince a small set of ad firms to stop partnering with misinformation or low-credibility news sites, and have a significant effect. And indeed, that's what we found in our data.

What we were finding concerned the top 10 credible ad servers. We're really separating out the ad servers and ad firms that show you malware; they are already demonstrating that they don't care about their credibility. So we're focused on the ones that presumably should care about their credibility. And looking at the top 10 such ad servers, which are owned by a handful of ad firms, they were responsible for 60 percent of the low-credibility news ad traffic.

And that concentration is really high; it's actually much higher than for the high-credibility news domains, which seem to work with a larger number and variety of ad firms. In fact, Google alone in our data was responsible for roughly half of the low-credibility ad traffic. So if they were to stop working with the publishers on our list... we don't have the right counterfactuals.

Presumably, if Google were to dump them, they would have to find a new ad firm. But presumably, that ad firm would be lower quality, would give them lower quality ads, and they would make less money at the very least.

John Markoff: Your focus on display advertising, that part of the equation: does that make data acquisition by its very nature more transparent, because of the nature of that beast?

Ceren Budak: Right, yeah, great question. It's in some ways more transparent because I can be very clear about the methodology that I used to collect that data, and I'm not bound by platforms. I really try to remain an independent researcher, and doing that kind of work, removed from the platforms, allows me to do that.

So, in this display advertising work, that work was not focused on platforms per se. So, it's really looking at the home pages of both low credibility and high credibility news outlets. We basically have this web emulator at scale, quote unquote, visiting these websites on a daily basis, loading the pages, loading the ads, recording where they are coming from, and basically creating a transparency tool as a result of that.

So, I don't have to negotiate with a platform to figure out what that data access structure would look like and also having to give them any sort of control over what the published work would be as a result.
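To make that concrete, here is a minimal sketch, in Python with the Playwright browser-automation library, of the kind of daily crawl she describes: visit a publisher's home page headlessly and record which ad-server domains the page calls out to. The publisher and ad-server domains below are hypothetical placeholders, and the real pipeline is of course far more elaborate.

```python
# A minimal sketch, not the study's actual pipeline, of the measurement
# described: load a publisher's home page in a headless browser and
# record which known ad-server domains the page requests.
# Requires: pip install playwright && playwright install chromium
from urllib.parse import urlparse
from playwright.sync_api import sync_playwright

PUBLISHERS = ["https://example-publisher.com"]       # hypothetical
KNOWN_AD_SERVERS = {"doubleclick.net", "adnxs.com"}  # hypothetical

def registrable_domain(url: str) -> str:
    """Reduce a URL to its last two host labels, e.g. ads.example.com -> example.com."""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

def ad_servers_on_page(url: str) -> set[str]:
    """Visit a page and return the known ad-server domains it requested."""
    seen: set[str] = set()
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Log the origin of every network request the page makes.
        page.on("request", lambda req: seen.add(registrable_domain(req.url)))
        page.goto(url, wait_until="networkidle")
        browser.close()
    return seen & KNOWN_AD_SERVERS

for site in PUBLISHERS:
    print(site, "->", ad_servers_on_page(site))
```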

John Markoff: Maybe you can describe how we evaluate sources. How do we assess what is a high or low value source?

Ceren Budak: Yeah. So, in my work, I rely on the expert judgment of fact-checkers or scholars who are trained to do such assessments. Most often, I've been relying on categorizations that are at the publisher or domain level.

So, basically, PolitiFact or the Poynter Institute or scholars out there who have inspected these domains will have a list of domains that they identify, in one shape or another, as low credibility. And some lists have sub-categories that we can also utilize. So, we rely on those kinds of lists.

But as I mentioned before, the agreement is pretty low. In my work, I generally do robustness studies. I ask: what if I used one list versus another, would that substantially change the findings?

And the answer there is, it really depends what kind of question you're asking. If you're asking questions around prevalence, for instance: we found in one of our earlier papers, looking at the 2016 election and conversations about it on Twitter at the time, now X, and what percent was misinformation, that depending on what list you choose, the number was between 3 percent and 40 percent, right?

So, I think it's really important for us not to claim or talk as if that's a solved problem. It's a very challenging problem. There are some really low-credibility news sources that appear across multiple lists, and some that appear on only one list. And part of the challenge, or part of the reason, is that fact-checkers obviously have limited time to do this work.

They can't look at all publishers.
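As a toy illustration of that list-dependence, consider how the same set of shared links yields very different prevalence numbers under a strict versus an expansive (entirely hypothetical) low-credibility list; this is a sketch for intuition, not the study's code.

```python
# A toy illustration, not the study's code, of why prevalence estimates
# depend on the domain list: identical link data yields very different
# "percent misinformation" under a strict vs. an expansive list.
from urllib.parse import urlparse

shared_links = [                                  # hypothetical link data
    "https://site-a.com/story", "https://site-b.com/story",
    "https://site-c.com/story", "https://site-a.com/other",
]
strict_list = {"site-b.com"}                      # one fact-checker's list
expansive_list = {"site-b.com", "site-c.com"}     # another, broader list

def prevalence(links: list[str], low_cred: set[str]) -> float:
    """Percent of links pointing at domains on the low-credibility list."""
    hits = sum(urlparse(u).hostname in low_cred for u in links)
    return 100 * hits / len(links)

print(f"strict list:    {prevalence(shared_links, strict_list):.0f}%")     # 25%
print(f"expansive list: {prevalence(shared_links, expansive_list):.0f}%")  # 50%
```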

John Markoff: Google is now the subject of an antitrust lawsuit. Have you been able to enter that information into the policy discussion?

Ceren Budak: I did have some conversations with folks from Google. And their response to that was that they didn't do it. And I don't know if I can really say if this person was speaking for the entirety of Google.

One maybe reasonable response from their end that I heard is that they don't want to be arbiters of truth. The explanation on their end was that they allow brands to choose publishers to blacklist. In their mind, they made it easy enough that brands can say, here is a list of publishers I do not want to work with.

But I asked, do people use that? And the answer was by and large no. So yes.

John Markoff: One of the themes you've discussed was that the general perception is that exposure to this type of content was very high, but perhaps that wasn't entirely accurate.

Ceren Budak: Yeah. So first of all, the paper is a perspective piece. So it's not sort of original scholarship, but really summarizing past scholarship.

And yes, what we're saying there is that, at least in public discourse, there is this narrative that misinformation is rampant, prevalent, sort of ruining our democracy, in particular misinformation on social media. So we're really trying to pull that apart and understand why that's the narrative out there, and in what ways that general narrative is misaligned with past work.

And one of them is that usually these articles are calling out really big numbers, you know. Huge. Like 10,000, hundreds of thousands, which sounds really big.

But when you put it in the context of just how much overall exposure there is on social media, it's a small fraction. So this is really about what is the right denominator. So that's sort of one type of denominator that's generally not used to contextualize the numbers out there.

The other denominator that is not used is how much of a person's overall news exposure is coming from those sources. So one of them is a population-level denominator; the other is an individual-level denominator. And again, if you're looking at online browsing behavior, this low-credibility news is a small fraction of what people consume, and if you add in other channels like TV and so forth, it's a much smaller fraction still. The other aspect that we highlight there is the fact that consumption of such content is highly skewed.

A really small fraction of the population is responsible for a large fraction of the consumption out there, and that's been found by various scholars across multiple papers. And I think that's really important, because we're setting the wrong benchmark for social media platforms to meet. If the narrative is around average exposure, that's actually a really low bar for them to cross, because most people are not consuming misinformation. But if we set the bar at the tails of the distribution, saying: tell us your numbers for the most misinformation-consuming people on your platform, how much misinformation do they consume? That's a lot higher than for an average consumer.

So part of it is sort of shifting the focus. And the final point is that a lot of work that's out there is focused on sharing on social media as opposed to consumption. So consumption is more private, like what you read versus sharing is what you post on social media.

Because we don't get access to that private information, a lot of studies are using that sharing data as a proxy, if you will, for what we get exposed to. And we also highlight studies in our paper showing that these are not necessarily good proxies or substitutes for one another. In fact, shared content tends to have more misinformation, at least based on some studies, compared to privately consumed content.

So again, if you're only looking at the shared information, you might be looking at a larger number than the reality.
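A toy calculation makes the denominator point concrete: a raw count that sounds huge can still be a tiny share of overall exposure. The numbers below are made up purely for illustration.

```python
# A toy calculation of the "right denominator" point: a count that
# sounds huge can still be a tiny share of overall exposure.
misinfo_views = 500_000          # hypothetical headline number
total_views = 10_000_000_000     # hypothetical total platform exposure
share = 100 * misinfo_views / total_views
print(f"{misinfo_views:,} views = {share:.3f}% of all exposure")  # 0.005%
```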

John Markoff: One question that came to mind was, how do we disambiguate the sharing and spread of misinformation for its intended or primary purpose versus the sharing of misinformation as educational or mocking or almost inoculating purposes? So for example, someone sharing a video of a political figure because they like what's being said, or someone sharing the same video, but from the perspective of, can you believe this nonsense?

Ceren Budak: Yeah. I think those are great points. I feel like there are multiple points in there which we can look at individually.

One of them is, how do we define misinformation? And indeed, it's not easy. There isn't agreement, even among experts, even among fact-checkers.

They're much better than individuals like you and me, but they still don't agree perfectly. So that's definitely a challenge in our field. In terms of people sharing, as opposed to believing: you can imagine it could be a signal, signaling to your group that you're in-group. That doesn't necessarily mean that the information is influencing you, right?

That's absolutely another important point that I think you pointed to. And the final point, which is connected to another point that we make in this paper, is around what's the distinction between sort of being exposed to or sharing something as opposed to being influenced by it. So that kind of goes to questions around correlation versus causality.

And that's another point that we highlight in our paper. And that's also connected to one other point, which is, again, we observe what we observe in terms of what people share, what people click. But that includes in it both the sort of algorithmic influence as well as your demand effects.

So you're seeking out that information to begin with. And in the paper, again, we point to some recent studies highlighting that demand effects, you actually seeking out the information, explain more than the algorithmic influence. Which, again, doesn't mean that there is no algorithmic influence, but it's more about: if we really care about this information environment and we're trying to improve it, where should our attention go? Some of it should go to the algorithms and the platforms, but some of it should perhaps go to, okay, why is it that these people are seeking out such information?

What is our sort of political system? And how do we think about the levers in that system, sort of zooming out and really thinking about this in a more holistic way?

John Markoff: How do we build systems to address information without seeming like we are political actors ourselves?

Ceren Budak: I mean, I think one answer to that question is, once we correct some misperceptions about the prevalence of misinformation, perhaps we can get folks to understand that it's actually not going to change most people's information environments in meaningful ways. Because again, most people are not even seeking out political information. Most people do not get exposed to misinformation.

And that's true also for right-leaning folks. So for most people, it actually wouldn't change the reality of their situation. That's why I think it's really important to set the public discourse correctly.

But especially for the extremes, if we were to try to fact-check their media feeds and alter them, those people would be the most affected. And that's a political struggle. I should say that there are some scholars looking at fact-checking and trust in fact-checkers, and thinking about how you can create more transparent fact checks. Maybe balancing fact-checking somewhat evenly across both sides, even though on one side you're going to catch more problematic content than on the other, can perhaps increase trust in these institutions.

I think there are some efforts there as well.

John Markoff: You introduced me to Larry Lessig's Pathetic Dot model of behavior, which I thought was fascinating. I hadn't run across it before, so I'm still processing it. But it was interesting to me because it's a model of constraints.

I've typically thought of online behavior as being less constrained, or differently constrained, than face-to-face discourse.

Ceren Budak: Yeah.

John Markoff: You were looking at it specifically with respect to display advertising, but I wonder if you could generalize that. What can we take away from that model of behavior?

Ceren Budak: Yeah, I find that model really helpful, if nothing else to structure the way that I look at the potential ways we can regulate or minimize problematic behaviors online. The other reason I like that model is that it is less about the individual and more about the structure, the world that we live in, and I found that useful in my work. In that model, there are four different regulators that Lessig focuses on: the law, market forces, the norms of the communities that we are embedded in, and finally what he calls the architecture, which is basically the environment that we live in.

It's absolutely true that the online environment is different from the environment we live in day to day in the offline world. So what are some ways that it's different?

One of them could be that on some platforms, for instance, you're anonymous; you might behave differently in an anonymous setting. On some platforms your political identity might be hidden, or accentuated; that might change things. Or it could be as simple as a like button being there, a dislike button being there. If it's there, it's going to be used, right?

So much like how, if you put a road somewhere, people will drive on it and go and interact with people in that space, or the road might separate two communities. I've done some work focused on these architecture-type questions. For instance, we looked at what happens once you introduce threading to conversations.

So, people commenting on a news article or a post in a commenting system: if it was just serial, one comment after the other, versus threaded, where if I'm responding to you it's usually clear that I'm responding to you. How does that change behaviors and attitudes? The nice thing about the online world is that those kinds of changes are now accessible for researchers to examine their effects, whereas it's a little harder to do that in an offline environment.

John Markoff: What did you find with the threading question?

Ceren Budak: All right. So there, we actually had expectations of, for instance, changes in toxicity, which we did not find. Part of that might be that we were looking at the Guardian, and the toxicity levels were already pretty low.

So it might be a ceiling effect, if you will. But we did find that it actually changed the stickiness of the platform. If you're being responded to, you engage more, you stay around longer on the platform.

And that's explained both by people writing to you as well as by reciprocation: when you say something, it's more clear that people are responding to it.

John Markoff: I was particularly interested in Reddit as a sub-case in this question. And I noticed that you mentioned that Reddit had a whole norm about not spreading disinformation. And I'm curious how effective their norm-setting efforts are.

Ceren Budak: So great question. So we had... Yeah.

So I think norms are really important for us to focus on. And Reddit is a fantastic platform to study that, because each community, which is defined by a subreddit, has its own rules, its own moderators, its own way of determining what kind of content belongs there.

And some of that is descriptive. They have their rules written on the right-hand side of the subreddit view, if you go and visit it.

And some don't have any rules, or at least no rules about misinformation. So we did not look at misinformation per se, but we had one project where we looked at toxicity.

And we were really curious about how it is that different communities with vastly different toxicity norms sustain their observed norms of how much toxicity is accepted in a particular community. Using large-scale observational data, we were able to identify subreddits that were really toxic and subreddits that were really, really low on toxicity. And we were interested in how newcomers adapt. Is it that, for instance, subreddits select people who already have a preference for the same level of toxicity as the community?

So that would be a selection effect. Or is it that once you decide to join a community, you adapt, you change your behavior even before your first comment? Or is it that you learn over time and change, maybe you're a really toxic person, but after maybe 10 comments, you reach the subreddit level?

Or is it that people who have different toxicity preferences basically drop out faster or at higher rates? The thing that we found there was that the biggest effect by and large was people changing their behavior by their first comment. So people are really adaptable, and context really determines how we behave.

So that's, I think, a really important thing for us to think about. For platforms, and it doesn't have to be platforms, for any community: by being very clear about the norms, people really can adapt and change their behavior.

John Markoff: It's a lesson for designers, basically.

Ceren Budak: Yes, absolutely.

John Markoff: Going all the way back to the beginning of social media, my project here was a biography of Stewart Brand, who created something called The Well, which was one of the very first social media platforms. And what people don't realize, by and large, is he walked away from it feeling it was a failure, a design failure. Because while he banned anonymity, he did not ban pseudonymity.

And he found he got terrible behavior, pseudonymous behavior. But it's interesting that you learned something that could be useful in designing online communities.

Ceren Budak: Yeah. And I think I've appreciated that more as, again, I was trained as a computer scientist, and I'm in an information school now where I talk to more HCI, Human-Computer Interaction, focused folks. And they rubbed off on me a little bit.

Now I'm thinking more about design, the design implications of the work, and thinking through that. But I would say that there, in this Reddit world, we were looking at behaviors. It's unclear to me how that would translate to attitudes, for instance.

But I think there's something to be learned there: with the right design, we can't change people fully, but we can encourage them. That's one of the regulators, as Lessig would also note, one of the regulators of behaviors online.

John Markoff: Another thing in the Lessig typology that really jumped out at me, because of some of the stuff I'm reporting on now, was architecture. This is obviously an area where, well, maybe there's some opportunity for research, but there is a W3C protocol called ActivityPub, and Threads is built on it, and Mastodon is built on it. Why I think it's significant to your research is that it's a two-way architecture, and so the potential is to create a new kind of social media ecosystem in which publishers curate audiences and they don't chase traffic.

Now these things are starting to emerge. They're at scale in the case of Threads, where there are 100 million or so people.

Ceren Budak: So I think I'm missing, how is it that they would not be chasing traffic?

John Markoff: I guess in the end, it would be other business models that would emerge. But it would be, for example, the New York Times is a classic example of this, where we still chase traffic, but in theory...

Ceren Budak: Yeah. Oh, I see. Yes, absolutely. Subscriptions are a lot more important.

I mean, it is true that there are different monetization channels, and that might actually change the ecosystem, which is why I'm interested in not only looking at advertising. It could be, again, donations; or if you create a news domain and you're trying to find your niche audience, you might do so through subscriptions; or you might have a few people with enough monetary funds that you go to them for support. So I do think it's important to think about these different channels, and, like you said, about the architecture as well. A lot of the focus is on consumers, but I think it's important, perhaps, like you said, to focus on the publishers and how it's going to change publishers' behaviors as a result.

John Markoff: Yeah, this is very new, but I go back far enough, you know, when Ted Nelson conceived of hypertext and Xanadu, the links went two ways. And then when Berners-Lee designed the web, the URL shortened that and links went one way, so you couldn't follow things back. That was an architectural decision, and there were consequences. So this might be seen as Ted Nelson's revenge, although it's still playing itself out.

Ceren Budak: Yeah, that would be a very interesting sort of natural experiment to think about publishers that are switching between those and seeing what's...

John Markoff: Well, I'm approaching this as a reporter, and what I'm seeing is there is a very new, but very lively kind of ecosystem around this protocol, and there are lots of publishers who are looking at it, so I think there's at least a story there. I don't know if there's a research avenue.

Ceren Budak: There should be.

John Markoff: So I wanted to ask, I saw this figure in the Times piece last week that made my eyes cross, where they said that in a poll in these battleground states, 17% of the people believed that Biden had been the one who killed Roe v. Wade. And what do you do in that kind of a world?

Although someone then pointed out to me that that was about equal to the number of people who said that they got no news at all.

Ceren Budak: Yeah.

John Markoff: And so it's crude.

Ceren Budak: Right.

John Markoff: How would you know?

Ceren Budak: Yeah, I think both uninformed and misinformed publics have been an issue for a really long time. And it's an ongoing challenge for us: why, and how can we correct it? So that does sound like, I guess, a scary number.

I would agree with that one.

John Markoff: Yeah. Yeah. The Republican Congress has targeted misinformation researchers.

Have you managed to evade this?

Ceren Budak: I've been okay so far. I should say, though... It's funny, because we had an experiment we were about to launch, and there were fears of attacks, and we decided not to. So some of the effects are not researchers directly being targeted, but what you don't see out there, which is really scary.

John Markoff: So there's been a... What do you call it? Chilling effect.

Ceren Budak: Chilling effect. Absolutely. And I might have different thresholds for my own sort of safety and worries, but it really changes for me when a student is worried.

So that's where I think I draw the line, and in this particular case a student was involved. We didn't want to use his labor without giving him credit, but we also wanted to acknowledge and go along with the fears there. So that's one example. I would say I haven't been personally implicated in these, but other colleagues from the University of Michigan have been, and of course we know of various other big institutions, like Stanford and the University of Washington, as well.

John Markoff: You mentioned the Prosocial Ranking Challenge. What's the backstory, and where are you now?

Ceren Budak: Yeah. So this started at this point, I would say maybe a couple of years back. I'm really bad with time.

But it started as a project where we were really interested in, again, this causal question. People talk about the toxic content we're exposed to and how that leads to all these negative consequences, but there's really been no experimental test of this. So we wanted to do that, and we wanted to do it in an externally valid way.

So we built this browser extension to hide toxic content. We'll have a control group where it's not hidden and a treatment group where it is hidden, and we'll have baseline, midline, and endline surveys where we can assess your attitudes to see how they shift, and we can also observe your behaviors online to see how they shift as a result. So that was the project: basically hiding content or not.

But it was such a big engineering undertaking that we thought it's almost wasteful to have that one project be the end result of that. Given that now we have this browser extension, can we repurpose it to test other questions?
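A minimal sketch of that treatment logic, assuming a stubbed-in toxicity scorer and a hypothetical threshold; this is an illustration of the design she describes, not the project's actual extension code.

```python
# A minimal sketch, not the project's extension code, of the treatment
# logic described: users are randomized into arms, and the treatment arm
# has posts above a toxicity threshold hidden from the feed.
import random

TOXICITY_THRESHOLD = 0.8  # hypothetical cutoff

def toxicity_score(text: str) -> float:
    """Stub standing in for a real toxicity classifier."""
    return 0.9 if "idiot" in text.lower() else 0.1

def assign_arm(user_id: int) -> str:
    # Seeded randomization so each user stays in the same arm all study.
    return "treatment" if random.Random(user_id).random() < 0.5 else "control"

def filter_feed(user_id: int, posts: list[str]) -> list[str]:
    if assign_arm(user_id) == "control":
        return posts  # control group sees the unaltered feed
    return [p for p in posts if toxicity_score(p) < TOXICITY_THRESHOLD]

feed = ["great point, thanks", "you absolute idiot"]
print(filter_feed(user_id=42, posts=feed))
```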

John Markoff: How do you attract users to this extension?

Ceren Budak: Yeah.

John Markoff: Where do I sign up?

Ceren Budak: Currently, we are recruiting people through ads. And that's a challenge. And Facebook, for instance, has a more robust advertising branch, if you will.

So it's easier to find people through there; Twitter, less so. And with Reddit we can take a different approach, because people visit subreddits, so we can actually post on subreddits. We're currently in the process of trying to figure out if we can communicate with moderators of subreddits to ask for permission to post our study and try to recruit participants that way.

But it's quite expensive. That is definitely a big item in our budget in recruiting people. Advertising is expensive, and we're also paying folks for filling out surveys, which is an added cost on top of that.

John Markoff: You talked briefly about the lack of a holistic view of how someone is influenced, both online and offline. And I think you have a paper that basically argues that there's an overemphasis on the influence of online information. Is there a path to doing that holistic kind of research and getting a better model?

Ceren Budak: Yeah. There is no perfect research design. So I think that no matter what kind of work we do, we're going to have some limitations, which is why I think it's really important for us to be very clear about our limitations.

So I think about that paper you mentioned as being just as much about not overclaiming. In that sense, as to whether I have a design in mind that I think will answer that question, I think the answer is no. But I do think that there are more holistic ways to do work in this space.

So for instance, more experiments, and more experiments that have external validity, would be one way to go here. Which is why we're involved in this project now where we're going to have this really large-scale experiment: recruit people to install a browser extension, where people's social media feeds are going to be altered according to various different research ideas, and we can test how that affects people's attitudes and behaviors. So that is still not the whole picture, right?

Because that's only social media. We don't get to see what news you're watching. We don't get to see who you're interacting with on a daily basis.

Which is why I think about it more like a Swiss cheese model, if you will. There will be some projects like this that are experimental, making causal claims around social media alone. But TV is really important.

So how do we study that? That's a little trickier to study experimentally and at scale. But there are really interesting observational studies that are looking at panels of people and their TV or other news consumption, and linking that to their behaviors and attitudes.

John Markoff: How does the social science community see the influence of these various media on political influence at this point? Print, television, social media?

Ceren Budak: Yeah, that's a really crucial question that we should be asking, and I think maybe we're not asking it enough. For a long time, the assumption was to go to social media. I feel like part of this, I don't know, maybe data availability might be part of it.

It's certainly a lot easier, at least for researchers such as myself who are trained in scraping content and using APIs, to get access to such data and study it. And it's also a new medium, and whenever we have that, we're trying to understand what happens with it.

So, in terms of the amount of work that's focused on social media: this is not part of a paper that's going to be published, but in the process of writing a paper we created a visualization of Google Scholar searches for social media versus television and print. A lot of work is on social media. But studies that rely on a representative panel of the US population, or of consumers, show that most people are still getting their news by and large from TV.

So there is this, I think, mismatch. We should absolutely want to understand this relatively new medium, but also not forget that a lot of communication and persuasion is still happening through these older channels, which I think we ought to pay more attention to. I don't think we're going to find a bulletproof solution to this anytime soon, because we tend to think about facts as binary, one or zero, but the reality is that political information, and information in general, is contested. That's why my work, especially this year, moved away from this factfulness toward the persuasive effects of messaging and the harms of messaging. There, what I might think of as harmful, somebody else might disagree is harmful, but perhaps we can have better agreement that, yes, the message indeed has that particular persuasive effect.

And I think that also gets us to put our values more on paper and be honest and open about them. When I think about the role of journalism, some of it is informing people, but some of it, which I think is really important, especially as a minoritized person coming from Turkey, is: how do we hold power accountable? And maybe it's these low-credibility news outlets, or maybe it's well-established organizations that sometimes provide things a little bit out of context.

Maybe they're not holding power accountable, or maybe it's the political elite. So, who should we hold accountable?

John Markoff: Thank you very much for spending the morning with us.

Ceren Budak: Thank you very much. I really enjoyed it.

Narrator: That was Ceren Budak in conversation with John Markoff. As always, you can follow us online or in your podcast app of choice. And if you're interested in learning more about the center's people, projects, and rich history, you can visit our website at casbs.stanford.edu.

Until next time, from everyone at CASBS and the Human Centered team, thanks for listening.