Fully understanding and regulating our complex information ecosystems will require creating new cultures and modes of collaborating, new organizational frameworks and, yes, working with generative AI models in service of aggregating actionable scientific knowledge. Angela Aristidou (CASBS fellow, 2022-23) navigates the crucial questions and challenges with Phil Howard (CASBS fellow, 2008-09), a renowned scholar of tech innovation and public policy as well as co-founder and chair of the new International Panel on the Information Environment (IPIE).
PHIL HOWARD:
University of Oxford page | Wikipedia page | Personal website |
INTERNATIONAL PANEL ON THE INFORMATION ENVIRONMENT:
Website | Oxford article on IPIE | New York Times article on IPIE |
ANGELA ARISTIDOU:
UCL School of Management page | CASBS page | UCL article on AA | on ResearchGate |
Narrator: From the Center for Advanced Study in the Behavioral Sciences at Stanford University, this is Human Centered.
Amid the recent noise about generative AI and large language models, there's been a chorus of calls for regulation. Countless articles sound the need for ethics in AI to such an extent that the phrase has almost become a cliché. Today on Human Centered, we'll listen to two scholars discuss what AI governance means as a practical matter for the social science community.
One of them is Angela Aristidou, a 2022 to 23 CASBS fellow based at University College London. Her CASBS year project, which is still ongoing, involves bringing the social sciences into AI practice for public, private and non-profit collaborations, as well as the mobilization of digital tools to support those collaborations. She and her research collaborators combine qualitative and quantitative methodologies from management, economics, sociology and public policy to create cross-national datasets and case studies about new tools being deployed in healthcare integration.
From 2019 to 22, Angela chaired the International Research Advisory Board for the International Relational Coordination Research Collaborative. In 2022, she joined the UK's National Institute for Health and Care Excellence to help develop national standards for AI implementation in healthcare. And since we recorded this episode, Angela was appointed as a digital fellow at the Digital Economy Lab at Stanford's Institute for Human Centered AI.
She is joined in conversation this episode by Phil Howard, a 2008 to 2009 CASBS fellow and Professor of Internet Studies based at the University of Oxford. He served as Director of the prestigious Oxford Internet Institute from 2018 to 21 and currently directs Oxford's program on democracy and technology. He's the author of eight books, the latest of which, published in 2020, is Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives.
Howard has published widely on technology and the global information ecology across a range of disciplines including engineering, information science, sociology, law and public policy and his research applies a mix of behavioral, computational and social research methods. Phil too has news to share since we recorded this episode. He recently secured a public service leave from Oxford to spend more time building the new International Panel on the Information Environment or IPIE, an independent global organization dedicated to providing actionable scientific knowledge in response to threats to our information landscape.
It was just launched in May of 2023 and you'll hear Phil talk a bit about it with Angela. Be sure to check out some of the relevant links to those projects and others that the two are involved with in the episode notes. As you're about to hear the two discuss the social science community's obligations in this era of AI innovation, the challenges of aggregating and cross-pollinating knowledge across disciplines, the need to actually test measures and countermeasures for their causal effects, the topic of AI voice rights as opposed to decision making rights, and the thorny topic of social scientists using AI to help them find consensus about using AI.
Let's listen in.
Angela Aristidou: Phil, I am physically at CASBS right now, so the first question that I will ask you is about your year at CASBS. What are your most memorable interactions that have influenced your work?
Phil Howard: That is a fabulous question, and it's easy to answer. I would say that lunchtime conversations still stick with me, and the friendships that developed over the meals, that was such an important part of the year. We also had a regular volleyball game going and being able to still talk ideas, take them out to the court or sometimes not even take them out to the court, right? And then come on back in, get some exercise and then return to the desk, was a fabulous way of just organizing a rhythm of writing and a rhythm of social interaction.
Angela Aristidou: Obviously, it has been very productive. You are the author of 10 books, including my favorite, your latest, Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations and Political Operatives. And for me, what would be very interesting is whether you would add artificial intelligence or algorithms at the end of that title if you were to write it today.
Phil Howard: Yes, I think the large language models, machine learning, have so far been tools that the technology firms have used to manipulate their own data internally, and they've now been released to the public in a significant way that lets people play. But they've been released without a lot of forethought on how they'll be used. And there are already examples of fabulously creative applications, and of manipulation, political manipulation.
Angela Aristidou: And at this point in time in particular, with all the recent developments in generative AI and LLMs in general, there have been some ongoing debates that go beyond the specific technology but will be amplified by AI itself. For example, if there's a question that's in the political realm and there's a left or a right answer, should the LLM give the answer the user wants, or rather an answer given by an authoritative panel? And who gets to moderate that?
So what would interest you most in that area?
Phil Howard: I think it's safe to say that somebody will figure out how to use AI systems for political manipulation. So for me, the interesting challenge ahead of us is how to stop that or prevent it or steer people away from doing it or figuring out how to minimize the impact. I think, unfortunately, our experience with new information technologies is that if a lobbyist or somebody with deep pockets can take a tool and use it to manipulate public opinion, they'll try, they'll play with it.
And unfortunately, democracies are among the victims here because so many democracies have regulators that monitor elections, that monitor public communication, but those regulators are often understaffed or not well resourced. And so we get into these crisis moments where there's a new technology, somebody's paying big money to use it to manipulate public opinion, and there's no public agency that can catch them and stop them.
Angela Aristidou: What can we do as a scientific community to help in that direction or support that process to move in the direction that we would like to see it move towards?
Phil Howard: That's a fabulous question, and part of the answer involves CASBS, right? So organizing ourselves in ways that let us pollinate across the disciplines and address old questions with new forms of data or ask new questions, even of old forms of data, with new tools, that kind of scholarly creativity is necessary to keep up with the good trends and the bad. So I think working across disciplines, being able to provide these years, the oasis years that let you experiment and play, that's actually a critical contribution, I think, for public life.
Angela Aristidou: And you have practiced that in your time since CASBS. You were the director of the Oxford Internet Institute, which is a multidisciplinary community of the form that you just described. And you have also decided that, since leaving that role, you were going to take on something very light and easy, like creating the International Panel on the Information Environment, IPIE, which you just launched a few weeks ago, if I'm not mistaken.
Phil Howard: That's right. At the Nobel Summit two weeks ago, we went public. It's been two years in the making, but we did our sort of public launch with the National Academies in the US and the Nobel Summit.
Angela Aristidou: And that was with a very explicit aim to be the global source of scientific knowledge about the world's information environment. So what is IPIE doing to push us towards that future?
Phil Howard: I think researchers, many researchers around the world have found that when we study the information environment or when we study public opinion, we study political communication, we often observe behavioral effects that we can decide how to mitigate against. So we observe these social problems and we have a sense of what the solutions are. But when we testify before Congress or we make a presentation to our national parliament, the regulators get it wrong or they don't quite listen or they pick out the parts that they're most interested in or worse, they overregulate or regulate without evidence.
And especially in the area of technology innovation, it is very difficult for a government to stay ahead of technology. The innovations just come up much more quickly than the law can keep up. So especially over the last six years, many of us have seen good ideas for how to improve public life go nowhere and find no audience.
And the idea of the International Panel on the Information Environment, the IPIE, the goal is to find points of scientific consensus where we're confident on the trajectory of evidence and present that consensus to policy makers in a reasonable time, right? So this is not a 10-year journey. This is a nine-month journey on a focused research question.
Providing an answer in an accessible way to a regulator before they make big decisions is the next grand challenge. And I think it's a challenge for science and it's a challenge for organizing ourselves.
Angela Aristidou: Oh, it's definitely a challenge for organizing ourselves. I can imagine it's difficult enough organizing within the same discipline, but this is an international project that spans people interested in the same phenomenon across multiple disciplines.
Phil Howard: It is. Can I tell a funny story about that? So one of our major outputs will be meta-analysis, right? We'll be collecting other people's excellent research, surfacing the good ideas and using the systematic reviews and formal meta-analysis to arrive at points of conclusion.
Most of the best models for doing meta-analysis are only run on scoops of articles from the Web of Science, from Scopus, from the major scholarly archives. A lot of different scholars express themselves in different ways. I write books. I have published peer-reviewed articles, but I write books. None of my books are captured in a formal meta-analysis. So we have this epistemological challenge right now where our toolkit for aggregating knowledge is actually focused on a particular form of scholarly output and biased against another form of scholarly output.
Angela Aristidou: And coming up with better methodologies or more inclusive methodologies is part of the project itself. And that is going to be a platform going forward for others to lean on or use if necessary for their own purposes.
Phil Howard: I think so. Yeah, we know there's a lot of research in Spanish and Portuguese, in the Portuguese journals and the Chinese political science journals that simply hasn't been translated, that could well report behavioral effects that are sensible, but we have no way of aggregating that knowledge at the moment. We work with English language journals and that's it.
Angela Aristidou: So what appears to be data deserts might in fact just be beyond the scope of what we can capture with the tools we're using nowadays.
Phil Howard: Yeah, yeah, that's very well put.
Angela Aristidou: So do you think that what IPIE is building will essentially generate the next platform for research on misinformation?
Phil Howard: One of the things we're finding is that most of the research about misinformation and disinformation concludes with ideas about countermeasures, how to fix the problem or what you could do to redesign a social media platform so that it doesn't deliver junk to your inbox. But most of the scholarly articles that offer those conclusions don't actually test them out.
They're at the end of a study of behavioral effects, right? And so trying to get our own research community to up our game, to spend more time testing countermeasures, perhaps with randomized controlled trials, perhaps with more meta-analyses. There's a bunch of ideas for how to demonstrate strong causal effects. Doing more of that work, and reporting the statistics that are needed to generate and aggregate knowledge, I think that's one of the grand challenges in this domain.
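For listeners curious what the pooling step in a formal meta-analysis actually involves, here is a minimal sketch in Python of the inverse-variance, random-effects calculation Howard alludes to; it can only be run when studies report effect sizes and standard errors, and the numbers below are invented for illustration.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling.
# Effect sizes and standard errors are illustrative, not real studies.
import numpy as np

effects = np.array([0.12, 0.30, 0.05, 0.22, 0.18])  # per-study effect sizes (e.g., Cohen's d)
se = np.array([0.08, 0.10, 0.06, 0.09, 0.07])       # per-study standard errors

w = 1 / se**2                                        # fixed-effect (inverse-variance) weights
fixed = np.sum(w * effects) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2)
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

w_star = 1 / (se**2 + tau2)                          # random-effects weights
pooled = np.sum(w_star * effects) / np.sum(w_star)
pooled_se = np.sqrt(1 / np.sum(w_star))

print(f"pooled effect = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*pooled_se:.3f} to {pooled + 1.96*pooled_se:.3f})")
```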
Angela Aristidou: Oh yes, and my feeling has always been that as researchers, we should be able to help policy makers take meaningful action before it's too late, especially on topics that are ongoing and affect multiple parts of our society. So I for one will be signing up at IPIE.info, because I do feel that if there's a way of contributing to this initiative, I'd like to do that.
Phil Howard: Wonderful.
Angela Aristidou: I also found it very interesting how you include in your work, overall, the role of the behavioral sciences in the study of misinformation from multiple perspectives. And you do that also through partnerships. Can you talk to us a little bit more about that?
Phil Howard: About the partnerships? Yes. I think the challenges in the global information environment will require both the social and the behavioral sciences. It's the behavioral scientists who will help us model what it means to release a piece of junk and have an impact on whether people get a vaccination, whether they choose to show up to vote, how they engage publicly.
The behavioral sciences also will help us teach how to design towards mental health. There's a growing amount of evidence about the crisis of teen mental health and what young boys and girls are seeing on Instagram with body image. There's a growing amount of evidence that visual cues people experience on social media have a long-term impact on how they see themselves.
Aggregating that knowledge is challenging, and it can't be done without building some partnerships with industry itself. For the most part, the data we need to answer big questions about problems in public life, that data is not in public hands. It's not in the Library of Congress, it's not at the ICPSR, it's not in our university libraries.
Unless we find ways of engaging with the research community that's embedded in Meta, embedded in Google and Twitter, we just won't have the data that we need to get our observations right and draw our conclusions.
Angela Aristidou: Was there ever a point when you decided you would engage more deeply with the sources of the data that you need for your research? Or was it always built into your platform, your research platform? For me, that point came later in my career when I realized I just couldn't do what I wanted to do without engaging them. But I know that in different disciplines it is built into their initial plans.
Phil Howard: Initial plans. So I think I've gone back and forth on this a lot. When I was at CASBS, the firms were starting to share data, but there were a number of sort of scholarly scandals where academics were playing with firm data in ways that might not have actually cleared their institutional IRBs.
So they may not have actually had the proper ethics clearance for the industry partnerships that they set up for themselves. Most of the questions I had could be answered with comparative data or public opinion data, survey data at that time, and so that's what I was playing with. Much more recently, though, since you asked about a point in time, I can say yes, there was this ridiculous project that I think of as a career low, in which we studied an information operation to blame COVID on a shipment of lobsters that had been flown from Maine, claiming those lobsters were the source of the COVID epidemic.
And the number of languages, the cross-platform integration, the toolkit behind it, the number of personnel hours that must have gone in, the visual and the text cues, it was really depressing. It took us months to unpack the full nature of the information operation. And all of that for a ridiculous story, right?
And that's the point at which I decided that we needed better quality data, because doing that kind of work from the outside is extremely labor-intensive. And it's hard on researchers. I find now when I write a grant proposal, I have to include a mental health line for the graduate students, because processing some of the content and reviewing this stuff, it's taxing on me, it's taxing on the students, and it's not sustainable as a mode of inquiry.
Angela Aristidou: What I take away from that is that for research to be sustainable and effective long-term, in the way that we at CASBS understand sustainable and effective, able to take meaningful action in the real world, it has to be done in engagement with the real world.
Phil Howard: Yes.
Angela Aristidou: With that, we also have to find points of consensus within our own research community, which becomes even more challenging when you're dealing with multidisciplinary teams and researchers who are each an expert in their area. So have you given that some thought or are there some methods or approaches that you would like to share with us?
Phil Howard: Well, I can say a little bit about what the IPIE is doing right now, which is fairly traditional, and maybe we could talk about what's possible. The traditional techniques for aggregating knowledge involve meta-analysis and systematic reviews. So you capture all four and a half thousand articles published in the last six years about some form of misinformation, the impact of misinformation on public understanding of health or climate or a whole range of issues.
You put all those articles together. Maybe you skip the books for the moment because they're too hard to process.
Angela Aristidou: Sorry for that.
Phil Howard: That’s okay. You extract the effects, right? You need the stats to be properly reported and you build findings. That's one way.
The other very traditional way of aggregating knowledge is with some kind of commission. You bring 12 or 16 experts together, and there's a chair and two vice chairs, and you spend months. You read all the papers.
You have speakers come and address the commission, and maybe you hire a technical writer to help you identify the points of consensus, and you issue a manuscript. I wonder if there are ways of using AI creatively with panels of scientists. If we could get several dozen experts on the impact of climate misinformation together, could we use an AI deliberation tool to track the conversation, track the interaction, identify points of consensus, and generate new questions for the scholars to tackle at a subsequent meeting? I don't have a particular formula here.
I just know that if machine learning tools are good at helping other small groups deliberate, they might also help scientists and researchers find points of consensus.
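As a purely hypothetical illustration of the kind of AI deliberation aid Howard imagines (nothing here describes an actual IPIE tool), one could embed each expert's written statement, cluster the embeddings, and flag clusters that span several distinct experts as candidate points of consensus for the panel itself to validate. The model name, distance threshold, and statements below are illustrative assumptions.

```python
# Sketch: surface candidate consensus points from expert statements.
from collections import defaultdict
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

statements = [
    ("expert_a", "Labeling synthetic media reduces sharing of false claims."),
    ("expert_b", "Provenance labels on AI-generated images reduce resharing."),
    ("expert_c", "Deplatforming has mixed effects across countries."),
    ("expert_d", "Watermarking generated content curbs its spread."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
embeddings = model.encode([text for _, text in statements], normalize_embeddings=True)

# With unit-length embeddings, euclidean distance is a monotone proxy for cosine distance.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=1.0, linkage="average"
).fit_predict(embeddings)

# Group statements by cluster; clusters containing several distinct experts
# become candidate consensus points to put back in front of the panel.
by_cluster = defaultdict(list)
for (expert, text), label in zip(statements, labels):
    by_cluster[label].append((expert, text))

for members in by_cluster.values():
    if len({expert for expert, _ in members}) >= 2:
        print("candidate consensus:", [text for _, text in members])
```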
Angela Aristidou: There is a big dream in that for using AI to facilitate or improve research. And we do have some handholds to start with. For example, reaching consensus through AI can take two forms: you can use machine learning as a way to increase the imagination of what questions we should be asking as an academic community around this topic, or alternatively as validation, to validate which questions we have answers to and which of those answers we should pursue with more attention from the research community and more funding, because that is also part of the action that would follow.
So whether it's imagination or deliberation, AI can be used. The question I would have to follow up on that is, would scientists accept the use of AI to assist them in finding points of consensus?
Phil Howard: Yes, that's a fabulous question. I wonder; if technologies have cultures of use, then scientists would need to adapt to those cultures, right? And their modes of interacting.
Many of us who go to the ASA or the behavioral science meetings are used to aggregating knowledge in 10 or 12 minute increments, in panels of four or five, right? Where there's a chair and a discussant, and maybe some people in the audience have actually read the papers, but maybe not. And it depends on the conference, of course, but for some of the panelists there actually is no paper; it's more of a PowerPoint presentation and the paper is not really ready.
That's right. So we have these modes of aggregating knowledge that are not fast moving, and that's on purpose. We want people to take care with the ideas they generate. But I do like this idea, this invitation to be creative with AI.
We can't all do this, but if every researcher had a huge budget and their own executive team, somebody taking notes, following them around, writing down the good ideas and keeping their calendar up to date, we might be able to do that kind of creative work. For now, AI and machine learning tools may be one of the ways of helping us dialogue and be creative.
Angela Aristidou: Oh, and it definitely has that potential. There is this Goldilocks zone where it is helpful but not obtrusive. And what I've seen from my own research, which is mostly in the space of healthcare, which is also a very regulated and conservative space, in many ways similar to academia, is that the use of AI for decision making tends to follow whatever path has existed for organizational processes before.
So the closer it is to what people are used to doing, the more likely it is that it will be adopted. So the challenge is out there to dream big about which research can be improved using AI.
Phil Howard: Okay, so Angela, may I ask you a question that I think our audience would be interested in? If you were going to start a culture from scratch, I'd buy your argument that using AI, right, it'll be adopted and adapted into the institutional structure that's there. The IPIE is starting from scratch.
If you were starting from scratch, how would you build AI into the system in a way that was not daunting, not going to spook people, not going to spook our collaborators, but might actually surface points of agreement? Are there paths to doing it well with a new organization or is that?
Angela Aristidou: I think the definite advantage of having a new organization is that you set the culture of that organization from the beginning. And if you set that culture to be a culture that accepts and is inclusive and allows technology to be part of the conversation without taking over the conversation, and this is made clear from the beginning, then everything would flow from that in a rather more straightforward way. What I would do, and this goes back to something that I heard at the CASBS Summer Institute in 2017 from Robert Gibbons at MIT and Woody Powell here at Stanford, is to imagine the future and how I would like to set up the future.
They were talking about the department of the future. You were talking about an international panel, so it is a little bit more difficult for you. If you would like the AI to be used as a deliberation tool, then there's a choice to be made: whether you would like the AI to be considered an expert alongside the experts, one that offers an opinion, a direction forward, which then gets debated by the non-technology experts in the room, to put it that way; or whether you would like the AI to offer a range of visions for the outcome, and then the other people on the panel decide which one they feel is more plausible, more sustainable, based on their own understanding of the research and what they think we should go for.
So those are the two directions that you might choose to pursue. I would go with the second one. I take a pluralistic perspective on options for the future. I like to see multiple possible outcomes and keep possibilities open for other people to weigh in.
Phil Howard: For other people to weigh in. Yes, I think that's a critical point because the research community, specifically on misinformation, is primarily US-based and it's white males, often in computer science. We've already found that the research on algorithmic bias and manipulation that comes out of Africa and Latin America has different concerns.
They use different techniques and they have different problem sets. Whatever system we build, if we're actually going to try to tackle global questions or come up with global aggregate solutions, we're going to need to be able to absorb excellent science, excellent research from all corners, from all of the research communities.
Angela Aristidou: So let's playfully call this the AI voice. And let's say that the AI voice you envision in IPIE is one that can reflect issues among overlooked stakeholders, whether those are in communities, in research areas, or in geographical areas of the world. The question is how you bring that AI voice into the conversation so that it is then mobilized and embraced by the IPIE community.
That is where I would want to see the norms that enable the community or allow the community to accept that as a valid input. And that goes back to my point about how lucky you are that you're setting up something from scratch. So you get to create those norms together with everybody else in the IPIE community.
And that is such a privilege to have. Many of us don't. We have to work within the confines of whatever we have found.
Phil Howard: Whatever we have found. I think that's a fabulous idea. I wonder though, Angela, listening to you, if it means that we might need several AIs. So the notion that we would have an AI that would speak our voice, that would speak for some scoop of research, that totally makes sense.
But we already know some differences. For example, what we've noticed is that the people who study the manipulation of public opinion from the global north are mostly worried about foreign intervention. So they're worried about foreign governments manipulating local public opinion. Researchers from the global south are much more worried about their own domestic governments manipulating things in country.
It's just a different source that they're worried about. And in some cases, they're worried about different platforms. If we were to accept this invitation to imagine a new way of generating scientific consensus, would we need a global north AI, based on a scoop of data, articles, and scholarship from the global north, and a global south AI to maintain that balance?
Angela Aristidou: I think the brief answer would be yes. The longer answer would require us to ask a meta question. And I'm sorry to answer your question with a question, but that is very CASBS, if you think of it.
There is a limitation in any type of deliberation process that centers on the distinction between voice rights and decision rights. It goes to the work of Catherine Turco, who is at MIT, and I absolutely love that book. The key distinction is that giving voice rights to an entity or a population or an actor does not necessarily mean that they have decision rights.
What you just described to me is a range of combinations of entities or groups that may or may not have voice rights and may or may not have decision rights. And untangling that is a good first step to deciding which of those we want to, or should, bring into the fold as distinct voices in the deliberation process.
Phil Howard: That totally makes sense. I think one of the challenges I have heard from colleagues who have played with the existing AI systems and chat systems is that when some machine learning systems are invited to aggregate knowledge, they make up citations. And the problem is that they make up pretty good citations, right? Because they're based on the probability of word sequences. So if there's an author who's famous in a field, that name might appear, but the subsequent choice of words might point to articles in the right journal with a title that doesn't actually exist.
And so I don't think we're at a level where we can ask a machine learning system to summarize knowledge and provide real citations to actual research that actually backs up those points.
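One pragmatic guardrail that follows from Howard's point is to verify any model-generated reference against a bibliographic index before trusting it. Here is a minimal sketch using Crossref's public REST API; the helper function name and example queries are illustrative, not part of any tool mentioned in the conversation.

```python
# Sketch: check whether a citation's title/author combination resolves to a real record.
import requests


def looks_real(title: str, author: str) -> bool:
    """Return True if Crossref finds a closely matching record for the citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": f"{title} {author}", "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        found = (item.get("title") or [""])[0].lower()
        if title.lower() in found or found in title.lower():
            return True
    return False


print(looks_real("Lie Machines", "Philip Howard"))                  # a real book, likely found
print(looks_real("A Completely Invented Article Title", "Nobody"))  # likely False
```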
Angela Aristidou: I would say not. Actually, one of my colleagues here at CASBS this year ran an experiment asking ChatGPT to talk about him. And in response, ChatGPT-4 created titles of articles and books that he hasn't yet written. But now that he has seen those articles and books, he's very tempted to say those are really good titles, I'm going to copy that, and maybe I will write that someday.
Phil Howard: They're probably a logical extension of his career.
Angela Aristidou: It is very much so. And that goes to the point I made earlier about how AI can be used in decision-making and deliberation processes, for imagination or validation. So the imagination part is that it can scan the horizon and, based on the past, suggest to some extent possible futures that we might want to deliberate on.
And for humans to do that, it is very plausible, but it is also very time consuming. And we are all quite limited in how much time we have and attention we have on a daily basis. We cannot ask our fellow academics to sit in a panel for a whole week.
It's difficult enough to get you on a podcast for one hour. Actually, Philip, thank you for that. So the use of AI in the deliberation process might be one of expediting that scanning of the horizon, not necessarily contributing to the quality of the options, but rather presenting us with options to consider.
Phil Howard: To consider, yes, the voice aspect of what you're describing. I agree that makes sense, and I think in some ways CASBS itself provides that creativity function and something of that validation function. I remember standing up and presenting this paper that eventually became a book about how technology provides capacities and constraints on political options.
And I had to convince the behavioralist who studied bees and bee cognition. And there was a Russian studies scholar I had to convince as well. So in a sense we do have a model; CASBS is a model for deliberation, because we've got this validation process we have to go through.
Sometime in the year, we have to be able to present our ideas in an accessible way that makes sense and accepts critique. Accepting critique is a critical feature of the culture. Accepting critique from other intellectual traditions.
Angela Aristidou: You have always approached your research journey through an ecosystem perspective. You have always seen your research as part of the real world. So, what excites you the most in terms of a scientific question today that's also relevant going forward?
Phil Howard: For going forward… That's a good question. For me, right now, I'm excited about the problem.
I'm excited about the question of how to aggregate knowledge. So we have fabulous models coming out of the behavioral sciences about what a nudge looks like on social media, how many people you can get to vote with such and such a nudge, and we can A-B test these things very quickly. But we don't have models that relate a tweet to a changed voter, a changed vote.
Or we don't have models that relate a thousand ads to a particular mental state or particular social outcome. So figuring out how to take the study of the impact of Instagram on teens from South Korea and relate it to the study of Instagram on 20-year-olds from Toronto, and then scope that to a study of some other visual platform, not Instagram, in some other part of the planet. That's a methodology question, it's an epistemological process, and I think we're still working on the toolkit for what good meta-analysis looks like.
So that's the part that's fascinating me now. How do we learn from across the disciplines?
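As a toy contrast with the harder aggregation problem Howard describes, the quick A/B test he mentions is straightforward to compute once the experiment has been run. A minimal sketch, with invented turnout numbers, of a two-proportion test for a get-out-the-vote nudge:

```python
# Sketch: did the nudge change turnout? (All numbers are invented.)
from statsmodels.stats.proportion import proportions_ztest

voted = [5400, 5150]      # users who voted: treatment vs control
exposed = [50000, 50000]  # users shown the nudge vs not

stat, pvalue = proportions_ztest(count=voted, nobs=exposed)
lift = voted[0] / exposed[0] - voted[1] / exposed[1]
print(f"turnout lift = {lift:.3%}, z = {stat:.2f}, p = {pvalue:.4f}")
```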
Angela Aristidou: And what's the big remaining question or questions that you'd like to see answered when your big dream of developing this toolkit has been completed?
Phil Howard: I guess that's sort of a normative question. For me, the normative question is how we can use technology for good things, to support people in making good decisions, for addressing social problems or identifying forms of social inequality that could be easily addressed. I think there's a host of questions, a host of social problems, for which we have some plausible answers, and using research methods to identify the most plausible, I think that's my normative agenda.
Angela Aristidou: You definitely take the long-term perspective in your research.
Phil Howard: Yes, although I think for myself, I'm here to speed things up a little bit. Social inequality in so many ways seems to be on the rise, right? The information environment globally is fractured. So many democracies are seriously polarized, and so many authoritarian regimes are just as strong as they were 20 years ago. Of course, there's a lot of variation in that, but I'm eager to find those points of consensus. I think the social sciences and the behavioral sciences are generating fabulous knowledge, and I bet there are points of consensus we haven't surfaced ourselves yet.
Angela Aristidou: At this point in time, you and I both agree that what we do as a research community and as a society is very important, and that this is almost an inflection point we're going to have to reflect on years from now to see whether we got it right. What do you think of as particularly promising or particularly threatening right now?
Phil Howard: I think, so we've already been talking about AI and large language models. For me, that's both the promise and the threat. I'd like to think it's more of a promise or more of an opportunity.
For the last few years, I've been studying how our own data gets used to manipulate us. There are examples from around the world of how lobbyists, political actors, and mainstream communications firms will use our own data trail, our credit card records, the things we purchase, to shape and construct our opinion of major political issues. If somebody can figure out how to use large language models to scoop our data, craft the message that we're most likely to respond well to, and rapidly A-B test it in real time in the three days before we go to vote, say, that's a mechanism I don't want to see built, or at least I want the elections officials and the public regulator to know what to look for, so that if somebody is paying money to actually manipulate opinion, we can catch it and put a stop to it.
I think the hopeful part is that machine learning will provide ways of aggregating knowledge. I'm a little anxious about how the chat models will be used in education, in higher ed. Here at the University of Oxford, the policy is simply that students may not use large language models.
It's simply no, you can't, never. And of course, I've had a few off the record conversations. You know, if a graduate student is doing an interviews project, they have 65 interviews and they don't have a budget for transcription. Can they drop the recordings into a large language model and get an automatic transcription for $10? Does that undermine the research process? Probably not, right? If somebody needs transcription done, asking AI to do that seems reasonable.
Somebody else came up with the example of composing a reading list. They asked one of the chat models to canvass the top 10 universities with courses on a particular topic, scoop up all the syllabi, extract the citations, and produce a reading list for the student. And they say it worked. And so that's kind of exciting. That seems like a shortcut. I don't know if the students who thought of these options are actually writing whole paragraphs through ChatGPT. I'd be surprised if they are, actually.
But there are some exciting opportunities that would make research very efficient, I'd say, at the graduate level.
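The reading-list idea can also be approximated without a chat model at all: gather syllabus files, pull out citation-shaped lines, and rank them by how many different syllabi assign them. A rough sketch, where the folder name and the regular expression are illustrative assumptions rather than a description of the student's actual method:

```python
# Sketch: draft a reading list from a folder of syllabus text files.
import re
from collections import Counter
from pathlib import Path

# Matches lines like "Howard, Philip N. (2020). Lie Machines. Yale University Press."
citation_pattern = re.compile(r"^[A-Z][\w'-]+,\s+[A-Z].*\(\d{4}\).*", re.MULTILINE)

counts = Counter()
for syllabus in Path("syllabi").glob("*.txt"):  # hypothetical folder of syllabi
    text = syllabus.read_text(encoding="utf-8", errors="ignore")
    for citation in set(citation_pattern.findall(text)):
        counts[citation.strip()] += 1

# The most frequently assigned readings become the draft reading list.
for citation, n in counts.most_common(20):
    print(f"{n:2d} syllabi: {citation}")
```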
Angela Aristidou: And those opportunities are being explored in real time around us, while the regulations are trying to catch up with them in our institutions.
Phil Howard: Yes. I'd be curious to know, since you have affiliations and ties to several universities, whether you've seen a good university policy on ChatGPT. I wonder what it would look like. What would the ingredients be?
Angela Aristidou: Not so far, though there are ongoing deliberations within my home institution, University College London. It is a very hot topic internally. The points made about how we use these tools within organizations are also echoed in health care, where I do my research; the same tools, like ChatGPT, are currently being used in various ways in medical care.
And the issues are around how we self-regulate, rather than how we expect somebody to give us a set of regulations from the outside. Now, that is something that I find particularly intriguing. The fact that people don't necessarily rely on external regulation, but instead develop their own in-house set of norms.
How do we as a community allow ourselves to use this ethically, depending on our professional norms, whatever those are, academic or medical? I think that might be where a lot of the future of these tools is headed.
The idea is that the technical capabilities move so much faster than the regulatory space can cover them. So instead of waiting for top-down regulation, we come together as a community of multiple stakeholders and put forward our norms of what we consider acceptable and not acceptable. Right now, one of the PhD students I am co-supervising is running his own little experiment, giving ChatGPT 50 or so interview transcripts, de-identified and anonymized to a very satisfactory level. We are asking ChatGPT to give us an analysis of those, then comparing that to the analysis done by the PhD student who collected the interviews and knows the context in which they took place, and then comparing that to the analysis of a more senior academic who is reviewing the same interviews without any of the context, but with many more years of expertise in doing the analysis than the PhD student.
I still don't have an answer to the question, but I hope that we're going to be able to say something of what comes out of that.
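One way the comparison Aristidou describes could be scored, sketched here with invented labels, is to treat the model, the PhD student, and the senior academic as coders assigning themes to the same de-identified excerpts and to compute pairwise Cohen's kappa for agreement:

```python
# Sketch: pairwise inter-coder agreement between an LLM and two human coders.
# The theme labels below are invented placeholders.
from sklearn.metrics import cohen_kappa_score

llm_codes     = ["trust", "workload", "trust", "safety", "workload", "safety"]
student_codes = ["trust", "workload", "safety", "safety", "workload", "trust"]
senior_codes  = ["trust", "workload", "trust", "safety", "workload", "trust"]

print("LLM vs student:   ", round(cohen_kappa_score(llm_codes, student_codes), 2))
print("LLM vs senior:    ", round(cohen_kappa_score(llm_codes, senior_codes), 2))
print("student vs senior:", round(cohen_kappa_score(student_codes, senior_codes), 2))
```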
Phil Howard: Well, it's fabulous that you got a senior colleague to agree to be part of that experiment. I also will be interested in the outcomes.
Hey, I'm curious. You had this interesting parallel earlier between AI that's used for voice and machine learning that's used for decision making. In the healthcare settings you were just talking about, is the AI you've seen used for voice or for decision making?
Angela Aristidou: Both, actually. And part of my research agenda is to try to capture a series of cases where AI is used in different ways in healthcare so then I can hopefully come up with a more encompassing or comprehensive way of understanding the various ways in which AI is used, their promises and limitations, challenges and so on. So I have been very intentional about selecting both.
The reason why I emphasize that is because I had to make a decision on this at some point, and anything that has to do with health care is very sensitive. The context is highly sensitive. Regulations are very fixed around it.
Professional norms are rather fixed as well. So the fact that health care, or at least pockets of health care organizations, is embracing this next wave of technologies and trying to make the most of them for patients is a testament to the fact that they see potential in it. And that's what fascinates me.
Phil Howard: That's fascinating, yes. I think we're also at an exciting stage where we can cross-pollinate across field sites. So I would love to know what the outcomes are for the health care setting, because I'm trying to get a bunch of researchers at the IPIE to use AI, probably for voice, where decision-making stays with the scientists. There must be other kinds of attributes or conditions that would help the group make good decisions in constructive ways with the tools that they have.
Angela Aristidou: Well, one of the things you're an expert in, Phil, and I say that very humbly, one of the many things you're an expert in, is fuzzy-set qualitative comparative analysis. So I'm pretty sure that if anybody can come up with a set of... Yes, I can pretty much see that happening in your hands.
Phil Howard: Thank you. Thank you. That's very kind.
Angela Aristidou: Thank you, Phil.
Phil Howard: Thank you. Thank you for reaching out about this. This has been fabulous. It was a wonderful conversation.
Narrator: That was Angela Aristidou in conversation with Phil Howard. As always, you can follow us online or in your podcast app of choice. And if you're interested in learning more about the Center's people, projects and rich history, you can visit our website at casbs.stanford.edu.
Until next time, from everyone at CASBS and the Human Centered team, thanks for listening.