A Philosopher’s Take on Truth and Misinformation
Misinformation is impacting society at all levels, from politics to health. But what makes us believe untrue things? And why is misinformation on the rise today?
In this episode of Talking Policy, host Lindsay Shingler is joined by Cailin O’Connor, a Chancellor’s Professor in the Department of Logic and Philosophy of Science at UC Irvine and co-author, with James Weatherall, of The Misinformation Age: How False Beliefs Spread. Together, they discuss how false beliefs originate, how technology contributes to their spread, and how misinformation can be combated.
This episode was recorded on August 27, 2025. The conversation was edited for length and clarity. Subscribe to Talking Policy on Spotify, Apple Podcasts, Captivate, or wherever you get your podcasts.
Lindsay: Do you think the earth is flat? Of course you don’t. It is demonstrably untrue. But people believe a lot of things that aren’t true. And we’re here today to talk about why.
Hi, I’m Lindsay Shingler, the host of Talking Policy. And today we’re joined by Cailin O’Connor, a philosopher at UC Irvine, who’s written a book on misinformation and belief, and is someone who thinks deeply about how we come to form beliefs and the impact that those beliefs have on society.
Cailin, welcome to Talking Policy.
Cailin: Hi, Lindsay. Happy to be here.
Lindsay: So you’re the first philosopher that we’ve had on the show, and I Googled the word “philosopher” this morning and it turned up a lot of paintings and sculptures of very muscular Greek men in togas luxuriating in thought. So that’s the image.
What’s it actually like to be a philosopher? Tell us a little bit about what you do.
Cailin: Yeah, so I’m a Chancellor’s Professor at UC Irvine, and a lot of what I do is pretty similar to what other academics do. You know, I write papers, I make arguments. What I do specifically is philosophy of science, which is a sub-area of philosophy that tries to understand things like how does science work and how do we make science better?
So I work on different topics, but a lot of things that I look at are asking these types of questions and then trying to make arguments about how should we change science? How should we build a better society, things like this.
Lindsay: Just a small thing, yeah.
Cailin: Yeah, just better society.
Lindsay: Better society. Yeah, great.
So we’re here to talk about the work that you’ve done on how we form beliefs, including false or harmful beliefs, and the things that drive those beliefs, including misinformation. It’s a really interesting topic, and I’m curious what brought you to this topic. You’ve devoted a lot of time to it. You’ve written a book, you’ve written articles. Why do you care about this subject, both intellectually but also personally?
Cailin: Well, I first got into working on this in 2016, so I at the time had been doing this work building models, like computer simulations, to try to understand the spread of information in science, and especially why scientists might come to form false beliefs.
And then in the wake of the Brexit vote in the U.K. and the U.S. presidential election, it was becoming clear that misinformation was playing this big role in political events in a way that at least online information hadn’t before. And at that time, a co-author of mine, Jim Weatherall, was like, “Well, why don’t we try to take some of these models you’ve been building and repurpose them to understand misinformation more generally?” So that was how I got into this.
As far as, you know, why it’s something I wanted to work on or was meaningful to me, I always, as a philosopher, have been more interested in things that are socially applicable or socially engaged. So I’ve worked on inequity, like gender and racial inequity, and I’ve worked on, as I said, the functioning of science. And so both Jim and I thought, this is something that’s playing this big role in our politics, in our society. It’s causing problems for people. It’s leading people to make bad choices in our day-to-day lives. It really matters. So we thought it would be something worth researching.
Lindsay: Yeah. Can we say that Jim, your co-author, is also your husband?
Cailin: Yeah.
Lindsay: …and co-conspirator.
And I mean, yeah, it’s one of these issues, that false belief, that just comes up all the time now. I mean, when I think back to 15 years ago, I don’t know that we were all talking about this. Not that it didn’t exist, but now it comes up all the time, you know, over dinner with family, at work with coworkers.
There are different versions of the truth floating around, and it feels pretty useless to try to argue about it. So I’m really glad we’re talking about it.
Cailin: Yeah, I mean, it’s always been the case that people have held false beliefs. It’s always been the case that people believed in conspiracy theories or had wacky scientific ideas.
I think one of the things that has changed is that with massive polarization in our society, we’ve also gotten this polarization of beliefs where sometimes you have almost half of your whole country publicly supporting something that’s just demonstrably false, and that’s newer and weird.
Lindsay: Yeah. What’s an example of that? Again, just to ground our audience, what are we talking about here?
Cailin: An example would be something like, the climate isn’t warming, or it’s not warming as a result of man-made emissions. You know, this became a Republican talking point at some point, and as a result, and because of massive polarization, people who identify with the Republican Party, want to show that they’re part of that identity, and are mostly influenced by right-wing media will at least say they don’t believe in climate change, and many of them genuinely don’t believe in climate change.
Lindsay: Yeah. Okay, so your book The Misinformation Age, which, as you said, you wrote with the philosopher James Weatherall, is about how we form false beliefs, why they persist, why they spread, and what can be done to change them. I know this is a super complex topic and an impossible question, but can you give our listeners an overview? What did you look at? What did you find? If you had to boil it all down into a nutshell, what can you tell us about this book?
Cailin: Right. So when we wrote this book, there was a particular thing we wanted to explore, which was the role of social connections in the spread of false beliefs. And the reason we were so interested in that is that, you know, 2016 is now quite a while ago, but in looking at misinformation in the early days, a lot of people were talking about the ways we as individuals reason badly, like that humans have these reasoning biases, confirmation bias, or we’re bad with probabilities.
And we thought, okay, well those things are important, but much, much more important to what anyone comes to believe is their social world, because we learn almost everything that we believe from other people. We can’t find most things out by ourselves. We have to trust others. We have to trust scientists, experts, friends, neighbors.
And so what we wanted to do in the book was make that point number one, that what really matters are these social connections, and then explore various ways that you can get malfunctions and pathologies arising because humans are such social learners.
Lindsay: Yeah, so on the one hand we arrive at a lot of good beliefs from our social networks, but it’s also how we can arrive at false beliefs.
Cailin: Yeah, like a metaphor I like to use is that social learning—so transmitting information from person to person via language, via text, via images—it’s like opening up a door, and when you open that door, all this good information can come through, and that’s fantastic. That’s how we have culture and technology and modern medicine and all these things. But once you open it up, bad information can come through too, and there’s almost no way to have that door open without getting at least some bad information spreading between people.
Lindsay: Okay, so you’re arguing that misinformation and false belief are social phenomena. I’m curious how you define misinformation.
Cailin: Yeah, this is something a lot of people have worked on, especially in the last ten years.
And a lot of people will say things like, well, misinformation is false information that’s shared not necessarily with an intent to mislead, whereas disinformation is false information shared with an intent to mislead. So I think that’s often the standard definition. I don’t love it though, and part of the reason is that misinformation is really varied and weird and complex and takes all these different forms.
For me, what I like to think of is more that it’s information that, for one reason or another, interferes with people’s ability to form good beliefs or to take good actions. Sometimes that could be a true statement that ends up being misleading, or a video that provokes a misleading emotional reaction, or an image that creates a misimpression of something that happened even though it is an actual photograph. So I think the best way to understand it is as an umbrella concept for a lot of weird, different stuff that prevents people from acting in ways that are good for them or believing things that are accurate.
Lindsay: One of the unique features of your book is that you use game theory and computer simulations to model these social interactions that you mentioned. I’m very thick on this kind of stuff. Can you explain, for my level of knowledge on this, which is about kindergarten level, how you do that?
Like, for people who are not in this space at all, what does that mean to use those kinds of techniques to understand this kind of thing?
Cailin: Yeah, so what we do mostly is build computer simulations and run them, and you can think of the simulations as representing something like a group of humans learning.
And obviously, these simulations are in no way identical to an actual group of humans learning from each other. But what you do is try to pull out key features and elements of the way people communicate, the way they learn, the way they spread information, and then test, in this very simplified, simulated world, what happens when you instantiate those features. That can do a few things for you. One thing it can do is help you really pull out what causes what. So you can ask, in this very controlled, simple environment: what happens when we introduce confirmation bias, or when we introduce desires to conform with other people, or when we add a propagandist who’s trying to confuse everyone through the information they share? That way we can get real control over what could potentially be causing what in the social world.
Another thing it allows us to do is show something that’s kind of a theme of the book, which is that even very good learners can go wrong when they’re in the wrong type of social environment. It doesn’t have to be that everyone’s dumb biases are causing them to believe false things. We can model our little agents as being perfectly rational, and they can still come to false beliefs because of this social spread of information.
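For readers curious what such a simulation actually looks like, here is a minimal sketch in Python of a Bala-Goyal-style network learning model, the family of models the book builds on. Everything specific here (two actions, the success rates, a fully connected network, the round count) is an illustrative assumption rather than the book’s exact setup:

```python
import random
from math import comb

# Minimal sketch of a Bala-Goyal-style learning model: agents repeatedly
# choose between a familiar action A (success rate P_BAD) and a possibly
# better action B (success rate P_GOOD), share their results, and update
# their credence that B is better using Bayes' rule. All parameters are
# illustrative assumptions.

N_AGENTS = 10
P_GOOD = 0.55   # true success rate of action B
P_BAD = 0.50    # known success rate of action A
N_TRIALS = 10   # experiments per agent per round
N_ROUNDS = 100

def bayes_update(credence, successes):
    # Posterior credence that B's rate is P_GOOD rather than P_BAD,
    # given `successes` out of N_TRIALS trials.
    l_good = comb(N_TRIALS, successes) * P_GOOD**successes * (1 - P_GOOD)**(N_TRIALS - successes)
    l_bad = comb(N_TRIALS, successes) * P_BAD**successes * (1 - P_BAD)**(N_TRIALS - successes)
    return credence * l_good / (credence * l_good + (1 - credence) * l_bad)

random.seed(0)
credences = [random.random() for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    # Only agents who already lean toward B (credence > 0.5) try it,
    # and everyone sees everyone else's results (a complete network).
    shared_results = [sum(random.random() < P_GOOD for _ in range(N_TRIALS))
                      for cr in credences if cr > 0.5]
    for i in range(N_AGENTS):
        for successes in shared_results:
            credences[i] = bayes_update(credences[i], successes)

print([round(cr, 3) for cr in credences])
```

With these settings the group usually converges on the better action, but note the built-in trap: if unlucky early evidence pushes every credence below 0.5, no one experiments with B anymore, no corrective evidence gets generated, and the community locks in the false belief. That is the sort of purely social failure, with no irrationality anywhere, that the book explores.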
Lindsay: Okay, so let’s unpack some of the ideas in your book. And this is all pretty heady stuff, so to the extent that you have real life examples, let’s talk about them.
But let’s start with how you define truth. And it’s funny because this should be easy, right? To define something that’s true and something that’s not. But it’s actually not that easy. One of the things I found really interesting is that, in the book, you’re connecting the idea of true belief with action.
So your definition is that true beliefs are those that generally successfully guide action, and that false beliefs are the ones that generally fail to reliably guide action. So you’re connecting it to action, but also to this notion of success. And I’m curious about why you chose to define it that way.
Cailin: Yeah, this is a really good question. So there are sort of two parts to the answer here. One has to do with the fact that we’re philosophers of science, and if you look at the history of science, there are all these statements about the world that end up being technically false but are extremely useful guides to action. Like, gravity is pulling you to the ground. It’s not literally true, because we no longer think gravity is a real force; we now think we’re in a curved spacetime where our natural trajectories carry us toward the center of the earth.
And yet statements like that can be incredibly useful for guiding action. And so in some way, it’s quite useful to think of them as true statements of the world. And when you look at science, we’re never really fully sure about a lot of things, whether they’re going to remain true forever, whether they’re going to continue to be our best understanding of the world. So that’s sort of the academic background to it.
Now in the context of understanding misinformation, we also thought this would be a very useful definition because there’s this long history of propagandists trying to weaponize the idea of doubt. So for example, the classic case is tobacco, where big tobacco for many years pushed the idea that we’re not totally sure that cigarettes cause cancer, we’re not totally sure that cigarettes are killing us. Or with climate change, like, well, there’s some doubt that the climate is really warming because of carbon emissions.
And so part of the point we’re trying to make here is that that doesn’t matter. What we’re trying to do is guide action in a successful way. We’re trying to come to beliefs that allow us to act in our own best interest. And when you have a lot of evidence, a belief like anthropogenic climate change is happening is true in that sense. You know, it is the belief that will help you make good decisions and good policies for the future.
Lindsay: Yeah, that’s interesting. Part of the reason I ask is that I wondered whether you tied truth to action that leads to some kind of success as a way to get around the fact that people often just don’t believe evidence. If the definition of what is true is that it’s evidence-based, how do you get around the fact that people are perfectly willing to reject evidence?
But in your definition, if I believe something that isn’t true and I take an action based on that belief, then you know, the world will kind of show me that that belief isn’t true by kind of biting back. It will thwart me, it will hurt me. I won’t achieve my goal because the belief was false.
So the evidence lies with me, and I don’t have to trust this other evidence that I can’t really easily evaluate. I was just curious, I don’t know if that’s how you were thinking about it, but I was wondering if that had something to do with it.
Cailin: Well there’s something there that we write about, and it’s sort of almost something that we’re getting at, which is this problem of induction. It’s this very famous problem in philosophy, which is that evidence from the world can never tell you something for sure. You know, you can see the sun come up thousands and thousands of days in a row, but you don’t know that the sun is going to come up tomorrow.
So there’s this kind of deep worry about evidence: you can always interpret it in different ways, and it never makes you a hundred percent certain of anything. It can give you very high confidence in something, but it can’t get you all the way to a hundred percent. So I think that’s related to this idea that people can interpret evidence differently, or mistrust it for different reasons, or come up with some other explanation for why that evidence exists. You know, “Maybe the scientists were lying to us.”
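To put one number on the “never a hundred percent” point, here is a tiny sketch of Bayesian updating on the sunrise example. The prior and the likelihoods are made-up values for illustration only:

```python
# Bayesian updating on repeated sunrises. The numbers are illustrative
# assumptions; the point is that the posterior approaches 1 but never
# mathematically reaches it.

credence = 0.5          # prior: "the sun always rises" is as likely as not
p_rise_if_true = 1.0    # if the hypothesis is true, the sun rises every day
p_rise_if_false = 0.99  # even if it's false, the sun might rise anyway

for day in range(1000):  # observe a sunrise each day, update by Bayes' rule
    credence = (credence * p_rise_if_true) / (
        credence * p_rise_if_true + (1 - credence) * p_rise_if_false
    )

print(credence)  # ~0.99996 after a thousand sunrises: very high, never 1.0
```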
But yeah, this definition also does foreground this issue that if you believe false things, it can bite you in the ass. It doesn’t always…
Lindsay: It doesn’t always, yeah.
Cailin: [You could believe] your whole life that evolution isn’t happening and nothing’s going to happen to you. But if you don’t get a COVID vaccine, you could die.
Lindsay: Yeah. That was going to be my next question, is there are so many cases where false belief doesn’t have a negative consequence, you know, personally. Like you might think that, you know, vaccines cause autism—which they don’t—and therefore not vaccinate your kids against horrible diseases, but your kids might not ever get those horrible diseases. So nothing is biting back to sort of push against that belief. And so what do we do about that?
Cailin: Yeah, I mean, this is a thing that’s quite tricky. So there’s some kind of beliefs where it really, at the end of the day, doesn’t matter if people believe false things, and I think evolution is the perfect example.
There just isn’t a place where the rubber meets the road, if a bunch of people in whatever country don’t believe evolution is happening. And so in some way it’s not really a big problem for public belief. It’s these kinds of things where the world is going to bite back where it matters, or where there’s a chance the world could bite back.
So as you’re pointing out, there are all these cases where the consequences of your actions may be uncertain. You could not get vaccinated for COVID and be fine, or you could not get vaccinated for COVID and die. I actually had a friend of the family who did die because he didn’t get vaccinated. But most people who didn’t were just fine, right?
So it’s these probabilistic cases, where people don’t get a lot of input from the world, where it can be quite difficult to convince people that they’re wrong about something, that they believe something false, and where it actually does matter to convince them, you know. If you want to be a flat earther… okay.
Two other issues come up. Sometimes there are false beliefs where the world is going to bite back, but only after a delay. Climate change is like that: we can all run around driving our cars [and] nothing all that bad’s happening to us yet, but we know bad things are happening in some places in the world, and will get worse over time. And sometimes your false beliefs end up hurting others rather than you. If you have false beliefs about immigrants and criminality, and then you vote for someone who has strong anti-immigrant policies, you, at the end of the day, are not really harmed by that.
Lindsay: Is there a value judgment in the way that you’re defining true belief, if it’s connected to successful action? Like, who gets to decide what “successful” is?
Cailin: Yeah. This is a deep question because people want different outcomes in the world. And so, you know, someone who at the end of the day doesn’t want immigrants in their country for a bunch of reasons, voting for anti-immigrant policies may be a totally successful action for them, even if it was a choice that was grounded in false beliefs, right? So one way we sometimes think about or model decision making in economics and the social sciences is that your decisions are grounded both in your beliefs and your values, where your values have to do with what you prefer, what you want to bring about in the world. And people have different values, right?
Now, one reason, in the book, we try to convince everyone [that] holding true beliefs is important, is often the best way to implement your values is by holding true beliefs, right? So, I think there are a lot of people in our country who actually value sustainability, who want their grandkids to be in a world where they can thrive and be happy, so they hold that value, but they’ve been convinced that climate change isn’t happening, and so they aren’t able to bring about or implement that value via their actions because of this false belief.
Lindsay: Yeah, I was thinking of another example from your book, about when a value or a need in our hierarchy of values and needs comes into conflict with another. In a scientific community, for example, you value a true belief, but you also value and need social acceptance, and speaking up and dissenting from a dominant view might put that social acceptance at risk. This is true for all of us, right? So what do we need most: the true belief, or the social acceptance?
Cailin: So we have a whole model in the book where we talk about conformity bias. The fact that people like to do things that are the same as others, state beliefs that are the same as others, conform to a group. And this is related to what you’re calling social acceptance, right? So the reason people like to do that is that they get social payoffs for conforming with others.
And the way we model it is thinking of us as having different sorts of payoffs. We have payoffs for taking actions that are useful in the world, and then we have these social payoffs. So maybe I get a payoff for not vaccinating if my friends are all anti-vax, and that might overwhelm my expected payoff for vaccinating, right? If not vaccinating is not that risky because of herd immunity, it might actually on balance be better for me in a sort of payoff value sense not to vaccinate.
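Here is a small sketch of the payoff structure being described, where an agent’s total payoff is an action payoff plus a social payoff for matching neighbors. The particular numbers, the conformity weight, and the mostly anti-vax friend group are illustrative assumptions, not values from the book:

```python
# Sketch of combined payoffs: total = payoff from the action itself plus a
# social payoff for conforming with neighbors. All numbers are illustrative.

ACTION_PAYOFFS = {"vaccinate": 1.0, "abstain": 0.9}  # vaccinating is better
CONFORMITY_WEIGHT = 0.5  # how much matching your friends is worth

def total_payoff(action, neighbor_actions):
    matches = sum(1 for a in neighbor_actions if a == action)
    social = CONFORMITY_WEIGHT * matches / len(neighbor_actions)
    return ACTION_PAYOFFS[action] + social

friends = ["abstain"] * 4 + ["vaccinate"]   # a mostly anti-vax circle
print(total_payoff("vaccinate", friends))   # 1.0 + 0.5 * 1/5 = 1.1
print(total_payoff("abstain", friends))     # 0.9 + 0.5 * 4/5 = 1.3
```

With these numbers the conforming choice wins, 1.3 to 1.1, even though the action itself is worse, which is exactly the trade-off just described.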
Lindsay: Yeah. Well, so tell us more about what the model shows about, again, how these ideas are spreading in social networks. How is that happening and where does it originate from? What’s the source?
Cailin: So there’s different kinds of ideas and beliefs, and actually they can spread in quite different ways. And if you look at this whole literature modeling the spread of information, there’s different kinds of models because they’re tracking different kinds of ideas.
So some things, what we might think of as rumors, basically get passed from head to head, almost like a virus passes from person to person’s body. So something like you might’ve heard when you were a child that bubblegum sits in your intestines for seven years. That’s very rumor-like—so one kid tells it to another kid, tells it to another kid, they all just immediately believe it.
Then there’s other beliefs that are much more tied in with evidence that we’re getting from the world. So do I think that mangoes cause diarrhea? I’m sorry, I’m just like, what can I throw in here? So they do, if you eat a lot of mango…
Lindsay: Nice. Listeners, pay attention.
Cailin: Yeah, listeners pay attention. Don’t eat too much mango. But that’s the sort of thing where someone could tell you, but you can also try something and figure out whether that’s true or not. And those kinds of ideas spread a little differently, because someone can tell it to me, I can change my behavior on the basis of thinking that, and then that’s going to change what I learn about the world and then what I tell other people. So if someone tells me that about mangoes, I might stop eating mangoes. I might tell others you shouldn’t eat mangoes. And then everyone might come to think, like, don’t eat any mangoes or something like that.
So we focus a lot on those kinds of beliefs, beliefs that are related to evidence that you’re regularly getting from the world. There’s also another thing people model, which is a little different from what we’re talking about. Sometimes they’re called opinions, and these often represent more value-based stuff. “We should regulate guns” would be an example of something that would be modeled as an opinion.
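As a sketch of the first, rumor-like kind of spread, here is a simple contagion model in the spirit of the virus analogy above. The population size, transmission probability, and contacts per step are all illustrative assumptions:

```python
import random

# Contagion-style rumor spread: a believer passes the rumor to random
# contacts with some probability, like a virus. Parameters are illustrative.

random.seed(0)
POPULATION = 1000
P_TRANSMIT = 0.4        # chance a believer convinces any given contact
CONTACTS_PER_STEP = 3

believers = {0}         # one kid starts the bubblegum rumor
for step in range(15):
    newly_convinced = set()
    for _ in believers:                      # each believer makes contacts
        for _ in range(CONTACTS_PER_STEP):
            contact = random.randrange(POPULATION)
            if contact not in believers and random.random() < P_TRANSMIT:
                newly_convinced.add(contact)
    believers |= newly_convinced
    print(f"step {step:2d}: {len(believers)} believers")
```

Note there is no evidence anywhere in this model: belief passes on contact alone, which is what distinguishes rumor-like spread from the evidence-linked beliefs discussed next.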
Lindsay: Well, how does this happen in the political realm? I mean, the premise of your book, and part of why we agree with you that this is so important, is that these false beliefs, produced through misinformation, can have giant impacts on our society.
What does it look like in our civic shared political space? Again, we’re all kind of accustomed to thinking about, okay, there’s misinformation, there’s social media, things are bad. But when you’ve investigated this, what is actually happening, and is what’s happening actually worse, or are we having the same problems we’ve always had, irrespective of our technology?
Cailin: No, I would say that technological changes over the last couple decades have really created new problems that are qualitatively different from things we’ve seen in the recent past at least. So as I said, there’s always been problems with false belief. There’s always been people spreading false rumors, sharing false ideas. Conspiracy theories have been around. All of this has been real.
What social media has done is a number of things. One, it has accelerated the spread of beliefs. It has brought many more people into connection with each other who weren’t before, so new ideas that used to spread much more slowly now move very fast. During the COVID-19 pandemic, for example, a group published a preprint claiming that COVID-19 had been engineered from the AIDS virus, and it went massively viral. Within a few days, millions of people had seen the claim and thought it was just a scientific fact, because it was, you know, written in a paper by real scientists. Now, that paper was taken down just a few days later, but too late, it was out there. So that’s something that’s very new with social media.
Another thing is that it’s much harder to tell who’s who online than it is in person. If you’re looking at someone face to face, they could be a trickster, but pulling that off takes a lot. It’s very easy to be a trickster online. An account you’re interacting with could be a bot, a political sock puppet, Russian agents, or just a teenager messing with you, and it’s very hard to verify which. So all these people who want to mess with other people’s beliefs, who want to convince them that something’s not real, who want to ruin a political system or just sow chaos, have much more access to other people, and much more ability to interact with them, than they’ve ever had before.
And then the last thing is that online spaces allow people to choose their social world in a way that they weren’t able to before. What this can do is bring together people with false or wacky beliefs and give them a community where they solidify those beliefs. Flat earthers are a perfect example: with the internet, it’s easy for them to find each other, create a society, and tell each other, “Yes, you are right, you are right.” The QAnon conspiracy was another great example of that kind of phenomenon.
So all of these things are new to the internet. They’re new problems that we’re trying to solve now.
Lindsay: But given that the internet also gives us, like, unparalleled access to good information, why do false beliefs stick so stubbornly?
Cailin: Yeah, this is the great irony of the internet age. I’m sure we both remember the advent of the internet, and I remember school teachers telling me that we were going to have all the world’s information at our fingertips, that kids of the modern age would never be wrong again because you could just look it up.
I think the big theme from our book is one of the main diagnoses here: human beliefs are deeply social. And so, in the age of the internet, it is not the case that we, as little perfect reasoning agents, got online, sought out great information, and learned only true things about the world. Instead, we got online and created communities, tried to fool each other, developed new social identities, and spread rumors that we weren’t checking. The creation of the internet basically took our social worlds and made them bigger and weirder and different, rather than just acting as a resource for us all to get good information about the world.
Lindsay: Yeah. I want to ask you about the models that you used again. You used models, you weren’t like studying real people, you were using computer simulations. Simplified, idealized, rational. And of course we know that people are emotional, complex, inconsistent, and that’s how the world is, too.
Talk to me about, like, how far can we get in understanding what’s really going on in this incredibly messy world using these techniques? Talk to me about the advantages and then also some of the limitations.
Cailin: Yeah, so this kind of modeling technology, sometimes people are very critical of it because they’re like, “Look, this simulation, as you said, doesn’t have all the things that are in the real world, and so what can it tell us about the real world?”
Now when I use models to try to understand how society works, I don’t think of them as some perfect representation of reality. Instead, they’re more like tools to investigate. They’re tools to make arguments from, and you can only make arguments from them that make sense given what you see in the model.
So here’s a typical thing that we’ll do with this kind of model. We’ll take perfectly rational learners, put them in a group where, left to themselves, they would reliably learn the truth about the world, and then we add conformity bias. And what we find in that model is that they do much worse: now they tend to polarize. Some of them, trying to conform with each other, all believe the false thing; others believe the true thing; and the two camps aren’t listening to each other. And so the kind of argument we can make is: all right, we know humans do engage in conformity bias, and it may be that this bias has really negative impacts on our learning, even if we ourselves are pretty good at reasoning from evidence. I’m not saying it’s guaranteed that this is how conformity works in the world. It’s more that we don’t even need to be all that irrational to start doing badly once we enter a social situation where we care about conforming with others.
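The conformity models in the book are richer than anything that fits here, but even a standard bounded-confidence opinion model (a classic stand-in from the opinion-dynamics literature, not the book’s own model) shows the flavor of the result: agents who only listen to people already near them split into stable camps that stop hearing each other:

```python
import random

# Bounded-confidence opinion dynamics (Hegselmann-Krause style), used as a
# simple stand-in for polarization: each agent averages its opinion with
# everyone within EPSILON of it and ignores the rest. The group typically
# freezes into a few camps. Parameters are illustrative.

random.seed(2)
N_AGENTS = 30
EPSILON = 0.2   # how different an opinion can be and still get a hearing

opinions = [random.random() for _ in range(N_AGENTS)]
for _ in range(50):
    opinions = [
        sum(o for o in opinions if abs(o - mine) <= EPSILON)
        / sum(1 for o in opinions if abs(o - mine) <= EPSILON)
        for mine in opinions
    ]

print(sorted(round(o, 2) for o in opinions))  # usually two or three camps
```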
Lindsay: Yeah. You know, it’s interesting, I’m glad you brought that up. The idea of rational is important, and it’s used in your book as something that’s good, and I think it is good. And I gather, and I think you stated this, that part of what you’re pushing against, in taking this social perspective and showing that even a model full of perfect reasoners gets it wrong, is the idea that “people are just dumb. You know, not me, but other people are dumb.” It’s like, no, actually, even the perfect model gets it wrong.
Cailin: Okay, a few things. Some people actually are pretty bad at dealing with evidence. Like it is true that people are better and worse at dealing with evidence. And my uncle Matt, who I love very much, and I know will never listen to this podcast, for example, will tell you, like, fairly wacky stuff about the world pretty dependably. So that is true.
But part of what we were trying to resist is the idea, especially in this highly partisan world, that if someone disagrees with me, they must be an idiot, that they must be reasoning really poorly or doing something really stupid. That is not true. And we also want to create some humility along the lines of: if you look at history, and if you look at these models, it genuinely could be the case that whatever you think, you are wrong.
Lindsay: Well let’s talk about policy. Let’s talk about solutions. In your book, you suggest that focusing on individual psychology or intelligence will misdiagnose how false beliefs persist and spread, and lead to the wrong remedies. So what are some of the right remedies?
Cailin: Yeah, so when I emphasize social remedies for false beliefs, it’s not just that I think they’re more accurate to how humans learn. I also think they’re just more practical, because you can change a social media platform through the actions of one person in one day, and that can change how millions of people interact with information across the world. Getting all those millions of people to be really smart at identifying misinformation, to learn how sock puppets work, to learn how online propaganda works, is really hard in comparison. So we emphasize a lot that the most practical ways to get an informed public in any country are to have good news sources, good platforms, and good government legislation to protect belief. That’s a simpler and more effective way to address the big problem of misinformation than trying to educate everybody.
One of the things that I think the EU has done quite well is regulating online platforms, and I think places like the U.S. ought to be doing that as well. Of course, in our current political climate there’s little will to do so, unfortunately. But, for example, we could have a government body similar to the EPA, a flexible regulatory body that works with platforms to promote certain ideals: the spread of good beliefs, the suppression of harmful ones. You have to be very careful about the details, because you do not want to be impinging on free speech rights. But many, many people have pointed out that free speech is not the same as a right to platforming. We have a right to say what we want without being prosecuted, but we don’t have a right to have our thoughts blasted out to everyone on the internet when they are harmful or false. So there’s a ton of space in designing platforms for creating good informational environments without suppressing free speech.
Lindsay: Yeah. And it’s interesting, I mean, I can see the appeal of focusing on things that are policy level, that allow us to think about having an impact at scale, like you said, but also because, you know, going back to your Uncle Matt, it’s really hard to change people’s minds.
Cailin: Yeah, and let me put something out there that I think people don’t always think about. So we hear this all the time, it’s really hard to change people’s minds. That’s true, but only about a very small number of beliefs that are highly politicized. It’s really easy to change people’s minds about thousands of beliefs that aren’t politicized or tied up with your social identity.
So if I say, did you see this new study that says tomatoes don’t have lycopene in them? You’d be like, “Oh, okay.” It’s only about stuff like global warming, like vaccines, like these hot button issues we’ve been talking about that people are really stubborn about.
But what we need to be thinking about are things that are intervening on social environments and social structures in such a way as to break up this dependence between identity and a particular false belief. You know, if you can get someone who’s really prominent in some political group to flip the script and be like, “We are no longer associating with that false belief.” If you can reduce the prevalence of partisan news—I think partisan news is actually one of the most harmful factors for public belief in the whole U.S. Like, if you asked me what’s one thing we should change? That would be the thing. Those kinds of interventions may actually be effective at changing some of these super sticky beliefs.
Lindsay: Was there a time that you believed something that you later found out wasn’t true? You mentioned gum in our intestines for seven years, but I’m wondering if you have an example of something that was a bit more fundamental to your belief system than that, that maybe shook you in terms of your own susceptibility to misinformation?
Cailin: Yeah, there have been plenty of times when I’ve fallen for things online. I think everyone, no matter how careful they are, has. Here’s one that’s kind of embarrassing to me. So, I am a liberal. I try to understand misinformation without that biasing me too much, but of course, no one’s perfect, right? There was a meme that went massively viral during the 2016 U.S. election: a picture of Donald Trump with a supposed quote that said something like, “If I were to run for president, I’d run as a Republican. They’re the dumbest voters out there.” It was this idea that, years ago, Donald Trump had said, “I’m going to target Republicans, and I have no respect for them.” I saw that and I thought it was real. It was not real. It was just misinformation. And this doesn’t reflect well on me, right? I wanted to think it was true because I don’t like Donald Trump, and I was perfectly happy to just put it into my beliefs as a true thing without actually checking.
Lindsay: That’s so interesting. Here’s the example I think about a lot when I think about how we come to believe things. I regularly read The Guardian, the UK newspaper, and I respect them, and they had a story about a British nurse who was killing babies, and she’s in prison now. It was crazy. I followed this case. It was front page news, and it was just astounding. It was horrible.
Flash forward six months, and The New Yorker had a piece on this case with a completely different view, arguing that she had been wrongly accused. And I realized I had believed The Guardian wholeheartedly. Hook, line, and sinker: “Yep. This is horrible.” I was taking it in and accepting exactly what they were telling me. And they were reporting in good faith; I have no reason to believe otherwise.
But then The New Yorker was reporting, and I was taking it in because I trust them, too. And they came to a totally different conclusion. And I realized how much we take for granted from sources that we trust. In this case, two sources that I trust were in conflict with one another.
But it was a shock to me how easily I will believe something if it comes from a trusted intermediary. So again, that’s not misinformation. They’re not trying to mislead us, or lie, or trick us. But it does show that we’re all very susceptible to taking for granted things that come from networks we trust.
Cailin: Yeah. And actually, you know, when people ask how to develop good beliefs, I tell them the number one best sources are mainstream news, because they have reputations to uphold and they do a lot of fact checking. So in fact, you were doing something totally reasonable. You were doing about as well as a person can do listening to these mainstream sources. I think it just demonstrates that, as I said before, when you open that door, good stuff is going to come through, and some bad stuff too. We just don’t always get it right.
Lindsay: Cailin O’Connor, thanks for being with us on Talking Policy. So good to talk to you. Thanks for writing this book.
Cailin: Yeah, great to talk to you, Lindsay. Thanks for having me on.