
Public Trust in Science and Technology

Going back to the beginning of the course, establishing social trust in science is one of the goals of public engagement. Not only does a lack of trust diminish science's legitimacy in the public eye, it also makes the more practical goals of science much harder to achieve. If laypeople distrust science and are incredulous or suspicious of its claims, it will be quite difficult to put scientific results into wide use.[@grasswick2010]

Vaccine hesitancy is a clear example of this. If a significant portion of the public distrusts the science behind vaccine efficacy and safety, they will be less likely to get vaccinated no matter how readily available the vaccine is. Not only does this leave the unvaccinated with a higher risk of falling ill themselves, it also negatively impacts the overall level of immunity in the population.

Another example from the Covid-19 pandemic comes from the contact-tracing apps designed to track the spread of the virus. These apps enjoyed widespread support from healthcare organisations because they had the potential to strengthen epidemiological research. However, the reception from the general population was mixed. In particular, a significant portion of the public was concerned about whether their data was being handled responsibly, which sometimes led to poor adoption rates, once again highlighting the difficulty of implementing scientific research and processes when people do not trust science.

As important as citizen trust in science is, there are many challenges that can make science less trustworthy to individuals and communities. As scientists and researchers, it is important to become aware of them and reflect on how they might be mitigated.

10 challenges to public trust in science

Over the next sections we will look at different challenges or barriers to public trust in science and technology. Although we will go through each issue separately, in practice there is a lot of overlap between them, and some feed into or amplify each other.

1. Understanding of science

The first challenge to fostering public trust in science has to do with science education: in particular, the general public's poor understanding of what science is and how it works, which can lead people to misinterpret the normal workings of science as failings.

On the first day we looked at the Deficit Model,[@lewenstein2003] which assumes that scientific literacy amounts to knowledge of scientific findings. As we saw, there are problems with this view, since members of the public arguably have no reason to know the detailed workings of generative models in machine learning or complicated equations in theoretical physics.

There is, however, another way to think about scientific literacy: not in terms of how much scientific knowledge the public has, but in terms of whether they grasp the nature of the scientific process. As Douglas argues, what should be at the core of science education is not science as a set of facts about areas of knowledge, but a thorough understanding of what science is as an epistemic endeavour.[@douglas2017]

The most important thing to understand about the scientific process is that science is jointly critical and inductive in nature (ibid). Science seeks to build an empirical understanding of the world by proposing explanatory theories and then testing them in the best way possible (ibid). Therefore, science must always rely on induction to reach its conclusions. As such, there is no irrevocable proving of facts in science; one can only falsify (or fail to falsify) its theories.

A classic example is that of the black swan. The phrase dates back to the Roman poet Juvenal, who used it to describe a rare bird presumed to be non-existent. The phrase was popular in 16th-century London as a way to refer to impossible events or non-existent objects. Because no one (at least among Europeans) had ever seen a black swan, it was assumed that black swans did not exist.

Such reasoning relies on induction: it moves from particular instances (a lack of sightings of black swans) to the general conclusion that black swans do not exist. However, in 1697 Dutch explorers became the first Europeans to see a black swan, in Western Australia, and their presumed non-existence was thus disproven.

The black swan story helps illustrate the limits of induction: no matter how many sightings of (only) white swans accumulate, one cannot irrevocably prove that black swans do not exist. To do so, one would need to examine every swan in existence, which is not possible. And science always works like this: we can only generalise findings from particular instances, which means a definite and irrevocable proof is unattainable.

Given its inductive nature, science must also be continually open to criticism: it must always allow its claims to be tested in the light of new evidence. Scientific research advances precisely because the status quo is continually challenged and tested, rather than being dogmatic and definitive in its conclusions.

However, the argument goes, the problem is that many people do not understand science as an ever-evolving process which produces only provisional results. Instead they perceive science to be a set of facts about the world (the speed of light is roughly 300,000,000 m/s, human cells have 23 pairs of chromosomes, etc.). This leads to public confusion about how science works, and it can make people interpret the normal (even crucial) workings of science as evidence of scientific failure.

A good example is expert disagreement and debate. As we just saw, this is a key component of a healthy scientific community: only if scientists are always open to criticism and willing to change their minds can science continually improve its understanding of the world.

As Douglas notes, "[e]xperts changing their minds is also evidence of science functioning properly, not evidence of experts being fickle or weak-minded".[@douglas2017] However, expert disagreement can be met with frustration from the general public if they are under the impression that science should provide definite answers, and when scientific 'facts' change (as some inevitably will) this may be interpreted as evidence of science's failings, slowly corroding the public's trust in it.

2. Spread of doubt and confusion

Another phenomenon which can erode the public's trust in science is documented in Naomi Oreskes and Eric M. Conway's book 'Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming'.[@oreskes2011] The authors show how a group of scientists in the United States used the inductive nature of science to discredit scientific findings that went against the financial interests of the industries that employed them.

Two of the best-known cases are those of the tobacco and oil industries. The tobacco industry famously denied the link between smoking and lung cancer long after science had provided robust evidence for it. The same can be said of the oil industry: first it sowed doubt about whether climate change was real, and when that became increasingly difficult to deny, it advanced the possibility that perhaps climate change was not man-made (again, going against what the evidence consistently showed).

In both cases, the scientists (who had ties to these industries) used the very nature of the scientific method as a way to undermine science's findings. As we know, because science is at its core inductive, nothing can ever be definitively and finally proven. There is always the possibility, however remote, that current scientific understanding is wrong.

This does not mean, of course, that there cannot be overall consensus in science. In the case of climate change, for instance, Oreskes (2004, 2007)[@oreskes2004]-[@oreskes2007] documents that in a review of approximately 900 scientific articles on climate change, none of them refuted the idea that "global climate change is occurring, and human activities are at least part of the reason why."[@almassi2012] However, she also notes that a 2006 ABC News poll in the US found that 64% of Americans perceived there to be a lot of disagreement amongst scientists on the reality of global warming (ibid).

This is the strategy of the 'merchants of doubt': spreading confusion among the public about the level of scientific consensus on certain topics in an effort to delay public criticism and therefore regulation. If, as we have noted, the public expects science to provide a set of unchanging facts, this strategy has a doubly negative effect: not only does it undermine trust in science, since scientists are supposed to have the answers, but it also creates the false perception that there is no scientific consensus when in fact there is one. Clearly, this can increase distrust in science, especially when the strategy is used to hide harms to the public in an attempt to maintain or increase the profits of those funding the scientists (as was blatantly the case in the tobacco and oil examples).

3. Vested interests

Related to the 'merchants of doubt' strategy is the more general problem of scientists having incentives other than the disinterested desire to understand how the world works. This is, of course, always true: scientists can be driven by multiple motivations, such as ego or the quest for status, among many others. Yet when scientists' incentives are aligned with protecting the interests of the companies who employ them, things become particularly problematic.

Cases where this happens are well-documented in medicine, where the pharmaceutical industry has been known to use diverse strategies to advance its interests: from aggressive marketing directed at physicians to increase the prescription of their drugs,[@keefe2021] to funding scientists who are supposedly objective and neutral but who are in reality on the pharmaceutical companies' payroll.[@ritchie2020]

An example from the world of tech comes from Facebook (now Meta). In documents known as 'the Facebook files', the Wall Street Journal published various articles documenting the company's clear awareness of the many problems and failings of its products. However, it did surprisingly little to fix them, all the while minimising the extent of the problems or feigning ignorance to the general public.

In particular, leaked company documents show that internal research at Meta concluded that Instagram was detrimental to teen mental health.[@wells2021] The documents not only detail the evidence Meta had amassed linking poor teen mental health outcomes to Instagram use, they also document how this information was relayed to Mark Zuckerberg, the company's CEO. Additionally, they show that worries about users interacting less with the platform played into the company's decision to fix (or, more accurately, to do nothing about) some of Instagram's problems (ibid). When research like this is exposed, it is not hard to see how it could erode people's trust in researchers. Although in this case we are looking at Meta employees paid to carry out research rather than at independent researchers, 'the Facebook files' highlight the negative consequences for the public when research ultimately serves the interests of private companies rather than the public good.

4. Fraud

Even more extreme cases occur when scientists are caught in outright fraud, claiming to have achieved feats which are simply not true, or inventing data and publishing it as real. Many examples come from the world of medicine and technology.

Take the case of Paolo Macchiarini, a surgeon who claimed to have solved the rejection problem in trachea transplants (whereby the body rejects the transplanted organ). Not only that, he managed to convince the scientific establishment of his successes and for a while became a rockstar in his field.[@ritchie2020] But the reality was far from a success story: Macchiarini had exaggerated or outright lied about the effectiveness of his treatments, and tragically, many of the patients on whom the procedure was performed died in the following months or years due to complications from the surgery.

One of the most worrying parts of stories like this is how long it took to catch Macchiarini in his wrongdoings. It would be one thing if he had been exposed during the peer-review process or shortly after publishing, but this was not what happened. Macchiarini managed to publish his research in the most prestigious journals in his field, operate on multiple patients, and even land a job at the world-renowned Karolinska Institute before being exposed.

Fraud on the scale committed by people like Macchiarini serves to highlight just "[...]how much science, despite its built-in organised scepticism, comes down to trust" (ibid).[@ritchie2020] As a general rule, scientists operate with a certain level of trust towards other scientists: trust that researchers are telling the truth, that they actually conducted the experiments they claim they did, that the statistical analyses have been reported correctly, and so on.

Why do scientists lie? Clearly there can be many reasons. As we saw in the last section, scientists (and human beings in general) may have many different motivations, such as the search for status or fame in their fields. But the so-called "publish or perish" culture prevalent in academia today certainly does not help. Because it can feel as if their whole career is at stake, scientists can feel extremely pressured to get something, anything, published, even if that means resorting to dishonest means. Of course, this is not to excuse fraud at any level, and there are plenty of other reasons scientists resort to lying, but it does highlight a structural problem in the social functioning of science today. We will delve deeper into this problem in the next section.

Cheating and fraud will never be completely eradicated from any human endeavour, and cases where fraudsters are exposed show us that at least not all lies can stay buried in science. However, there remains the question of how much fraud is not being exposed. As Ritchie notes, the image of objectivity and honesty that the scientific community prides itself on might be exactly what prevents it from spotting fraudsters like Macchiarini in a timely fashion. If the implicit trust between scientists is 'too high', it may stop them from scrutinising data and results in enough depth.

Another extreme case is that of Elizabeth Holmes, founder of Theranos, the now-infamous company which claimed to have developed breakthrough health technology to automate and miniaturise blood tests. This technology supposedly enabled hundreds of tests to be run on just a single drop of blood. A few years after raising millions of dollars for Theranos (which even earned Holmes the title of youngest self-made female billionaire), she was exposed as a complete fake. Her technology did not work at all, and the company had given customers inaccurate test results which in many cases compromised their health. Once again, the fact that it took years and millions of dollars in investments to uncover the extent of her fraud may understandably cast doubt on the public's perception of science as honest and legitimate.

5. Bias, negligence, and hype

Scientists do not have to commit outright fraud to skew their results in ways which can, in the long run, diminish public trust in science.

There are other ways in which scientific results can be presented to make them seem more robust than they really are. As we saw, the incentives in the "publish or perish" culture are such that scientists are driven to put the possibility of publication over every other consideration, which can lead to biases in their research as well as sloppiness.1

The drive is not only to publish, but to publish "attention-grabbing, unequivocal, statistically significant results" (ibid, emphasis added), and this makes for one of the biggest sources of bias and skewed results in science.2 Researchers know that certain kinds of studies are very unlikely to get published. Studies which do not find any new and surprising effect, studies which (only) replicate previous findings, or studies with no statistically significant results have very slim chances of getting published in peer-reviewed journals, regardless of how rigorous the methodology is. Obviously, this is a big problem.

In a perfect world, studies would be published based almost solely on their methodological virtues, paying no attention to how new or surprising the effect found is (or to whether an effect is found at all). If a study is designed properly, its results should be of interest to the scientific community whether the result is positive, negative or null.[@ritchie2020]

Instead, what we get is scientists not publishing (and sometimes not even writing up) research which did not find any statistically significant results (sometimes referred to as the 'file-drawer effect'). If only studies with statistically significant effects are published, while others which show a smaller effect or no effect at all never see the light of day, the whole literature in the area will overstate the effect(s).

In fact, this is partly what is driving the so-called replication crisis in academia, where famous studies which established relevant effects cannot be replicated by other researchers. The problem is widespread and well-documented, and it seems to be most prevalent in social sciences like psychology as well as in the biological and medical sciences.[@ioannidis2005] In a study published in Nature in 2016, over 70% of the 1,500 researchers who filled out a questionnaire declared that they had tried and failed to replicate other scientists' experiments, and over half of them had failed to replicate their own experiments.[@baker2016] Not only that, but over 60% of respondents claimed that two factors, pressure to publish and selective reporting, were driving the problems in replicability.

Does this mean the original scientists were lying? No, not at all. More likely, they just 'got lucky' and their data showed bigger effects than the 'real' effect (that is, what one would expect to get on average if one ran the experiment many times). In any case, if only the 'lucky' studies get published, the overall effect in question will most likely be inflated.
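
To see how this publication filter inflates effects, consider a minimal simulation sketch. All numbers here are hypothetical and chosen purely for illustration: many labs study the same small true effect, but only the studies that happen to reach statistical significance get 'published'.

```python
import numpy as np
from scipy import stats

# Illustrative simulation (hypothetical numbers): a small true effect,
# many independent studies, and a 'journal' that only publishes p < 0.05.
rng = np.random.default_rng(42)
true_effect = 0.2      # true group difference, in standard deviations
n_per_group = 30       # sample size per group in each study
n_studies = 5000       # number of studies run across the field

published_effects, all_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    observed = treatment.mean() - control.mean()
    _, p_value = stats.ttest_ind(treatment, control)
    all_effects.append(observed)
    if p_value < 0.05:                 # only 'significant' studies get published
        published_effects.append(observed)

print(f"True effect:                    {true_effect:.2f}")
print(f"Average effect, all studies:    {np.mean(all_effects):.2f}")
print(f"Average effect, published only: {np.mean(published_effects):.2f}")
```

Because small, noisy studies only cross the significance threshold when they happen to overestimate the effect, the average published effect in this toy setup comes out well above the true value, even though every individual study was conducted honestly.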

But there are also other reasons which contribute to the inflation of research results. An all-too-prevalent practice in academia, known as p-hacking, is one of them. P-hacking refers to a set of practices where scientists slightly nudge (or hack) their p-values until something reaches the almost holy-grail status of statistical significance. They can re-run almost identical versions of their regressions until they get a statistically significant result, drop certain data points, change the statistical tests used, or even take a data set with no particular hypothesis in mind and just see which effects come out statistically significant. The pressure to publish makes this kind of behaviour far too common in current scientific practice, to the point where many researchers might not even think they are doing anything wrong.
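
A hedged sketch of what this can look like in practice, again using entirely hypothetical numbers: there is no real effect at all, but a researcher measures ten different outcomes and reports whichever happens to cross the 5% significance threshold.

```python
import numpy as np
from scipy import stats

# Illustrative sketch (hypothetical setup): NO real effect exists, but the
# researcher measures 10 outcomes and reports any that come out 'significant'.
rng = np.random.default_rng(0)
n_experiments = 2000
n_outcomes = 10        # outcomes measured per experiment
n_per_group = 30

false_positives = 0
for _ in range(n_experiments):
    found_something = False
    for _ in range(n_outcomes):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(0.0, 1.0, n_per_group)   # same distribution: no effect
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < 0.05:
            found_something = True
    false_positives += found_something

print("Chance of a 'significant' finding per honest test: ~5%")
print(f"Chance when the best of {n_outcomes} outcomes is reported: "
      f"{false_positives / n_experiments:.0%}")
```

Even though each individual test has only a 5% false-positive rate, reporting the 'best' of ten tests produces a spurious 'finding' in roughly 40% of experiments.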

One particularly revealing case is that of Professor Brian Wansink, for a long time one of the most important voices in food psychology (the famous studies showing that people who use bigger plates tend to eat more food come from his lab). Professor Wansink inadvertently outed himself when he wrote a blog post detailing how he advised a student whose original hypothesis had "failed": Wansink encouraged the student to keep mining the data until something was salvaged. The blog post led other scientists to revisit Wansink's work, which resulted in the retraction of many of his studies and in his resignation from Cornell University, where he had been head of the Food and Brand Lab.[@ritchie2020]

As bad as Wansink's case is, he is hardly alone. A 2012 poll of over 2,000 psychologists asked whether they had ever engaged in p-hacking:[@john2012] 65% admitted to collecting data on several different outcomes but reporting on only some of them, 40% claimed to have excluded particular data points after looking at the results, and 57% said they had decided to collect further data after running their analyses.[@ritchie2020]

Scientists might also simply be negligent when checking their findings, which can skew their overall results. As Ritchie rightly points out, when researchers get 'good' results, that is, results which they think are likely to get published, they will probably feel excited (and perhaps relieved), and then move on. Conversely, if the results are 'bad' (unlikely to get published), they might scrutinise them in detail to make absolutely sure that such a disappointing result is correct. If this kind of uneven behaviour is consistent, then flawed null results will tend to get corrected, but most flawed statistically significant results will not, further inflating the statistically significant effects found.[@ritchie2020]

Finally, there is also the over-hyping of scientific results. There is no shortage of examples where scientists use words like 'unique', 'robust' and 'unprecedented' to describe their work, and the most prestigious journals pride themselves on publishing studies of "exceptional importance" (Proceedings of the National Academy of Sciences) and papers which have "great potential impact" in their fields.[@ritchie2020]

As we saw, it is highly unlikely that the most methodologically rigorous studies will all produce unique and unprecedented findings. However, there is pressure to present them this way, since researchers may feel that such language appeals to readers and to the reviewers and editors of prestigious journals.

All of these issues end up heavily skewing the literature towards mostly positive results, which are often inflated by a combination of the effects just described. The replication crisis in academia, and the (correct) perception that there are real problems in the way incentives are currently set up for scientists, give us good reason to be at least somewhat wary of scientific findings.

6. Lack of control over the message

No matter how much care researchers take when communicating and engaging with the public, the truth is that no one is entirely in control of the message they put out. Scientists might be misquoted or misinterpreted in the media by unscrupulous journalists or simply through errors of miscommunication. And scientific findings may be spun in ways designed to grab people's attention rather than to communicate truthfully.

Going back to pandemic examples: since the Covid-19 vaccines were rolled out, there have been numerous claims that more vaccinated people are dying of Covid than unvaccinated. While technically this may have been true in some cases (that is, the absolute number of deaths among the vaccinated may have been higher than among the unvaccinated), these figures failed to take into account that as vaccine uptake increased, the unvaccinated became a smaller and smaller group. So although the total number of deaths among the unvaccinated may have been small, once you take into account the total number of unvaccinated people, the death rate among the unvaccinated was much higher than among the vaccinated.[@spiegelhalter2021] Examples like this show the importance of science communication: it is not enough to get the numbers 'right' (in the sense that they are not adulterated or fabricated), one also needs to be able to read them properly.
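
The arithmetic behind this base-rate effect is easy to check. The sketch below uses entirely hypothetical numbers (not real Covid-19 statistics): with 95% vaccine uptake and a much lower death rate among the vaccinated, the vaccinated group can still account for more deaths in absolute terms.

```python
# Worked example with hypothetical numbers (not real Covid-19 statistics).
population = 1_000_000
vaccinated = 950_000              # 95% uptake (hypothetical)
unvaccinated = population - vaccinated

death_rate_vaccinated = 0.0002    # 1 in 5,000 (hypothetical)
death_rate_unvaccinated = 0.001   # 1 in 1,000 (hypothetical)

deaths_vaccinated = vaccinated * death_rate_vaccinated        # 190 deaths
deaths_unvaccinated = unvaccinated * death_rate_unvaccinated  # 50 deaths

print(f"Deaths among vaccinated:   {deaths_vaccinated:.0f}")     # more deaths in total...
print(f"Deaths among unvaccinated: {deaths_unvaccinated:.0f}")
print(f"Death rate, vaccinated:    {death_rate_vaccinated:.2%}")  # ...but a five-times lower rate
print(f"Death rate, unvaccinated:  {death_rate_unvaccinated:.2%}")
```

Comparing raw counts rather than rates therefore reverses the apparent conclusion, which is exactly the kind of misreading the headline claims relied on.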

Even in cases where researchers successfully get their message across, it is impossible to control how that message might be further distorted or changed on social media or elsewhere.

As we saw in sections 9 and 10, messages which cause strong emotions (such as anger or moral outrage) spread much more quickly through social media than unemotional or nuanced ones. Again, this is in great part due to the workings of the content-filtering algorithms which are programmed to show us content which is most likely to grab our attention and/or get users to share it.

Click-bait headlines and misleading quotes are therefore easily propagated through social media, and scientists engaging with the media and the general public should be well aware of this. One cannot control headlines and quotes, but researchers should be mindful of these issues and willing to engage with journalists when they feel they are being misquoted. Engagement does not end after giving an interview; it is important to follow up and clarify one's message when needed.

7. Mistreatment of and discrimination against marginalised communities

The history of science is riddled with cases of sexism and racism being passed off as 'objective science', with some particularly gruesome episodes such as the Tuskegee syphilis study in the United States during the 20th century. There is no shortage of examples where science was used to justify discriminatory practices and worldviews.

It is unsurprising that this long history of racist and sexist practices can make people from historically oppressed communities suspicious or distrustful of science, evidenced for example in African American women's distrust of the birth control pill when it first emerged,[@grasswick2010] or in the higher rates of vaccine hesitancy among marginalised groups, both for vaccines in general and in response to the Covid-19 vaccine.[@nguyen2022]

Given this history, how does trust in science fare when it comes to data science and AI? When these technologies were first used, and as their use became widespread, it was thought that they would eradicate (or at least greatly diminish) biases and discrimination (such as racism or sexism) from scientific practice. The reason for this was that data and AI were broadly perceived as 'neutral and objective': it was humans, not algorithms, who were full of biases.

We now know this way of thinking is grossly mistaken. If anything, algorithms can amplify already existing human biases, whether those biases are conscious or not. Sadly, there are far too many examples from recent years. A 2019 paper published in Science showed how an algorithm used in US healthcare to predict patients' needs was producing racially biased results.[@obermeyer2019] The bias was introduced because the algorithm used past health costs as a proxy for health needs, which inadvertently favoured White patients: less money was spent on Black patients with the same level of need as their White counterparts, and the algorithm thus falsely concluded that Black patients were healthier than equally sick White patients.
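
The underlying mechanism is easy to simulate. The sketch below is a deliberately simplified, hypothetical illustration of proxy bias, not a reproduction of the actual healthcare algorithm: two groups have identical distributions of true health need, but historically less money has been spent on one of them, so ranking patients by past spending systematically under-selects that group.

```python
import numpy as np

# Hypothetical sketch of the proxy problem: two groups with the SAME real
# health needs, but historically less money spent on group "B". Ranking
# patients by past spending then under-selects group "B" for extra care.
rng = np.random.default_rng(1)
n = 100_000
group = rng.choice(["A", "B"], size=n)           # "B" stands in for the disadvantaged group
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true health need, identical across groups

# Hypothetical assumption: only ~70% as much is spent on group B at the same need level.
spending = need * np.where(group == "B", 0.7, 1.0)

# The 'algorithm': flag the top 10% of patients by past spending for extra care.
threshold = np.quantile(spending, 0.90)
flagged = spending >= threshold

for g in ("A", "B"):
    share = flagged[group == g].mean()
    print(f"Group {g}: {share:.1%} flagged for extra care (true need is identical)")
```

Nothing in this toy model 'knows' about group membership, yet the choice of proxy alone is enough to produce systematically unequal outcomes.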

Another famous example is Amazon's hiring algorithm, which turned out to be biased against women. In an attempt to automate its hiring practices, Amazon developed an experimental hiring tool which used artificial intelligence to give job candidates scores ranging from one to five stars.[@dastin2022] The algorithm quickly taught itself to discriminate against women candidates, penalising resumes which included the word 'women' (in 'women's chess club', for instance) and downgrading resumes from all-women colleges (ibid). Although the algorithm is no longer used by the company (it was taken down precisely because of concerns about its sexism),[@dastin2018] it serves as a powerful example of how AI can perpetuate and amplify historical biases, for example by learning from the fact that Amazon had traditionally not hired many women and extrapolating that women are not good employees.

Because discrimination is often embedded in technology, people from marginalised groups have a rational reason to distrust it, all the more so when these technologies are untruthfully portrayed as just the opposite: neutral and impartial. It is therefore crucial to be aware of the potential biases of algorithms, reminding ourselves that no technology is ever truly neutral.

8. Misuse of data

Scientists and researchers can also abuse their power and the trust members of the public have placed in them. The world of data and AI is full of opportunities to do so, especially given the huge asymmetry of information between researchers and the public in terms of how data is collected and used, and how algorithms work.

A famous example is the so-called Facebook emotional contagion study.[@kramer2014] A group of researchers at Facebook and Cornell University studied how emotional contagion spread across the social network. To do this, they manipulated the News Feed of Facebook users, either to reduce positive messages (thus amplifying negative ones), reduce negative messages (thus amplifying positive ones), or reduce messages at random (control condition).

The level of emotional contagion was measured by the proportion of positive and negative messages the manipulated users themselves then posted. The study found that when positive expressions of emotion were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. The article was published in the prestigious Proceedings of the National Academy of Sciences (PNAS).

However, it was met with outrage from the public.[@meyer2014]-[@chambers2014] The company claimed it had not violated Facebook's terms and conditions, explicitly invoking them to argue that users' acceptance of these T&Cs constituted informed consent for the research. The public did not agree. None of the users knew they were part of this research, much less what the research amounted to in terms of the manipulation of their News Feed. The realisation of how easily Facebook could manipulate its users (for research or other purposes) angered and scared people, and increased their distrust of technology. It is interesting to note that one of the company's defences was, in effect, how much and how often it already tweaks and manipulates the algorithm.[@meyer2014] As we will see in the next section, this is no cause for celebration.

This episode also highlights how unaware researchers can be of the public's concerns. These researchers chose to publish their work in one of the most prestigious journals in science, where it would receive a lot of attention; presumably, it did not occur to them that other researchers and the general public would find the study unethical and be outraged by the privacy practices involved. This reminds us of the importance of public deliberation, and of having conversations which allow trust to flourish between scientists and the public.

9. Online misinformation and disinformation

We now turn to misinformation and disinformation. Misinformation refers to information which is incorrect or inaccurate, whereas disinformation has been "[...]used to denote a specific type of misinformation that is intentionally false".[@scheufele2019]-3 Unsurprisingly, both can do serious damage to the relationship of trust between the public and the world of science and research. When exposed to misinformation, people may become confused about what scientists are saying and start to distrust scientists' motivations, which can then lead to distorted narratives about the state of the scientific evidence on any given topic.

Misinformation is certainly not a new phenomenon, but it seems to have become increasingly prevalent in recent years. It is now well-documented that fake news is more likely to be retweeted and spread online than real news,[@vosoughi2018] and the Internet and social media can sometimes seem to be infested with it.

The link with algorithms and technology is direct (although possibly unintended as such). As we previously stated, no piece of technology is neutral. It is designed by humans with particular aims in mind and it can perpetuate and amplify human biases.

In the case of social media, the algorithm which determines the users' News Feed is maximising for one thing: time spent on, and interacting with, the platform. Because of the business model of social media companies, they are competing for users' attention (sometimes referred to as 'the attention-economy').[@zuboff2019]-[@center]

Therefore, content-filtering algorithms are designed to show us the content which is most likely to grab our attention and thus keep us on the platform. Sadly, emotional content which angers or outrages us seems to be an easy way to do so. Studies have shown that emotional, and particularly angry, messages spread much faster on social media (one of them is the infamous Facebook emotional contagion study from the previous section).[@kramer2014]-[@chen2017]-[@crockett2017]-[@brady2017] If the message being spread is fake or otherwise distorted, it is all the easier to make it as outrageous as required. So even though these algorithms were designed to maximise attention and engagement, we can see how they can inadvertently end up promoting fake news and misinformation.
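
In the simplest terms, an engagement-maximising feed can amount to nothing more than sorting posts by predicted clicks and shares. The toy sketch below is purely illustrative (the fields and weights are hypothetical, not any platform's actual ranking system); the point is that nothing in the scoring function asks whether a post is accurate.

```python
# Toy sketch of engagement-based ranking (hypothetical fields and weights,
# not any platform's real algorithm).
posts = [
    {"id": 1, "predicted_clicks": 0.10, "predicted_shares": 0.01, "outrage_score": 0.2},
    {"id": 2, "predicted_clicks": 0.25, "predicted_shares": 0.08, "outrage_score": 0.9},
    {"id": 3, "predicted_clicks": 0.15, "predicted_shares": 0.03, "outrage_score": 0.4},
]

def engagement_score(post):
    # Rank purely by predicted engagement; truthfulness never enters the calculation.
    return 1.0 * post["predicted_clicks"] + 3.0 * post["predicted_shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])
# In this toy data the post with the highest outrage_score also has the highest
# predicted engagement, so it rises to the top of the feed: [2, 3, 1]
```

If outrage-inducing (or false) content reliably attracts more clicks and shares, a ranking rule like this will surface it more often as a side effect of optimising for engagement.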

Widespread misinformation is detrimental to public trust in science for at least two reasons. First, if misinformation is rampant on social media, misinformation about science and scientists will be no exception. The amount of misinformation and outright fake news about the Covid-19 pandemic and the response to it has sadly given us (too) many examples of this over the last two years.

Additionally, an environment rife with misinformation promotes the worldview that perhaps there is no 'real' information at all (sometimes known as post-truth), and that instead we are simply confronted with people using supposed information to push their own interests on the public stage. As we will see in the next section, when this is combined with siloed communities which distrust anyone outside them, it can lead to very problematic epistemic outcomes.

10. Filter bubbles and echo chambers

This leads us to epistemic filter bubbles and, even more epistemically pernicious, echo chambers. Again, these problems are not exclusive to social media platforms and the algorithms that fuel them, but the latter certainly play a role in making them more ubiquitous and in amplifying their contents.

C. Thi Nguyen (2020) proposes a useful distinction between epistemic bubbles and echo chambers. He defines an epistemic bubble as a "social epistemic structure which has inadequate coverage through a process of exclusion by omission".[@nguyen2020] That is, it is a filter bubble which omits certain views and positions. The key here is that this inadequate coverage occurs through omission: there is no need for ill intent in the creation of epistemic bubbles, as they can arise "[...]through the ordinary processes of social selection and community formation" (ibid).

In fact, epistemic bubbles can be quite common in everyday life. We might find ourselves in one if we only buy newspapers of a certain political leaning, or only speak to friends who hold worldviews similar to ours. Social media News Feeds can on many occasions become epistemic bubbles, as people mostly interact with others who are similar to them.

The good news about epistemic filter bubbles is that they can be burst through sufficient exposure to information from outside of the bubble.[@nguyen2020] In this case, someone's warped view of the world is mainly due to lack of exposure to a variety of views on certain issues. Therefore, the solution is relatively easy: in order to burst the bubble, people should be exposed to many varied worldviews and opinions.

An echo chamber, however, is another story. Unlike in an epistemic bubble, a lack of diversity is not the main reason people become polarised and entrenched in their views. Instead, Nguyen defines an echo chamber as a "social epistemic structure in which other relevant voices [those outside of it] have been discredited" (ibid), which implies at least a certain level of intentionality in the discrediting of those not part of the echo chamber.

In fact, the crucial element of an echo chamber as an epistemic community is that there is a "[...]significant disparity in trust between members and non-members".[@nguyen2020] Members of the echo chamber are given almost infinite credence when they voice their opinions and views, while the beliefs of those outside it are completely discredited. It is a process similar to cult indoctrination, and it is easy to see why it is so pernicious.

By preemptively dismissing the opinions of those who do not share the echo chamber's beliefs, it is easy to insulate oneself epistemically, to the point that even evidence which contradicts one's views, and should give one reason to re-evaluate them, ends up confirming those views even further.

An echo chamber may be the most dangerous challenge to trust in science: once someone is inside one in which scientists are considered outsiders and discredited, it is almost impossible to get them to reconsider their beliefs, especially when the suggestion to do so comes from outside the echo chamber. Once again, it is important to remember that although echo chambers are not a purely online phenomenon, the way algorithms are employed in social media does seem to aid in their formation.


  1. For a detailed explanation of these phenomena, see Stuart Ritchie's book, Science Fictions (2020). 

  2. Even the crucial concept of 'statistical significance' has led to a lot of confusion, as the word 'significant' seems to allude to an effect which is big and important in some way, when it actually means that the effect found is sufficiently different from what we would expect to see if there were no effect (Ritchie, 2020, 133).[@ritchie2020] The related concept of the p-value has also lent itself to gross misunderstanding; in fact, a study found that 89% of Introduction to Psychology textbooks got the definition wrong.[@ritchie2020]-[@cassidy2019]

  3. In order to be concise, I will use the term misinformation to refer to both misinformation and disinformation unless explicitly stated.