
Epistemic Trust

About the Author: Emanuele Ratti is a Postdoctoral Fellow in the Developing Virtues in the Practice of Science Project. He is a philosopher of biology interested in the epistemology of contemporary molecular biology, with a particular focus on how the field has been shaped by its shift from a small-science regime to a big-science structure.

We commonly think of science as the exemplary endeavor of rationality. Rationality can mean several things, but here I have in mind the idea that the scientific endeavor is transparent in all its aspects. Science is praised in the public sphere because every single aspect of its practice can, in principle, be inspected and judged according to robust standards. On closer inspection, however, scientific practice relies on a variety of factors that severely limit this transparency. Of course, I am not saying that science is irrational. Rather, the rationality of science is often taken to depend on its transparency, and yet several aspects of the practice of science are quite opaque. Far from being fully transparent – as though a single scientist could directly verify every scientific claim – science actually relies deeply on trust. A scientist x has to trust another scientist y because most of what x does depends on what y has done. This is the idea of epistemic trust. Let me be more precise about this.

In order to understand the notion of epistemic trust, we first have to understand the notion of epistemic dependence. One is epistemically dependent upon another, as explained by Wagenknecht, when “the former cannot acquire and/or create knowledge independently of the latter” (p 162). This is the rule in collaborative research in science. Because all contemporary science is collaborative to some degree (even an individual scientist relies on the previous work of others), epistemic dependence turns out to be an essential condition for any scientist. A person cannot assess every piece of evidence directly – she has to rely on what other people have said.

Epistemic trust is thus central to being a scientist. But how do we know whom to trust? How do we know that a person or a piece of evidence is trustworthy? It is interesting to notice how an endeavor that aims at complete transparency depends so heavily on leaps of faith.

However, these leaps of faith are not completely blind: scientists do not rely blindly on the work of others. There are documented mechanisms (see again the paper by Wagenknecht) that scientists use to assess indirectly whether a colleague is trustworthy. In general, the notion of trust is epistemic and, at the same time, has ethical ramifications, because trusting another scientist means trusting that the person in question is both knowledgeable and truthful – hence trust rests on both the epistemic and the moral character of the scientist.

Decisions about whom to trust have additional ethical ramifications. Andersen lists a series of cases of research misconduct (notably data fabrication) in which younger scientists committed the offense while their seniors had simply ‘trusted’ the wrong person. She points out that senior co-authors are responsible, to a certain extent, for the work done by more junior scientists. This responsibility gives the relation of trust a moral dimension in the other direction as well. If there is misconduct, senior co-authors may not be legally charged with it, but they bear responsibility, precisely because they blindly trusted a person who eventually turned out to be dishonest. Therefore, scientists should make an effort to assess, to the best of their capacity, the moral and epistemic character of their peers.
