Thursday, September 22, 2016

An Epistemological Crisis

With regard to what ‘truthfulness’ is, perhaps nobody has ever been sufficiently truthful.

Nietzsche

The question “how do I know what is true?” has never seemed so difficult to answer: we are in the midst of an epistemological crisis. It is neither the first nor will it be the last. However, in contrast with previous crises that were mostly of interest to philosophers and scientists, the present situation is considerably more relevant to the general population.

At its core, the crisis is one of evaluating the veracity of testimony. “Testimony” is used here as a philosophical term to mean purported knowledge that is communicated by another person rather than perceived or conceived directly by a subject. The problem of testimony is not new, though it could easily be argued that it has not received enough attention.

It can be illustrated simply by considering an everyday situation: a news article on the health benefits or shortcomings of some particular food or diet. How do we know whether, or to what extent, such an article is true, both in its factual claims and its interpretation?

The Superficial and the Deep Problem

I divide this problem into two components: superficial and deep. The superficial component is about pure testimony - the conveyance of knowledge or information. The issues here arise from whether the testifier is correctly transmitting the information that he or she received, regardless of where it came from. Analyses of this problem can be found in sources ranging from the Islamic hadith, to courtroom rules of evidence, to treatises like Schum’s The Evidential Foundations of Probabilistic Reasoning, to my own analysis presented in this talk.

The solution to the superficial problem is effortful but tractable. We try to reduce the number of testimonial “hops” to the original source of information. We evaluate the reliability of sources through various means, and try to rely on sources who have exhibited truthful testimony consistently in the past. We try to find multiple independent sources that offer the same account. We make an effort to evaluate whether the independence of sources is impaired by collaboration or implicit biases.
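To make these heuristics concrete, here is a minimal sketch of how independent testimonies can be combined, assuming a toy Bayesian model in which each source has a known hit rate and false-alarm rate. The function and all of the numbers are illustrative assumptions, not a method anyone actually applies to the news:

```python
# A minimal sketch: combining independent testimony with Bayes' rule.
# The reliabilities below are illustrative assumptions, not measurements.

def posterior_true(prior, reports):
    """Posterior probability that a claim is true, given independent
    sources that all assert it.

    prior   -- prior probability that the claim is true
    reports -- (hit_rate, false_alarm_rate) per source, where
               hit_rate         = P(source asserts claim | claim true)
               false_alarm_rate = P(source asserts claim | claim false)
    """
    odds = prior / (1.0 - prior)
    for hit, false_alarm in reports:
        odds *= hit / false_alarm  # each independent source multiplies the odds
    return odds / (1.0 + odds)

# One moderately reliable source moves a 50% prior to 80%; a second,
# genuinely independent one moves it to ~94%.
print(posterior_true(0.5, [(0.8, 0.2)]))              # 0.8
print(posterior_true(0.5, [(0.8, 0.2), (0.8, 0.2)]))  # ~0.941
```

The multiplication step is exactly where the independence caveat bites: if two sources are paraphrasing the same wire story, only one update is warranted.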

An aggressive pursuit of the truth of pure testimony leads us to original sources. These might be in the form of numerical data, or written communications, or eyewitness accounts, or video or audio recordings. The deep component of the problem is in evaluating the veracity of these original sources. It is partially a problem of testimony, but also incorporates issues of perception, memory, and cognitive bias. It is a deep problem because an evaluation of the original source is sometimes impossible and almost always requires a great deal of effort and knowledge. As subjects evaluating truthfulness, we do not have access to the original facts of reality and all the details of the means by which they were characterized.

In some cases the deep problem does not go that deep. Original sources can be falsified, retouched, or even just carelessly recorded or re-typed. If an individual or institution is generally a reliable original source, then we have some reason to believe that these factors do not come into play. There are occasional stories of researchers who completely fabricate data, or of photos that are faked, but such cases are actually fairly rare. These concerns look more like issues of pure testimony.

More difficult is when data is legitimately produced but wrapped in interpretive cover. Statistical analyses have this characteristic. A scientist or data collection agency might collect a large amount of raw data and then perform a variety of adjustments and statistical operations on it. Those adjustments and operations might be well established as appropriate in the field, or they might be novel to the particular researcher. One might assume that such procedures would be evaluated during peer review, but for a variety of reasons peer review has declined in its thoroughness and efficacy. Government-sourced data does not undergo peer review at all, though economists, medical experts, and others often opine on its validity. It is easy to find conflicting opinions on such matters, and these differences are difficult to resolve, even for experts, but especially for lay subjects.
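To see why this flexibility matters, here is a toy simulation - my own illustration, not any particular study - showing that when the data contain no effect at all, trying several defensible analyses and keeping the most favorable one yields "significant" findings far more often than the nominal 5%:

```python
# Toy demonstration of analysis flexibility inflating false positives:
# the data are pure noise, yet shopping among reasonable analyses and
# keeping the best p-value "finds" effects well beyond the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n = 2_000, 40
hits = 0
for _ in range(n_experiments):
    a, b = rng.normal(size=n), rng.normal(size=n)  # no true difference
    pvals = [
        stats.ttest_ind(a, b).pvalue,                                # plain t-test
        stats.ttest_ind(a[np.abs(a) < 2], b[np.abs(b) < 2]).pvalue,  # "outliers" removed
        stats.mannwhitneyu(a, b).pvalue,                             # nonparametric variant
        stats.ttest_ind(a[: n // 2], b[: n // 2]).pvalue,            # a convenient "subgroup"
    ]
    hits += min(pvals) < 0.05
print(f"false-positive rate with analysis shopping: {hits / n_experiments:.1%}")
# Well above 5%, even though every individual analysis is defensible.
```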

Adjustments and statistics are only one example. More generally, a scientific paper or government report incorporates conclusions based on the data together with experience in the field of interest, and it can be extremely challenging to evaluate the logic of those conclusions without a similar level of expertise. Yet, once again, even experts weighing in on the conclusions will often have differing opinions.

The deepest level of the problem is when the data collection process itself is tainted with cognitive bias. This is extremely difficult to discern, because such work is always and necessarily affected by cognitive biases of some kind, and the real question is the extent to which those biases affect the results and conclusions. Further, it can be very difficult from the outside to tell the difference between explicit manipulation and implicit cognitive bias.

In the most innocuous yet endemic version of cognitive bias, as numerous philosophers of science concluded in the second half of the twentieth century, all terms are theory-laden. This means that even the statement of the hypothesis and the data collection effort depend on a particular paradigm, or model of the world. There is no “view from nowhere.” Thus, all data collection has a tendency to confirm the extant paradigm in a general sense. This occurs beyond just scientific fields. An individual who takes a photograph of a newsworthy event will point the camera at the most dramatic elements and will not capture the entire context of the scene. What is captured depends on what that person considers important or photo-worthy.

Of greater concern is the desire on the part of a researcher to gain support for a theory or hypothesis. Though we hope that scientists are mostly honest, they often truly believe their viewpoint. All science requires that some data be discarded as spurious measurement error, that experiments be designed to control for external influences, and that other elements be selected from a wide range of seemingly reasonable options. Attention-related cognitive biases will naturally drive researchers toward approaches that support their theory and cause them to overlook facts and concerns that counter it. Further, individual researchers face many pressures, including publishing regularly to gain tenure, gaining recognition for their theories to obtain funding, and generally seeking prestige in their field. Scientific journals strongly prefer to publish experiments that show an effect rather than those that show no effect or disconfirm previous effects. These difficulties due to implicit cognitive bias have been documented at length in recent years.
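The journal-side preference has a quantifiable consequence. In the following toy model (the effect size, sample size, and publication rule are all my assumptions), studies of a small true effect are "published" only when they reach p < 0.05, and the published record ends up overstating the effect severalfold:

```python
# Toy model of publication bias: only "significant" studies are published,
# so the published estimates of a small true effect are inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, n_studies = 0.2, 30, 5_000
published = []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    if stats.ttest_ind(treated, control).pvalue < 0.05:  # the journal's filter
        published.append(treated.mean() - control.mean())

print(f"true effect:           {true_effect}")
print(f"mean published effect: {np.mean(published):.2f}")  # roughly 2-3x too large
print(f"fraction published:    {len(published) / n_studies:.1%}")
```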

More egregiously, sometimes funding sources (e.g., corporate or government sources) are essentially looking for a “right answer” that will sell more products or support a particular policy. Scientists can always arrange things to be more favorable to the desired hypothesis even while providing perfectly rationalized support for that approach. A recently discovered example of this also shows that the phenomenon is not new. But money is not the only such influence. When public policy is integrated with science, the political beliefs of scientists, data collection agencies, and scientific journal editorial staff can influence the results in important but opaque ways.

Skepticism and Faith

The end result of these manifestations of the problem of testimony is a broad and profound skepticism, hence an epistemological crisis. We see opposing journalistic sources providing conflicting information, and when we dig down to the original research and data sources, we often find contradictory results or disagreeing expert opinions about those results. As information consumers we have the sense that, for all apparently factual material, there is interpretive and methodological bias affecting both the data itself and any conclusions one might draw from it. Conspiracy theories about scientific results are rampant. We often see results from “established” science accused of either industry influence or a bias toward technocratic values, while the establishment treats any result that disagrees with the dominant paradigm as “pseudoscience” or, again, as tainted by financial incentives.

Little of this skepticism is aimed at finding “truth,” of course. It is often a political or values fight, with extremely high standards expected of those who disagree and a pass given to those who agree. And for those who are genuinely looking to find truth, their skepticism is dismissed as partisan as well. Even attempts to apply appropriate standards of knowledge seem subject to bias; the snake is eating its own tail.

One might hold out hope that reality is the final arbiter of such matters. If only it were that simple. For questions where we, as subjects, will never encounter the phenomena directly, it is entirely possible that the debate will rage indefinitely. Unless the effect sizes are strong and transcend methodological choices, the same biases exhibited initially will continue to live on. For those questions where we will encounter phenomena directly, we only have the anecdotal observations we can make individually, which means that we probably cannot assign causality in a reliable fashion. For example, if I eliminate some food from my diet and subsequently feel better, the likelihood that there is some other cause for the change or that it is a placebo effect is fairly high.

Now, none of the underlying knowledge issues discussed here are fundamentally new. There have always been frauds, egos, cognitive biases, and paradigmatic influences. Some things that are new include: (a) science and government data collection are used to drive public policy, some of which extends beyond pure rational analysis and into values; (b) inexorable growth in the number of scientists drives a need to publish; (c) business-model problems with journals have reduced the effectiveness and application of peer review; (d) due to the Internet, laypeople have greater access both to reporting and to original sources, making conflicts that were always there more broadly apparent, and making values other than seeking truth more influential. There are probably other reasons why we have reached this crisis now.

The existence of the crisis does not reduce our desire to believe and to have knowledge. In the words of Charles Sanders Peirce, a scientist and philosopher of science who wrote in the second half of the 19th century:

Doubt is an uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief; while the latter is a calm and satisfactory state which we do not wish to avoid, or to change to a belief in anything else.

The unfortunate result for many laypeople is that they come to rely on faith, not only in matters of unverifiable metaphysics, but in areas where there is actually a fact of the matter that could be discerned. Their values drive an agenda, the agenda drives a cognitive bias toward supporting evidence, and an unencumbered faith in both the agenda and in this supporting evidence is the result. Since experts disagree, they pick those who share their values or agenda. As discussed above, modes of skepticism can be applied to any contrary evidence, suggesting not only that the particular evidence is suspect, but also that claimed knowledge from faith has an equal status to that from evidence and reasoning. This is an ugly situation indeed: among other things, it means that values can float unfettered by reality.

The syndrome is not limited to people who subscribe to a particular religious faith. It is not confined to the Dunning-Kruger idiocracy. It arises whenever someone holds their existing values and beliefs above learning how the world works. It happens left, right, and center. It happens among scientists and other experts, even within their own fields. We are all at risk, and the only cure is to take truth seriously and to seek it with unqualified regard.

Looking Toward Solutions

In the early twentieth century, an epistemological crisis arose from a series of discoveries showing that our knowledge is fundamentally limited. Heisenberg showed that we cannot simultaneously know both the position and momentum of a particle with arbitrary precision. Gödel’s incompleteness theorems demonstrated that any formal system powerful enough to express arithmetic is either inconsistent or contains true statements that cannot be proved within that system. Many similar and related discoveries followed. Uneasiness with these limits continues to this day, but practitioners have gradually come to accept that these limits are simply facts about the world, and that knowledge of our epistemic limits is just another kind of knowledge. They found that plenty of knowledge was still accessible and that we could cope with inherent uncertainty through indirect or partial means. Ultimately, the crisis did not inhibit progress.
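For reference, Heisenberg’s limit has a precise quantitative form: the product of the uncertainties (standard deviations) of position and momentum can never fall below a fixed constant, no matter how carefully the measurement is made:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```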

Perhaps we can work toward solutions with this historical example in mind, and guided by Nietzsche’s point about the nature of truthfulness. The difficulties we are seeing are probably not some sort of contingent, temporary issue related to modern society, nor even due to flaws that are particular to human nature. Any autonomous agent collecting data has some set of values and purposes, or it wouldn’t bother to collect the data (or do anything else, for that matter). Those values and purposes, and the paradigmatic assumptions guiding the data collection and analysis, create biases which lead to biased results.

Three approaches come to mind. Original sources can try to ameliorate the bias resulting from their purposes; they can disclose their purposes and consequent biases; or they can align their purposes with others who have somewhat different purposes. Note that these approaches may or may not have practical implementations in any given case. Let us look briefly at examples of each in turn.

Suppose that a researcher is seeking tenure. A crucial bias of such a purpose is toward finding results and effects of interest to the field. The tendency will be to find effects where there are none and to overstate the significance of small effects. To ameliorate such biases, a researcher in this position should go to great lengths to demonstrate that the effect is real - not just finding a single statistic that supports the conclusion, but showing that a variety of analyses lead to the same conclusion. If the effect is small, its potential importance should not be overstated.
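As a sketch of what this might look like in practice - with hypothetical data and an arbitrary battery of tests of my own choosing - the researcher would run every reasonable analysis and report all of the results, rather than only the most flattering one:

```python
# Sketch of a robustness check: report the whole battery of reasonable
# analyses; a real effect should survive most defensible choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treated = rng.normal(0.5, 1.0, 80)  # hypothetical data with a real effect
control = rng.normal(0.0, 1.0, 80)

analyses = {
    "pooled t-test":  stats.ttest_ind(treated, control).pvalue,
    "Welch t-test":   stats.ttest_ind(treated, control, equal_var=False).pvalue,
    "Mann-Whitney U": stats.mannwhitneyu(treated, control).pvalue,
    "outliers (|x| > 2.5) trimmed": stats.ttest_ind(
        treated[np.abs(treated) < 2.5], control[np.abs(control) < 2.5]
    ).pvalue,
}
for name, p in analyses.items():
    print(f"{name}: p = {p:.4f}")  # all of them, not just the best one
```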

Suppose that a researcher has pursued a career in a scientific field because she thinks the field studies a phenomenon that is an important risk to humanity. This needs to be disclosed, just as clearly as a financial conflict of interest needs to be disclosed. It is only natural that this sort of motivation exists in a scientific field, but it creates strong biases that readers of research need to be aware of.

Suppose that researchers in a field have opposed political viewpoints regarding a particular scientific question. They can work together to evaluate the data collection processes and analysis, and develop a methodology that removes at least the most troublesome biases. This will only work with researchers who have a genuine desire to find the underlying phenomena despite their political views. Those who are primarily concerned with their agenda will find ways to object to anything. I suspect that the vast majority of scientists would benefit from this, just as pair programming makes software developers confront the flaws they would prefer to overlook.

These are only isolated examples of situations where these approaches might be applied. Nevertheless, if improving our lives and increasing our knowledge of how the world works are among our purposes, it is crucial to find ways to mitigate this epistemological crisis. The alternative is ignorance and power politics, which is unlikely to leave us happier, healthier, or safer.

3 comments:

  1. well described.

    with respect to the sourcing problem, I'd like to see us take sourcing more seriously as consumers. fundamentally we need an underlying framework that supports transparency and visibility of results in the content creation ecosystem. I've proposed something that lets us understand the sources/journalists in jRank: http://one.valeski.org/2015/08/journalist-rank-jrank-news-feeds.html

    there's also a tradeoff in here somewhere that we, collectively, have made. ease of access to information (mobile devices coupled with network connectivity) has yielded tiny attention spans and a tendency for people to rely on insufficient validation of information they're consuming. we collectively have broader access to lots more information, but quality has dropped. add to that the fact that the system is supported by advertising dollars, and almost nothing can be trusted anyway.

    somehow, we as users/consumers/readers/knowledge-seekers, have to consider everything with a critical eye, and triangulate possible validity qualities in everything we consume. effectively, I think we all have to start thinking more like scientists in order to be safer with the content we consume.

    of course, this can be tackled by producing better, cleaner, less-biased content, but... not in our lifetime.

    I've taken to a relatively careful approach to cleaning up the sources of information I consume. http://one.valeski.org/2016/09/main-stream-media-and-me-year-later.html

    1. It seems like most, though not all, of what you are addressing is more about the superficial (and by superficial, to be clear, I don't mean "easy") problem. Journalists, in general, are not present at the original events on which they are reporting, so they are *only* testimonial. This is of course not always the case - for example, if they attend and then report on a speech, or if they travel to and report on the aftermath of an accident. Most of the time, though, they are relying on other original sources.

      But, there is a really interesting point in all this: some of the evaluation processes I describe regarding original sources could be done by quality journalists - indeed, this used to be their role - and we could find journalists who do their diligence on sources. So, a scientific reporter could talk to contrary sources regarding a scientific result and try to get to the heart of the issue. I think in some cases this would be very, very helpful. It doesn't really solve the deep problem in full, though. If there are conflicting opinions on what a result means or whether it is valid, it does not seem like we know anything more after hearing all that.

  2. Those interested in exploring the topic further would do well to read this (long) article: http://www.thenewatlantis.com/publications/saving-science

