Saturday, December 17, 2016

Testimony, Truth, and Convention

The philosophy of testimony examines the nature of knowledge as it relates to things people say (whether verbally or in print). Is information we receive through testimony knowledge? How do we know it is true? On what basis do we justify this? These questions may seem obscure, but if we consider the extent to which our knowledge relies on testimony, and the present difficulties we have with “fake news” and other systemic failures, they have new relevance.

There are two primary positions in the philosophy of testimony. The reductionist position (generally associated with David Hume) says that our only warrant for believing testimony is underlying evidence associating it with the truth: things like the speaker’s having been present at the scene, her having generally reported facts reliably, or her having properly functioning senses. On this view testimony is only a conduit through which other justifications flow.

The anti-reductionist position (associated with Thomas Reid and more recently with C. A. J. Coady) says that testimony offers independent warrant for belief or knowledge. Those subscribing to this position vary in how far they take it, but the general claim is that we are justified in believing testimony in the absence of defeaters (specific reasons not to believe it). This justification is usually viewed as somehow a priori, part of the nature of knowledge, which makes sense: otherwise we would be claiming that it required some independent justification.

I am not a fan of a priori knowledge, but it is also clear that if we take a radical empirical-reductionist approach to testimony, we will have lost a great deal of efficiency if not our entire ability to function. How can we reconcile this?

A relatively simple answer occurred to me today. It requires considerably more investigation on my part to get the details right, but the idea is straightforward. It is that there is an independent warrant provided by testimony, and it holds this warrant through convention.

In Searle’s The Construction of Social Reality, and Austin’s How to Do Things with Words, we see how some facts of reality are created by social convention. This does not mean that reality itself is entirely a social construct - it means that in a social context, some things are true that would not be true outside that context. Money is a favorite example: the pieces of paper you use to buy a candy bar at the convenience store would not function to obtain food if you were stranded on a desert isle, or even for that matter in a country that does not accept the particular pieces of paper in question. Bitcoin is an even clearer example since it does not even have the backing of government. Other examples include status questions like marriage and job titles, or pervasive elements such as language generally and the meanings of words in particular.

Searle captures this notion as “X counts as Y in context C.” We can apply that formula to testimony. In particular social contexts (C), testimony (X) counts as knowledge (Y). It is not just that we trust people in that context, or that we have empirical reasons to believe them. It is specifically and independently that we have a social convention of believing what people say. Though in many situations we might be on the lookout for defeaters, we have a strong presumption that people are speaking the truth in that context.

This makes it fairly easy to see what is going on today in the United States and perhaps elsewhere. This social convention has partially broken down, particularly with respect to journalism and to some extent with respect to science. Without getting into the sociology and the causes, it is clear that our presumption of truth is much weaker than previously and in some cases it has been eliminated. There has arisen an army of skeptics, of a variety of persuasions, willing to challenge any article of testimony if it does not suit their political or moral preferences. The only apparent substitute is a thoroughly reductionist approach, where we expend a great deal of energy attempting to verify claims, and only trust individuals and institutions after they have proved themselves extensively. In the absence of the social convention of presumption of truth, or an elaborate system of verification, we risk epistemic nihilism, where we have a sense that we don’t really know the truth about much of anything.

It is not at all clear whether or how we can re-establish the convention of generally believing testimony. What is clear is that its effects go far beyond mere abstract questions of knowledge justification. We are seeing first-hand that a democratic society relies utterly on this social convention, for if we do not know whether elections are real or who is telling the truth about them, then how can the institutions of democracy function?

Wednesday, October 26, 2016

Rationality and Values

I realized recently that I use the term “rational” or “rationality” frequently without having ever defined it carefully. While contemplating that concern, I also noticed that the relationship between rationality and values is a crucial one that bears some analysis. So, that’s what I’m going to do here.

What is Rationality?

When I use the term “rational,” I mean a method that is both intended to be, and is, effective in seeking particular values. This needs to be unpacked. First, by methods I mean mental actions like thoughts, ideas, and thought processes, as well as physical actions, and I will use these somewhat interchangeably. I include both mental and physical actions because both relate, in different and connected ways, to seeking values. A thought cannot fully determine an action - there are always details left unspecified. Nevertheless an action that is not guided by at least some thought or thinking processes is largely animal behavior. This is among the reasons Aristotle called us the “rational animal.”

Next, such methods must be effective. “Effective” is a vague word, and to the extent that our values are not clearly specified, it may be difficult to measure effectiveness. Still, to the extent that we can say that we have procured values that we sought, and did so using methods that were intentional, we can say that those methods at least seemed to be effective. Whether they actually were effective, or we were merely fortunate, is a separate question that we will look at later. Ideally, we would have some reason to believe that the methods we pursue will be effective, which leads to the morass of causation; so the analysis is complicated. In any case, the practical question of whether a thought or action is rational often boils down to what we mean by effective.

Seeking means actually pursuing the values in question. It does not mean deciding whether they are actually one’s values, or weighing them amongst themselves or in comparison to other values. It is about execution, not vision. This does not mean that a thought or planned action is not rational until it takes place. We can have a rational plan of action that has not yet been implemented, and I include such planning in the notion of seeking. But what makes it rational is whether or not it is or will be effective when we go about the seeking.

I use particular because I see rationality as a relation. Methods should never be judged as absolutely rational or not; a context of values is always required. A thought or action is rational with respect to the values that it helps to achieve. Note also that values is plural. This means that the definition may relate to more than one value at a time. It might take into account desiderata subject to certain constraints, where both the constraints and the desiderata are values. It might also refer to all of a particular person’s values, or to a particular class of values intersubjectively.

Finally, I have left the phrase intended to be for last, because it is easier to understand in the context of the other terms. If thoughts or actions are not intended or expected to help in seeking a set of values, it does not really make sense to evaluate their rationality with respect to those values. They are simply independent or unrelated. We might refer to them as non-rational with respect to the applicable values, though this is a term of art that we would only want to use in precise circumstances. The “intended to be” phrasing makes it possible to define irrational with a parallel construction - simply change “and is” to “and is not.”

I am conflating values with purposes in the definition, even though there are differences. In general, all purposes are values of a sort; the reverse may also be true, but not necessarily. Also, I do not see it as essential that one must act to pursue something for it to constitute a value; I prefer instead to call those priorities - but I am not prepared to argue that here, as it is a terminological matter and a different topic. I use the term values to cover all those things that we would like to have or have come to pass.

An example may be helpful at this point, though let us be clear that every example can be debated. Suppose that an individual is lonely and seeks love. He might consider trying online dating, putting some effort into a profile and perhaps touching up his personal grooming. Alternatively, he might wash away the loneliness by staying home alone and drinking whiskey until he falls asleep each night. The former approach is rational because the value he seeks, love, might be gained in that manner. The latter is irrational because it will certainly not be effective.

If sleep or not feeling one’s pain are instead the values in question, then the whiskey consumption might very well be rational. Later, we will look at the question of whether certain kinds of values can themselves be evaluated on criteria of rationality.


We need to look more closely at the very general notion of effectiveness of methods. The ultimate arbiter of effectiveness is necessarily the direct question of whether we are attaining our values, assuming that we are applying the methods in question. That is the foundation, but as a criterion it leaves much to be desired: first, it is possible that we were lucky or that other factors produced the values; second, such an analysis is only useful in hindsight.

For a strong claim of effectiveness, we need to have evidence of reliability; to have reliability, we need to understand the range of circumstances in which a particular method has particular effects. In short, we need some sort of empiricism. If I enjoy having a beautiful garden, I know from both scientific research and my own repeated experience that each plant needs a certain amount of water over a certain period of time. Rational behavior in pursuit of that value will involve knowing those amounts and acting to provide them. In contrast, playing Beethoven in the vicinity of the plants has not been shown to have an effect. It is possible that it does, but we certainly cannot make strong claims about it until it is tested.

Not all decisions offer a context where we have on-point empirically validated knowledge. In these cases, we must rely on pattern matching, limited inductive inference, and other sorts of heuristics. This is where we encounter the most difficulty. How are we to say, in advance, that the conclusion we reach from such methods is effective, and worse, how do we know, in hindsight, that the attainment of the value is attributable to the prescribed action?

To resolve this question, we must first note that just because methods are statistical or heuristic does not mean that they are arbitrary. Pattern matching, though its final result may be ineffable, is almost always a consequence of known inputs. We might decide whether or not to invest in a company based on the management, on the market in which they are selling, on the capabilities of the product, and other such factors. Our weighting of those factors is too complex to explain or directly analyze, but the fact that we have used factors known to play a role in the success of companies is not. In contrast, if our pattern match includes astrological conditions, which have no demonstrated relationship to investment success, then it is to that extent irrational. Similar analysis applies to induction, where we have observed a temporally correlated relationship a few times, and rely on that for future decisions through an inference of causation.
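A pattern-matching heuristic of this kind can be caricatured as a weighted score over factors known to be pertinent. The sketch below is purely illustrative: the factor names, weights, and decision threshold are invented, and a real judgment of this sort is far less explicit. The point is only that the inputs are pertinent factors - and, pointedly, not astrological ones.

```python
# A caricature of pattern matching as a weighted score over pertinent factors.
# Factor names, weights, and threshold are hypothetical, for illustration only.
WEIGHTS = {"management": 0.4, "market": 0.35, "product": 0.25}
# Note what is absent: no astrological inputs, nothing without a demonstrated
# relationship to the outcome we care about.

def score(candidate):
    # candidate maps each factor name to a rating in [0, 1]
    return sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)

def decide(candidate, threshold=0.6):
    # Invest only if the weighted score clears the (hypothetical) threshold.
    return score(candidate) >= threshold

print(decide({"management": 0.9, "market": 0.7, "product": 0.5}))  # True
```

Even when the true weighting is "too complex to explain," as the text puts it, the choice of which factors enter the score at all is open to rational scrutiny.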

In these cases we are using our best efforts to apply information and knowledge that may be pertinent and to form a judgment from it. Such judgments may or may not be effective in a given instance, yet they are effective in a broader sense: they are more likely to result in attainment of the value than either random behavior or the application of factors that are not pertinent. This is the sense in which such approaches are effective, and therefore rational.

Correlation to success and inclusion of pertinent factors are a starting point for a heuristic, but its likelihood of reliability is much improved with some semblance of a mechanism by which the applicable factors operate. That mechanism needs to be based on some broader empirical knowledge. We must be particularly cautious in ensuring that the mechanisms we posit are not metaphysical (i.e., incapable of verification). If the terms we use in describing the mechanism do not refer to entities that we can readily identify, then there is considerable risk that we are simply making it up.

Remember, the context is that we are looking for methods that are effective in seeking particular values. You can believe whatever you want, but if you want to actually get those values then the effectiveness of the methods is paramount. If you cannot show a method’s consistency directly, nor readily identify pertinent factors, correlations, and mechanisms, then you have no good reason to think that the method will be effective - the method is irrational.

A related way to think about effectiveness is that the ability to predict is essential to gaining values. Though we might gain some values simply by being able to recognize them and to grab them as they turn up or pass by, if we are to pursue them actively we must predict. For example, we would predict where we might find them, or what sorts of actions in the world tend to consistently produce them. The methods we use to predict are the subject of philosophy of science. Familiar examples include logic, empiricism, concept formation, and measurement. But these methods do not constitute rationality per se; they matter because they enable us to predict, and prediction enables us to pursue values effectively.

A brief nod toward decision theory and expected value is appropriate here. Expected value is the idea that we want to maximize the sum of probability-times-value. There are some - particularly computer scientists, economists, and other rationalists - who see expected value as a preferred definition of rationality. It is surely a valuable model, and in strongly defined circumstances it may even be directly applicable. But it is incomplete, even in its generalized forms. It assumes that we have articulated, or can articulate, all the values under consideration in advance. It assumes that the scales of different values are commensurable in some form, e.g., that honesty and money can be compared on a single scale within an equation, as opposed to only as the output of a more complex process. Further, some of the applicable values may be more complex than simply things-to-get - they may relate to the evaluation process itself. My point here is not that application of decision theory is irrational, but rather that there are many deviations from it that are still rational under my definition. Decision theory is potentially an effective method, but it is unsatisfactory as an overall definition or criterion of rationality.
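The expected-value calculation itself is mechanically simple, which is part of its appeal; the options, probabilities, and payoffs below are invented for illustration. Notice how much the model presupposes: every outcome enumerated in advance, every value reduced to a single commensurable number.

```python
# Expected value: for each option, sum probability-times-value over its
# outcomes, then pick the option with the highest sum.
# Option names and numbers are hypothetical.
options = {
    "invest": [(0.6, 100.0), (0.4, -50.0)],  # (probability, value) pairs
    "hold":   [(1.0, 10.0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

best = max(options, key=lambda name: expected_value(options[name]))
# "invest": 0.6*100 - 0.4*50 = 40, versus "hold": 10, so "invest" wins.
print(best)
```

Everything interesting in the essay’s critique happens before this computation: deciding what counts as an outcome, and how honesty-like values could ever appear as numbers on the same scale as money.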

We can also look at factors that impede effectiveness. For example, cognitive bias can be viewed as a tendency toward thought processes that are not effective, or at least not as effective as they could be. However, we should be careful to distinguish between a process that is a cognitive bias in one value context and an effective (therefore rational) process in another. For example, the fact that we sometimes have only a short time and limited information to make a decision means that we need to rely on intuitive processes (Daniel Kahneman’s “thinking fast”). Whatever our synthesized pattern matching and decision heuristics in these cases, they will necessarily have biases of some sort built in, some of which will be wrong at some times. In effect, cognitive biases are themselves relational; in attempting to extirpate them we have to be careful not to throw the baby out with the bathwater.

Finally, it is evident that effectiveness admits of degrees, and this suggests that rationality may also. This primarily comes into play when more than one method is being assessed and we compare them. We might say that a first method is “more effective” than a second, even while both are effective to some degree. Is the first method then more rational? Though it is not clear that this construction would cause any particular difficulty, it seems awkward. Instead we could take a step back and consider our process of selection of methods, treating that process itself as a method to be evaluated. In cases where one method is clearly more effective than the other, it would be simply irrational to select the less effective method (as assessed with respect to the same set of values being sought). Thus it is the selection that is subject to scrutiny in a case of comparison between two methods, and it does not appear to be necessary to admit degrees in rationality, despite the continuum of underlying effectiveness.

Rationality of Values

So far, so good: we want to obtain values, and our efforts to do so can be judged as more or less rational depending on whether they use effective methods. Effective methods are, for the most part, those that help us predict successfully. Successful approaches to prediction are either reliable, in that they can be repeated, or at least pertinent with some empirical evidence and a hypothesized mechanism.

We now ask: can or should values themselves be judged as to their rationality? Values can be treated as methodical ideas (even as they have other attributes), so they at least conform to the structure of the definition.

To answer the question, we will consider some distinctions and structure. I first make a distinction between direct and process values. Direct values are things that we want simply for themselves. We might value love, or chocolate, or a particular person, or physical fitness, and we value these things directly. In contrast, process values relate to the way in which we pursue other values. For example, we may value low risk in pursuing values, and that value guides how we pursue them. We may even value planning itself as something we enjoy. Process values can help or hinder our effectiveness at attaining other values. If they are intended to help, we might call them instrumental; if they hinder, we might call them constraints or guidelines.

A value may be derivative or subordinate to others - we may value completion of a marathon in the context of our value of physical fitness, or a software development methodology as a means of planning. For any pair of values, we might find that one is superior to the other - not in the sense that it is more important; rather that it is partially served by the subordinate.

These distinctions and relationships are not necessarily discrete or invariant. One can easily imagine direct values that have implications for process (for example, intellectual curiosity), as well as constraints that, at least in some circumstances, aid with effectiveness (e.g., stay focused). A subordinate value could serve more than one superior, it could in part be valued for its own sake independent of its subordinate status, and two values might be mutually reinforcing. Still, the meaning of the distinctions is fairly clear in individual contexts.

Since rationality is a relation between a method and a set of values, determining whether a value (considered as a methodical idea) is itself rational requires at least one other value against which to assess it. We can use the structure just discussed to examine a variety of such pairings.

A subordinate value relationship is generally established by purposeful selection or intent. For example, if we have an important goal we might break it into subgoals that must be achieved in sequence. We can see that in these cases those subordinate values genuinely represent methods aimed at seeking the superior value. It makes complete sense to assess whether they are effective in that role and therefore rational. However, it does not make sense in the other direction - to assess a superior value with respect to a subordinate. Superior values are not intended to subserve their subordinates - they are selected by other means. That relationship is non-rational.

Independent direct values are also non-rational with respect to each other. They are selected for their own reasons, and if they happen to aid in seeking other values that is a happy coincidence, not a question of rationality. This relationship is also non-rational. It does lead us, however, to the larger issue of the overall compatibility of the values that we hold, what we might call coherence. I discuss a variety of issues surrounding coherence of values in The Facticity of Values. Later we will consider coherence as a method and value and look at its rationality status.

Let us now take up process values. In some ways, instrumental process values look very much like subordinate values; their essence is the effective pursuit of other values. With respect to the set of values where it is effective, an instrumental value is rational, whereas if it is applied where it is not effective, it is irrational.

Although instrumental values are a kind of subordinate value, they tend to be more general. They are usually subordinate all at once to a large swath of our other values. Because of this, they often attain status as independent values. For example, I value the ability to read because it enables me to learn material that assists in pursuing other values. Because it is so broadly useful, and because I practice it frequently, it has become something I enjoy independently, even outside the context of pursuing particular values.

Still, no method or approach to achieving values is effective in all cases; this is easily demonstrated by the fact that for any value we can state its contradiction, and someone might very well hold the latter. Practically speaking, we want to beware of seeing every problem as a nail simply because we like our hammer. When we are assessing the rationality of an act that is an instance of an instrumental value, then, we must be clear whether we are performing it to serve the instrumental value itself, or relying on that instrumental value as a means to attaining a distinct value.

Constraints, in contrast, are process values that are held independently. In general, they inhibit the pursuit of any particular value, though they are often supportive of broader values. For example, we may value money, but also value compliance with laws. The latter is a constraint; we do not steal to obtain money. This relationship is non-rational, because the constraint is in no way intended to be effective with respect to that value. In contrast, compliance with laws might be a subordinate value of wanting to remain free, or to respect the rights of others. It is supportive of these values and the relationship is potentially rational.

The presence of constraints often confuses the assessment of rationality. For example, one method may not be the most effective to pursue a particular value, but when the constraint is added we see that it is the most effective in achieving both values at once or some weighted combination of them. When we see other people behaving in a way that seems irrational, it is often not that they fail to think well but that they are imposing a set of constraints that makes their other values difficult to achieve. Assessing those constraints can only be done in the context of their overall values.

Reviewing these results, we can see that assessing the rationality of a value is appropriate in relation to another value or set of values to which it is subordinate, but otherwise, the relation is non-rational. This is true of both direct and process values.

Attaining Values as a Value

In discussing process values, we broached the notion of values that are about how we pursue values. The cases discussed were relatively concrete, such as reading as an instrumental value and being law-abiding as a constraint. We now consider something more general: valuing the attainment of our values.

This seems redundant and very odd indeed. Does not the mere holding of a value imply valuing its attainment? Surely to some extent, but consider that there are many individuals who exhibit self-destructive tendencies while still claiming or hoping for typical direct values. Others exhibit apathy: while they might pursue particular values directly, they do nothing to enhance their general ability to pursue values. Thus, it is worthwhile to call out this value explicitly. Further, it offers a relation that allows us to look at how we organize our values with a lens of rationality.

With attainment of values established as a value in itself, we see immediately that the evolution of instrumental methods into direct values can itself be assessed as a rational method. By developing enjoyment of instrumental methods such as reading, logic, and mathematics, we expand the toolbox of methods available to us and improve our ability to perform them when the occasion arises. To be clear, we are not talking about self-improvement generally, but rather the pathway of instrumental methods becoming instrumental subordinate values and then becoming direct values. It is about the organization of our values.

Next, overall coherence among our values can be assessed against the value of attaining values. It is clear that it is easier to attain values if they are not opposed to each other. Yet, we almost certainly will have some constraining values, at a minimum to support self-preservation, but also to enable beneficial social interaction. Further, all values conflict with each other in relation to time and effort applied. Thus, coherence does not mean that all one’s values point in the same direction or that there are no conflicts; rather, it means they fit together into a whole that provides pathways for effective action for seeking each of them in appropriate circumstances. Further, it means that whenever necessary, one endeavors to elaborate and clarify apparent conflicts among values so that the lines are clear. Coherence is very much about organization of values, and their prioritization, so working toward coherence is also a rational method when weighed against the value of attaining values.

In our very description of effectiveness we relied to some extent on empiricism and science. Even to know whether a method is effective, we need to have some ability to assess its ability to predict. However, this still relies on the deeper notion of effectiveness - do we, in general, attain the values we seek? This requires a partially foundational, partially circular use of empiricism - we have learned that empiricism, when applied properly, improves our attainment of values. Thus we value these methods directly because they do, in general, enable us to assess the effectiveness of our other methods. Empirical methods and the valuing of them are rational with respect to the value of attaining values.

Finally, though it is extremely general, the value of attaining values is a subordinate value to all of one’s other values, and except in unusual and highly incoherent cases, it is rational with respect to that set.

Thursday, September 22, 2016

An Epistemological Crisis

With regard to what ‘truthfulness’ is, perhaps nobody has ever been sufficiently truthful.
- Friedrich Nietzsche, Beyond Good and Evil


The question “how do I know what is true?” has never seemed so difficult to answer: we are in the midst of an epistemological crisis. It is neither the first nor will it be the last. However, in contrast with previous crises that were mostly of interest to philosophers and scientists, the present situation is considerably more relevant to the general population.

At its core, the crisis is one of evaluating the veracity of testimony. “Testimony” is used here as a philosophical term to mean purported knowledge that is communicated by another person rather than perceived or conceived directly by a subject. The problem of testimony is not new, though it could easily be argued that it has not received enough attention.

It can be illustrated simply by considering an everyday situation: a news article on the health benefits or shortcomings of some particular food or diet. How do we know whether, or to what extent, such an article is true, both in its factual claims and its interpretation?

The Superficial and the Deep Problem

I divide this problem into two components: superficial and deep. The superficial component is about pure testimony - the conveyance of knowledge or information. The issues here arise from whether the testifier is correctly transmitting the information that he or she received, regardless of where it came from. Analyses of this problem can be found in sources ranging from the Islamic hadith, to courtroom rules of evidence, to treatises like Schum’s The Evidential Foundations of Probabilistic Reasoning, to my own analysis presented in this talk.

The solution to the superficial problem is effortful but tractable. We try to reduce the number of testimonial “hops” to the original source of information. We evaluate the reliability of sources through various means, and try to rely on sources who have exhibited truthful testimony consistently in the past. We try to find multiple independent sources that offer the same account. We make an effort to evaluate whether the independence of sources is impaired by collaboration or implicit biases.
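One way to see why multiple independent sources help is a naive Bayesian combination: each genuinely independent affirming source multiplies the odds in favor of the claim. The sketch below is an illustration under strong assumptions - genuinely independent sources, each with a known, fixed reliability - and the numbers are invented; its point is only that independence is what does the work, which is why the collaboration and shared-bias checks above matter.

```python
# Naive Bayes combination of independent affirming sources.
# prior: initial probability the claim is true.
# reliabilities: for each source, the probability r that it reports truthfully;
# each independent affirming source multiplies the odds by r / (1 - r).
# All numbers are hypothetical.
def posterior_probability(prior, reliabilities):
    odds = prior / (1.0 - prior)
    for r in reliabilities:
        odds *= r / (1.0 - r)
    return odds / (1.0 + odds)

# A 50/50 prior plus three modestly reliable (70%) independent sources
# already yields better than 92% confidence:
p = posterior_probability(0.5, [0.7, 0.7, 0.7])
print(round(p, 3))  # 0.927
```

If the sources are not actually independent - they all copied one wire report, or share an implicit bias - the multiplication is illegitimate, and the apparent confidence is spurious.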

An aggressive pursuit of the truth of pure testimony leads us to original sources. These might be in the form of numerical data, or written communications, or eyewitness accounts, or video or audio recordings. The deep component of the problem is in evaluating the veracity of these original sources. It is partially a problem of testimony, but also incorporates issues of perception, memory, and cognitive bias. It is a deep problem because an evaluation of the original source is sometimes impossible and almost always requires a great deal of effort and knowledge. As subjects evaluating truthfulness, we do not have access to the original facts of reality and all the details of the means by which they were characterized.

In some cases the deep problem does not go that deep. Original sources can be falsified, retouched, or even just carelessly recorded or re-typed. If an individual or institution is generally a reliable original source, then we have some reason to believe that these factors do not come into play. There are occasionally stories of researchers who completely fabricate data, or of photos that are fake, but they are actually fairly rare. These concerns look more like issues of pure testimony.

More difficult is when data is legitimately produced but wrapped in an interpretive cover. Statistical analyses have this characteristic. A scientist or a data-collection agency might collect a large amount of raw data and then perform a variety of adjustments and statistical operations on it. Those adjustments and operations might be well established as appropriate in the field, or they might be novel to the particular researcher. One might assume that such procedures would be evaluated during peer review, but for a variety of reasons peer review has declined in thoroughness and efficacy. Government-sourced data does not undergo peer review, though economists, medical experts, and others often opine on its validity. It is easy to find conflicting opinions on such matters, and these differences are difficult to resolve, even for experts, but especially for lay subjects.

Adjustments and statistics are only one example. More generally, a scientific paper or government report incorporates conclusions, based on the data along with the application of experience in the field of interest, and it can be extremely challenging to evaluate the logic of these conclusions without a similar level of expertise. Yet, once again, even experts weighing in on the conclusions will often have differing opinions.

The deepest level of the problem is when the data collection process itself is tainted with cognitive bias. This is extremely difficult to discern, because such work is always and necessarily affected by cognitive biases of some kind, and the real question is the extent to which those biases affect the results and conclusions. Further, it can be very difficult from the outside to tell the difference between explicit manipulation and implicit cognitive bias.

In the most innocuous yet endemic version of cognitive bias, as numerous philosophers of science concluded in the second half of the twentieth century, all terms are theory-laden. This means that even the statement of the hypothesis and the data collection effort depend on a particular paradigm, or model of the world. There is no “view from nowhere.” Thus, all data collection has a tendency to confirm the extant paradigm in a general sense. This occurs beyond just scientific fields. An individual who takes a photograph of a newsworthy event will point the camera at the most dramatic elements and will not capture the entire context of the scene. What is captured depends on what that person considers important or photo-worthy.

Of greater concern is the desire on the part of a researcher to gain support for a theory or hypothesis. Though we hope that scientists are mostly honest, they often truly believe their viewpoint. All science requires that some data be ignored due to spurious measurement error, that experiments be designed to control for external influences, and that other elements be selected from a wide range of seemingly reasonable options. Attention-related cognitive biases will naturally drive researchers toward approaches that support their theory and away from noticing facts and concerns that counter it. Further, individual researchers face many pressures, including publishing regularly to gain tenure, gaining recognition for their theories to obtain funding, and just generally seeking prestige in their field. Scientific journals strongly prefer to publish experiments that show an effect rather than those which show no effect or disconfirm previous effects. These difficulties due to implicit cognitive bias have been documented at length recently.

More egregiously, sometimes funding sources (e.g., corporate or government sources) are essentially looking for a “right answer” that will sell more products or support a particular policy. Scientists can always arrange things to be more favorable to the desired hypothesis even while providing perfectly rationalized support for that approach. A recently discovered example of this also shows that the phenomenon is not new. But money is not the only such influence. When public policy is integrated with science, the political beliefs of scientists, data collection agencies, and scientific journal editorial staff can influence the results in important but opaque ways.

Skepticism and Faith

The end result of these manifestations of the problem of testimony is a broad and profound skepticism, hence an epistemological crisis. We see opposing journalistic sources providing conflicting information, and when we dig down to the original research and data sources, we often find contradictory results or disagreeing expert opinions about those results. As information consumers we have the sense that, for all apparently factual material, there is interpretive and methodological bias that affects both the data itself and any conclusions one might draw from it. Conspiracy theories about scientific results are rampant. We often see results from “established” science accused of either industry influence or a bias toward technocratic values, and the establishment treating all results that disagree with the dominant paradigm as “pseudoscience” or again as tainted by financial incentives.

Little of this skepticism is aimed at finding “truth,” of course. It is often a political or values fight, with extremely high standards expected of those who disagree and a pass given to those who agree. And for those who are genuinely looking to find truth, their skepticism is dismissed as partisan as well. Even attempts to apply appropriate standards of knowledge seem subject to bias; the snake is eating its own tail.

One might hold out hope that reality is the final arbiter of such matters. If only it were that simple. For questions where we, as subjects, will never encounter the phenomena directly, it is entirely possible that the debate will rage indefinitely. Unless the effect sizes are strong and transcend methodological choices, the same biases exhibited initially will continue to live on. For those questions where we will encounter phenomena directly, we only have the anecdotal observations we can make individually, which means that we probably cannot assign causality in a reliable fashion. For example, if I eliminate some food from my diet and subsequently feel better, the likelihood that there is some other cause for the change or that it is a placebo effect is fairly high.

Now, none of the underlying knowledge issues discussed here are fundamentally new. There have always been frauds, egos, cognitive biases, and paradigmatic influences. Some things that are new include: (a) science and government data collection are used to drive public policy, some of which extends beyond pure rational analysis and into values; (b) inexorable increases in the number of scientists drive a need to publish; (c) business model problems with journals have reduced the effectiveness and application of peer review; (d) due to the Internet, laypeople have greater access to both reporting and to original sources, making conflicts that were always there more broadly apparent, and values other than seeking truth more influential. There are probably other reasons why we have reached this crisis now.

The existence of the crisis does not reduce our desire to believe and to have knowledge. In the words of Charles Sanders Peirce, a scientist and philosopher of science who wrote in the second half of the 19th century:

Doubt is an uneasy and dissatisfied state from which we struggle to free ourselves and pass into the state of belief; while the latter is a calm and satisfactory state which we do not wish to avoid, or to change to a belief in anything else.

The unfortunate result for many laypeople is that they come to rely on faith, not only in matters of unverifiable metaphysics, but in areas where there is actually a fact of the matter that could be discerned. Their values drive an agenda, the agenda drives a cognitive bias toward supporting evidence, and an unencumbered faith in both the agenda and in this supporting evidence is the result. Since experts disagree, they pick those who share their values or agenda. As discussed above, modes of skepticism can be applied to any contrary evidence, suggesting not only that the particular evidence is suspect, but also that claimed knowledge from faith has an equal status to that from evidence and reasoning. This is an ugly situation indeed: among other things, it means that values can float unfettered by reality.

The syndrome is not limited to people who subscribe to a particular religious faith. It is not confined to the Dunning-Kruger idiocracy. It results any time someone holds their existing values and beliefs above valuing and learning how the world works. It happens left, right, and center. It happens among scientists and other experts, even within their own field. We are all at risk, and the only cure is to take truth seriously and to seek it with unqualified regard.

Looking Toward Solutions

In the early twentieth century, an epistemological crisis arose from a series of discoveries showing that our knowledge is fundamentally limited. Heisenberg showed that we cannot simultaneously know both the position and momentum of a particle with arbitrary precision. Gödel’s incompleteness theorems demonstrated that any sufficiently expressive formal system - e.g., a system of logic or arithmetic - is either inconsistent or contains true statements that cannot be proved within that system. Many similar and related discoveries followed. Uneasiness with these limits continues to this day, but practitioners have gradually come to accept that these limits are simply facts about the world, and that knowledge of our epistemic limits is just another kind of knowledge. They found that there was plenty of knowledge still accessible and that we could cope with inherent uncertainty through indirect or partial means. Ultimately, the crisis did not inhibit progress.

Perhaps we can work toward solutions with this historical example in mind, and guided by Nietzsche’s point about the nature of truthfulness. The difficulties we are seeing are probably not some sort of contingent, temporary issue related to modern society, nor even due to flaws that are particular to human nature. Any autonomous agent collecting data has some set of values and purposes, or it wouldn’t bother to collect the data (or do anything else, for that matter). Those values and purposes, and the paradigmatic assumptions guiding the data collection and analysis, create biases which lead to biased results.

Three approaches come to mind. Original sources can try to ameliorate the bias resulting from their purposes; they can disclose their purposes and consequent biases; or they can align their purposes with others who have somewhat different purposes. Note that these approaches may or may not have practical implementations in any given case. Let us look briefly at examples of each in turn.

Suppose that a researcher is seeking tenure. A crucial bias of such a purpose is to find results and effects of interest to the field. The tendency will be to find effects where there are none and to overstate the significance of small effects. To ameliorate such biases, a researcher in this position should go to great lengths to demonstrate that the effect is real - not just finding a single statistic that supports the conclusion, but showing that a variety of analyses lead to the same conclusion. If the effect is small, do not overstate its potential importance.
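The idea of showing that a variety of analyses lead to the same conclusion can be made concrete. The sketch below is illustrative only: the measurements are hypothetical, and the particular checks (bootstrap confidence intervals for both the mean and the median difference) are just two of many analyses a careful researcher might require to agree before claiming an effect.

```python
import random
random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    return sorted(xs)[len(xs) // 2]

def bootstrap_ci(a, b, stat, n=2000, alpha=0.05):
    """Percentile-bootstrap confidence interval for stat(a) - stat(b)."""
    diffs = []
    for _ in range(n):
        ra = [random.choice(a) for _ in a]   # resample with replacement
        rb = [random.choice(b) for _ in b]
        diffs.append(stat(ra) - stat(rb))
    diffs.sort()
    lo = diffs[int(alpha / 2 * n)]
    hi = diffs[int((1 - alpha / 2) * n) - 1]
    return lo, hi

# Hypothetical measurements; in real work these come from the experiment.
treatment = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.4, 5.2]
control   = [4.6, 4.4, 4.9, 4.5, 4.7, 4.3, 4.8, 4.6]

# Claim an effect only if several distinct analyses agree in direction.
checks = {
    "mean difference CI excludes 0":   bootstrap_ci(treatment, control, mean)[0] > 0,
    "median difference CI excludes 0": bootstrap_ci(treatment, control, median)[0] > 0,
}
robust = all(checks.values())
```

The point is not these specific statistics but the discipline: a single favorable p-value is weak evidence, while convergence across methods is harder to obtain by bias alone.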

Suppose that a researcher has pursued a career in a scientific field because she thinks the field studies a phenomenon that is an important risk to humanity. This needs to be disclosed, just as clearly as a financial conflict of interest needs to be disclosed. It is only natural that this sort of motivation exists in a scientific field, but it creates strong biases that readers of research need to be aware of.

Suppose that researchers in a field have opposed political viewpoints regarding a particular scientific question. They can work together to evaluate the data collection processes and analysis and develop a methodology that removes at least the most troublesome biases. This will only work with researchers who have a genuine desire to find underlying phenomena despite their political views. Those who are primarily concerned with their agenda will find ways to object to anything. I suspect that the vast majority of scientists would benefit from this, just as pair programming makes software developers focus on the flaws they would prefer to avoid.

These are only single examples of situations and how these approaches might be applied. Nevertheless, if among our purposes is improving our lives, and increasing knowledge of how the world works, it is crucial to find ways to mitigate this epistemological crisis. The alternative is ignorance and power politics, which is unlikely to result in our being happier, healthier, and safer.

Saturday, August 6, 2016

Purpose is a Hot Mess

Those who contemplate the means by which we determine our purpose often arrive at a description based on some sort of hierarchical structure. In this structure there is typically some final or ultimate purpose, from which we subsequently and consequently derive cardinal and subsidiary goals. Thus all we need to do is discover or decide on this ultimate purpose and everything else flows from it as strategy and tactics. My own small contribution along these lines is in the blog post A Taxonomy of Purpose.

When we examine more carefully the relationships among our purposive influences, we find nothing so straightforward. For example, our desire to survive is certainly strong, but rarely is it so dominant that we are willing to forego all but the safest activities. Those who claim to have a manifest purpose are sometimes observed not to be focused at all on that purpose. Most people have a set of important values in their lives that continuously ebb and flow in their importance. They have career, family, friends, and avocations, all of which vie for attention and time. Indeed, the people who genuinely hold a single purpose - such as world-class athletes, musicians, or inventors - stand out precisely because they are so unusual. Interestingly, it is common for such individuals to experience burnout and depression in the midst of their obsessive pursuit.

Even establishing a general criterion such as “leading a happy and satisfying life” provides only limited guidance. We must discover what actual activities provide such effects, and they are likely to change over time. Further, though, it is often necessary to invest in activities that are not intrinsically satisfying, so as to derive greater benefit later. How do we decide the extent to which we ought to make such investments at the expense of immediate gratification?

The point here is not that there is a lack of substantive purposes or values available to us to make decisions about our activities and directions. Quite the opposite - there are many of them and the relationships among them are uncertain, variable, and often unstructured. One could argue that this is merely how we typically behave, but that we ought to operate in a more structured way. Such a claim of course assumes some prior or higher purpose on which to judge a purposive structure. More importantly, though, we can look at examples of those who approximate such behavior and see what we would typically consider unhealthy or even sociopathic outcomes.

In the field of artificial general intelligence, efforts are underway to model motivation and purpose. To the extent this aims at capturing something like human purpose, it is likely not comparable to a rules engine or even a static pattern matching process. It may not even be possible to model it with a Markov process, because it is dependent on the path by which we arrived at our present state. Perhaps we could come close with some sort of time-varying, constrained optimization model. Instead, though, it is often assumed that an artificial agent will be imbued with an explicit and singular final purpose, and the logical consequence is usually that the AI endeavors to kill all humans to achieve that purpose. In some accounts, this is even the case if the purpose is to make humans happy. Conclusions like these could be taken as further evidence that the inscrutable organization of human purpose has a certain stability and desirability that is lacking when it is oversimplified.
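To see what a time-varying, constrained optimization of purpose might look like, here is a toy sketch. Everything in it is hypothetical - the values, activities, scores, and decay rates are invented for illustration. The path dependence comes from satiation: serving a value deflates its weight, so yesterday’s choice changes today’s preferences.

```python
# Each activity scores against several values; weights shift over time.
ACTIVITIES = {
    "work":        {"career": 1.0, "health": -0.2, "family": -0.3},
    "exercise":    {"health": 1.0},
    "dinner_home": {"family": 1.0, "health": 0.2},
}

def choose(weights, forbidden=()):
    """Pick the activity maximizing the weighted value score, subject to
    hard constraints (activities ruled out entirely)."""
    def score(act):
        return sum(weights[v] * s for v, s in ACTIVITIES[act].items())
    options = [a for a in ACTIVITIES if a not in forbidden]
    return max(options, key=score)

def update(weights, chosen, decay=0.5, recover=0.2):
    """Satiation: deflate the weights of values the chosen activity served,
    and let the neglected ones drift back up toward a ceiling."""
    new = {}
    for v, w in weights.items():
        served = ACTIVITIES[chosen].get(v, 0) > 0
        new[v] = w * decay if served else min(1.0, w + recover)
    return new

weights = {"career": 1.0, "health": 0.6, "family": 0.8}
plan = []
for day in range(4):
    act = choose(weights)
    plan.append(act)
    weights = update(weights, act)
```

Even this crude model never settles into a single pursuit: the agent alternates among activities as weights ebb and flow, which is at least suggestive of the “hot mess” the essay describes.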

Purpose is and probably ought to be a hot mess of desires, values, goals, and visions that constrain and contradict one another so as not only to keep life interesting and fresh, but also to keep us from going off the rails on a crazy train.

Friday, February 5, 2016

Decision Processes

When we are faced with a choice, how do we decide?

We make many decisions passively or without apparent conscious effort. From a psychological viewpoint, we might assume that such decisions are made through some combination of: an expectation of pleasure or pain, entrenched habits, emotional states and reactions, implicit “pattern matching,” implicit and practiced decision principles, social cues, and possibly other ingredients.

Other times we choose consciously, but make a meta-decision to “go with our gut.” Interestingly, it does not seem as though the decision criteria in these cases are significantly different; rather we are simply more likely to bring various facts and values into consciousness and change the weight they are given in the choice.

Take for example a decision whether to drive on a freeway or a surface road to one’s destination. If we drive this route frequently and normally take the freeway, we might just take the freeway without ever consciously being aware of making a decision. Or, we may pick the route based on apparent traffic, and in this case we are conscious of the decision; but if there are no obvious traffic problems we take the freeway anyway based on instinct.

Non-conscious decision processes develop naturally from our biological reward systems. We come to predict those actions that will satisfy drives such as hunger, thirst, and warmth. We also have strong, though less immediate, drives such as curiosity, sexual desire, and a need for social interaction and status. Early in our lives, the more basic drives dominate and the providers who satisfy them create a strong influence on our habits of action and decision. Later, a broader social environment conditions us to certain behaviors through the dynamics of individual relationships, peer pressure, and authority figures. The habits that we develop early in life may persist even though they may no longer correspond to our actual desires as we mature.

At other times, we actually deliberate consciously. Consider the following general procedure, which is not intended as definitive, but seems a reasonable approximation:

  1. identify the relevant options;
  2. for each option, attempt to predict its consequences;
  3. for each option, compare the expected consequences to our desires;
  4. compare the assessments over all the options.

The first two of these are primarily epistemic processes. Rarely is it this simple, of course: we usually rule out or never consider options that are prima facie in conflict with our values; or the contents of a prediction might relate directly to values. Nevertheless, we will not treat of steps 1 and 2 here except to point out that improved skill in them will help one to make better decisions.
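The procedure can be sketched computationally. In this toy model - all names and numbers are illustrative - each option’s predicted consequences are (probability, outcome) pairs, desires are weighted outcome features, and side effects are modeled as vetoes that disqualify an option outright.

```python
def evaluate(consequences, desires, vetoes=()):
    """Steps 2-3: score an option's predicted consequences against our
    desires; any vetoed outcome disqualifies the option (a side effect
    renders an otherwise acceptable outcome untenable)."""
    total = 0.0
    for prob, outcome in consequences:
        if any(v(outcome) for v in vetoes):
            return None  # untenable regardless of other merits
        total += prob * sum(w * outcome.get(f, 0.0) for f, w in desires.items())
    return total

def decide(options, desires, vetoes=()):
    """Step 4: compare the assessments over all the options."""
    scored = {name: evaluate(cons, desires, vetoes)
              for name, cons in options.items()}
    viable = {n: s for n, s in scored.items() if s is not None}
    return max(viable, key=viable.get) if viable else None

# The freeway-vs-surface-road example, with invented numbers.
desires = {"time_saved": 1.0, "stress": -0.5}
options = {
    "freeway": [(0.8, {"time_saved": 10, "stress": 2}),
                (0.2, {"time_saved": -15, "stress": 8, "missed_meeting": 1})],
    "surface": [(1.0, {"time_saved": 0, "stress": 1})],
}
# A background consideration that rules an outcome out entirely:
vetoes = [lambda o: o.get("missed_meeting", 0) > 0]

best = decide(options, desires, vetoes)
```

With the veto in place the freeway option is disqualified despite its higher expected score; without it, the expected-value comparison favors the freeway. The gap between this tidy calculation and real deliberation - distributions instead of point estimates, shifting weights, discovery of one’s own desires - is exactly the cost the essay goes on to discuss.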

Comparing the expected consequences of an option to our desires is not as easy as comparing a grocery list to a filled shopping cart. Those consequences may take the form of a statistical distribution of outcomes, or a tree of possibilities that turn on unknown extrinsic factors. The desiderata may be multifarious and their weightings or priorities may be time- or path-dependent or combination-sensitive. Further, it is not only those criteria relating directly to the context of the choice (a goal) that must be considered: there are often background considerations that render an otherwise acceptable outcome untenable (side effects).

Comparing these assessments over the various options is a higher dimensional version of the same process. We must compare the statistical distributions or outcome pathways across options and assess how well goals are met and side effects are minimized along with the relevant probabilities. Here there are also stylistic meta-desires involved, for example, whether one seeks values as an optimizer or a constraint-satisfier, or whether we prefer to cast the die once or to leave downstream options open.

Crucially, underlying this epistemic labyrinth is a requirement to have knowledge of our desires and their interrelationships. While we do not have control over the extrinsic factors that affect the outcomes of choices, we do control the extent to which we understand what we seek. Here arise further interesting questions (which we will not address here): do we to some extent need to discover what it is that we desire? How does that process unfold?

It is no wonder that we typically rely on habits, gut instinct, and norms whenever we can. The cost of following a conscious and rational decision process (whether that described above or some variation) is extremely high, and in any case it is often so filled with uncertainty that the incremental value of spending time and energy on detailed comparisons is low.

Given that, we should ask the question whether we even need to consciously deliberate. What is wrong with just relying on our existing habits and intuitions? The answer is that, though these sub-conscious mechanisms undoubtedly contain elements of our genuine criteria, it is likely that they would contain some influences that we have rejected, are missing some that we have embraced, and weight them incorrectly. In short, our sub-conscious mechanisms rarely match up with our conscious desires, particularly when the decision involved will have long-term effects. Even though the conscious process is complex and imprecise, it is likely to produce results that are closer to our actual desires.

To reduce the ongoing cost of conscious decisions, and to improve the consistency of our efforts to obtain what we desire, we can actively establish norms and values and attempt to entrench them as habits of behavior. In both the “subconscious” and “conscious-implicit” cases discussed earlier, the psychological basis for the decision is a combination of unanalyzed habits and responses along with habits and responses that were developed or practiced consciously.

It seems that there are two approaches – or perhaps the poles of a continuum – by which we can consciously habituate decision criteria. First, we might consciously make a decision anew in each individual case, perhaps based on more fundamental criteria. Over time an abstraction will form that enables recognition of a circumstantial pattern, whether or not we explicitly identify it, and that pattern-response mechanism is the source of the habit.

In contrast, we might instead recognize in advance that a species of circumstances does or will occur regularly, and we contemplate and calculate a decision rule in general form. Initially, when applicable circumstances come to pass, we must consciously recognize the situation and form the decision. We may even engage in some deliberation during the first few such occurrences; in this case, such deliberation is in the context of whether our initial, abstract deliberation was correct. In any case, after sufficient practice, decisions in such circumstances can occur without either conscious deliberation or even recognition.

From this analysis we observe three categories of decision processes: conscious, habitual based on consciously developed norms, and habitual based on conditioning.

Sunday, January 17, 2016

The Constitutive Role of Risk in Entrepreneurship

In an earlier, somewhat inaccessible essay, we began to explore some of the rough outlines of what constitutes entrepreneurship. In particular we suggested that a novel, instrumental idea, along with the desire and action to see it realized, might be minimum or necessary conditions, though they may not be sufficient. Here we will explore the potential constitutive role of another factor that is often associated with entrepreneurship: risk.

In this piece, our examples will all involve commercial business situations, but that is for the sake of simplicity and clarity, not to limit the discussion or conclusions to entrepreneurship of a commercial nature. Further, for those concerned with rigor, I will point out that in any definitional enterprise there are two value-laden considerations. The first is that we have purposes driving the need for a distinct term; the second is that if we are to trouble ourselves with a distinct term, then its meaning should be unique in some way so as to perform some cognitive work. While the latter point is generic, I should clarify for the sake of the former that my purpose is in studying the nature of entrepreneurship, as I believe it has been and will be a major contributor to human civilization.

The popular archetype of an entrepreneur includes “risk-taker” as an attribute, and risk is normally present in entrepreneurship. However, this could be a consequence of another factor, or even just a coincidence resulting from the way entrepreneurship is historically conducted. We would like to determine whether it is among the necessary or sufficient conditions constituting entrepreneurship. We would also like to understand more about the role it plays, given its apparent ubiquity.

To do that, we will need to look more closely at what the term “risk” means in this context. It has several definitions, such as “variability of economic returns” (finance) and “uncertainty of outcome in making a decision” (psychology). In relation to entrepreneurship, we might instead emphasize the possibility of loss of something one currently possesses. What sorts of things might an entrepreneur lose?

Most obvious is the potential for financial loss. Entrepreneurs are usually their own first investors, not only supporting direct outlays but also expending effort and time that could have been directed toward earning money. Bootstrap entrepreneurs sometimes use credit cards, second mortgages, and savings to get the business off the ground. And entrepreneurs - even those with outside financing - rarely draw a full market salary.

Also apparent is the possibility of harming one’s reputation. If the venture fails, one might be viewed as less competent or capable by colleagues, as an embarrassment to one’s family, as an object of pity to one’s friends. Recently, there has been some emphasis on the positive aspects of business failure (for example, learning and experience), and in Silicon Valley some entrepreneurs even tout such failures as a selling point. Nevertheless, loss of reputation remains a real risk.

Less widely understood is the potential for a toll on one’s health, both physical and mental. Entrepreneurship typically involves long working hours, grueling travel, and rushed or indulgent meals. It involves substantial emotional stress, difficult and awkward interactions with a wide variety of people, and extraordinary volatility in the prospects for success. Depression is common and the daily instability may exacerbate any propensity to bipolar disorders.

A few other potential losses, possibly not as consistently present, come to mind. Often an enterprise is founded with trusted and respected colleagues: those relationships could be damaged. The entrepreneur’s adventurous spirit or enthusiasm could be lost. Sometimes the very vision that guides the effort is a longtime dream and an integral part of the entrepreneur’s persona, and failure would do violence to it.

Thus when we speak of risk in entrepreneurship we are typically talking about important values such as these, values that the individual currently possesses and that the venture might directly or indirectly cause to be lost. Further, if the notion of risk is to help us demarcate entrepreneurship from other endeavors, it seems necessary that such risk exceeds that of commonplace activities - whether due to putting larger quantities of the aforementioned values at stake, or from a higher probability of loss.

With that framing of risk in place, let us now consider whether it is a necessary or sufficient condition for entrepreneurship.

We do not normally consider gamblers and thrill-seekers to be entrepreneurs despite their bearing considerable risk. The reason seems to have something to do with the absence of creation: even if the gambler wins, or the thrill-seeker survives and garners an adrenaline boost, nothing new has been produced. This becomes even clearer if we add a creative element to each activity. A “gambler” who develops a team method of counting cards, or a thrill-seeker who devises a new way to plunge toward the earth and decelerate before impact, comes closer to our sense of what an entrepreneur does.

Based on these examples it seems that risk alone is not a sufficient condition to constitute entrepreneurship. More difficult to ascertain is whether it is necessary. Put another way, is it possible for an activity to be entrepreneurship if there is no incremental risk?

Let us assume for a moment that our previously suggested conditions are necessary but perhaps not sufficient. A putative entrepreneur has a novel, instrumental idea along with a desire to see it realized, and takes action on that account. Do we need to add risk to the broth?

Suppose we stipulate further that our subject is financially very sound, physically and emotionally healthy, and has had several large successes as well as some failures, putting her reputation beyond reproach and rendering her ego stable. The amount of time and money she intends to put at risk are proportionately small. The envisioned widget is clearly needed in the market and for whatever reason there are no competitors. Is this activity entrepreneurship?

If we were to claim that it is not, we would be in a somewhat uncomfortable position. An activity that, earlier in our subject’s career, might have been clearly entrepreneurship, is not so designated because the person has managed to be successful and healthy and the project is relatively small. By requiring risk per se, we make the question of whether an activity is entrepreneurship dependent on the financial, health, or reputational status of the subject pursuing the activity. This would be a startling result. We do not ask about a scientist’s prior success to determine whether his latest experiment is science, nor an artist’s mental condition to decide whether her attempts at sculpture constitute art.

Perhaps, though, the necessary conditions we assumed are too strong, and occlude an underlying need for risk in constituting entrepreneurship. Two elements that seem promising in this regard are the novelty of the idea and the need for action. We can consider each in turn.

Imagine an individual investing his entire net worth to purchase a local chain of dry cleaning outlets from the retiring owner. The chain is successful, and no significant changes are planned, so there is no novelty in this endeavor aside from the fact that our subject will be managing the business. Nevertheless, there is considerable risk due to the scale of the investment and the ever-present, uncontrollable macroeconomic factors that affect all businesses. Is this entrepreneurship?

It is a case where opinions might vary. Peter Drucker, in Innovation and Entrepreneurship, excludes situations such as this despite the risk, because there is no innovation involved. Others might hold that in this case the risk substitutes for novelty in constituting entrepreneurship. Still others might claim that the existence of the risk implies at least some hidden novelty - for example, every moment in time has different characteristics - though this sort of claim tends to blur any distinction between entrepreneurship and other activities.

We can more likely agree if we perturb the scenario, so that the subject does not actually operate the business but merely makes the investment. In that case we see the individual as an investor, or perhaps a gambler, but not an entrepreneur. It seems that at a minimum, for risk to act as a substitute for novelty, we also need to see ongoing involvement beyond an initial transaction.

We can take this further, and address the requirement for action, through the following, admittedly recherché, scenario. Here imagine an individual who is a non-operational major shareholder in a company, who has a novel idea and a desire to see it realized. She describes this idea to an operating executive but takes no further action. There is novelty, and there is also risk, but there is no direct action on the part of the shareholder.

Though agreement may not be universal here, most would not call this entrepreneurship. The reason is subtle: the risk does not appreciably arise from the idea or even the fact that it was shared. The risk antedates the sharing of the idea, and exists whether or not the idea is ultimately implemented. Without her stake in the company, our subject is merely someone who has an idea and shares it, which is quite outside the realm of entrepreneurship.

From these scenarios we conclude that in constituting entrepreneurship, risk cannot substitute for action, and to the extent that we think that risk can substitute for novelty, it needs to be combined with ongoing action. Consequently, risk is neither a sufficient nor a necessary component of entrepreneurship, except insofar as it substitutes for novelty.

We now return to a more realistic scenario to fill out the picture. Suppose an individual seeks a role within a large organization, having distinct novel ideas about how the role could contribute to the firm’s success, and subsequently joins the firm and executes on those ideas. This seems to be entrepreneurial without being entrepreneurship. That is to say, it is similar to entrepreneurship, or has some but not all of the qualities of entrepreneurship. A comprehensive elaboration of this distinction is outside the scope of the present discussion.

But what if we now substitute risk for novelty, as we did before? For example, the subject leaves a stable job and moves her family to a new city to join a company in a particular role, but has no particular novel ideas about that role. Here the similarity seems to fail. The subject is merely making a risky job move; not only is it not entrepreneurship per se, it does not even seem particularly entrepreneurial. This outcome suggests that, even if we do consider risk to be a potential substitute for novelty, its constitutive role is considerably weaker.

If risk is only a necessary condition for entrepreneurship in limited circumstances, and even there its constitutive role is weak, why does it seem to be present virtually all of the time? There are several straightforward reasons.

First, action in the face of novelty always carries incremental risk, above and beyond that of prosaic activities, even if such risk seems immaterial to a particular subject. Not only is there some probability of failure, but that probability can be quite difficult to estimate, because there are no genuine comparables to rely on.

Second, the relatively strong requirement of action means that there is an opportunity cost risk for the entrepreneur. Typically, an entrepreneur makes a deep commitment to a single venture rather than engaging in a portfolio as an investor would; thus there is significant unique risk (as the term is used in portfolio theory) relating to the time, money, and effort expended. Even in cases where the entrepreneur is involved in multiple activities simultaneously, each one limits the time available for the others. This commitment implies a concomitant risk, again even if it is not material for the individual.
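For readers unfamiliar with the portfolio-theory usage, “unique risk” is the component of an asset’s variance not explained by broad market movements. In the standard single-index decomposition (the notation here is mine, added for illustration):

```latex
\sigma_i^2 \;=\; \beta_i^2\,\sigma_m^2 \;+\; \sigma_{\epsilon_i}^2
```

The first term is systematic (market) risk; the second, $\sigma_{\epsilon_i}^2$, is the unique risk, which diversification across many holdings drives toward zero. The entrepreneur’s concentrated commitment forgoes precisely this diversification.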

Given these straightforward causal relationships, it is no wonder that we often conflate risk with entrepreneurship. More accurately, we might consider certain forms of risk to be entrepreneurial, even if the activity as a whole is not genuinely entrepreneurship. As a similarity relation, the term entrepreneurial admits of degrees. For example, consider an individual who takes a job with a startup at half pay in exchange for a significant equity stake: that is surely entrepreneurial to some degree, simply because bearing risk of this particular kind is common among entrepreneurs.

Finally, risk is often present in entrepreneurship because investors and other parties see it as a motivational tool. The upside is a carrot; the risk is a stick. An entrepreneur who takes little risk is seen as lacking commitment. When difficult situations arise, so this view goes, the entrepreneur with “skin in the game” will be more likely to persevere in the effort and do whatever is needed to succeed. Whatever the merits of this view, risk incurred for this reason is clearly not constitutive of the activity of entrepreneurship, but simply represents a common business practice.

As the practice of entrepreneurship continues to become more professionalized and systematized, it is important to recognize that the presence of risk is a common but not inevitable artifact of other, essential characteristics, or of particular business methods.

Saturday, January 2, 2016

Deconstructing Atheism

Though probably too much has been written on atheism and its various strains, I will nevertheless contribute to the fray, as a result of some recent concerns and insights I have had on the topic. Though I have long been a committed atheist, I have never been completely satisfied with the standard depictions of what that means. I am also concerned not to overreach in my own beliefs, as I described in detail in Doxastic Promiscuity Considered Harmful.

In the following I make numerous claims about rationality and what qualifies as rational. Arguing those claims is well outside the scope of this article, but in any case I do not think any of the claims is outrageous, even if debatable.

The usual organization of atheism divides the field with two oppositions: implicit vs. explicit, and weak vs. strong. An implicit atheist merely has not thought, or thought much, about the subject, while an explicit atheist has made a conscious decision. A weak atheist does not hold a belief in a deity, whereas a strong atheist denies the existence of deities. This results in three actual categories (implicit weak, explicit weak, explicit strong) since it makes no sense to hold strong atheism implicitly. Somewhere in all this are agnostics, which I will not address here because agnosticism is really about knowledge claims rather than belief.

These descriptions of atheism have numerous difficulties on their face, and consequently it is common for theists to debate the logical consistency of these positions using this casual presentation as a straw man. So, let us attempt to improve on it. Here is an informal statement of the two forms of explicit atheism:

Weak: I do not believe that God exists.
Strong: I deny that God exists; or, I believe that God does not exist.
In the event you are unclear on the distinction here: the weak atheist has some set of beliefs, but belief in God is not among them. He is mostly epistemically passive with respect to the beliefs of others. The strong atheist, in contrast, says that a belief in God is incorrect; not only does he not include it among his beliefs, but he would also say that you are wrong if you do so.
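The distinction is easier to see with a doxastic belief operator $B$ (the symbols here are introduced for illustration, not part of the original discussion):

```latex
\textbf{Weak:}\quad \neg\, B\, \exists x\, \mathrm{God}(x)
\qquad\qquad
\textbf{Strong:}\quad B\, \neg \exists x\, \mathrm{God}(x)
```

The weak position is the absence of a belief; the strong position is a belief in a negation. The two come apart whenever one simply suspends judgment.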

There are three obvious difficulties here:

  • What do we mean by “exists”?
  • As to the Strong claim, is a statement of non-existence logically coherent?
  • What do we mean by “God”?
As to what we mean by existence, we need to assume agreement on some sort of ontic postulate, i.e., that there is a world of real things that exists independently of our consciousness and of which our own existence is a part. The primary alternative to this is radical phenomenology, a vaguely Berkeleian universe of phenomena presented by God, roughly akin to The Matrix but without even the bodies. It can be argued that an ontic postulate is a metaphysical article of faith, but since virtually all theists and atheists alike accept some version of this, we will proceed in that context. With an ontic postulate, the notion of existence is largely our common sense view of it - something that is part of that external world.

Even with this clarification, the logical coherence of existential statements about particulars is suspect. We do not need to get involved in that debate, as these statements are straightforward to repair using the notion of reference. Reference just means that there is an actual thing in the world to which a concept or name refers beyond its representation in someone’s mind. Not surprisingly there is also debate about reference of proper names, but here we can rely on the weakest form of reference, that there is something in the world that at least epistemically justifies the use of the term. Using the language of reference, we can restate the two explicit forms of atheism as follows:

Weak: I do not believe that “God” refers.
Strong: I deny that “God” refers; or I believe that “God” does not refer.
As to what we mean by “God,” we can agree with theists that a variety of concepts of God or gods are held or have been described by different believers now and through history. Further, certain attributes are shared among many or most of these concepts (e.g., immortality). Capitalized, it is a proper name, implying monotheism, but it is nevertheless defined by a set of conceptual attributes, some of which a believer may claim to have experienced and others not. We could also consider a broader class of concepts that have never yet been expressed but that are somehow of a kind. We will need to look at all this more closely.

We now introduce further terminology, a distinction between de dicto and de re statements. Suppose we take the statement:

Something is rotten in the state of Denmark.
This can be interpreted in two different ways. It might mean that one believes there is a particular, known thing in Denmark that is rotten (de re); or it might mean that one believes that there is some unidentified but apparently rotten thing in Denmark (de dicto).
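Formally, the two readings differ only in the scope of the belief operator relative to the existential quantifier (again, the notation is illustrative):

```latex
\textbf{De re:}\quad \exists x\, \big(\mathrm{InDenmark}(x) \wedge B\, \mathrm{Rotten}(x)\big) \\
\textbf{De dicto:}\quad B\, \exists x\, \big(\mathrm{InDenmark}(x) \wedge \mathrm{Rotten}(x)\big)
```

In the de re reading the quantifier binds from outside the belief, so the belief is about a particular thing; in the de dicto reading the entire existential claim sits inside the belief.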

Using this distinction, we can further adjust our statements to clarify which meaning we intend:

Weak de re: I do not believe that that particular concept of God refers.
Weak de dicto: I do not believe that any concept of God refers.
Strong de re: I deny that that particular concept of God refers.
Strong de dicto: I deny that any concept of God refers.
Note that nearly everyone is at least a weak de re atheist about some concepts of God; for example, a Christian does not (usually) believe that Zeus exists. And weak de re atheism is clearly within the bounds of rationality, as we can simply claim that we do not find convincing the level of evidence in favor of a particular concept of God. But what about strong de re atheism?
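Writing $\mathrm{Ref}(c)$ for “the concept $c$ refers” and $R$ for the range of concepts of God in question, the four positions can be schematized as follows (the symbols are mine, added for clarity):

```latex
\begin{aligned}
\textbf{Weak de re:}\quad & \neg\, B\, \mathrm{Ref}(c) \\
\textbf{Strong de re:}\quad & B\, \neg \mathrm{Ref}(c) \\
\textbf{Weak de dicto:}\quad & \neg\, B\, \exists c \in R\; \mathrm{Ref}(c) \\
\textbf{Strong de dicto:}\quad & B\, \forall c \in R\; \neg \mathrm{Ref}(c)
\end{aligned}
```

The schema makes plain why the strong de dicto position carries the heaviest burden: it is a believed universal claim over the whole range $R$.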

Demonstrating that a concept or name does not refer is of a different order than demonstrating that it does. In the latter case, we simply show that the various attributes match actual evidence. This sort of demonstration is subject to all sorts of justificatory disputes, but at least we are dealing with substance. In the former case, it seems as though we must claim that there cannot be a thing to which it refers, or that we have looked everywhere. Nevertheless, rationality cannot require that our standard of belief be apodeictic knowledge - otherwise we could believe only logic, and perhaps not even that.

Further, we generally consider it within the bounds of rationality to deny the reference of fictional characters. Thus we can say that neither unicorns (a good example, because they present no logical difficulties) nor Mickey Mouse refers (except to the fictional characters in stories and on screen), because we know (at some level) the provenance of their authorship.

There may be additional rational bases on which to deny reference, but I am not aware of them.

Given the ethereal nature of most concepts of God, there is probably no rational notion of having “looked everywhere” in an attempt to find something to which the concept refers. Therefore the strong de re atheist must rely on one or both of the following claims in order to conform to some standard of rationality:

  1. The applicable concept of God cannot refer, most likely because it contains an inherent contradiction. Recalling that our standard of belief is less than apodeictic knowledge, along with accepting an ontic postulate, it is rational to insist that a proposed entity not imply a contradiction.
  2. The applicable concept of God is fictional, i.e., it was authored by individuals who either fabricated the concept in its entirety (e.g., the “flying spaghetti monster”) or who one believes grossly misinterpreted the evidence available to them.
A special case applies when the attributes of a particular concept of God are entirely metaphysical, i.e., the concept has no nexus with the known world, and nothing at all about it is verifiable or refers in any way to any known element of reality. In this case, one might be able to make a fiction argument, but the impossibility-of-reference argument is not available, because the concept makes no reference to anything that could be contradicted.

Having examined de re atheism in both strong and weak forms, we can proceed to de dicto. We see immediately that the nature of this question depends on the set of concepts of God over which the claim would apply, what we might call the range. In the easiest case, the range is merely a finite, defined collection of de re claims (e.g., “I do not believe that any of the concepts of God with which I am familiar refers.”) For this narrow range, weak de dicto atheism seems safely rational, since we are in essence listing a set of potential beliefs that we do not hold. Strong de dicto atheism is more challenging, for we need to have some level of justification for either (1) or (2) in each case, but there is no fundamental impediment to such justification. Such a range, whether it is circumscribed by familiarity or by historical access, represents a simple aggregate of claims which can in the worst case be addressed individually.

We might also consider a broader, non-finite range based on attributes. For example, suppose that we include in the range any concept of God that ascribes omnipotence. We would then say:

Weak: I do not believe that any concept of God that ascribes omnipotence refers.
Strong: I deny that any concept of God that ascribes omnipotence refers.
The weak de dicto atheist here simply does not believe in omnipotence. The strong de dicto atheist must make the claim that omnipotence itself is self-contradictory. When our range is based on attributes and therefore includes not-yet-created notions of God, we cannot claim that all those notions are specifically fictional; and to claim that they are necessarily fictional is to assume the self-contradiction. Consequently, argument (2) is not available.

Caution is advisable if the attribute is entirely metaphysical. Above we noted that argument (1) is not available to us in these de re cases, and argument (2) is not available for ranges of concepts described by an attribute. Consequently, while weak atheism remains safely rational, a claim of strong de dicto atheism on a range based on a metaphysical attribute risks making unjustifiable assertions.

The range can be expanded further to include disjunctions and conjunctions of attributes, with the weak de dicto atheist not believing in entities with those combinations of attributes, and the strong de dicto atheist believing that those combinations of attributes are contradictory. We could perform a union of this range with all known historical concepts of God to establish a comprehensive notion of what we mean by God, with both weak and strong versions as potentially rational.

There may be other ways to expand the range. The broadest range at first seems to be all possible concepts of God. But without any attributes, this is not a meaningful notion, and God could be anything. What if by “God” one means “cats”? We would not want to claim that “cats” does not refer. Neither weak nor strong atheism will do here: there is no rational claim of this sort to make. As the foregoing discussion has illustrated, when taking an atheist position we must circumscribe the range, and if we are to take a strong de dicto position, we must make a set of arguments applicable to either attributes or particular concepts.

I’m sure the reader is wondering where the author comes out on all this. In general I hold weak atheism with respect to any of the concepts of God with which I am familiar, or that have the typical “omni-” attributes. In particular cases of note, I am willing to step up to a position of strong de re atheism. For example, I would deny that the various Christian concepts of God refer, based on both fiction and contradiction arguments. However, I am not willing to put forth the effort to pursue strong atheism against a broader range. Actively denying the reference of a concept assumes an argument with someone who holds or might hold it; belief in God is invariably based on faith, and thus such a person’s beliefs will be impervious to my rational claims. I see it then as a waste of time, and rationality requires that we ration our time.