Aristotle stated the law of non-contradiction as “one cannot say of something that it is and that it is not in the same respect and at the same time.” There are many problems with this form of the statement, including the finding that simultaneity is observer-relative, and that the phrase “in the same respect” is prone to “no true Scotsman” disputes. But there is a deeper concern: the rule applies only to what we can say, not to what is actually the case in reality. In other words, the law does not say that a thing cannot both have and not have an attribute. It merely states that it is invalid to say or think that.
In this purely epistemic role, the laws of thought are stronger than mere rules. Some philosophers (e.g., Kant and Schopenhauer) have taken the position that they are conditions of thought and experience. We are helpless without them; we cannot even argue against them without using them. Yet these facts might simply reflect limitations of our intellectual equipment or modes of thinking. It therefore seems important to ask whether the laws of thought actually describe reality, or in philosophical argot, whether they refer. We might also examine the degree and basis of our confidence in any conclusions about the matter.
Let us begin with a naturalistic account, because we know that the laws of thought work. They have had clear survival value for us, as they are applied in virtually every aspect of human economic and technological action (I hesitate to include political action, for logic seems not to be influential in that domain). This suggests that there is, at minimum, some sort of relatively consistent mapping between the laws of thought and the way the world actually is or behaves. More precisely, we use the laws of thought to predict. By understanding the attributes that can be predicated of a class, and then successfully recognizing a member of that class, through syllogism we can predict that such a member will exhibit those attributes. Prediction, of course, enables us to avoid danger and exploit opportunity, with consequent survival advantages.
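The predictive use of the syllogism described above can be sketched in code. This is a toy illustration only; the classes and attributes are invented for the example:

```python
# Toy model of syllogistic prediction: all members of a class are
# assumed to share the attributes predicated of that class.
# (The class names and attributes below are hypothetical.)
CLASS_ATTRIBUTES = {
    "mushroom_of_kind_X": {"poisonous"},
    "ripe_red_berry": {"edible", "sweet"},
}

def predict_attributes(observed_class: str) -> set[str]:
    """Major premise: every member of the class has its attributes.
    Minor premise: this instance belongs to the class.
    Conclusion: predict that the instance exhibits those attributes."""
    return CLASS_ATTRIBUTES.get(observed_class, set())

# Recognizing an instance as a class member licenses a prediction,
# which in turn allows us to avoid danger or exploit opportunity:
print(predict_attributes("mushroom_of_kind_X"))  # {'poisonous'}
```

The predictive payoff depends entirely on the reliability of the classification step, which is precisely the issue taken up below.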
In this naturalist view, it is not essential that the laws of thought hold in all cases – it is only important that they perform better than other means of prediction. This is quite unsatisfactory for the radical scientific realist (henceforth, a “realist”), who would expect the laws of thought to refer precisely and to be true without fail. The realist views the world as consisting of distinct things with distinct attributes, so that the laws of thought are just simple facts about the extension (set of members) of any class or concept. In a realist regime, observers can isolate and identify those things and the extensions to which they belong, and even if we err in such classification it does not change the fact of the matter.
However, if it turns out that we cannot consistently and universally perform this classification even in principle, then the realist project is tainted by either a subjective classification process or unverifiable metaphysics. For suppose that we have a counterexample. Each observer can classify the instance using a private procedure; resulting predictions will differ. We have stipulated that the procedure is not even in theory fully consistent, so if some predictions prove incorrect the errors cannot be used to improve the procedure. Alternatively, if we cannot even determine whether the predictions are correct, then regardless of whether the procedure was public or private it is metaphysical (i.e., it has no predictive power). To avoid these difficulties, we must therefore show that there can always be, at least theoretically, a consistent and public way to perform classification. Put another way, the realist view begs the question because it simply assumes that the laws of thought refer, leaving the substance of the classification procedure unexamined.
The naturalist and realist views roughly represent the poles of the issue: the former validates a very weak form of reference, while the latter asserts a strong form while leaving much unexplained. We will now go deeper in the hope of delineating a view that is at once more satisfying and better grounded.
We might note that the laws of thought can only refer if the concepts to which they are applied (including abstractions designating particulars) refer. Surely if an application of one of these laws to particular concepts does not refer then there is no grounding for the more general claim. Our question thus inherits all the challenges of the scientific realism/instrumentalism debate.
Also required is that our concepts refer distinctly. As an example, for an object to be necessarily either green or not green (i.e., the law of the excluded middle), we must be clear exactly what in reality the object comprises and precisely what it means for an object to be green. More generally, we must designate boundaries and thresholds. Failure to do so results in objects that can be argued both to be and not be themselves and to have and not have a particular attribute.
Consequently, it is a referential fallacy to apply the laws of thought to insufficiently distinct concepts or objects or those whose boundaries or definitions are subject to dispute. Unfortunately the characterization of such boundaries and definitions results in a regress. In the example of an object being green, we would likely want to rely on the notion of wavelength, which is a theory-laden term that in turn requires the notion of photons, which are subject to relativistic wavelength shifts not to mention all the unsettling vagaries of quantum mechanics. We can see that even if this regress is finite and non-circular (the work of Quine would suggest otherwise), it is at the very least rather burdensome to explicate a concept in a fully distinct fashion.
Beyond the referential requirements for constituent concepts, for a law of thought to refer it must also be true of reality. Given the extremely general and apparently fundamental nature of these laws and their lengthy history of analysis, it seems unlikely that one could demonstrate their truth through some underlying, more fundamental mechanism. Further, caution is warranted to avoid the temptation to apply these laws in a circular fashion to demonstrate their truth. Consequently, what remains is to apply an induction from empirical outcomes and to treat the laws of thought as falsifiable hypotheses. To this effect, we noted earlier that these laws have naturalistic efficacy, but we can make the stronger observation that we have yet to encounter any reliable counterexamples. Adding to our confidence is that our data set corroborating the hypotheses is extremely large – in effect, all of human experience. One could argue that it is not even possible for a hypothesis to have a broader data set.
And so we have made incremental progress: first, our referential claims about the laws of thought can be no stronger than our claims about concepts. Second, our level of confidence that the laws of thought refer is based on their being falsifiable but as yet unfalsified hypotheses; this confidence is probably as high as it can be for any hypothesis. Third, we still need to elaborate a procedure for establishing distinct boundaries and definitions of concepts without regress or an infeasible burden.
I propose that we can effectively ground conceptual boundaries using perceptually distinct ranges. This means that discernment of the range by (normally equipped) human observers is straightforward, unambiguous, and consistent across observers. Further, whether such discernment relies on external equipment or proceeds from direct observation, it is necessarily theory-laden, and we must explicitly indicate the underlying theory. Note that this combination allows us to generalize beyond just human observers, so that the approach is neither culturally nor biologically parochial.
An example is helpful here. We identify Earth’s moon easily and unambiguously, without any mechanical assistance such as a telescope. Our underlying theory in this case consists of the normal background assumptions we must make to support the reference of any perceptual experience (that we perceive objects directly, that we are not dreaming, etc.), along with the notions that the moon reflects the sun’s light and that the two bodies change relative positions, causing partially visible reflections. Further, someone with normal color vision can immediately conclude that the moon is not green; this also relies on the usual background theories, as well as the notion that we have receptors specifically sensitive to green. Identifying the color of a planet might require a telescope and theories regarding the locations of the planets, or other ways to establish their identity relative to the stars and other planets. The telescope itself requires theories about optics, along with general empirical validation that the equipment magnifies largely without distortion of shape or color. In this case, we may continue to rely on direct perception for color identification, or we could use a spectrometer, which would require further theories about light emission, relativity, etc., but which would let us designate a very precise range of wavelengths that we consider green. We can also indicate the amount of green that would constitute an object’s being green, e.g., anything greater than 10% of its surface area.
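The spectrometer case can be sketched as a classification procedure. The wavelength band below is a conventional approximation of “green” chosen for illustration, and the 10% area threshold follows the example in the text; both are stipulated designations, not facts about the world:

```python
# Sketch of classification by perceptually distinct ranges.
# GREEN_NM is an illustrative, stipulated wavelength band for "green";
# AREA_THRESHOLD follows the 10%-of-surface-area example in the text.
GREEN_NM = (495.0, 570.0)   # designated range in nanometers (assumption)
AREA_THRESHOLD = 0.10       # fraction of surface area counted as "green"

def is_green_wavelength(nm: float) -> bool:
    """A sample falls in the designated range or it does not;
    with the boundary fixed, there is no middle case."""
    lo, hi = GREEN_NM
    return lo <= nm <= hi

def object_is_green(surface_samples: list[float]) -> bool:
    """Classify an object as green if more than the threshold fraction
    of sampled surface points reflects light within the range."""
    if not surface_samples:
        return False
    green = sum(is_green_wavelength(nm) for nm in surface_samples)
    return green / len(surface_samples) > AREA_THRESHOLD

# Once boundaries and thresholds are designated, "green or not green"
# is decidable for any object we can sample:
samples = [530.0, 610.0, 540.0, 700.0]  # two of four samples in range
print(object_is_green(samples))  # True
```

The point is not that these particular numbers are correct, but that once some such boundaries are publicly designated, the excluded middle applies cleanly to the resulting concept.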
In cases where we cannot establish a perceptually distinct range using a combination of theories and equipment, we cannot apply the laws of thought to analysis of that situation. To the extent that this is due to shortcomings in our measurement equipment or theories, it presents no refutation of reference. To the extent that separate observers disagree on the appropriate ranges or dimensions of ranges, this is a mere semantic dispute and again has no bearing on questions of reference. However, to the extent that there is some more fundamental limitation, in which we can show that it is not even possible in principle to establish a perceptually distinct range, then we might claim that we are not actually dealing with concepts, and thus we would not expect to be able to apply the laws of thought. Though examples of such a situation are not forthcoming, it would be difficult to rule them out. Consequently, under this approach the ability to prescribe perceptually distinct ranges is part of the meaning of reference.
Complex phenomena may require complex combinations of perceptually distinct ranges, possibly relying on theories of dimensional reduction, as occurs in our ostensive recognition of natural kinds. There is no simple set of independent measurement dimensions that enables us to distinguish a dog from a cat, and this sort of high-dimensional differentiation may also be needed in classifying phenomena more distant from direct perception. Nevertheless, there is no apparent reason why such classifications cannot be constructed from ranges.
Perceptually distinct ranges are effective because they terminate the conceptual regress at a point appropriate for our purposes – not just in the boundaries of the concepts at issue but also in the supportive theories. This makes concepts determinate so that they can be used distinctly as components of the laws of thought, and in this sense both the concepts and the laws refer. Conclusions based on this form of reference cannot provide apodictic certainty, thus continuing to frustrate the realist. However, our confidence in such conclusions is now considerably stronger than the naturalist’s: for our purposes, with as much or more confidence than we can have in any other falsifiable hypothesis, the laws of thought refer.
A careful reader might ask whether the foregoing analysis refers. Upon reflection it seems that we cannot know, so the question is metaphysical and the analysis itself is best viewed as meta-epistemic: a way to think about whether and how our laws of thought can be grounded in external reality.