Table of Contents

1. Ethics of the Nuclear Family
2. Ethics Intro
3. Thought Experiment on Capital Punishment, Two Societies
4. Wittgenstein
5. Truth in Ethics
6. British Imperialism in South Asia

1. Ethics of the Nuclear Family

D: On a recent podcast, I asserted that my politics today begin and end with the question: “But is it good for the nuclear family?” I am not interested in a long philosophical inquiry into this question. Society is perpetuated by people who have children. Humans are a good. They are the ends. Not the means.

D: How do you feel about this?

R: My immediate reaction is to agree with the importance of the family and to regret the many discontents of the sexual revolution. That being said, many families today eschew this standard (especially with the growing prevalence of homosexuality and non-traditional gender identities). As such, it's simply not helpful to assert that their families should be disregarded in broader political discourse, even if such a concession is suboptimal. Ultimately, it's probably for the better to encourage social change which values the importance of the family. Note, this doesn't necessarily mean the nuclear family, which is a relatively recent construction. Valuing the family transcends this suburban, middle-class characterization.

D: https://logarithmichistory.wordpress.com/2020/07/18/the-goodness-paradox-2/ This is also quite interesting, apropos of Friday's chat.

R: Also, I can understand not wanting to entertain merely philosophical objections to valuing humans. In the same way, I don't entertain defenses of slavery or rape outside of purely intellectual curiosity. However, I don't think valuing family falls under this category. It needs to be established that valuing the family actually best achieves the aims of humanity.

D: It's doable, though, if humans are assumed A) good and B) ends, not means.

R: It still needs to be shown that procreation specifically within the context of marriage is in the best interests of humans. For example, your argument is susceptible to anti-natalist objections.

D: Not my argument—I disagree. I was curious about your take on the kind of reasoning involved.

R: Also, the goodness paradox article reminds me of some of the conclusions of Michael Tomasello in his investigations into the anthropological origins of human morality (he focuses on human cooperation and moral psychology).

R: Understood. I think the conclusions are probably accurate, but certainly not self-evident given such reasoning.

D: The author (of the post) adopts what I conceive to be an intermediate position between our views: justice is evolved, but on a universal biological basis that takes similar forms—collective action against social dissidence—across human societies. The remaining question, however, is whether there exists a conceivable environment (perhaps off-planet) that would cause a different evolutionary result, thus negating the universality of the "community-first" moral disposition. Otherwise, the impulse of group defense (while social) might be said to be rooted in one of the basic ends/desires/impulses under discussion.

D: My assumption, of course, would be that universality would essentially amount to necessity/categoricity.

R: I don't think universality entails necessity. It's conceivable that we may have taken a different evolutionary path such that our basic moral desires might be different. This wouldn't refute the universality of moral desires today among those on our evolutionary path (i.e. all humans).

D: Conceivably, however, humanity could pursue different future paths if this were not a universally necessary proclivity, in which case we might reconsider its ubiquity in the present.

R: It depends on the level of contingency. If it's contingent at the level of evolution, then they're not going to change anytime soon… Though possibly at some point millions of years down the road

D: I'm not sure time is the relevant factor in absolutes, unless we posit that even at the level of cultural evolution humans experience sufficient biological change to be regarded as a different species altogether. I do not.

2. Ethics Intro

R: My point was that I'm quite skeptical of overarching normative theories as well, and tend to have a more situational ethics, since that is the basis on which we evaluate normative ethical frameworks anyway.

D: I think there are actually pretty broad implications for the practical/pragmatic approach to ethics, though, which we should probably consider on the next occasion. They're related, in some respects, to both epistemology and a sort of Sartrean/Dostoevskian existential morality.

R: No worries about that. I agree with you on the consequences of a purely individualistic and pragmatic ethics and its formulations in existential moral psychology, which emphasizes human freedom. Interestingly, Sartre affirms this outcome, whereas Dostoevsky abhors it and attempts to demonstrate its consequences in his novels. Instead, he affirms a historical, traditional basis of morality such as is found in Christianity. This is not necessarily in opposition to a situational, individualistic foundation, as it may simply appeal to the wisdom of generations over any individual. I'm personally more inclined towards Dostoevsky's conclusion, as analyses of Sartre's position as exercised during the twentieth century give at least grounds for skepticism regarding the reliability of a purely individualistic ethics.

D: I actually incline toward the Dostoevskian view as well, particularly my own interpretation: not Christianity but a specific form of Christian behavior, that of love/benevolence, is the only justifiable kind of action. That sort of move which does good and (at the very least) no harm is the closest thing to morality that we can understand, even in light of the radical epistemic skepticism of the Underground Man. Selflessness cannot be immoral, and therefore the behavior can be, if not systematically understood, legitimately justified and acted upon. I am sympathetic to Sartre's analysis of freedom and responsibility, and largely believe this to be representative of our experience of moral decision, but, being a determinist, I cannot accord much truth to the perception beyond a superficial account.

3. Thought Experiment on Capital Punishment, Two Societies

D: In the interest of clarification: consider the case of two hypothetical societies, one which has institutionalized the harsh treatment (torture, death penalties, etc.) of POWs, terrorists, and genuine political dissidents, and one which condones only imprisonment (and is, moreover, disgusted by capital punishment). These differences can be traced to historical origins—say, specific discussions among leadership groups, court cases, and treaties. How are these differences to be accounted for? The view of the latter state excludes the behavior endorsed by the former, yet the differences are not rooted in human nature, and both societies react seriously and reflectively to crimes recognized in both regions.

R: Your question is intriguing, and it has inspired some pause in my confidence regarding moral realism. However, I think it's ultimately answerable. Denote the two societies 1 and 2 respectively. In order to defend moral realism, I have to contend that at least one of the societies is wrong. Intuitively, 1 seems to be the obvious candidate. This immediate response should be illuminating already. Admittedly, we live in a society more like 2 than 1, but we are still (presumably) opposed to the lingering harsh, barbaric justice found in those corners of our society that resemble 1. Going further, consider the thought experiment where the people of 1 are switched with the people of 2 but the rules are maintained. How would the societies change over time? It seems quite plausible that the new 1 would quickly revolt against the barbarism of their justice system and restore a system much like that of old 2. On the other hand, the people of old 1 might largely react negatively to the unjustness of the lukewarm treatment of criminals and prisoners, but reverting to their old system would require them to come to terms with their former harshness and apparent barbarism. It seems plausible that at least some of the people would express disapproval when voting to change the system back to its old form. Perhaps not enough people would dissent, so that the system would revert and the societies would effectively have switched numbers. Nevertheless, the fact that new 2 would likely hesitate when reconsidering their system, whereas maintaining the status quo in new 1 seems unthinkable, lends some credence to the innate existence of certain moral principles of equality and fairness, exemplified by the revolt in new 1 and bulldozed over by propaganda in new 2. In other words, even though both societies were maintained by propaganda rather than genuine moral introspection, only one system seemed reasonable once scrutinized by these innate moral principles. Admittedly, all of this is speculation, though not totally baseless. Someone more knowledgeable could probably point to historical and sociological sources to corroborate (or maybe refute) my argument. Consider societies today that are more like 1 than 2. With widespread globalization, many of the people in those societies are dissenting against their social customs and adopting more Western standards of justice, whereas the opposite is, to my knowledge, hardly happening anywhere. Additionally, major instances of moral progress, such as the abolition of slavery, appear to be irreversible. If each side were truly equally legitimate, simply maintained by social institutions, then we would probably see more flip-flopping on these issues. Ultimately, my analysis faces one crucial flaw: I'm simply not knowledgeable enough to accurately disentangle overwhelming social transformations from realizations of innate principles. For example, the historical development of Christianity played an indispensable role in the formulation of human rights and the eventual abolition of slavery in the United States. Was this shift merely an alternative propaganda which gained widespread approval, or actually a deeper realization of innate moral principles regarding universal human worth? If we could develop a systematic defense of this transformation outside of historical contingencies, one which accurately predicts future social progress, then the latter could be deemed plausible. In the absence of such an analysis (as far as I've provided thus far), we are relegated to agnosticism on this matter.
It then becomes my task to try to provide such a system, corroborated by scientific experiments and historical analysis, in order to defend moral realism.

D: I appreciate the directness of your approach, though I did not anticipate your chosen tack. I have numerous qualms, however. You seem to characterize the "Westernization" of morality as a relatively linear historical development, sweeping the world alongside neoliberal governance and economic capitalism. As Williams (whom I finished last night, perhaps for later discussion) writes, "[t]here is no route back from reflectiveness"; thus antiquity, barbarism, and savagery are perhaps no longer "real alternatives" for countries touched by globalization. One could argue, as Benedict Anderson does, that similar waves have occurred throughout history—say, during the Protestant Reformation, the first-wave republican revolutions of the eighteenth century, and the postwar democratic/anticolonial movements in the Third World. You would then have to claim, as a realist, that these represent "discoveries" of moral knowledge. The "WEIRD" package of sociopolitical norms gradually accumulates as Western (and eventually global) citizens recognize it as representative of basic universal human dispositions. This is a compelling narrative, buttressed by two notions: 1) that the world has become healthier, wealthier, and happier as a result, and 2) that irrationalist movements either fail to upset the hegemonic ideology (in Gramscian terms) or do so at fabulous material costs, perhaps unsustainably. Still, there have been irrationalist phases. The fascist wave of the '30s met the emotional demands of economic depression, for example, and Communism—though worn down by Western affluence—has not altogether disappeared, and may be merging with fascism through Chinese statism in an age when democratic liberal values have never been more pervasive. These movements were not just coups reinforced by reverse-propaganda; rather, mass movements were necessary to place each ruling cadre in power. Propaganda stoked and fanned blazes already burning brightly—blazes rooted in dispositions both fundamental and at odds with liberal values. Why, indeed, does Polanyi's critique of "fictitious commodities" seem so trenchant today? Whence the popularity and endurance of Marxian alienation? Given the choice to independently determine their moral lives, millions have revolted, seeking the warmth of populist, religious, and identitarian ideologies. Fukuyama's proclamation of the "End of History" was provocative three decades ago, but is ludicrous today. It's not at all clear, in short, that we necessarily prefer the set of values and norms that would tend to favor society 2. Indeed, the group transplanted from 1 to 2, even given the reflection you attribute to them, might never conceive of their barbaric values as barbaric, but simply necessary—we otherwise assume, rather than prove, the universality of our Western ethical dispositions. Such reflections might never occur in society 1 itself, which could see the world through a completely different lens.

R: Given your response, it seems I failed either to properly identify or to convey the intended distinction between societies 1 and 2. In your initial inquiry, it seemed you were intentionally distinguishing between an obviously barbaric and unreflective penal system on the one hand and a comparatively self-critical, developed system on the other. Under this apprehension, I endeavored to explain how the latter society could lay claim to this "moral reality" whereas the former had in some way erred. One way I attempted to do this was by examining the likely means by which each society had hitherto come to be and the principles by which their systems were maintained. Under such an analysis, society 1 was revealed to be unstable and maintained only by social customs, whereas society 2 appeared to be stable, as it was grounded in a realization of innate moral principles. My basis for this speculation was an appeal to mere human dignity, not some contentious liberal virtue. This is where I think some confusion originates between our responses. My regrettable use of the word "Western" in referring to what I really understood as basic moral truisms appears to have conveyed the false perception that I affirm a linear progression of morality centered in the West and eventually adopted by "those less developed nations". Quite the contrary: I think we've headed largely in the wrong direction over the last century, as exemplified by some of the "irrationalist phases" you identified. Similarly, I reject the now common Pinker-esque dogmatic affirmation of liberalism and its dubious objectives of liberty and equality. It was not at all my intention to suppose that THESE are the principles which are somehow realizations of innate moral principles against the barbarism of conservatism or even monarchy. Rather, as clarified above, I meant to appeal to much more basic moral principles, such as human dignity, which are largely non-partisan. If you still wish to challenge the universality of such basic principles, that's another matter, which I think can be adequately addressed through anthropological considerations. Dealing with your actual point: the moral rightness of either society becomes a much more complex consideration when we no longer assume the simplistic dichotomy which I supposed above. For example, making a proper recommendation between a society dominated by liberal ethics of individualism and freedom and a traditional society guided by religious customs and a strict moral authority is not far from "solving politics" in some sense. As such, I won't even attempt to answer that question. Nevertheless, I don't think the existence of such hard problems constitutes a refutation of moral realism (and I'm sure you don't either). Despite each society holding drastically different values and making mutually inconsistent recommendations, it's still perfectly feasible that a right answer exists and that careful deliberation and study can lead us closer to this truth. Though this might seem a bit hopeful in the face of perennial disagreement, the negation (that moral and political statements have no truth value) has consequences just as preposterous, if not more so. There's certainly much more to be discussed, though it would be better suited to a real-time conversation in which basic clarifications can be made on the spot.
As for Williams, I've not actually read his book in its entirety (my recommendation list was largely based on classics within the field and my understanding of them from secondary sources), but I'd be happy to study up a little after my finals and discuss it with you next time.

D: I do think that your response is still at least slightly conditioned by the hegemonic Pinkeresque culture—as is admittedly inescapable for me—but I'll accept your claim that this is neither intended nor essential. The purpose of the example was to present a case where, for intrinsically similar concepts of a crime, socially-determined punishments have arisen which represent diametrically opposed principles of justice and legitimate action. These two moral systems are incompatible, having grown to be so. My larger aim was to illustrate my concern that whatever gains we have made in terms of reality could be eclipsed by the demands of relativistic interpretation. Unlike Shafer-Landau, I do think that the origins of moral thought are significant, especially with respect to the "nature/nurture" question. To the extent that a hierarchical view of their approaches is possible, I must accept that my puzzle is flawed.

D: Yes, we can certainly wait to discuss further. I didn't mean to embroil you in a lengthy Socratic dialogue (and I admit to cheating by using my laptop). Perhaps you can give his work a skim over the weekend or something, if you have the time.

4. Wittgenstein

R: What book is this from? Wittgenstein's comments are interesting, in that he seems to reject a necessary correspondence between mathematics and reality, thereby ruling out the presumed fatality of contradictions. If a bridge falls, it is only because we either made a computational mistake or our mathematics fails to model reality. Neither case depends upon a contradiction (as I understand Wittgenstein) and so contradictions don't seem as intractable as supposed. In one sense, I think Wittgenstein's response resembles the one I offered during our discussion, which was to ask: so what if there's a contradiction? On the other hand, Wittgenstein seems comfortable in attributing this indifference to the immateriality of mathematics, whereas I would want to attribute it to the imprecision of natural language.

D: I think Wittgenstein views mathematics and language as games where the rules permit contradiction because they are errant and poorly formulated. He (at least initially) believed that all philosophical problems were by nature linguistic. There is no consequence, because mathematics is an ideal construct. That ideal, part discovery and part creation, can be mistaken. I'm curious about your imprecision argument. Can we overcome this barrier?

R: Yes, I think that is his position. Although if he thinks the folly of language transfers over to endeavors which use language, such as mathematics, it would seem that he must be open to contradiction even in physics and any other subject. This is not necessarily an objection, but it draws out the radical extent of his conclusions. As for overcoming the barrier, I think it's simply a matter of recognizing that not all statements are propositions, i.e. truth-apt. If we generate some paradox in language (like the liar's paradox, or the omnipotence paradox with a rock so heavy…), it's not clear to me why we should lend much significance to it. Why not just conclude that we've constructed a sentence which on the surface appears well-formulated but in fact just exploits some quirks of grammar? I imagine Wittgenstein attributes much more significance to such paradoxes, since he thinks that language is in some intimate sense tied to reality.

D: Well-argued. The important question to ask, however, is a modification of Wittgenstein's: why should there be no contradiction? In human intellectual systems, the answer lies in the fact that the rules, axioms, and statements are formulated by conscious actors and can be shorn of error. If a move breaks the rules, the rules can be changed to incorporate the move, or the move can be withdrawn as invalid (normatively). I read about a famous example of the latter last night. At a 1952 conference on decision theory in Paris, Leonard Savage, a UChicago colleague of Friedman's, was caught displaying preferences inconsistent with rational-choice expected-utility (EU) theory (of which he was a primary developer) in response to the notorious Allais paradox. He later wrote that his choice of lottery was an error and switched to the option designated by his theory. Interestingly, he switched from a correspondence/simplicity defense to a normative one soon afterward. In nature, the answer is less obvious. What does a "natural" contradiction mean? If we claim that logical systems are discovered, then perhaps logical contradictions might be construed as "real." It is unclear that this is the case in the sense that, say, the laws of physics or thermodynamics obtain and can be revealed—and here, the notion of contradiction seems out of place. The sciences reflect and describe things that actually happen; if something is predicted that does not occur, either the theory is incorrect or the evidentiary apparatus is flawed. Nature is not "wrong," I think—processes are (and are therefore "true," "factual," etc.) or are not.
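
[For concreteness, the Allais paradox in its standard textbook form; the dollar amounts are the conventional illustrative figures, not necessarily those posed to Savage in 1952. Choose between lotteries $A$ and $B$, and then between $C$ and $D$:

$$A:\ \$1\text{M w.p. } 1.00 \qquad B:\ \$5\text{M w.p. } 0.10,\ \$1\text{M w.p. } 0.89,\ \$0 \text{ w.p. } 0.01$$
$$C:\ \$1\text{M w.p. } 0.11,\ \$0 \text{ w.p. } 0.89 \qquad D:\ \$5\text{M w.p. } 0.10,\ \$0 \text{ w.p. } 0.90$$

The common pattern, and reportedly Savage's initial response, is to choose $A$ and $D$. Expected-utility theory forbids this pair: $A \succ B$ requires $0.11\,u(1\text{M}) > 0.10\,u(5\text{M}) + 0.01\,u(0)$, while $D \succ C$ requires precisely the reverse inequality. Savage's "correction" was to revise one of his two choices so that the pair satisfied his axioms.]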

R: Apologies for the late response. I got caught up with finals. Hopefully yours went well. Regarding your points, I found the economic example very interesting, and I agree with your analysis in the last sentence that nature is not "wrong," but rather our descriptions of nature are. I think the crucial point is about how we evaluate the truth/"reality" of statements. The economic and scientific examples rely upon a correspondence with reality for verification, which is why contradictions can't be entertained with any seriousness. On the other hand, linguistic and (arguably) mathematical statements reside in the realm of pure thought, so truth means something like "in accordance with the rules of the language game." As such, Wittgenstein rightfully contends that contradictions shouldn't be feared. Still, I'm struggling to see what relevance this caveat really has. Isn't Turing right to say that any game which entails contradictions has no business describing reality? If so, what's Wittgenstein's point?

D: I think Wittgenstein might be arguing that, since all "contradictions" are linguistic problems and all languages are limited and constructed, there are no real errors in philosophy—simply failures to properly describe what is being discussed. That accords with his early, Russell-adjacent thinking, I believe. I'm not sure why he'd take issue with Turing on this count, though—seems like he's playing devil's advocate, or simply being unserious.

D: This is the central hypothesis of all linguistic philosophy: that philosophical problems are in fact "unreal," and will be immediately resolved once the proper means of communication is devised (whether in English, "natural language," or some other symbolic form).

R: It seems like this linguistic diagnosis might face a problem with infinite recursion if it is to be communicated through language. Of course, this may very well be the necessary bullet to be bitten.

5. Truth in Ethics

R: For our next discussion, I'd like to better understand what you mean for something to be true. For example, presumably you believe "A war happened in 1812" is true. In what sense? I'm trying to argue that moral statements can be true in the same sense. I'll try to think of better ways to communicate this (as well as just understand it for myself) next time. Should we meet next week same time?

D: When I say that "a war happened in 1812," I mean primarily that past human behavior resulted in the state of organized violence that we call war 208 years ago. I would also accept the implicit claim that I'm asserting that reasonable people should believe this statement.

R: I would agree with your interpretation, but only because we (putatively) have some shared understanding of the terms you used. For moral statements, their interpretation usually comes down to the semantics of "good"/"bad". I would argue that similarly "straightforward" semantics can be applied to these terms as to "human", "behavior", "violence", etc. Of course, carefully explicating these meanings is never simple, but I'm arguing that such a meaning "exists", independent of what words one may use to refer to such a notion or whether or not one agrees that this is the "correct" interpretation. Do you accept this as the task of the (minimal) moral realist? I think our primary disagreement on Tuesday came down to oscillating between two different arguments. On the one hand, you seemed to be giving what's called an "evolutionary debunking argument" against moral realism. That is, if a satisfactory evolutionary account can be given for the origins of human morality, then we should not regard these moral statements as "true", since natural selection is not truth-tracking. I gave a few responses to this. Firstly, I argued against the idea that such an evolutionary account exists, since many of our common moral intuitions are in opposition to evolutionarily derived instincts. Furthermore, the capability of robust moral deliberation seems again to resist this truncation to mere evolutionary instinct. Finally, I argued that truth should not be understood independently of our human nature (which presumably has a biological origin). I defended this point by noting a parity between moral and ordinary perceptual beliefs regarding their mutual susceptibility to an "evolutionary debunking." Ultimately, it seems to me, our understanding of truth is always relative to some mode of experience, and so I don't consider it appropriate to judge the truth of some belief outside of this experiential framework (i.e. from "the view from nowhere"). I don't consider this to be a concession or redefinition, but rather a clarification of what I (we?) mean by truth. More can certainly be said about this, but I'm optimistic that we can reach some sort of agreement on this point. On the other hand, you seemed to respond to this with a separate point about the social construction of morality. I think some of the confusion on Tuesday came down to not carefully distinguishing these two arguments. IF a fully adequate account of metaethics can be given by appealing to social construction, THEN I should concede my moral realism and join you instead. If, for example, it can be shown (with reasonable confidence) that there is nothing more to our moral beliefs than what we're taught (in the broadest sense, including propaganda, upbringing, surrounding cultural norms, etc.), in the sense that is most likely true of our clothing preferences, then again I should reject moral realism. So I don't at all see a parity between this kind of social construction and biological contextualization. That is, if my human nature grounds certain moral prescriptions, I don't see this as analogous to my circumstantial, cultural setting grounding my moral beliefs. I think this may have been a point of confusion during our discussion, leading you to call my stance a Pyrrhic victory. If, according to you, a genuine victory for me means grounding morality outside of human nature, as part of the sterile universe, then I concede that endeavor. I don't even think ordinary perceptual reality can be grounded in that way.
So once more, our dispute seems to rely on a nuanced understanding of truth and reality. For our next discussion, I think we should focus on this latter point, since it seems to be where the bulk of our substantive disagreement lies: I don't think morality is socially constructed, you do (I think).

D: This is why I prefer writing over spoken debate: talking past your conversation partner is more difficult when the words are set in type. Yes, I do accept this as the task of the moral realist. I would grant you whatever task you set yourself, of course, but this is what I conceived your immediate objective to be. To your first point, I return that the "purpose" of those behaviors that apparently run contrary to "instinct" is to fill the situational gaps where those innate tendencies fall short, as in the effort to coordinate in collective-action problems. Thus an evolutionary origin appears to be even more probable, as groups and individuals that cultivate moral sentiments—reciprocity, fairness, etc.—are able to form larger and more successful societies. Such beliefs have founded the rise of "Western Civilization," the most economically and militarily dynamic entity in history; moreover, when these notions break down, so do nations and social systems (Venezuela, Libya, Syria, Russia, etc.). Moral deliberation cannot be separated from evolution, being an extension of our biologically-developed capacity for problem-solving and "rationality," so I reject your second claim as well. As to your comparison of moral and perceptual beliefs, I discarded the analogy on the grounds that while sensation collects information about an exterior entity (of which we can obviously understand little), our ethical faculties create information internally, or rather discover deeply ingrained feelings. The two behaviors, while similarly susceptible to doubt, are not parallel acts. Is the morality found within us "true," or truth-tracking? I say not, given my previous discussion of what you label "evolutionary debunking." Social construction and evolution are closely parallel processes, and sometimes are one and the same. We act creatively in response to our environments in a manner much like our genes (see Dawkins on memetics), adapting to challenges that ultimately send successful variants of both beliefs and biologies to the top. "Human nature" and moral beliefs are both influenced by the surrounding environment; the latter is affected directly and indirectly, through our changing neurologies. If the establishment of a connection between biological factors (concrete psychological tendencies, from hawkishness/aggression to dovishness) and moral beliefs constitutes a victory for you, then I concede—moral beliefs would indeed be real, in the way that socialism, utilitarianism, and paranoia are real. We could say that certain acts are good or bad TO certain individuals or WITHIN a certain society, and this would actually be the case. I return briefly to the analogy of the game: a player's strategy in response to perceived payoffs is a real pattern, mixing biology and environment, which can be evaluated only within the context of the game—as hawkish, dovish, probing, neutral, suicidal, etc. I'm prepared to discuss the socially-constructed nature of morality, if you wish. As with the "nature-vs-nurture" debate, the answer's probably "a bit of both," but as I've argued above, this makes little difference to me.
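
[As one concrete instance of such a game, the standard hawk–dove model from evolutionary game theory (an illustration; not necessarily the example the interlocutors had in mind): two players contest a resource of value $V$ at a fighting cost $C > V$, with payoffs to the row player

$$\begin{array}{c|cc} & \text{Hawk} & \text{Dove} \\ \hline \text{Hawk} & (V-C)/2 & V \\ \text{Dove} & 0 & V/2 \end{array}$$

Neither pure strategy is evolutionarily stable; the stable population plays Hawk with probability $V/C$. "Hawkish" or "dovish" behavior is thus a perfectly real, evaluable pattern, but only relative to the payoffs of this particular game.]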

R: Although I had hoped we would agree on my first point regarding the meaning of "truth" and "reality" and its being inextricable from biology, it seems this is still a major point of contention. I don't think it would be fruitful to discuss the purported social construction of morality before agreeing on this point, since even if I could convince you that there is an intrinsic, biological basis for our moral beliefs, it seems you would not regard that with much significance. Thus, my point is two-fold: (1) to demonstrate the insufficiency of your restricted understanding of truth in accounting for both ordinary perceptual and logico-mathematical knowledge; (2) to argue that my understanding of truth satisfies the usual desiderata (e.g. grounding knowledge, deeming others to be in error, being "objective") and so should be accorded as much significance as any other notion of truth. From what I gathered reading your response, it seems that your understanding of truth is heavily tied to ontology. That is, it's true that "you're holding a phone right now" because there are two entities, "you" and "phone", which PHYSICALLY exist in the mind-independent world, in the spatio-temporal relation specified by the proposition. It was on these grounds that you distinguished ordinary perceptual knowledge from moral beliefs, since the former (unlike the latter) are caused by entities in the external world. As such, there's something "real" about ordinary perceptual beliefs, whereas moral beliefs are merely constructed by the mind. Unfortunately, in addition to the usual external-world skepticism, I think this understanding of truth fails both to ground ordinary perceptual knowledge and to account for conceptual knowledge (such as logic and mathematics). Even though our perceptual beliefs may be caused by external objects, the concepts within which these experiences are cognitively situated are relative to the mind. As an example, consider a different species with a distinct perceptual apparatus, such that they have no concept of color. Is it the case that this species is unable to see the world as it is, or simply that their mental experience of external objects is different, so that colors are "real" for us and not for them? If the former, then what privileges the biologically developed perceptual apparatus of humans? Why doesn't your evolutionary debunking apply to the reliability of our perceptual beliefs? Just noting that these beliefs are caused by external objects is insufficient for grounding knowledge. Instead, I argue that truth must be understood in relation to a mode of experience. We can't talk about "the way things are" independent of some conceptual structure; nevertheless, groups which share such a structure (such as humans, for the most part) may speak of "the truth" elliptically, since specifying "relative to the human conceptual structure" would be redundant in communication between humans. Without understanding truth/reality in this way, I don't think you can make sense of even ordinary perceptual knowledge. If you still find trouble with comparing ordinary perceptual beliefs to moral beliefs, consider purely conceptual beliefs such as those in logic and mathematics. In this case, our beliefs are not caused by external (material) objects, but instead are byproducts of the structure of the brain/mind. We come to know modus ponens or the law of noncontradiction by investigating the contents of our mind, not by some empirical experiment.
Any attempt at the latter would necessarily presume the reliability of the former. Nevertheless, we have no trouble speaking of truth in these domains. The fact that our beliefs are the result of an intrinsic biological structure does not preclude knowledge. Yet, if we take your skeptical argument seriously, we should have no confidence that logico-mathematical beliefs reflect reality. They are just products of an evolutionary history, so in what sense are they real/true? Once more, I think both these cases help to reveal what is actually meant by "truth"/"reality": something is true/real relative to a given mode of experience and conceptual structure. It's precisely in this sense that I understand there to be moral truths. I think there's a (biological) structure to our understanding of what is right/wrong which can't be reduced to social construction. If you consider this a Pyrrhic victory, then, as I've argued above, so too should you cast all conceptual knowledge (and even sensory perceptual knowledge) to the flames. Now, I'd like to address some worries regarding the potential implications of this understanding of truth. In your response, you suggest that there's not a meaningful difference between the social/biological construction of concepts, and so, to "establish a connection between biological factors and [truth]" would be to make socialism, utilitarianism, and paranoia all true in the same sense. I would like to clarify that I'm not simply establishing a correspondence between biological tendencies/instincts and truth. Rather, I'm claiming that truth has to be contextualized within a biological/conceptual framework, as argued for earlier. This framework includes reason, logic, and evidence, all of which will necessarily have some biological origin (at least in part) but which cannot be reduced to evolutionary instinct. This is why I distinguished moral deliberation from instinct with regard to having evolutionary origins. Otherwise, your debunking would apply to reason as well. So, I don't think my understanding of truth leads to absurd conclusions where anything with some biological component becomes "real". Additionally, my understanding of truth is perfectly capable of grounding moral knowledge in the "objective sense" that you agreed is the task of the moral realist. If there is some complex, intrinsic, biological basis for our moral beliefs, then it follows that some moral beliefs can be out of keeping with this foundation due to any number of external perturbing factors (e.g. culture, propaganda, ignorance). The ethical imperative is then to investigate these moral beliefs and determine which are true and which aren't. This process will undoubtedly be complex (like any search for truth), but it will likely involve some combination of empirical studies, consistency testing, introspection, model construction, etc. There's nothing uniquely impotent about these methods in investigating the truth. So, once again, I think what we should really focus on is whether moral beliefs are socially constructed or have some intrinsic (biological) foundation. I contest your indifference to this distinction for the reasons above. As I see it, if you think the latter is of little more significance than the former, then you do away with a coherent notion of truth altogether.

R: In thinking more about our disagreement and my attempt at refining a notion of objective truth which is not tied to ontology, I've come across some terms in the philosophical literature which seem to generally reflect my (admittedly underdeveloped) views. Here are two specific resources: https://plato.stanford.edu/entries/constructivism-metaethics and https://www.jstor.org/stable/20012351?seq=1. The general thrust of these views is to admit a notion of objectivity/"realism" which doesn't entail mind-independent objects existing in some ontological sense, like Plato's forms. Instead, truth obtains as the natural culmination of a rational procedure, limited by the cognitive apparatus of the (human) mind. In this way, truth is "constructed" by the mind. This is quite an involved thesis, but I think it helpfully delineates between my view about ethics (and probably also mathematics) and both error theory (which I take to be your view) and non-cognitivism.

6. British Imperialism in South Asia

D: Do you have any strong opinions about the rule of the British Empire in South Asia? Asking for a friend.

R: It's certainly not something I know much about, so I can't claim to have any particularly strong opinions about it either. As I understand it, the general narrative is drastically split—on the one hand, British (more generally, European) powers exploited their South Asian colonies by robbing them of their natural resources, persecuting them (often on racial grounds), establishing oppressive institutions and self-perceptions which continue to hamper them, and directing the fruits of colonial labor primarily towards the interests of the imperial motherland; on the other hand, this interference did directly motivate the rapid modernization of these countries through the introduction of improved science, technology, education, infrastructure, labor opportunities, etc. Absent this colonial intervention, it's unclear how the under-developed nations would have fared in the contemporary environment. Nevertheless, I don't think the latter can be used as a justification for the former any more than the trans-Atlantic slave trade is justified by the present status of African Americans compared to that of many African countries today. Likewise, the poor working conditions in many of these countries are not justified by their being preferable to subsistence farming. The basic ethical realization is that there are no extenuating goods, only pros and cons; even in cases where the pros outweigh the cons, the pros do not therefore justify the cons. Sometimes it's retorted that if the pros are dependent upon the cons, then the pros also justify the cons. Yet not many would argue that the reuniting of a broken family following the death of a mutual loved one thereby justifies the death; or that the strengthening of a soldier's mental fortitude following a traumatizing experience as a POW thereby justifies his torture; and so on. So, as a matter of ethical principle, I abstain from participating in the debate about whether the atrocities of that period of history were 'justified', though I do think that there are genuinely interesting and difficult questions to be answered. For instance, do you think this period of history was an inevitable consequence of divergent rates of industrialization combined with increasingly global politics, as facilitated by developments in technology and transportation? Or was there a more peaceful alternative, where less-developed countries had the opportunity to modernize at their own pace? The answer to this question, I think, determines whether those events should properly be regarded as morally equivalent to a genocide. What I suspect is that the historical period in consideration is simply too broad to characterize in general terms; perhaps the Bengal famine was morally equivalent to a genocide, whereas the missionary efforts of the Spanish in the Philippines weren't. Additionally, to what extent were the purportedly humanitarian (e.g., "White Man's Burden") motivations of the colonizers genuine? To be clear, I'm not so concerned with intention as I am with reasonable expectation: Did the imperial powers not foresee the consequences of their actions on the colonial economies? Do their actions reflect, as their statements attest, an honest effort to avoid making the colonies economically dependent on the imperial countries? Can the function of the British Empire in South Asia really be regarded as symbiotic, and not simply parasitic? As is usual, I'm left with far more questions than answers.
Although it's politically popular to denounce colonialism and imperialism these days (and about as brave as condemning racism or sexism), I remain unsure regarding some key historical facts about British rule in South Asia and the corresponding moral judgments. I'd appreciate your own perspective, especially given your greater knowledge of the relevant history.

D: A number of points: 1) I hesitate to draw parallels between imperialism and the slave trade, especially because the British Empire was an entity that mostly arose after Britain had itself committed to ending human trafficking. There is an extent to which the Empire was intended as a benevolent force for education, development, and commerce that is not reflected in the (private, non-administrative) removal of Africans to New World plantations. We need not seek extenuating goods in the case of the Empire; rather, we can simply evaluate it by the success with which its aims were achieved. 2) Imperialism was inevitable; if Britain had not conquered, others would have—the Raj was established on the ruins of French attempts to conquer the subcontinent during the Seven Years' War, for example, while every major European power tried to get in on the Scramble for Africa. The converse would have occurred if Asia had had the West's technological supremacy—witness the policies of Japan in Korea or China in Tibet. But I'm not sure that the answer to this question has much moral relevance. 3) The Bengal Famine was not a genocide; there are many plausible competing explanations, but "destruction of a pre-existing relief system" is not one of them. 4) The nineteenth-century Empire was not intended to create dependence, but rather to establish open channels of free commerce between complementary states—specialization according to comparative advantage. Whether or not this was beneficial for the colonial states remains controversial, but according to the economic science of the day, unrestricted trade (per Ricardo) could only benefit all participants. What is certainly true is that most colonies were money-losing operations; so if the British were plunderers, they were clearly either insane or incompetent. 5) The central problems remain Indian deindustrialization and stagnant growth, and whether the no-empire counterfactual would have resulted in the same degree of immiseration. I'm inclined to say that the situation would have been the same or worse in the absence of the Empire—without technology transfer, legal reorganization, and education, would India really have been better equipped to develop a world-leading textile sector behind tariff walls?
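
[For reference, the logic invoked here is Ricardo's own cloth-and-wine illustration (his stock textbook numbers, not figures specific to India). Labor required per unit of output:

$$\begin{array}{l|cc} & \text{cloth} & \text{wine} \\ \hline \text{England} & 100 & 120 \\ \text{Portugal} & 90 & 80 \end{array}$$

Portugal is absolutely more productive in both goods, yet each country still gains by specializing where its relative advantage lies: if Portugal makes only wine and England only cloth, and they exchange one unit for one unit, Portugal acquires cloth at a cost of 80 man-years rather than 90, and England acquires wine at 100 rather than 120. On this logic, the era's economists held that unrestricted trade benefits every participant.]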

R: Thanks for providing your perspective. I've given a few quick point-by-point responses below: 1) To be clear, I wasn't drawing a moral equivalence between the actions of the British Empire in South Asia and the slave trade. My intent was to illustrate what I consider to be a basic moral point: that the positive consequences of some actions, even when they outweigh the negative consequences, don't thereby justify the negative outcomes. I agree with your last sentence that we should not seek extenuating goods; rather, we should assess what I called the 'pros and cons' of any action (whether measured in outcomes or something else). 2) Would imperialism have been inevitable had there been strong international organizations to mediate competing interests, thereby giving voice to the interests of the South Asian countries and possibly allowing them to develop independently and at their own pace, motivated primarily by internal rather than external pressures? Obviously nothing of the sort existed at the time, but I want to distinguish between inevitability due to 'others would have done it' vs. inevitability due to the necessity of simultaneous worldwide industrialization in response to an increasingly global market, for instance. As for moral relevance: if imperialism wasn't inevitable, then it was avoidable. Insofar as it was condemnable, the imperial powers had a moral duty to alter their behavior. If, however, imperialism was inevitable, then there can be no duty to do that which is impossible. 3) I think genocide was the wrong term to use on my part; I should have said something like political negligence/apathy resulting in avoidable mass starvation. My original point stands: that what took place under what is called 'imperialism' must be assessed individually, not as some unified effort/set of outcomes. 4) I tend to believe that expected outcomes should take precedence over actual outcomes in any moral evaluation. So, if the British Empire's expectation for its colonial pursuits in India was to establish free commerce according to the principle of comparative advantage, and not to create dependence, then that should be seriously taken into consideration. However, doesn't the actual behavior of the British Empire in Bengal, for instance—where it destroyed Bengal's manufacturing system and imposed harsh tariffs on its textile industry—contradict this narrative and even the principle of comparative advantage? If so, then the Empire was not merely incompetent but actually had ulterior motivations, and hence expectations, not just beyond but actively against establishing free commerce. 5) I agree that the counterfactual question is of great historical significance, but not of as much moral relevance. This is the point I was trying to make in response to your first bullet point.

D: I'll respond at greater length to these points tomorrow, except for 4): the textile industry fell not through government agency or tariffs but the lack thereof: free trade led to the destruction of the hand-weaver by fair competition from Lancashire mills.

D: The historiographical controversy is whether Britain should have allowed Indian industrialists to set up tariffs, which they eventually received (through dubious political pressure) during the Depression.

D: Furthermore, much of the decline of Indian industry occurred before the rise of the British factory—amid drought and the disintegration of the Mughal empire during the eighteenth century.

D: 1) Right, I understand. My point was simply that our question should be "how benevolent was the Empire" rather than "did imperial parasitism have any positive consequences." Nevertheless, I do believe that there could be extenuating goods worth considering, especially given the agency of Africans in the slave trade. Countries make welfare tradeoffs every day (less defense spending might allow poverty relief in the South); if an odious evil were traded for a great good, would this not be justified (I don't believe that the slave trade made this happen)? 2) I should clarify that imperialism is not an inevitable result of capitalism, but rather of technological imbalances between core/hegemon states in a unipolar world. International organizations have emerged when power disparities have been leveled—the League of Nations when Germany and America caught up with Britain, and the UN when the USSR did the same to the USA. The contingent aspect is the form that hegemony takes—the liberal world orders of the Belle Époque and the postwar miracle, or the absolute dominions of Rome, Ottoman Turkey, and Qing China. To be clear, I am not attempting to justify British imperialism by asserting that others would have done it anyway. What is inevitable for a nation-state is not for an individual. One can rebel against the times and preserve one's virtue, if this is necessary. 3) Aggregation to some degree is certainly necessary—do we assess presidencies by individual events, or by the state of the country left behind (or the tenor of the reign)?

R: I'm not sure what 'extenuating goods' you're referring to 'given the agency of Africans in the slave trade.' Could you clarify? As for making trade-offs, I certainly don't object to the principle of weighing pros against cons, as in the case of deciding to save five people rather than one person from a burning building. But in this instance the consequences of inaction would have been six dead, and the decision to act was unilaterally preferable (either one or five dead is better than six dead). It is much more difficult to justify this kind of trade-off when the consequences of inaction are neutral, as in the case of deciding to kill one person and harvest their organs in order to save five others. The case of British imperialism in India is of this latter kind, where inaction would have cost no lives (at the hands of the British), and so it's much more difficult to justify any resulting goods given the requisite atrocities. I'm surprised by your characterization of the decline of the Indian textile industry, whereby the British simply won out in a 'fair competition' facilitated by 'free trade', and by your claim that it began during the eighteenth century for reasons unrelated to Great Britain. Indrajit Ray, writing in The Economic History Review, concludes that "Bengal's export market for cotton textiles started to decay after 1825." And in his survey of the relevant literature, even the earliest proposed start dates for the decline are right around the turn of the century. Furthermore, Ray identifies two factors, widely agreed upon by economic historians, pertaining to the decline of Bengal's cotton textile industry: prohibitive tariffs and technological innovations. These tariffs were instituted by Great Britain in accordance with infant-industry protectionism, directly against Ricardo's principle of comparative advantage. The British Parliament raised tariffs on Indian textiles 12 times between 1797 and 1819; this directly created adverse market conditions for Bengal cotton textiles in Great Britain, contributing significantly to the decline. Whatever your assessment of this approach, it certainly cannot be characterized as 'fair competition', 'free trade', or a 'lack' of tariffs. Please note that I don't intend to suggest that prohibitive tariffs were the sole factor in the decline of Bengal's cotton textile industry, as I of course acknowledge the role of technological innovations during the Industrial Revolution; I'm merely contesting your assertion that it was in fact the lack of tariffs that led to the decline. Anyway, thanks for engaging my initial (far too long) response to your simple question. I didn't expect to go back and forth so many times, but I certainly think that it's been a fruitful exchange so far. By the way, I'm looking forward to speaking with you about Carnap's book on Monday. What did you think of it?

D: I assumed that the irrelevance of "extenuating goods" referred to a situation where slavery (or any negative policy) was imposed unilaterally, so I was proposing a hypothetical—along the same lines as "conscription is immoral, but American victory over Nazi Germany and Japan was imperative." I see your point better now; in the case of the Raj, however, poverty was already endemic and—as I'll note later—industry was doomed both by global economic forces and by internal structural factors. British failure had negative consequences, but so did British inaction. The question is whether British rule exacerbated or ameliorated India's economic woes. I am disappointed by your decision to cite a single paper as a "widely agreed upon" explanation of Indian decline, one which compresses multiple distinct periods of history. The free-trade era, for example, lasted from the 1830s at the earliest until the late 1870s, and during this time British tariffs on all goods fell to nearly zero. At the time, moreover, India was controlled by the East India Company, which would only lose its monopoly rights in 1833 and full government after 1857; the British Empire had not yet taken an interest in development. Furthermore, Ray's survey (2009, I presume) in that respect plays on Indian nationalist mythos. Indian exports were minuscule, so losing the British market cannot explain absolute decline. Per Tirthankar Roy (2002), India's premier economic historian: "The export trade in itself was tiny. The proportion of textile export to total textile production was very small, at its peak not more than 1 to 2%. To give a sense of scale, around 1795, India's net export of cotton cloth was 22 million yards, and domestic production was 1102 million yards." Worse, Ray himself attributes the decline primarily to technological reasons! A wealth of research, moreover, cites other internal factors, from labor-market frictions to droughts and Mughal decline, which were beyond the reach of any government, let alone a nineteenth-century state with low fiscal capacity. See Wolcott (1997), Clingingsmith and Williamson (2008), and Williamson (2011) on these various points. Indian real wages, per the most recent empirical work by Pim de Zwart, had been falling since at least 1720, and were locked at subsistence long before British textiles became competitive in the Indian market after 1820. What primarily damaged Indian industry was not declining terms of trade but poor agricultural output, which raised the price of food and thus nominal wages relative to the world price of textiles. Nor did the tariffs actually help British industry; instead, they slowed the diffusion of technology and prolonged the existence of the traditional hand-loom sector that would in any event be destroyed by the emergence of the power-loom. Indian yarn, meanwhile, was not competitive with British factories during this period, so free trade would merely have led to the temporary replacement of British cottage industry with Indian low-wage labor until machinery annihilated both anyway. It's funny that you're attacking my "assertion" that the lack of tariffs destroyed Indian industry, because I don't think that this is true—this is the orthodox Indian nationalist critique of British imperialism which I seek to rebut. The fact that an article is published in a field journal does not make it representative of the current state of the literature, or even the consensus of the time.

D: Ray does not even say what you think he does: "First, although the British tariff policy depressed Bengal's cotton textile exports to that country until the mid-1820s, it did not seem to be a factor after 1826, when the tariff rates were drastically curtailed. The British policy cannot, therefore, explain the industry's decline in Bengal that started in the mid-1820s and continued through 1860. Secondly, there was no discriminatory British bounty policy to promote the import of her cotton textiles into India, which, as we have pointed out above, actually devastated the industry. Moreover, unlike the case of Bengal's salt industry or her indigo dye manufacturing, the cotton textile industry was never subject to severe policy discrimination in Bengal."

R: I cited just one study only because I took it to be representative of my own perspective, and in order to explain why I was surprised by your characterization of the decline of Bengal's cotton textile industry. I didn't want to give the impression of having authoritatively refuted your position (since I obviously don't think I have done that, nor do I think that my perspective is entirely right and yours entirely wrong); hence I simply gave some facts with citations in order to substantiate my earlier claims and counter some of your objections. Note that I was primarily contesting your claim that "the textile industry fell not through government agency or tariffs but the lack thereof." In doing so, I didn't need to show that prohibitive tariffs were the primary or even a substantial cause of the decline, only that they existed and had an impact. It was for this reason that I was surprised by your claim. Reading your response, it seems that everything you say is perfectly consistent with my own stance as characterized above. Where do you show not only that the British Empire did NOT establish prohibitive tariffs but that the "lack thereof" led to the decline of Bengal's cotton textile industry? You cite many alternative factors, none of which I contested, and point out that Ray considered technological innovations during the Industrial Revolution to be the primary cause of the decline, as if I hadn't explicitly acknowledged this in my previous response! I think much of this disagreement actually comes down to an unfortunate misunderstanding. In both of my previous responses, I was speaking of prohibitive tariffs instituted by the British Empire in order to weaken Bengal's textile exports. I've now realized that you seem to have meant protective tariffs instituted by India in order to shelter its own textile industry. Hence I attributed the claim that the "lack of tariffs destroyed Indian industry" to you based on your earlier statement about the textile industry having fallen "not through government agency or tariffs but the lack thereof"; and hence you rejected the attribution, since you interpreted 'tariffs' in the other sense, which would make the claim not yours but that of an Indian nationalist, as you say. At the end, you characterize my response as an "attack", which reveals a failure on my part to communicate my intentions clearly. Obviously I've given an unintended impression of hostility in my previous responses, which seems to be a repeated issue in our discussions over text. I try seriously to formulate my words carefully and not to be combative during discussions, out of recognition of my own ignorance in many areas and out of respect for my interlocutors, believing that they may have something to teach me. If you're able to highlight which aspects of my previous responses failed to convey this attitude, I would genuinely appreciate it, so that I may prevent future misinterpretations. Hopefully this message clarifies my prior intentions and current positions.

D: In short: I disagree with the perspective (a century-old one at least) that India's inability to erect tariff barriers against British manufactures was the cause of some spectacular collapse of Indian industry. Such a claim is ideologically motivated and contradicted by a swathe of literature in economic history. British tariffs on Indian goods are irrelevant here, because the loss of a tiny portion of total production cannot explain more than a fraction of a percent of India's decline! No Indian nationalist—or any sane human—would argue that India would have prospered if Britain had raised tariffs against Indian goods. Ray himself concludes that these tariffs barely affected the economy—they only damaged the tiny export sector. I do not believe that you have been hostile, only that you have waded too swiftly into a literature that demands careful thought and multidisciplinary reading.

D: The standard, incorrect nationalist argument (repeated and resuscitated since the 1880s) is that the Empire was bad because the British wouldn't protect Indian manufacturers with infant industry tariffs. This is wrong. Industry declined as a result of, among other things, 1) rising grain wages eroding competitiveness 2) disruptions caused by turbulent Mughal politics 3) labor market frictions 4) a wage and price structure that disincentivized the adoption of machinery.

R: Your position is much clearer to me now. As I said before, I think the earlier disagreement arose out of a mutual misunderstanding about what the other meant by 'tariffs'. I don't believe what you call the Indian nationalist position, and it's now clear to me that you don't believe what I thought you had said. Based on what I took to be a general consensus among those who have researched the period, I did believe that prohibitive tariffs instituted by Britain had more of a negative impact on Bengal's textile industry than you allow, but I've now lowered my conviction in that belief. It appears that other factors played a much more substantial role in the decline of India's manufacturing.

7. The American Historical Review

R: Skimming through the March 2021 release of The American Historical Review, I definitely see your point about not feeling as if you've learned anything. The first point which struck me was the sheer number of book reviews, which constitute the vast majority of the journal; the rest includes some video game and film reviews (for some reason) and finally a handful of articles of varying significance. (Perhaps this is normal in journals for the humanities, but I've never seen so many / any reviews in academic journals for mathematics / computer science / physics.) The defining characteristic of these articles (as well as the books which were reviewed) seems to be a focus on telling a story, backed up with some citations and corresponding argumentation. There are no formal research questions, independent and dependent variables, statements of methodology, data analysis, discussion of biases, literature review, areas for further development, etc. In summary, there was no research, just story-telling. That's a bit of an exaggeration, but it accurately conveys my impression after having skimmed the journal and read through some of the articles in greater depth. Common approaches included "how our contemporary ideological biases influence our historical perspective on …" or "here's something that I've been thinking about, now let me tie it into a broader lesson about 'Science, Empire, and Capitalism'". While there's nothing necessarily wrong with these topics, they seem better suited to a blog post than an academic research journal. More specifically, this style of writing is anything but conducive to critical engagement, since much of it holds the tacit perspective of "this is just one possible point of view among many". As such, the principal motivation appears to be not one of interrogating the evidence in order to reveal the truth but of gathering evidence in support of one perspective on some historical matter. I suspect that this is due to different foundational assumptions about the value and efficacy of studying history. The basic (somewhat naive) vision of history is one founded in a search for the truth (about what happened in the past and the relevant causal factors) and an attempt to navigate the various obstacles along the way, whereas this journal seems generally uninterested in such endeavors and instead preoccupies itself with quirky new modes of analysis and ways of thinking about things. Take one of the featured articles, 'Sounds of February, Smells of October: The Russian Revolution as Sensory Experience' by Jan Plamper, for instance. What exactly is that article's thesis? Is it arguing in favor of an auditory and olfactory approach to historical analysis regarding the Russian Revolution of 1917? Not quite, since that would obviously be absurd. Instead it's arguing that the experiences of both sound and smell shaped the reality of that historical moment in a way which is not captured by traditional historical discourse. Ok, fair enough. How does this analysis contribute to my understanding of history in a way that will allow me to make predictions about the future? Or guide policy decisions? Or even personal decisions? It doesn't, and it doesn't intend to. It merely intends to provide yet another lens through which to view history. There is no attempt to proffer an explanation of history in a way that might be challenged; and so whatever value it has must be radically different from what I was expecting from the study of history.
In an attempt to be fair, I don't think my characterization so far is accurate for every single article / book reviewed in the journal. In fact, the books reviewed (from what I can tell) generally seem to adhere more closely to the traditional mode of historical analysis with which I'm familiar. But it is indeed worrying that a top journal in this field would be filled with so much writing of such little value.

D: Your reaction, to my mind, appears both fair and entirely warranted. Historians, ever the officious gatekeepers, will retort that "truth" is neither a meaningful nor a productive end of historical inquiry, but the impossibility of objectivity does not give sanction to frivolous, nonrigorous modes of research. Oddly, I would be less aggrieved if there were more microhistory—empirical findings from archeological sites, discussions of textual sources, etc. Then we could give credence to the claim that history has abandoned theory out of truly intellectual concerns. We read an archeological text in my Medieval History class last year, for example, which contained only a scattered series of reports on the items and structures found at various sites, with only limited speculation about the social functions of these artifacts (though the authors couldn't resist a little). You can work with that! It's at least a coherent picture of life in Northwest Europe after the fall of Rome. But to abandon causal claims and to reject the solidity of fact-finding is to lose sight of the mission of historical analysis.

D: I don't mean to proselytize, but have a look at the table of contents of the May 2021 issue of the Economic History Review, one of our premier publications: https://onlinelibrary.wiley.com/toc/14680289/2021/74/2. You'll find a much more satisfactory range of topics—"How fast did the British economy grow during the Industrial Revolution?" "What were rural wages in pre-industrial Southern Europe?" "Why were Spanish immigrants to Argentina poorer than others?"

D: Even the single tokenist article tries to answer a valid question: in what partnership forms (if any) did women invest in British railway companies?

D: (I do mean to proselytize, actually. I'd selfishly love to have economic history discussions).

R: I can immediately tell the difference when looking through the Economic History Review. Each article has a clear question with a refutable thesis. No story-telling, just actual research. I'm somewhat embarrassed not to have realized how poor historical scholarship is outside of certain subfields, since I used to maintain a de facto respect for academia based on my experiences in STEM. Also, I certainly wouldn't be against more economic history discussions. But they might resemble lectures more than discussions, given the gap in our respective familiarity with the subject.

8. Economic History

D: "Do economic laws explain why civilizations rise and fall?"

D: That's the question that made me an economic historian.

D: My current paper effectively does this, proposing an economic-theoretical explanation for Portugal's decline.

R: That's a very interesting question. My immediate reaction is to say, "While it might be part of the explanation, it will never explain the decline on its own." Though I'm open to being challenged on that. It seems that if economic fortunes or challenges drive the rise or fall of a civilization, I would expect there to be external considerations driving that rapid change which might be modeled by, but never completely explained by, economic laws. Did you end up receiving helpful comments on that essay?

D: You'll make a wonderful historian ("it's part but not all of the answer" is our boilerplate statement for everything). I agree, though: genetic and climatic factors operate outside of economic laws and tend to alter them, though I think that most of these effects can be expressed in economic theory.
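To give a concrete sense of what such an "economic law" looks like here (a textbook Malthusian sketch, not the argument of my paper): let income per capita fall as population grows, say
    y = A · N^(−α)   (diminishing returns on fixed land),
and let population grow whenever income exceeds subsistence y*. In the long run y is pinned at y* and population settles at N* = (A/y*)^(1/α), so shocks to technology or climate (changes in A) show up as swings in population rather than in living standards; a civilization's "rise and fall" becomes, within the model, a statement about A and N.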

9. Jared Diamond on The Agricultural Revolution: "The Worst Mistake in the History of the Human Race"

D: On a completely different note, I wonder what you make of this essay by Jared Diamond: https://www.discovermagazine.com/planet-earth/the-worst-mistake-in-the-history-of-the-human-race. He reproduces it in The Third Chimpanzee, and I find that I still do not have an effective response after several years of consternation.

R: In response to Diamond, I agree with much of what he has to say in that essay, but I disagree with the overall thesis. I believe he begins to undermine his own argument when he acknowledges the intertwined relationship between agriculture and crowding. The fact is that everything which makes us special as humans relies upon the growth of societies: culture, science, art, technology, and so on; and societies require agriculture, as Diamond acknowledges. So the renunciation of agriculture must come alongside a renunciation of everything which makes us distinct from other animal species. These things weren't available to the hunter-gatherer "society", not because of a lack of leisure time, but because of an inability, and a lack of incentive, to transfer knowledge across hundreds of generations. Why bother developing a sophisticated writing system or beginning to study arithmetic when your day-to-day life is preoccupied with survival? The shift towards agriculture, on the other hand, motivated both of these developments and facilitated their gradual advancement through population growth and the preservation of past knowledge. Only as a consequence have we been able to take advantage of our heightened potentialities as humans through the development of incredible cultures, civilizations, and knowledge. Diamond points out that this transformation also brought about great inequality, disease, and despair. (He didn't say much about despair, although I think it's one of the worst consequences of the Agricultural Revolution.) Nevertheless, all the other developments of societies (medicine, politics, technology, psychology), predicated upon the growth of agriculture, also offer solutions to all of these problems. It's also not as if hunter-gatherer societies were immune to terrible fighting and social hierarchy—consider the documentation of war between chimpanzee populations, as well as infanticide, murder, rape, domination, and chronic stress among several primate species. Changes over the last 10,000 years may have exacerbated some of these problems, but they have certainly improved many of them significantly as well (and not just for rich elites). We should therefore not view pre-agricultural societies through rose-tinted glasses as obviously preferable to modern society, the way I think Diamond's analysis occasionally does (mostly through what he leaves out, rather than through active misrepresentation). How many of us, after all, would give up our current lifestyles to become hunter-gatherers? It's notable that Diamond largely compares pre-agricultural societies to immediately post-agricultural societies, rather than to modern-day societies. Although some trends, such as those in height, persist, the trends regarding nutrition and health have been vastly counterbalanced by developments in medicine. (The hardest problem now is getting people to actually listen to the advice of doctors.) While this isn't quite true globally, that is slowly changing, and the reasons for this disparity lie in the realm of politics; they are by no means predestined by the advent of agriculture. Diamond's claims about the relationship between agriculture and the subjugation of women seem similarly myopic. Once we recognize that the growth of societies (hence culture, science, art, and technology) was predicated upon the Agricultural Revolution, I think we begin to see that transformative period as a necessary hurdle rather than the egregious mistake which Diamond insists it was.
In the end, I agree with much of Diamond's argument, and I think his point would be made even more compelling by focusing on the current prevalence of "diseases of despair", but I think it's a mistake to use his analysis to portray hunter-gatherer lives as preferable. As a final consideration, I'm intrigued by the impact Diamond's argument, if successful, would have on the frequent allusions to animal suffering within ethical philosophy. It's currently popular to paint animal life as "nasty, brutish, and short" in the manner that we would typically portray the hunter-gatherer lifestyle: lives dominated by the biological imperatives to survive and reproduce, resulting in constant anxiety and terror. If we should instead view this life as preferable to post-agricultural life, shouldn't we then envy the lives of animals, rather than despair over their suffering? That seems wildly implausible to me, and I'm not sure that it's even psychologically possible.

D: One issue that I have with the piece is that I think that Diamond agrees with you, and has merely adopted his "worst mistake" typology in order to attract attention. What he means to say is something less controversial: that adopting farming is not rational for the individual hunter-gatherer, such that bands which have missed the Agricultural Revolution cannot be accused of "backwardness." I don't think he really believes that this short-run mistake can be extended into the long run. Even Diamond, who has lived among various tribal peoples, has not chosen to remain—he came back to his cushy home, opera booth, and UCLA lecture theater. By practically every metric that he examines, humanity has either improved or gained the ability to improve—heights and nutrition are obviously better than at any point in our history (as are life expectancies); gender equality is a widespread ideal, if often unpracticed; economic inequality can be rectified through democratic action (as in Europe, or the postwar US), and is in any case founded on path dependence, not violence; leisure time, meanwhile, would be abundant if we chose to consume at lower levels. Work has for many acquired a social purpose that actually prevents despair and makes leisure just one of many commodities that we purchase. The nascent technological potential of agricultural societies makes the long-run calculus obvious: we were better off for whatever sacrifices (if any) our ancestors made. The more interesting question is the one he actually considers: whether the Agricultural Revolution truly worsened the lives of those who participated, at least in the short run. I am highly ambivalent on this question. Agricultural societies were not so dominant as to be able to smother all hunter-gatherer tribes until at least the late classical era, if not much later—many of the barbarians that toppled Rome, for example, were only barely farmers, and the famed Scythians and Huns were mobile pastoral peoples. Who would have stopped defection by families who chose to live in the vast interstices between civilizations? There were gaps between and within all kingdoms until well into the Middle Ages. I struggle to believe, in short, that mankind was bullied into such a drastic shift. On the other hand, the biological evidence is unequivocal—average living standards must have declined. Did this mask some other changes—were there perhaps fewer children in pre-agricultural societies, forcing a "quality over quantity" focus in child-rearing? I am not familiar enough with the research to know.

D: I recognize that my analysis is colored by my assumptions about human rationality, but if any situation warranted paying the information costs, surely whether or not to make (or remain in) the transition was one—the price of failure was imminent starvation and death.

D: Hunter-gatherer societies, by the way, were not necessarily sustainable on a global scale—the history of mass macro-fauna die-offs follows quite closely behind the history of human population growth and geographical expansion. Marvin Harris, in his book Cannibals and Kings, argues that overhunting was one potential cause of the transition: in short, our destructively aggrandizing impulses were no weaker in the distant past.

D: Incidentally, my father believes that Diamond has subsequently adopted a degrowth perspective, arguing that the transition was an error from the perspective of the present as well.

R: Your restricted interpretation of Diamond's thesis is indeed more plausible. I find myself in the same boat as you in terms of not being familiar enough with the research to make a determination one way or the other. I do think that we might view the transition as a simple numbers game, though. If agricultural societies facilitated higher rates of reproduction than hunter-gatherer societies, then they would prosper in the long run despite any decline in living standards, so long as that decline didn't significantly erode their reproductive advantage. That kind of explanation, if true, would eliminate any mystery about why a lifestyle that was worse in the short term would nonetheless be selected for, though it leaves open the question of why agricultural societies originated in the first place.
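To make the numbers game concrete (a toy calculation with invented growth rates, not empirical estimates): suppose farmers and foragers start at equal numbers and grow exponentially at rates r_a and r_h, so that after t years
    N_a(t) / N_h(t) = e^[(r_a − r_h) t].
Even a tiny reproductive edge of r_a − r_h = 0.005 per year then multiplies the farmers' relative share by a factor of e^5 ≈ 150 over a millennium, regardless of whether individual farmers lived worse lives.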

D: I wonder if Diamond would support a top-down interpretation: tribal chiefs, noting the surpluses derivable from agriculture, force underlings to grow crops and use warriors to defeat nearby clans and enslave the subjects as peasant cultivators. This creates a larger surplus to feed a warrior class, satiate the leaders, and continue the process of expansion.

D: Of course, this presupposes the very inequality that agrarian society apparently produced!

R: That part of his argument confused me, since there were certainly social hierarchies even in hunter-gatherer societies. This should be obvious given the extreme social disparities witnessed even in other primate populations (although those tend to fall along sex divisions, though not exclusively, since sexual dimorphism is far more pronounced among non-human primate species). I see how greater population sizes would result in greater inequality, but then agriculture is just an accidental rather than an actual cause.

D: There are also within-sex differences in height, strength, and intelligence that would tend to be exacerbated and perpetuated by the ability to gain access to better nutrition.

D: I don't quite buy the accidental cause distinction, though. If the path is taken, surely the consequences alone warrant a mistake/benefit judgement?

R: If an action A leads to a consequence C, it's not necessarily the case that A caused C. Consider, for example, the action of throwing a rock through a window, thereby shattering the glass and allowing the outside air to enter the inside of the home. Consider also that a second person happens to burst a container filled with toxic gas nearby, which finds its way through the broken window and into the respiratory system of the old lady sitting inside, ultimately killing her. In this case, A is the action of throwing the rock through the window, and C is the old lady dying. I don't believe that A caused C, rather that the toxic gas entering the old lady's lungs caused C. Action A is merely an accidental cause of C, since A could have happened without C provided different external circumstances.

R: I suppose my claim is analogous for agriculture and inequality, if we accept that the increase in inequality was actually due to greater population sizes: agriculture didn't necessitate greater population sizes, and greater population sizes could have been achieved even without agriculture.

D: You contest the notion that agriculture led to greater population sizes? Growth is the primary consequence of the agricultural revolution—hence quality vs quantity.

R: No, I don't contest that. I merely contest that the relationship between the two is necessary. Clearly we can imagine a society which practices agriculture but maintains a small population. As long as there's no necessary relationship between the two, I think my term 'accidental' applies, per my analysis above.

D: I agree. Would you accept, however, that the possibility for inequality is necessarily increased by agriculture?

D: Obviously (you know me) I don't tend to focus on inequality as a paramount social problem; just want to give Diamond his due here.

R: I don't think so; in fact, I believe that would be an even harder claim to defend, since it relies implicitly not only upon a necessary relationship between agriculture and inequality, but also upon the claim that the corresponding degree of inequality will always be greater than the extent of inequality under the preceding pre-agricultural society. That seems like a hefty burden to take on, though I don't claim to have refuted it. Although, can't we imagine a hunter-gatherer society with great inequality which is succeeded by an agricultural society which is more egalitarian? Where is the contradiction? (I understand that you don't necessarily believe this, so pose my question to Diamond instead.)

D: The claim is not that inequality is necessarily increased by agriculture, but that the scope for inequality is widened (better clothes/food vs mansions/yachts/jets). I could envision an agricultural society as egalitarian as an HG band, but probably not more so—stationary production increases the chances of elite expropriation.

D: The last point is speculative, obviously, and I was on a very boring phone call, which muddled my thoughts.

R: Ok, now I understand your claim. In that case, I'm inclined to agree that agriculture will widen the scope of potential inequality insofar as it extends the possible social strata in ways unavailable to hunter-gatherers. However, I can still imagine a group of ruthless hunter-gatherers who operate like a tyranny, where one male rules the rest and obedience is maintained via the threat of mutual extinction and a power imbalance (e.g. in terms of strength and loyalty). This would seem to me to be less egalitarian than many possible agrarian societies.

D: Less egalitarian than many possible agrarian societies, no doubt. But could that group transition into a more egalitarian agrarian society? I think not. The tyranny of the male would be all the more powerful for the inability of his followers to move their food production and property elsewhere; rents can easily be calculated and extracted; and potential defectors monitored.

R: Hmm, I think I can agree with that. In that case I see your point about the capacity for agricultural societies to exacerbate inequality

D: Cool. I don't necessarily believe that it'll exacerbate inequality, but merely that it's possible for such a situation to occur.

10. Rigor vs. Flexibility in Definitions (Natural vs. Social Sciences)

R: Yeah, I find his deflationary approach to defining capitalism compelling. Although I wouldn't say that definitions are merely arbitrary, I often find that conversation is more hindered than helped by debating the definitions of terms, rather than just agreeing upon an interpretation for the sake of moving forward. A notable exception is in mathematics, where definitions are actually extremely important and can be the deciding factor in whether or not some theorem is true. For example, Lakatos' "Proofs and Refutations" provides an entertaining and insightful reconstruction of the historical debates surrounding the definition of a polyhedron, and its significance for Euler's "theorem" about polyhedra. The definition of a "hole" in topology has a similarly contentious but illuminating history. What's most interesting to me is how such debates engage substantive and meaningful considerations, whereas semantic disagreement in philosophy often feels like an exercise in futility.
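To illustrate with the Lakatos example (a standard rendering, not a quotation from the book): Euler's formula asserts that every polyhedron with V vertices, E edges, and F faces satisfies
    V − E + F = 2,
and a cube checks out (8 − 12 + 6 = 2). But a "picture frame" solid, a polyhedron with a hole through it, gives V − E + F = 0, so whether the "theorem" is true depends entirely on whether the definition of 'polyhedron' admits such objects; this is precisely the kind of substantive definitional dispute I have in mind.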

D: That's an excellent point. I'm actually trying to ascertain just what the difference is between the social and natural sciences with respect to the utility of agreeing on definitions. I think it must have something to do with the fact that, in the former, definitions are transient and easily undermined for the sake of one's own alternative theory.

R: I think part of the problem might be that precision and rigor with respect to definitions exist on a spectrum, with logic and mathematics on one end, running through physics, chemistry, and biology, up to the social sciences on the other end. The function of rigorous definitions is to enable equally precise statements about the things being defined, so that even a slight change in a definition may upset the truth of some theorem. In other words, statements in fields like mathematics are incredibly sensitive to our definitions. This seems to be less true for psychology, for example, where a general statement like "people act in accordance with their desires" does not rely upon a specific interpretation of "people" or "desires" or their conceptual relationship. Whether or not children or women or other races are included under "people" doesn't necessarily impact the truth of the statement, since the concept of a "person" is flexible (i.e. imprecise) enough to accommodate such clarifications. The overall relevance of this for having conversations is that we can (usually) afford to postpone a detailed elaboration of the meanings of our terms in fields like psychology or economics, whereas mathematics relies upon precise definitions in order to even understand any statements made about the things being defined, and so it can't afford to simply set aside questions of semantics.

D: The problem with your interpretation, though, is that the social sciences should be sensitive to definitions. The question remains: why do definitions remain open to debate in some places but not others?

D: Just because you can get a general idea about what is being said without a precise definition does not mean that the argument would not be enhanced by the acquisition of one.

D: Flexibility, to my mind, implies a kind of conceptual laxness or weakness which need not be indulged.

R: As for your claim that definitions should be precise in all fields, and that flexible definitions could always be replaced by more precise technical definitions, I'm not quite sure. There's a definite downside to rigorous definitions, which is that concepts become intimately related to the theories/theorems which utilize them, in a way that obscures the presence of the same or similar concepts in different theories. This problem is widely appreciated by mathematicians, for example, who often attempt to guard against it by initiating programs specifically designed to unite disparate subfields, such as "algebraic geometry" or "arithmetic dynamics". To take an example from the latter field, this paper (https://annals.math.princeton.edu/2020/191-3/p05) made a significant discovery by establishing and then utilizing an analogous relationship between "torsion points" (number theory) and "finite orbit points" (dynamical systems) in order to transform a longstanding problem in number theory into the language of dynamical systems. The conceptual machinery which was ultimately necessary to tackle the problem was not available in the subfield of mathematics in which it was originally stated. That's an example where very precise terminology, which is demanded by mathematics, actually obscures deeper conceptual relationships and fosters insular thinking; hence why mathematicians sometimes feel the need to consolidate tools from various sub-disciplines. Another point is that I'm not sure whether some fields are actually capable of making more precise statements without turning into different fields. Since the precision of one's terminology restricts the precision of one's statements, it follows that limited precision in one's terminology may actually be preferable or even necessary. Consider, for example, the concept of "homeostasis" in biology. The basic idea is imprecise: a process by which a living organism maintains stable equilibrium conditions. It allows us to make statements like "an example of homeostasis in humans is the thermoregulatory system, which maintains an internal temperature around 98.6 degrees Fahrenheit"; but if we attempt to give a more precise description of this concept, then we quickly go beyond the language of biology. For example, explicating the specific chemical pathways by which thermoregulation operates brings us into chemistry, and detailing the subatomic structure of chemicals, cellular structures, tissues, organs, and organ systems brings us into physics. Homeostasis, therefore, with its built-in level of imprecision (hence flexibility), is the level of description appropriate for making statements within biology. If we insist on providing a more specific description, then we leave the realm of biology and lose the conceptual generality of "homeostasis" which allows us to unite thermoregulation with the baroreflex and bone remodeling. So it's not clear to me whether more definition-sensitive statements are always preferable or even possible. I definitely see your point about how some bad actors will hide behind ambiguity in order to deflect criticism as mere misinterpretation. However, I think the solution is not to make the social sciences more like mathematics, which honestly I would expect to exacerbate the problem, since then each researcher would adopt their own idiosyncratic definitions of common concepts and restrict their usage to their own particular pet theories.
("Oh I'm not talking about that kind of capitalism, I mean post-hypermega-lydian capitalism as elaborated in Lundenshellenberg's classic 1879 treatise and interpreted by Holmor in his …") Instead, it's important that researchers acknowledge a shared understanding of the essential features of some common concept and then engage in mutual criticism in order to determine its proper interpretation. So, for example, every psychologist should comprehend a shared (somewhat vague) notion of "the unconscious"; they should agree, for instance, that "the unconscious" operates without being mediated via thoughts and is responsible for regulating body temperature and breathing. When two psychologists disagree about a more precise characterization of "the unconscious" (say one is a Freudian and the other is a Jungian), I don't think they should respond like mathematicians by adopting two different concepts and simply elaborating the corollaries of each view. Instead, I think they should interrogate the deficiencies in each interpretation (by, for example, testing the respective predictions of each theory via experimentation) in light of their foundational agreement about the essential features of "the unconscious" in order to arrive at the "true" interpretation. And so it's important that psychological statements about "the unconscious" remain flexible enough to accommodate various further elaborations lest competing researchers convince themselves that they're simply speaking of different concepts and thereby abandon a common scientific enterprise.

D: While I was not necessarily calling for terminological specificity, I actually think that the scenario you've so cheerfully outlined—"I mean post-hyper-mega-lydian capitalism"—is better than the one that exists at present. We can actually evaluate what you mean, in that case, by going to the work that you've cited or demanding a straightforward definition of the term and the preconditions for its fulfillment. Does HML capitalism include the capitalist-worker relationship and subsist on surplus value? If not, well then we can differentiate it from Marxian capitalism in this respect—in short, clarity in terms forces one to say what one means. This doesn't entail mathematicization—far from it. But we can separate general from specific concepts and recognize their respective utilities in discussion; we know that the former is a convenient reference point without much analytical power, while the latter attempts to describe a concept as exactly as possible. We need not even attempt a hierarchical family/genus/species structure with our terms—but we do need to know when, where, and why it's worth bickering about them.

D: When our terms cease to carry descriptive meaning as a result of changes in the field, then our "flexibility" should consist of a willingness to openly debate whether the terms and their definitions remain useful. If we think it necessary, we should absolutely change them to adapt to new circumstances. But flexibility and mutability (or even generality) need not imply uncertainty, which in the social sciences has too easily empowered the "bad actors" that you've described. Indeed, it has probably inhibited the formation of paradigms and research programmes altogether (beyond popular zeitgeists).

D: Summary: not calling for all terms to be made more specific and inflexible, but rather for the existence of a broader arsenal of both specific and general terms for the more accurate and comprehensible discussion of complex topics. Greater conceptual clarity is the basis for research, and for discussion that is anything but individuals talking past one another. "Capitalism" is not a useless word at all, but it would behoove everyone to realize the limits of what can be conveyed by using it unaccompanied.

R: Then it seems we mostly agree. I certainly acknowledge that precision about our terminology sometimes facilitates clarity in discussion, in which case I'm totally in favor of it. Though I dispute the claim that it is always a virtue (a claim which I wouldn't attribute to you). Hence why I gave the example of "Post-super-ultra-hyper-mega-meta-lydian capitalism" (that's the full form). My intention was to present an extreme case in which researchers adopt highly idiosyncratic terminology, to the effect of obscuring a common subject of research, namely capitalism; I tried to illustrate the consequences of this with the "arithmetic dynamics" example. Though I can see why you interpreted my example favorably, since I didn't provide enough context within the quote itself. I wholeheartedly agree with your last statement about preserving general concepts and providing elaboration when necessary for analysis. My fear is that the unequivocal privileging of precision leads to the mathematicization of terminology, which suits neither mathematics nor science.

D: I think I understand your mathematics example; I was simply pointing out how PSUHMML Capitalism might actually help discussions by forcing people in the social sciences to be clear about what they mean. I find that the problem isn't idiosyncratic terminology—as long as people then explain themselves—but vague terms that masquerade as common ground and serve as shields for obscurantism.

11. Democracy

D: By the way, I'm sorry if I came off aggressive at all yesterday re: democracy. I felt that you were arguing for the sake of disagreement, but I should have taken you more sincerely.

R: Sounds good. No worries about yesterday, I didn't feel that you were aggressive. I think we may have talked past each other a bit, though. At some point in our discussion, the relationship between democracy, science, and freedom got confused, and that led to miscommunication. My point was just that democracy does not necessarily restrict liberty, insofar as it also facilitates liberty (both by preventing its legal suppression and by actively promoting the exercise of freedoms such as speech and religion). But then the relationship of this analysis to science was muddled, since Feyerabend was advocating "democracy" in science explicitly in order to promote freedom; and so we had concluded that "pluralism" may be a better description of his view.

D: Right, I think I understood that. I was arguing that democracy necessarily constrains some liberties in order to achieve other ends—sometimes freedoms, but on other occasions safety, public order, or redistribution. Whether the balance of liberties distributed across the mass of the citizenry increases is, to my mind, irrelevant: certain behaviors have been blocked.

D: That may be an intrinsic good. Most or all citizens may believe that it is so. But the point is that the goal of political formation is to regulate the organization of society. It may be right that we restrict the ability of firms to emit CO2, and that in so doing we open up possibilities for young children to live longer, healthier lives. But this boon has come at the cost of some economic liberty.

D: Which, in the end, is why—I think—we returned to the notion of pluralism, which does not have the same regulatory mechanism.

R: I agree with all of that. I think I would be inclined to say that the ability of young children to live longer is not merely a good, but a fundamental liberty; that's why I was objecting to the characterization that democracy limits freedom in order to promote greater goods, since I think many of those greater goods reduce to freedoms themselves, in which case we may actually be promoting greater freedom. But this is, I think, not a particularly important semantic disagreement.

D: I actually think that this distribution is important, but that's talking as an ex-classical-liberal/libertarian. I think the reduction is, well, reductionistic. I also resist this notion of "aggregate freedom" as inimical to the human spirit.

D: But I agree that we don't need to settle this to resolve the scientific quandary.

R: Indeed, the scientific question is quite separate. The first point is whether greater "freedom" (pluralism) in science is actually efficacious with regard to discovering the truth. Beyond that, we may ask whether it's nevertheless morally desirable (for the sake of social harmony, for instance). It seems that Feyerabend would respond affirmatively to both, whereas both of us (I think) are at least skeptical about the former and therefore deem the latter to be beside the point, insofar as we declare the pursuit of knowledge to be the primary goal of science (in which case any auxiliary benefits of pluralism would seem irrelevant).

12. William MacAskill - Are We Living At The Hinge Of History?

D: https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f36b015d9a3691ba8e1096b/1597419543571/Are+we+living+at+the+hinge+of+history.pdf

R: Thanks for sharing. I thought the article was interesting and generally compelling with regard to its appraisal of HH as characterized in the essay. My primary reaction, though, is to object that (I suspect) most people who make a claim about us living "at the hinge of history" are typically comparing the present to the past, not the present to the future (or to all points in time). If that's true, then the scope of the claim is far more restricted (only the past 6,000 years or so of human history), and consequently our Bayesian priors should be much higher than if we were comparing our current "influentialness" to the potential influentialness of all points in the future as well. The author objects to my presentation of HH on the basis that if there were some more influential time in the future, then we should actually be investing our current resources in that future decision instead. However, proponents of HH typically contend that we will not ever get to that future, more influential time if we don't first overcome our current hurdles (e.g. climate change and the threat of nuclear war), in which case "investing in the future" and "tackling our current obstacles" are identical. The author also objects to these restricted interpretations of HH on the basis that they are incompatible with what he terms the "Bostrom-Yudkowsky view on superintelligence." Without having read their books, I'm skeptical that they would argue that our response to the threat of superintelligence will constitute the most decisive moment in human history EVEN IF we succeed in eliminating it, in which case there seems plausibly to be room for more pressing threats in the future. This asymmetry seems to defuse a lot of the skepticism, since the grandiose claims are restricted to the case where the threat isn't eliminated, in which case there is no future human history, and so our claim of maximal influentialness concerns only a marginal period of time compared to the indefinitely long span of human history should we succeed in overcoming the threat in question. My final consideration is just to point out that, in addition to the salience bias associated with determining the influentialness of our current time, there is a competing bias which naturally disposes us towards not wanting to accept that we're living in the most influential (or even just an enormously influential) time, because that scenario immediately imposes a high degree of responsibility upon us—especially upon the kinds of people who have the luxury of reading that article in the first place.
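To put rough numbers on the priors point (an illustrative calculation of my own, not figures from the paper): on a uniform prior, the probability that our century is the most influential of n candidate centuries is
    P = 1/n.
If the comparison class is human history to date, n ≈ 60 (roughly 6,000 years), for a prior of about 1.7%; if the class also includes, say, a million future centuries, the prior collapses to about one in a million. The restricted reading of HH therefore needs vastly less evidence to overcome its prior.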

13. Historical Economics, Mathematical Notation, Teaching History of Science

R: Just read your latest post, "The New Historical Economics is Self-Aware". I found your distinction between economic history and historical economics to be enlightening and, once laid out, quite clear. It's one of those concepts which seems obvious once you encounter it, but never previously occurred to you. I'm reminded of the remarkable invention of analytical geometry (think graphs on the Cartesian plane), which connects algebra (relationships between variables) to geometry (shapes and curves) in a way that now seems to me obvious, but at the time must have required a great spark of ingenuity. Another such innovation is modern algebraic notation; the use of "x" to denote an unknown variable and the use of superscripts for exponentiation were not invented until 1637 by Descartes in his work, "La Geometrie". For your amusement (and perhaps to inspire some gratitude for the conveniences of modern mathematical notation), here's a collection of polynomials, first in modern notation, and then in the notation of Diophantus (c. 150 AD):

[TODO: insert images of the polynomials in modern and Diophantine notation]

R: Imagine trying to solve even simple quadratics, let alone complicated problems in econometrics, using the notation of Diophantus!
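Consider just the general quadratic (my own illustrative contrast, not a rendering of Diophantus' actual symbols): modern notation compresses the solution of ax^2 + bx + c = 0 into
    x = (−b ± √(b^2 − 4ac)) / 2a,
whereas a rhetorical algebraist had to write something like "take half the number of the roots, multiply it by itself, add the number, take the root of the sum, and subtract half the number of the roots", and then repeat a separate recipe for each arrangement of terms, since negative coefficients were not admitted.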

D: Hey Rajat, good to hear from you. I'm very glad that you enjoyed the latest blog post, though I do have to admit that it's sort of a poorly edited ramble. But that's all I have time for these days, and I assume that some people will like that better than nothing at all. In any event, it's something that I had to get off my chest. No, that revelation was not obvious to me either. Someone later accused me of stealing an idea they had been trying to push on me for months; and I said, well, if that's what you were saying, I really had no idea, and couldn't have realized it until I was more of a practitioner. They thought that this was fair.

D: I really have no idea what Diophantus is doing there. Reminds me of an old Chinese parable about a young child learning the first four numbers as characters, and then trying to count to one billion.

R: Personally, I don't mind the somewhat conversational tone of your posts. It makes them more accessible for laymen like myself, whilst retaining higher standards of rigor and documentation than are typically found in popular articles. Regarding Diophantus, I'm as lost as you. I'm always impressed by how much progress was made in mathematics (and the sciences) using the rhetorical style of Euclid, especially when compared to modern texts, which are filled with symbols. If you've ever tried to read documents from those periods, like Kepler's Astronomia Nova or even Newton's Principia, you'll recognize it as a daunting task; it's telling that modern renditions of these topics deviate so significantly in style. For example, the famous "F = ma" never actually appears in Newton's writing; instead you find "A change in motion is proportional to the motive force impressed and takes place along the straight line in which that force is impressed." It's obvious which form is more convenient for trying to solve projectile motion problems.
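For instance (a standard textbook derivation, not anything that appears in Newton's text): applying F = ma to a projectile launched at speed v_0 and angle θ, with gravity the only force, gives
    x(t) = v_0 cos(θ) t,   y(t) = v_0 sin(θ) t − (1/2) g t^2,
and hence a range of v_0^2 sin(2θ) / g on level ground: a few lines of symbols that would take paragraphs of Euclidean-style prose to state, let alone derive.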

D: Oddly enough, there are close analogs of this in the history of economic thought. There is a big debate about whether we should interpret Adam Smith, David Ricardo, and Karl Marx as they wrote, in terms of the concepts that they were probably thinking of, or in terms of modern economic models.

D: One compendium, for example, explains Capital and The Wealth of Nations using calculus and various supply and demand graphs. Having read the former, I think you can probably guess that this is somewhat anachronistic.

D: This also reminds me of some interpretations of Greek and medieval science and philosophy as well.

D: Say, the treatment of atomism.

R: I remember Kuhn also talking about this in The Structure of Scientific Revolutions. He mentioned how textbooks usually present the history of their science as a linear accumulation of knowledge, with key figures making their particular contributions and later scientists simply adding on to this work. Atomism is a notable example, since our concept of what an atom is has changed drastically since it was first suggested by pre-Socratic thinkers like Democritus, whose understanding of an atom was of something indivisible; yet today we speak of sub-atomic particles. People like Boyle (c. 1661) later spoke of corpuscles, like atoms in the sense that they are fundamental constituents of matter, but not necessarily indivisible. Dalton (c. 1808) approached our modern understanding of atoms with his law of multiple proportions, but it solved a very different problem than the one addressed by the Bohr model (c. 1913). This is to say nothing of the post-quantum understanding of the atom. So one might justifiably wonder why we use the same word "atom" for a concept which has taken on such disparate meanings over time. Are we faithfully reinterpreting the work of past scientists, updated via modern observations, or are we anachronistically imposing our modern predilections onto past science in order to present a simple narrative?

R: I don't have a clear answer to this question, but here are a few thoughts. The big problem, it seems to me, is when we reinterpret the work of past thinkers using modern terminology, but then retain their conclusions, stated in this new language, without appropriate modification. My suspicion is that this is not uncommon with reinterpretations of Marx, for example. In science, however, I think we tend to update our terminology and understanding of concepts only when they're accompanied by new conclusions, so this is not as much of a concern. For instance, with advancements in our understanding of the basis of heredity (i.e. genetics), we updated our understanding of evolution to mean "change in the frequency of alleles in a population", and so we accordingly gave up Darwin's theory of pangenesis as an account of heredity, but retained the word "evolution".

R: Another concern is whether we are misleading new students, when we teach them about old concepts using new terminology, about the kinds of questions with which these thinkers were concerned and the way in which they approached them. This is almost definitely the case, but the tradeoff is that we can learn from past thinkers with much less effort. I'm glad that I can learn about Diophantine equations without having to learn the notation of Diophantus! So it's not clear that this is a practice which we should change, but maybe it would be good to at least point out what we're doing.

D: I tend to agree with the notion of a simple narrative, but it's certainly a complex question. The analogy between physics and economics in this case is imperfect, because in the first instance antiquated terms are being consciously preserved across history by the users who adopt them to describe new phenomena. Atomos, for example, is Democritus' word, and subsequent thinkers have borrowed it for their own purposes. It has the meaning that each of them has endowed it with. Whereas something like general equilibrium, say, is a term developed in a more recent economic era (the late nineteenth century) and then grafted onto the works of Smith.

D: Conceptually speaking, I do think there is a simple narrative aspect to it. Using the same words doesn't help, but I would say that the narrative in this case is a consequence of language and not the reason for the shared terminology. I think we have spoken before and largely agreed about this myth of cumulative progress.

D: As for educating students, I am generally of the opinion that the marginal return to education in the history of science is very low. I think that in general, people tend to compartmentalize what they know about history from what they do in practice. I probably know more than the median economist about the history of economic thought, but it doesn't help me do anything on a daily basis. Now, as a historian the question is different; interpretations from over a century ago are, if not completely valid, still provocative talking points today. Max Weber still gets papers arguing that he's wrong.

R: I agree that it's important to distinguish between those concepts which have been preserved across time organically by the practitioners themselves, as with "atom", and those concepts which are artificially introduced into the work of past thinkers, as with your example of general equilibrium. That being said, the effect is the same: we're constructing a simple narrative whereby we pretend that people have been dealing with essentially the same concepts for hundreds of years, just slightly modified to reflect developments in our knowledge and understanding. As for whether the narrative is a consequence of the language, or the language a consequence of the narrative, I'm sure that in practice they develop alongside each other. Regarding education, I think that studying the history and philosophy of science has some immediate practical value, but with strongly diminishing returns. So it's probably good for scientists to understand why the notion of a singular scientific method is so misleading, to realize that the history of science is not a story of gradual accumulation of knowledge, and to appreciate that there is no definite, unchanging concept of what is "natural/material"; rather, this notion is constantly evolving with the latest developments in scientific theory. But all of this can be learned in a couple of lectures, and additional details don't add much value.

D: On the subject of education, I think it can be useful to spend a little bit of time talking about the history of mistakes, and ensuring that students know that we are still prone to them now, if not equally so. For example, the replication crisis in the behavioral sciences should, if we are serious about getting better at applied statistics and experimentation, be taught to every future generation of empirical economists, sociologists, and social scientists in general until the end of eternity.

D: I wonder how much we are really susceptible to this simple narrative. Obviously it is true that we believe there to be greater similarities between the Greek version of the atom and our own than actually exist. But I can't think of many contexts in which this issue is taught where the nuance is not discussed. You would really only learn about Greek atomism in a philosophy or philosophy of science class, and I cannot think of a reputable professor who would not draw the distinction.

D: I don't really argue with you here, I'm just musing.

R: I agree that a physics professor probably wouldn't mention Greek atomism, and if he did, he would surely point out its differences from the modern conception. But there are also significant differences between our modern conception and Dalton's conception, or Bohr's conception, potentially enough to warrant a different classification scheme. Yet I think the practical value of retaining the same name for the sake of simplicity in teaching probably outweighs the dangers of overloading terminology.

R: A similar point can be made about the concept of the gene in biology, by the way. Its postulation, based on the observations of Gregor Mendel, was highly inferential and vague. Since then, we've gained insights into the physical structure of DNA which allow us to more precisely define genes, yet this introduced difficulties in reconciling our new understanding with Mendel's work, so we amended the concept but retained the term. Since then, even more difficulties have been revealed with existing definitions, leading to further modification (e.g. the one gene-one enzyme hypothesis was challenged by the discovery of genes which can encode multiple proteins, and there were further challenges with the discovery of overlapping genes), whilst retaining the term. I think a similar case can be made for the concept of species too. Overall, this process seems to be nearly ubiquitous in scientific practice, even though I think it receives little attention in most textbooks, which are silently updated with the newest definitions.

D: I think I may be slightly missing your point: are you saying that our concepts need to be flexible, and thus our terms will be changing constantly if we are not flexible with their definitions?

D: i.e. we may be making a definition in the present that may be invalidated by future knowledge, so discarding a term every time you're forced to refine a concept is a recipe for chaos?

R: Yes, that's basically what I'm saying. The consequence of this is that we are presented a misleadingly simple narrative of the history of science, since the same term is used to describe sometimes wildly different concepts; the benefit is that we don't need to learn a new language every time we wish to learn from past thinkers. So long as the differences between the modern conception and previous conceptions of these terms, such as atoms or genes, are made clear, it seems to me quite useful to simply update the meanings of these terms as our knowledge advances, rather than to invent new terms, even if this involves a bit of misrepresentation of the history (e.g. I was surprised to learn that Darwin's conception of natural selection was intimately related to a physical struggle for existence and the elimination of the weak, because this interpretation is explicitly cautioned against in contemporary treatments of natural selection following the modern evolutionary synthesis).

D: My general view, I think, is that the history of science must be useful to the scientist. That is, the true history of science is for historians, but whatever is useful to the modern scientist, who is ill-equipped to understand the nuances of history, should be kept and everything else scrapped. If a term is misleading but useful it should be kept, so long as it doesn't mislead from a scientific point of view.

D: I'm not suggesting that a science should necessarily forget its founders, although I think the loss from doing so is small. Instead, I am suggesting that a science should remember its founders selectively. We should remember a canon of heroes and villains who serve as instructive parables, like the American founding fathers, Churchill, or Napoleon.

R: I think I'm mostly in agreement with you. As much as I regret the misrepresentation of the history of science involved in the overloading of scientific concepts, it's clearly very useful for the working scientist, and the difficulties only come into play when asking questions about the history of science (e.g. what was Darwin's conception of natural selection?), not questions about science itself. As for whether the teaching of science should focus its attention on a small selection of its founders, I'm more conflicted about this… Presumably you're suggesting that we should just remember those founders who were mostly right (and pretend that they had it completely correct, in terms of the modern understanding), and not mention the names of those who made valiant and potentially important contributions which were later overturned. But what about Lamarck? Isn't it quite instructive to know his name and theory, especially as contrasted with that of Darwin? I'm tempted also to suggest that we should forgo mentioning names altogether, and just talk about the theories as understood currently, so that we don't distort the history of science; but it's clearly useful to associate theories with names and historical landmarks, so I don't think this suggestion is tenable.

D: I wasn't suggesting that we only remember like three people for each discipline, but rather that we be selective about what we do remember. We shouldn't be nagging our scientists about the extent to which previous discoverers who were wrong were actually a little bit right, or about previous discoverers who made contributions that were flawed or sent us down dead ends. A cumulative picture of science is useful, as is the notion of debunking, because it encourages cooperation between researchers and a healthy dose of skepticism about existing theories which motivates people to test them empirically. I do not think that we should focus on names. However, you must acknowledge that having some great biographical figures is a motivation for entering and being more passionate about the discipline. Emulating your heroes is a great reason to do science.

D: As regards Lamarck, we should remember him as a failure, no matter how close he got to being right, unless we rebound and restore Lamarckism someday. I don't know how prone biology is to quackery, but any encouragement of heterodoxy of this kind in economics is only an encouragement of wasted time.

R: I agree with what you say about not nagging scientists with historical details, and also about the psychological importance of associating scientific achievements with persons (or small groups of people), even when this is somewhat misleading (as it usually is).

R: Regarding Lamarck, I think your assessment is too harsh. After all, the neo-Lamarckians played an important role in contributing to the modern evolutionary synthesis, and don't forget that Darwin also believed in the inheritance of acquired characters (it was considered an obvious deduction from the observed interplay between structure, function, and environment, which couldn't really be accounted for by natural selection until it was supplemented and refined by an understanding of the basis of heredity and variation, i.e. genetics). It's also worth pointing out that epigenetics now supports the inheritance of acquired characters, although only in restricted contexts; and I don't want to be misinterpreted as saying that epigenetics vindicates Lamarckism, because it doesn't (Lamarckism requires its peculiar form of inheritance to be an inherent feature of all living things, and for it to be a self-sufficient cause of evolution, neither of which is provided by epigenetics).

R: So Lamarck was wrong in some pretty fundamental ways, but it seems clearly wrong to call him a failure, given that he was the first to take the leap of really defending evolution as a fact, and his suggestions were important counterbalances to the excesses of the neo-Darwinians, who attempted to reduce all adaptation to the mechanism of natural selection as understood by Darwin.

R: By the way, I may have slightly misinterpreted the point of your remarks about Lamarck. If all you're saying is that it's not fruitful to continue the project of neo-Lamarckism today, and that it should be regarded as a historical curiosity of some significance, then I'm in agreement with you. But I would still object to calling Lamarck a failure

D: I'll respond to the rest tomorrow, but on Lamarck I was sort of saying the last point. I do not regard him as a failure, and I was too strong in suggesting that we should regard him as a failure. What I meant is that we should take him as an example of honest mistakes that made sense at the time.

R: Yes, that makes sense. Probably I reacted too strongly to the word failure in your earlier remarks

D: Looking back, I think I was speaking a bit too strongly and trying to be a bit provocative. Darwin shouldn't be called a failure because his ideas were flawed, and he shouldn't be called the greatest hero because our modern evolutionary tradition is drawn from his heritage. But I am rather calling for us to remember the best parts of Darwin as an inspiration to future generations of biologists. Reading The Voyage of the Beagle or The Origin of Species is a transformative experience, kind of like how watching Jacques Cousteau on television made my dad want to be a marine biologist.

R: Fair enough. I agree with the importance of having heroes in the history of science. As long as we fairly acknowledge their flaws as well, then we escape the trap of idolatry. I wonder whether identifying such heroes will become increasingly difficult as science (as well as mathematics) becomes more collaborative. I know that this is something which awards committees have already encountered, such as with the discovery of the Higgs boson. One of the key papers had 5,154 authors, but of course the Nobel Prize only went to two people.

D: Didn't know that. That's insane. I was thinking about the collaboration issue and whether that sets a bad precedent to speak of heroes. I think it's probably useful to emphasize that heroes did cooperate, like Darwin with Wallace.

R: Right, collaboration has always existed, but just not at the current scale. It's probably more accurate to speak of lone scientific geniuses in the past like Galileo, Newton, Maxwell, and Darwin. Even though they did collaborate with others, it's not usually too misleading to say that they made incredible advances by themselves. It's difficult to think of anybody like that in the last century

R: Even Einstein, though he made great advances, particularly with general relativity, is somewhat unfairly singled out among his colleagues upon whose work he heavily depended, such as Bohr, Heisenberg, and Schrödinger.

D: Now, I think we've discussed enough inspiration. What remains to be settled is the history of science that is most productive for achieving results. Now, an inspirational history of science may achieve better results through increasing the mass of practitioners. But the question remains whether increased mass actually produces negative returns at some point. I think the answer is no, so long as there are high enough quality standards for entry. But that is no guarantee.

R: My impression is that the distorted presentation of the history of science given in most science textbooks is not that directly related to issues with scientific practice. I think most scientific work involves working within a paradigm and elaborating its details, which requires careful experimentation and strong analytical skills (both of which are emphasized in textbooks), but not necessarily groundbreaking ingenuity or willingness to challenge prevailing dogmas (precisely those things which are underemphasized in the textbook's history of science). I'm sure that someone like Feyerabend would strongly disagree with me, since he actively encourages iconoclasm and heterodoxy (he calls it "democracy") in science. But I see it like this: if we emphasize agreement in the history of science, then we are sometimes delayed in adopting correct theories which challenge the existing paradigm, but we retain a general order (i.e. consensus) on basic points which facilitates the discovery of new facts, which are then amenable to further refinement or later dismissal in light of new facts; if we emphasize disagreement and revolution in the history of science, then it's difficult to see how we would proceed past the stage of protoscience.

R: So the history of science which best supports scientific achievement seems to be that which emphasizes agreement and conservatism (sometimes even dogmatism), but which allows for the introduction of new ideas provided enough prodding. I'm reminded of the period around the turn of the 20th century when evolutionary thought was in a state of turmoil, which Huxley memorably termed "the eclipse of Darwinism". There were the neo-Darwinists vs. neo-Lamarckians vs. orthogenesists vs. mutationists vs. finalists, all disagreeing quite adamantly and attempting to gather evidence for their preferred theory. The disputes were eventually resolved (mostly) with the modern evolutionary synthesis, facilitated by advancements in the understanding of the basis of heredity and variation (i.e. genetics). But until then, many scientists working in that field at that time expressed defeatism regarding the hope of ever deciding on the right theory of evolution. I fear that a history of science which emphasizes disagreement would render ordinary periods of science like that of "the eclipse of Darwinism".

R: On the other hand, it's quite true that the modern evolutionary synthesis would not have occurred if we didn't tolerate the uncomfortable period of "the eclipse of Darwinism". Instead, we might have comfortably accepted one of the competing (and wrong) theories and proceeded until we ran into a brick wall and were forced to start anew. So the important distinction seems to be between artificial and natural disagreement among scientists, i.e., is the disagreement due to a genuine uncertainty regarding the interpretation of the facts (natural) or is the disagreement due to contrarians who want to challenge the accepted beliefs (artificial)? The eclipse of Darwinism reflects a period of natural disagreement, whereas the state of science between Aristotle and Galileo reflects something more like artificial agreement (and so could have benefited from some disagreement). I guess the difficult question is as to what social conditions facilitate natural disagreement (which may begin as artificially stimulated) whilst allowing for a return to general agreement once the natural disagreement is resolved and all that remain are contrarians. I don't think that our current institutions do such a bad job at this, but they probably lean towards shouting down disagreers a little too much.

D: I'm struggling with your distinction between natural and artificial disagreement, because to my mind it seems to imply a sort of outside view which you've rightly criticized me for holding. That we can only know post hoc that one disagreement was productive, and another was not, and even then we are not sure. We can perhaps look to the motives of the people disagreeing, and to the basis for disagreement. If the basis for disagreement is an absence of knowledge, then perhaps this is to be considered natural? Whereas attempts to challenge established theory without adequate evidence are not?

D: I agree about institutions; however, I worry that this is perhaps because of my own personal status quo bias. I freely admit to that. But I think that institutions that make people pay a price for attempting to disrupt consensus, and raise the bar for admission to the consensus, are generally a good thing. To make a comparison to American politics, it appears that the bar is simply too low for participation in both discourse and choice. You can vote from the comfort of your own home, spew ridiculous opinions from the comfort of your own home, and do so anonymously, such that you never pay a price for failing to collect sufficient information on the subjects about which you opine. But universities don't let you do that.

R: By natural disagreement, I mean to refer to cases where the known facts are legitimately ambiguous with respect to a range of plausible theories which explain them. In such cases, I think we should encourage the disagreement in the hopes of shedding light on new information which will ultimately allow us to decide on the right theory (which may end up being entirely different from the currently available proposals). By artificial disagreement, I mean to refer to cases where the known facts approach something like a convergence on a single well-accepted theory, yet, inevitably, due to the underdetermination of empirical theories by facts, some people insist on proposing alternative theories which are substantially incompatible with the prevailing theory (i.e. positing totally different objects and/or natural laws) yet explain the known facts equally well. In such cases, in the interest of advancing normal science, I think we should ignore these alternative theories unless and until they are shown to be (significantly) more empirically adequate. Let there be the few outliers who persist with their pet theories; maybe they will die off, or maybe they will eventually prove successful, in which case my conservatism would have delayed their acceptance, but I'm alright with that tradeoff. In retrospect, perhaps natural and artificial are not the best descriptors for my concepts. My thinking was that natural disagreement stems "naturally" from the ambiguity in the facts, whereas artificial disagreement is created "artificially" by the contrarian tendencies of some scientists. Let me know if you can think of better labels.

R: About institutions, we seem to share a general conservatism, and I also acknowledge that this is a value judgment, not plainly decided by the facts alone. As for the analogy to politics, I'm always hesitant to speak of a competency to vote, because I think that some of the most important areas in which the "voice of the people" is needed are not those which require any particular expertise, but simply a certain experience / perspective. For instance, I don't think that being able to recognize the deprivation of civil rights in the US during the last century required any expertise in social science; it just required you to be black. Obviously the solution to these problems usually requires lots of careful weighing of considerations, which laypeople motivated by passions are terrible at. But that's why we don't have a direct democracy, which would be a travesty. Yet, it's still important for those laypeople to be heard, lest we risk ignoring issues which are pertinent to them. So the "voice of the people" seems to be important for pointing out problems which would otherwise be ignored, but not so good at providing solutions to these or other problems.

D: My point is not exactly about the education required to be a voter, although I was musing this morning about making college free and then mandating that voters be college graduates, and wondering what kinds of political economy results that might bring. I was also considering how you could measure the economic effects of franchise extensions, and try to identify causally whether economic policy improved when women got the vote, or when poor people got the vote.

D: I was talking more about making voting costly in some way such that people took the time to invest in their decisions, acquiring information about the issues at hand and the politicians such that they had a better opportunity of voting for the common good. Because most people vote for the common good as they perceive it, and not for themselves.

D: See Bryan Caplan, The Myth of the Rational Voter.

D: As for your distinction between natural and artificial disagreement, I remain uncomfortable, but I think that you may have a distinction in your head that is stronger than the one that I'm seeing on paper, and at the very least I do understand the intuitive logic. Natural and artificial kind of work as terms, especially the latter in its most literal sense, as artifice meaning fabrication or construction.

D: But perhaps the right words are simply ambiguity versus contrarianism. One stems from the absence of knowledge, the other from a desire to contradict received wisdom.

14. Canadian Fur Trade

R: Just finished reading your paper on the Canadian fur trade. It's well written, with a clear and interesting thesis and methodology, and remarkably suggestive findings (despite the many appropriate qualifications you make in the paper). I thought that "distance to nearest enemy post" was a clever and surprisingly simple proxy measure of competition, and Section 2 did a good job of summarizing the relevant historical context for someone with minimal background information. I was also surprised to learn how accommodating the Europeans were with native traders and the active role that native traders played in negotiation, since it challenges the characterization of them as merely passive victims of European colonialism which I'm more used to. I have just a few lingering questions: Can your conclusion be interpreted to suggest that HBC and NWC would have mutually benefited from maintaining independent monopolies over distinct regions of Canada rather than engaging in competition? If so, is this an example of the Prisoner's Dilemma? Since you measure prices as a percentage of the company's comparative standard, does this control for fluctuations in demand from Europe for native goods, which might otherwise influence the relative prices paid to natives (apologies if this question is malformed, I'm not sure that I have an adequate grasp of the economic concepts)?

D: Just briefly on the separate monopoly areas question: Yes, this would have been the optimal outcome, and both companies tried to negotiate a solution of this form. One of the puzzles to be explained is why they failed to come to an agreement.

D: I don't think it would be a Prisoner's Dilemma, however, because I'm not sure that the Nash equilibrium here is actually defect.
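
D: To make that concrete, here's a minimal sketch with purely hypothetical payoffs (not numbers from the paper): if competing were each company's best response to the other's competing, we'd have a true Prisoner's Dilemma; whether the actual payoffs had that structure is exactly what I'm unsure of.

```python
# Hypothetical 2x2 game between HBC and NWC. Actions: "M" = respect
# separate monopoly areas, "C" = compete. Payoffs are illustrative
# assumptions only: payoffs[(row, col)] = (row_payoff, col_payoff).
payoffs = {
    ("M", "M"): (3, 3),  # joint monopoly profits
    ("M", "C"): (0, 4),  # the competitor poaches the trade
    ("C", "M"): (4, 0),
    ("C", "C"): (1, 1),  # competition dissipates profits
}

def is_nash(row, col):
    """True if neither player gains by unilaterally switching actions."""
    r, c = payoffs[(row, col)]
    return (all(payoffs[(a, col)][0] <= r for a in "MC")
            and all(payoffs[(row, a)][1] <= c for a in "MC"))

print(is_nash("C", "C"))  # True with these payoffs: a Prisoner's Dilemma
print(is_nash("M", "M"))  # False with these payoffs
```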

D: On the question of fluctuations in European demand, no, we do not control for that. I am in the process of collecting data on European prices, which have been used by other authors for the pre-1763 period but have not been transcribed subsequently. That could definitely have an effect on the relative price paid, although it's not clear in which direction the effect should go.

D: The willingness of the European traders to accommodate Native Americans is one of the motivating forces behind this paper, because it stands in such stark contrast to the evils perpetrated on natives in the United States. Or, indeed, upon natives by imperial regimes or concessionary companies around the world.

R: That makes sense about fluctuations in European demand. About the accommodations, you're right that it's a stark contrast to the treatment of natives elsewhere. However, if the interpretation of A.J. Ray is correct, that excessive gift-giving, especially in the form of alcohol, had the effect of making natives dependent on European trade, then the contrast becomes less extreme.

D: Still fairly extreme, in my opinion. The natives accepted the alcohol of their own free will—along with tobacco, it was one of their favorite products. There's also evidence to suggest that contact with the Europeans changed living standards for the better, for example by supplying metal tools and cooking equipment. Whereas the damage in the United States was done intentionally, for the sake of acquiring land, the damage that was done in Canada was done accidentally, and only resulted because of the incompatible natures of Western European and native societies.

D: Even the competitive provision of alcohol/tobacco was of another order of magnitude.

R: Yes, that's true. By the way, the damage done by the introduction of European pathogens in Canada only really took effect after the fur trade had died down, right? That's why it wasn't relevant for your analysis?

D: Well, more like before and after. I think most of the die-off occurred prior to the advent of the fur trade, but around the early 19th century there was an epidemic of smallpox that proved quite destructive. However, it was mitigated by the company's efforts to vaccinate.

D: To be honest, it's also not that important for our analysis because I just don't have the data right now.

R: I see. It's interesting how business interests seemed to facilitate positive relations between European and native traders, to the extent of motivating vaccination efforts. Do you know why this didn't occur in the American colonies? Were business incentives not so strong there?

D: You may have missed this part in the paper, but in the discussion section we talk about a model whereby participants in unequal trade fare better when they provide a service that cannot easily be replicated by another party. In this case, they are much less likely to be expropriated. I think in Canada, the natives' ability to navigate inland and make contact with other tribes, as well as their long expertise in beaver trapping, gave them a comparative advantage that could not be replicated by the small number of European traders, who were basically terrified of moving inland anyway.

R: Yes, that makes sense how the principle of comparative advantage made trade mutually advantageous in Canada. What I'm wondering is why a similar explanation didn't apply to the American colonies.

D: Ah. Easier conditions for settlement, different reasons for settlement (i.e. building a new England vs. getting furs), and different economic bases (trade vs. agriculture) would seem to be at the heart of it. The HBC was never trying to settle, so it never really had any land hunger.

15. Theoretical Virtues, IBE

D: The conversation was, I think, about the different ways to do historical research, comparing Marxist and economic history paradigms (or rather not being able to compare them).

D: Ruling out programs based on their results rather than pre-existing premises about why they should/shouldn't work.

R: Interesting. That's definitely an important question in the philosophy of science: What role should theoretical virtues, like simplicity or parsimony, play in our evaluation of theories? Typically, I see them used as tie-breakers for theories which explain the known observations roughly equally well. But, theoretical virtues can also play the role of motivating scientists to consider theories worth pursuing, in order to ultimately demonstrate their empirical adequacy/superiority. This was the case with heliocentrism, of course

D: I think I completely agree? I was making a similar sort of argument to a friend the other day, who was asserting that messianism was rational. Sure, we have as much evidence for a world with a messiah as for one without one, and maybe messianism would make us happier/behave better, but that's the least economical explanation for the universe—you introduce additional mechanisms and assumptions to the fabric of reality for which we have no evidence, not even observed patterns.

D: The "tie-breaker" formulation makes a lot of practical sense to me.

R: I agree with your messiah example. Another way of thinking about it is in Bayesian terms. Let E be all the available evidence to be accounted for, let M be the messiah hypothesis, and let N be some incompatible hypothesis, e.g. naturalism. Then even though P(E | M) = P(E | N)—meaning that each hypothesis explains the evidence E equally well—we should side with whichever hypothesis has the higher prior / intrinsic probability, P(M) or P(N), since that would maximize our posterior probability, P(M | E) or P(N | E), according to Bayes' rule.

R: The main difficulty, of course, is when we have competing hypotheses where one explains some piece of evidence better than the other, but its prior probability is lower. So we have to try and balance these points when evaluating the posterior probability of each hypothesis, which is very difficult without an explicit quantitative model for determining the probabilities. Additionally, these difficulties are compounded when we incorporate multiple pieces of evidence, which may favor various competing hypotheses or have subtle relationships which make it difficult to identify probabilistic independence
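
R: For concreteness, here's a minimal numerical sketch of that balancing act (all the priors and likelihoods are made-up numbers, nothing more):

```python
# Posterior comparison via Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
def posterior(prior, likelihood, p_evidence):
    return likelihood * prior / p_evidence

# Case 1: equal likelihoods, so the posterior just tracks the prior.
p_m, p_n = 0.2, 0.8            # illustrative priors for M and N
lik_m, lik_n = 0.5, 0.5        # P(E | M) = P(E | N)
p_e = lik_m * p_m + lik_n * p_n
print(posterior(p_m, lik_m, p_e), posterior(p_n, lik_n, p_e))  # 0.2, 0.8

# Case 2: M explains the evidence better but has the lower prior;
# the posterior has to weigh the two factors against each other.
lik_m, lik_n = 0.9, 0.3
p_e = lik_m * p_m + lik_n * p_n
print(posterior(p_m, lik_m, p_e), posterior(p_n, lik_n, p_e))  # ~0.43, ~0.57
```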

D: That's a really interesting perspective. However, my interlocutor would probably say that we cannot determine the prior probabilities of messianism and non-messianism.

D: In effect, he would probably claim that we've just assumed what we are trying to prove.

R: Hmm, then you would probably have to make the case for a low posterior probability on messianism after starting with equal prior probabilities for it and some competing hypothesis. So you would have to find pieces of evidence which favor the competing hypothesis, say naturalism. This could be things like the gratuitous suffering which we observe ubiquitously and can even infer about the past given the violent mechanisms of evolution, which presumably would be unexpected given a loving messiah but is totally expected on naturalism.

R: But, if your interlocutor insists that we cannot even begin to calculate the posterior probabilities because we can have no knowledge whatsoever about the prior probabilities, then I would challenge him on this point. For example, it seems that we can know that for two independent hypotheses, A and B, we have P(A & B) < P(A) as long as P(B) < 1, which should always be true if we are (as we should be) fallibilists about knowledge.
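
R: Spelled out, assuming A and B are independent and P(B) < 1:

```latex
P(A \land B) = P(A)\,P(B) < P(A) \cdot 1 = P(A)
```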

R: Additionally, it seems like we are reasonable in lowering the prior probability of hypotheses which lack theoretical virtues. For example, if they are unnecessarily complicated, or ad hoc, or conform to some psychological bias, like motivated reasoning or wishful thinking, then it seems reasonable to lower our prior probability in these hypotheses. If we can't, then it would be impossible for us to distinguish between reasonable scientific hypotheses and crazy ones which were maliciously contrived in order to fit all the known data but lack overall cogency, predictive utility, simplicity, parsimony, intuitive plausibility, and so on.

D: Interesting to frame this discussion in terms of evidence for naturalism as opposed to against —insert bad philosophy—. Although suffering really is just evidence against a messiah—one has to make greater logical leaps and contortions to make the theory and 'reality' fit together. I like your probability argument, but I think we may have to flesh it out some more. I presume you are arguing that A = see evidence of the world and B = existence of a messiah/creator. But in this case no creationist would accept the independence of A and B: the only reason we observe A is because of B, so P(A & B) isn't necessarily less than P(A). So my friend is constantly concerned with "complicating" (blasted deconstructionist literary major) our notions of rationality and bias—he sees value in the illogic of thought given the holes in other parts of logical reasoning. Indeed, the notion of the word "reasonable" disgusts him, in a semi-Kuhnian (he hasn't read it) sense that research programmes are incommensurable and self-consistent from the inside. I don't think he's satisfied with our discussion of practical approaches to understanding the world and being "less wrong," as he doesn't see that as being any different from the impossible effort to become "more right."

R: Regarding the distinction between arguing for naturalism vs. arguing against some incompatible hypothesis, as long as we are following the probabilistic style of reasoning which I've described, there's no real difference between the two. That's because any evidence which favors one of two competing hypotheses will (by definition) oppose the alternative hypothesis, and vice versa; so how we choose to describe it doesn't change the calculation. For A and B, I actually wasn't thinking of any specific hypotheses. Rather, I was just using those letters to denote two hypotheses which are assumed to be independent. Under that assumption (plus fallibilism about knowledge), it follows that P(A & B) < P(A), which proves that we can (at least sometimes) in fact have knowledge about the relative prior probabilities of two competing hypotheses; in this case, we've shown that more complex hypotheses have lower intrinsic probability than simpler hypotheses. We could try to apply it to the naturalism vs. messianism case by arguing that messianism is more complex since it involves positing the existence of a messiah in addition to the natural laws, whereas naturalism only requires positing the existence of the natural laws. But this formulation is quite crude, and so it's usually more fruitful to just argue about posterior probabilities and not quibble over intrinsic probabilities. As for deconstructing rationality, I'm not quite sure what to say… I'd be interested to ask him directly whether he believes in such a notion as truth. If yes, then I would just say that rationality is a process of thinking which is aimed towards the truth. If no, then I would ask him whether he believes that the things that he's saying have meaning. If yes, then I would argue that meaning presupposes a theory of truth, since to assert <p> is simply to assert <p is true>, so that as long as <p> is meaningful, so too must <p is true> be meaningful, hence there must be some notion of truth. If no, then I guess I would just ignore anything he says since, by self-admission, it has no meaning.

D: Certainly it is not fruitful to quibble over intrinsic probabilities if you do not believe such things exist. I find your refutation extremely compelling, but I remain unconvinced that one can really halt a skeptic. You could probably deal with this particular skeptic, because he's not particularly fixed in his opinions and, like most people of his type, prefers to poke holes rather than stand on any kind of ground.

D: He's particularly interested in Derrida, which leads him to question the intrinsic meaning of language altogether. The last line seems like a bit of a gotcha, though. If language really did not have a meaning but someone was trying to convey this impression to you, then you would be unjustified in ignoring what he said.

D: I guess the question might be to try to understand, moving away from my crazy friend, whether it matters if language that is perhaps inherently meaningless can have a meaningful intent, such that multiple different expressions might be used to convey the same central concept. Whatever we argue, is that real? What are our concepts?

D: I guess my suggestion, ill-informed as it is, would be that an inherently meaningless language can still be meaningful if it is paired with meaningful intentions. But can we separate intentions from language?

R: Typically, I think of meaning being determined by usage. On that view, all language is inherently meaningless, as it depends for its meaning on our decision to use it in certain ways. But I think that you're suggesting something more concerning. What if two people appear to use the same term in the same way, but internally attach different meanings to it? Here's a classic example: What if, due to some optical quirk of mine, my internal representation of "red" corresponds to your internal representation of "blue" and vice versa? So when we point to the same object and agree that it's "red", or agree that it's "blue", it seems that we're using the terms in the same way, and hence they mean the same thing, but the mental experiences to which these terms correspond are actually the opposite between us.

R: I think that such scenarios are theoretically possible but in actuality quite implausible. That's because of the interrelated meanings of terms. For example, when you eventually ask me whether some object is closer to red or orange, I'll look at you with confusion, insisting that it looks nothing like "red" (really blue) and so is clearly closer to orange. We'll probably quibble back and forth, comparing different colors (e.g. I say "red" is similar to violet) until we realize that we've been using "red" and "blue" in opposite ways. To avoid this scenario, we would need to suppose that my optical quirks actually reverse the entire spectrum of visual light, so that we will agree in our statements comparing colors. But now our scenario involves many more postulates, and so it's much less likely. Therefore, these kinds of hidden miscommunications (which aren't immediately revealed through ordinary language use) are either unstable or unlikely.

R: There's a final, more damning, interpretation of your worry, which is that language does not gain its meaning through a correspondence with certain concepts (as established through usage); rather, language gains its meaning through coherence alone. On this view, there is no sense in asking which internal mental representations correspond to the terms "red" or "blue", because these terms are only defined in relation to other terms, which are themselves defined in relation to still more terms, and so on. We're left with a complex web of associations which form an internal semantic structure through formal relations and rules of inference. For example, the terms "1", "2", "3", and "more than" are defined in such a way that <2 is more than 1> and <3 is more than 2> are "true" (within this language), and we can infer that <3 is more than 1> is "true" whereas <1 is more than 3> is "false" via the transitivity of "more than" in this language, which effectively establishes its "meaning". Importantly, on this view, "true" or "false" don't (necessarily) correspond to any experiences or facts about the world; a proposition is "true" simply if it's coherent with respect to the individual "meanings" (i.e. usages) of the constituent terms, and "false" if it's incoherent.
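
R: Here's a toy sketch of that coherence picture, just to make the structure explicit (the base facts and the transitivity rule are stipulated; the terms have no content beyond them):

```python
# "Truth" in this toy language is just derivability from the stipulated
# usage of "more than" plus its transitivity rule.
base_facts = {("2", "1"), ("3", "2")}  # <2 more than 1>, <3 more than 2>
terms = {t for pair in base_facts for t in pair}

def more_than(x, y):
    """Derivable directly, or via transitivity through some term z."""
    return (x, y) in base_facts or any(
        (x, z) in base_facts and more_than(z, y) for z in terms
    )

print(more_than("3", "1"))  # True: coherent with the stipulated usage
print(more_than("1", "3"))  # False: incoherent, hence "false"
```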

R: Now, I think that this view of language is at least plausible, but it has the potentially worrying consequence that language has no definite meaning, even after fixing its usage (i.e. specifying the meaningful terms and their formal relations/rules of inference). That's because the usage is determined only by the formal structure of the language, which is independent of any correspondence between the terms and certain experiences/mental representations. At this point, my comments are very speculative, but I think that we may be able to overcome this worry by noting that our experiences/mental representations have a structure themselves too (e.g. many experiences are hierarchical, like the increasing experience of heaviness when carrying 1 lb, 2 lbs, 3 lbs, etc.); therefore, any language which attempts to describe these experiences must share that structure in the relevant terms. Presumably, the unimaginable complexity of our experiences precludes nearly all possible languages from being adequate for the purpose of describing our experiences, so that, in effect, there's only one language whose structure perfectly corresponds to the structure of our experiences, in which case we can fix its meaning accordingly. Thus, we've rescued the definite meaning of language.

D: If two people label an object red, but the sensory experience corresponding to "red" for one individual is the analog of "blue" for the other person, then I would say that the red-blue internal distinction is… potentially meaningless? Unless the only attribute possessed by color is our perception of it, which could be fair. Otherwise, however, what seems to be important is that we can have a consistent naming scheme upon which everyone can agree. I think it's probably obvious that my sensory experiences are different from yours, even abstracting from how well our respective glasses work.

D: Let's talk about this "ordinal" theory of language—i.e. that the meanings of words lie in their relations with each other. What is orange? That which is some mixture of red and yellow. What is red? Or blue? Nothing more definite than "one." I think you may be close to rescuing us with your "hierarchical experiences." But we know that observation is theory-dependent, and that our words for things give structure to our concepts. I will often have an epiphany when someone "tells me the word I was looking for," because now I can see and understand clearly. Because the word structured my thoughts and experiences in a way that makes sense. In response to the weights example, I think people would fail to notice the difference between 1/2/3 lbs if there were no labels on the objects themselves. Certainly this would be the case for objects of 1.3, 1.5, and 1.25 lbs. This experience would be undifferentiated, but introducing the weights would create an artificial hierarchy. Just as people overrate the quality of paintings made by acknowledged masters if they know this to be the case beforehand.

D: As to the "single language" postulate with which you close, I am sympathetic but want to dive deeper. There are hundreds of languages that (have) capture(d) human experience. Languages are shaped by eccentricities of geography, random chance, cultural patterns, etc., and they lead different people to think differently about the same things. Was it necessary that dinosaurs should have been called lizards, not birds? And our perception of the former case leads us to specific pictures of allosaurs, T-rexes, etc. that may not correspond to actual reality. Perhaps different languages presently have different competencies in different situations, such that in combination and compromise they more accurately describe the world. Think of the French and Latin expressions that have been brought into English for which there is just no translation. Maybe some intersection of (modified) languages gives a better correspondence. But is there a necessary one?

R: You make a very interesting point about the active role that language sometimes plays in structuring our experiences. I hadn't considered it before, but it seems true that our experiences are sometimes vague until they've been described in language, which in effect resolves the ambiguity in our experience. But, there are many possible ways to resolve this ambiguity, so that whichever terms we end up choosing to describe our experience don't merely reflect some structure already present in the vague experience, but in fact also introduce some new structure into our final, disambiguated experience. If this is true, then my simple model where the structure of language merely reflects the structure of experience won't work, but it will still be the case that the structure of (vague) experience limits the possible languages which are capable of (unambiguously) describing that experience.

R: I think we can resolve the issue where the differences in weight are too granular to be noticeable, which appears to illustrate an incongruence between the structure of our experience (i.e. undifferentiated) and the structure of the corresponding language (i.e. hierarchical). We simply need to realize that the numerical descriptions of the weights aren't meant to describe our experience of their heaviness, but rather a fact about the physical constitution of the weights. As for describing our experience of their heaviness, the description "equally heavy" will suffice, so the language is indeed equipped with the necessary structure to describe our experiences, so long as we use the right terms.

R: Regarding the "single language" postulate, which I readily admit is speculative, it seems that we might be able to hold on to it by suggesting that there really is just one language (understood as a collection of meaningful terms equipped with semantic relations and rules of inference) which perfectly captures the structure of reality, and that our human languages are approximations of this one true language. Some human languages, like French and Latin, agree with this perfect language better than other human languages for some descriptions, and so we recognize this by borrowing those terms when needed.

D: I think this idea of a two tiered experience of the natural world is interesting. We should pursue it further. I think the implications, however, will depend upon the extent to which the first tier, primary unfiltered experience, resembles the structured reality that comes later. If the unfiltered experience is just the perceptual equivalent of a big jumble of pixels, blurry and unfocused, then language is doing a lot of work. Indeed, we might say that the first level experience is just raw data, and not experience at all. Then our words might actually be reflecting nothing.

D: I agree that numerical weights are describing facts. My issue here is that we are trying to escape the problem of purely relative terminology, and attempting to ascribe real properties to actual objects. I agree that we have the linguistic capability to describe the sensation; indeed, the fact that we have this capability is why we are able to discuss this problem. But sufficiency is very different from accuracy. And it raises the question of what words are attempting to describe: real things, or psychological perceptions?

D: I am enthusiastic about the single language concept. I've actually long hoped that this might be possible, and looked once to linguistic philosophy to provide the answer. Maybe I should delve into that again.

D: Two more thoughts on that head, though. One: we should see words as approximations of the "correct" terminology in the single descriptive language, because we should probably despair as much of knowing true words as of knowing true things. And two: there is the issue that language formation is context-dependent. So even bundling together many different languages from many different areas will encompass a greater diversity of experiences—but will it be right? Or will it be encouraging a slew of slightly variegated images of things that are actually the same?

R: I agree with the need to clarify the extent to which the unfiltered experience is structured. The main difficulty in investigating this question is that language and experience are quite deeply entangled, so that it's not easy to tear away the language and look at the experience itself. We've tried to do this by looking at cases where we have some vague sensation which becomes focused once the appropriate description is given to us, but these events need not suggest that the final, focused experience REQUIRED the language in order to be experienced. It could just be that the description was the stimulus which evoked the experience but didn't supplement it with any new structure which wasn't already present in the final experience. So we need to know whether there are any experiences which actually depend upon language for their full expression.

R: I'm not sure that I fully understand the issue that you describe with the weights example. I think that language can describe whatever we want, including both mind-independent things as well as psychological states. I've suggested that the reason why language has the ability to describe anything, even in the case where language is understood as purely relational (i.e. without content), is that language has structure, and that this can be used to describe things which have an analogous structure. It doesn't matter whether the thing being described is mind-independently real, merely psychological, or even fictional, so long as it embodies the appropriate structure.

R: I agree with your hesitations about the single language concept. Our human languages will always be mere approximations, though we are able to refine them as we discover more about the world. Additionally, there may be structures in reality which our minds are not capable of comprehending, in which case there may be large chunks of meaning which will never be accessible to us as humans. As for how we come to know whether our language is correct, that's a difficult question. One clue would seem to be when several languages have their own words for the same concepts, like numbers or shapes. Of course, this universality might be artificial if we think that it's the result of conquest, in which case we will require some historical analysis to establish that these concepts were developed truly independently. In general, determining whether a language is correct or not will depend on its domain of application. So to determine whether a scientific language is correct is no different than determining whether the corresponding scientific theory is correct, which is done via appealing to the standards of scientific evaluation (e.g. explanatory power, simplicity, predictive utility, etc.). To determine whether a description of our experience is correct, we may appeal to the standard of communicability, i.e. if I were to provide this description to someone else, would they experience the same thing?

D: The first part of our analysis revolves around the extent to which prelinguistic people have fully realized experiences of the world, rather than a vague haze of muddled sensations. So I guess the proper question might be, what kinds of experiences don't rely on language for their full expression?

D: I think my objection to the weights example was that in our discussions, we usually don't presuppose the ability to describe or properly comprehend mind-independent things, do we? I know that you have often reprimanded me for taking a false "outside" perspective on the world. And so even if we reduce the question to that of analogous structures, how are we to know what the true structure of the mind-independent things is? We still don't know what the structure of our language is giving us, if what we perceive is only a relational structure.

R: About the weights example, I accept that it's difficult (maybe impossible) to gain knowledge about the mind-independent world. (Science seems to be the most plausible candidate for this kind of knowledge, but there are still scientific anti-realists.) However, I think that this is independent of the points I was making about language, since those points only hinge on a possible correspondence between the structure of language and the structure of the thing being described, irrespective of its ontological status. I don't think that worries about gaining knowledge of the mind-independent world have much, if anything, to do with language; rather, they typically have to do with the fact that we seem to gain knowledge through experience, yet the link between our experiences and mind-independent reality is unclear at best and nonexistent at worst.

D: I think the notion of independent language evolutions is critical. Indeed, this extends to our concepts. If we could possibly identify instances in which entire fields of study evolved independently of one another in different regions, their convergence upon similar conceptual structures would be an indication that there is something inherently accurate in our research programs. Or, at the very least, that we are doing our best given the limitations of our cognitive abilities.

D: So maybe I do not see that there can really be a distinction between the ontological status of a thing being described and this perceived structure? Isn't the model that we use to describe, say, a particular kind of molecule tied to the very existence of the molecule? In which case it will matter a lot about the kind of language that we use to describe it, because that in turn can shape our model.

R: Hmm, I guess I don't see the link between the ontological status of a thing being described and its description. Can't we just describe the thing in question and then worry about its existence later? For example, I can describe the concept of a God (omniscient, omnibenevolent, omnipotent, mind, timeless, spaceless, etc.) without needing to know whether God actually exists. Even in your molecule example, wouldn't it be the case that I can describe a hypothetical molecule or particle whose existence is uncertain? Wasn't this the case with the Higgs boson, whose field was described long before its existence was experimentally verified? It seems that a description just consists in a specification of all the properties of a thing, and since existence isn't a predicate (otherwise we could define things into existence by specifying existence as one of its properties), we need not know whether something exists in order to provide its full description. And once we have a full description, we have a full account of the structure of the thing being described. Then it's just an empirical question whether that thing actually exists.

D: In theory, our description of the thing should reflect its structure. I could theorize a relationship between objects X, Y, and Z without observing them, but it is probable that my description would be altered if I did observe how they relate in a structure. Our description is also complicated by the fact that our words for describing the hypothetical molecule are derived (IMO) from other observations in less relevant fields.

D: We cannot just say anything

D: So the two points would be A) our theoretical description may not correspond with our empirical description and B) our theoretical description is circumscribed by the language that we have eked out from previous experience.

R: I think I misinterpreted your earlier comments. When you said that the description of a thing is tied to its existence, I understood you to mean that we can't describe something without thereby commenting on its existence. I now understand you to have meant that when we formulate a description (D), there are really two objects: the actually existing thing which we are attempting to describe (X), and the hypothetical object which fully matches our description (Y). The point is that X is the actual thing, whereas Y is merely our model of X based on our description D. Relating this to your second point about language, it seems that our observations of X will permit a range of various possible descriptions (D1,D2,D3,…) each of which agree on the current observations but disagree about potential further observations. Ultimately, the description (Dn) which we end up choosing will be determined in part by the fact that some of the possible descriptions are not meaningful within our current language, since those descriptions may contain terms which don't have an analogous concept in our language. Thus our model (Yn) of X will be determined not just by our observations of X but also our language, which limits the available concepts. The recommendation, then, seems to be that we should attempt to distinguish between those aspects of our description of X which are unique to Dn versus those which are common to all possible descriptions (D1,D2,D3,…). The latter is what we are truly justified in ascribing to X based on our observations, whereas the former are merely artifacts of our choice among the possible descriptions, which was determined by irrelevant things like what concepts were available to our language. In the end, it seems that our model of X shouldn't be Yn but rather the collection of features which are common to all the possible models (Y1,Y2,Y3,…) which are consistent with our observations. Two worries: (1) How can we, as it were, transcend our own language in order to identify those aspects which are unique to our description of X? (2) What is the role of theoretical virtues (simplicity, coherence, falsifiability, etc.) in adjudicating among the possible descriptions? Why should we treat all descriptions as equally plausible by only admitting those features which are common to all the possible models?
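
R: Schematically, the proposal looks like this (treating each candidate model as a bare set of ascribed features, which is of course a gross idealization):

```python
# Features we are strictly entitled to ascribe to X are those common to
# every model consistent with the observations. The feature names are
# purely illustrative placeholders.
candidate_models = [
    {"conserved", "charged", "massive"},    # Y1
    {"conserved", "charged", "massless"},   # Y2
    {"conserved", "charged", "composite"},  # Y3
]

justified = set.intersection(*candidate_models)
print(justified)  # {'conserved', 'charged'}
```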

D: Your elaboration of my argument is fascinating and provocative. I think your weighting scheme in averaging models has a lot to do with your pre-commitments. You and I would probably use priors to weight highly unevenly. Feyerabend would argue for an even weighting. Further, it is possible that searching for commonalities is incorrect. If all descriptions are formed using an inadequate observational apparatus, then all may share wrong features. If some use adequate tools, by contrast, then we may have complete disagreement on salient issues.

D: The further complication that I had originally intended by my point was that our language itself is limited by the set of objects already observed. Having seen object set X at time t, we are limited to vocabulary set V in modeling potentially observable outcomes in t+1. So we cannot have just any description or model in a given period.

D: The "gene" or the "bacterium" were not terms available to Aristotle, for example, so he cannot have devised the genome or germ theory.

R: I think you're right about priors coming into play when weighing the possible descriptions (hence possible models). If Feyerabend is to truly live up to his defense of counterinduction, I think he might actually go beyond just an even weighting to saying that those descriptions/models which are more different from our current models should be given a higher priority than those which are less different!

R: I also appreciate your point that the true description may not (most likely won't?) reside in the commonalities among all possible descriptions, since it might be that the true concepts only belong to a proper subset of the possible languages. However, what this commonality approach guarantees is that we do not draw any conclusions beyond those which are strictly supported by our observations; that's because all of the possible descriptions are assumed to be equally supported by the current observations. So, if our observational capacities are too limited to home in on the one true description, then too bad! We'll have to settle for a weaker, more general description, unless we allow for evidence beyond observations (like theoretical virtues of simplicity, coherence, falsifiability, etc.) which would allow us to further distinguish among the possible descriptions. If we do allow for such further considerations, it raises the question of why such theoretical virtues are aimed towards the truth rather than merely convenience/intelligibility or something else. At least in the case of observation, its status as evidence seems to be supported by a causal theory of knowledge, namely that our observations are causally related to what's "really out there", hence why our observations are (generally) taken to be a reliable source of evidence.

R: Your last point really highlights a fundamental limitation with actually implementing my "possible descriptions" scheme. We can't possibly know all the possible descriptions consistent with some observations, especially those which utilize concepts from languages which have yet to be invented; and so, in practice, we'll be limited by our imagination. Additionally, the theory-ladenness of observation makes it practically difficult (maybe not theoretically impossible) to disentangle the aspects of our observation which are tied to our current conceptual schemes from those which are inherent in the thing being observed. As some consolation, I do think that my "possible descriptions" scheme succeeds in avoiding the pitfalls of language in relation to observation and evidence in principle, just not in practice.

D: Great point on Feyerabend. While I think you're right, it does seem to highlight the absurdity of his position. Kind of how a literary studies scholar might insist that we "complicate" our position by considering incorrect views. We don't need to do that!

D: On your second point, I do see the underlying precautionary logic. However, I am not certain that this is actually how science occurs. I think that instead of settling for vague heuristics in describing phenomena, we instead propose our best, most ambitious theory supported by your "external" criteria. It is not clear why we should favor these criteria—should we expect the world to be simple and intelligible rather than bizarre and complex? This is why I say "criteria" and not "evidence," because we cannot know whether a theory "seeming" right does so just because it reflects the "real world" best or because we are cognitively designed to prefer the structure of the argument. What I do agree on is that we have no reason to believe at any point in time that our observations are an unreliable source of information on the real world. Wouldn't it be fair to say that incorrect views (geocentrism) were formed not because our observations were bad but because we interpreted existing facts poorly? Worryingly, however, you could say that we pursued… external criteria like simplicity and intelligibility!

R: I agree that science as actually practiced goes beyond the "mere commonalities" approach which I outlined. Generally, it follows IBE (inference to the best explanation). I think that all rational inquiry should follow this approach, but the criteria which determine the BE will depend upon the domain of inquiry. For science, a standard list of theoretical virtues (i.e. those criteria which determine the best explanation) includes consistency (internal and external, meaning with itself and with other theories), empirical accuracy, unifying power (a.k.a. "scope"), simplicity, and predictive power (a.k.a. "fertility"). Note that, on this view, empirical accuracy (i.e. agreement with the observed facts) is just one criterion among many. Some will amend this by considering full empirical accuracy a necessity and then treating the remaining theoretical virtues as useful for further adjudication, whereas others will choose to just weigh empirical accuracy very prominently without treating it as fundamentally different; I think that the former is more popular during "normal science", whereas the latter is more acceptable during "paradigm shifts", when nascent theories are permitted to "fix up" the discrepancies with observation later on.

R: Now we get to the hard question: Why should IBE be taken to reveal the true nature of reality? Much ink has been spilled on this question, and it's fundamental to the scientific realism/antirealism debate. I'll just say that my philosophical studies have disillusioned me with the very concept of the "real"; I think that it takes on various meanings in different contexts and we simply need to be cognizant of its meaning in any particular context of usage. Colloquially, "real" tends to mean correspondence to the mind-independent world, i.e. the world "as it really is". I don't know of any context in which this understanding of "real" is defensible or actually adhered to, and (in my opinion) attempts to preserve this understanding in all contexts have led to much pointless philosophical speculation. In science, "real" just means in accordance with the best explanation once all the observable facts are known (where "best explanation" and "observable" are measured according to human standards, so that the world as seen by super-intelligent bats might yield a different scientific "reality"). In mathematics, an object is "real" just in case it's coherent and productive for mathematical investigation as practiced by humans (and so aliens might have different psychologies within which the so-called natural numbers are quite unnatural, in which case they would not be "real" according to alien mathematics). In literary criticism, "real" is defined by the facts of the story. And so on…

R: If we insist on maintaining a mind-independent reality for all of these subjects, then we're forced into a strange kind-of Platonism, where numbers, sets, and fictional characters all have some queer mind-independent existence; but I think that there are much more natural interpretations of "truth" and "reality" which are context-sensitive and don't rely on these speculative philosophical postulates. You might remember that this deflationary attitude towards "real" is the basis upon which I argued for moral realism.

16. Brief Return to Moral Realism

D: What is truth and how can we know it? What is the good life? Without answering these questions, it's hard to do or say anything.

R: Agreed, and I actually believe that we've made progress in answering these questions, even though we keep coming back to them.

D: We should try to assess where we've moved at some point

D: I know that I was at least a little bit immature when we were talking about moral realism. Too emotional and prideful.

R: Yes, I think that would be helpful. As a starting point, I'll note that I've softened in my stance on moral realism. At first, I would have said something like "to do what is good is no more than to do what is rational", but I no longer believe that. I still believe that we can be incorrect in our moral judgements, and that's the core sense in which I still hold to a kind-of realism in ethics

D: From reading the sequences, I have been toying with a somewhat different view, namely, that it is rational to do what we believe is moral.

D: Yudkowsky terms this "instrumental rationality"—systematically working to achieve our ends.

R: I agree with that, and I think it's a lot easier to believe than full blown "categorical rationality", the idea that we have reasons to do/believe things even when they don't align with our conscious desires

R: But it's difficult to let go of the idea that someone who simultaneously believes <p> and that <p implies q> SHOULD also believe <q>, whether or not this final belief aligns with his desires. This is called an "epistemic norm"
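The logical core of this epistemic norm is modus ponens; purely as an illustration, it can be stated and machine-checked as a one-line Lean proof:

```lean
-- Modus ponens: from a proof of p and a proof of p → q, we obtain q.
theorem modus_ponens (p q : Prop) (hp : p) (hpq : p → q) : q :=
  hpq hp
```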

D: Right, I agree with that too. The question would be whether this sort of logic can be applied to all of our beliefs. If you believe in equality of opportunity, for example, what empirical facts about the world can we tie this to?

D: Perhaps some psychological findings about the effects of striving on human well-being? But most people can't become Gates/Obama/Musk/Dylan etc.

D: And many don't even want to.

R: Well, I take it that to "believe in equality of opportunity" is to believe that "equality of opportunity" is a good thing. Since this is a moral proposition, we can only deduce other moral propositions, such as that "John should be given a tractor instead of a shovel so that he has an equal opportunity to Jack for digging the hole."

R: Then I think empirical findings about human psychology would help us determine whether we actually believe that equality of opportunity is valuable, and studies in sociology and economics would help us to determine how to achieve this if so.

D: Yes, I slightly misinterpreted your previous comment. My issue is this: does John really need a tractor to garden in his backyard? We know that humans have "satiation points" and that there are diminishing returns to capital in any project. Yet there are those who believe that even those who can flourish need more opportunity—or even guaranteed equality. Are they wrong? What are they basing their views on?

D: To rephrase: many acts that are not psychologically, socially, or materially beneficial (even some that are harmful) are deemed moral. What does it say that we must use non-instrumental criteria for evaluating the goodness of these behaviors?

D: And of course there is the deeper question of why we should pursue these ends anyway.

R: I approach it like this: We observe in humans this peculiar concept of normativity/reasons, i.e. the notion that we should do some things. Some reasons are purely logical (like the epistemic norms), and others are moral, which I'll focus on. How do we determine what the correct moral beliefs are? The same way we determine what the correct logical beliefs are. We start with certain compelling intuitions and attempt to formulate a system of principles within which these concrete intuitions are explained. During the process, we'll likely come across conflicting intuitions, or a conflict between certain concrete intuitions and general principles which we've formulated. In such cases, we must either discard the conflicting intuition or refine our principle so as to accommodate the apparently conflicting intuition. There are no simple rules to follow, and so we must rely on good judgment.

R: Applying this to the moral sphere, I think we realize that we have many intuitions, only some of which say that it's good to do what is psychologically/materially beneficial for ourselves. For example, many people believe that they have an obligation to do what's best for their child, but this often conflicts with what is in the direct psychological interests of the parent, who has to make all sorts of personal sacrifices in order to care for their child. With this simple illustration, we should see that the starting point of morality shouldn't necessarily be taken to be self-interest. Rather, we should start with those moral intuitions which are most compelling; and then the task of moral philosophy is to construct the system which best reconciles, explains, and predicts these intuitions. This answers your question about non-instrumental criteria, since they shouldn't be treated as fundamentally different from any other moral intuitions.

R: As for why we should pursue those ends which systematically make sense of our intuitions upon careful reflection, I think that the question is misguided for the same reasons that I gave when talking about the pointlessness of talking about "reality". These are simply the ends which we have as humans, and so they define for us what we have reasons to do. So we should pursue the human ends because we are humans.

17. Return to IBE

D: Let's go back to IBE for a bit. I think that we're actually closing in on something important. Let's suppose a world in which we have a weighting scheme over the different theoretical virtues, one in which all of those factors that you mentioned have non-zero weights. Before we even start to discuss why IBE reveals the true nature of reality, we need to figure out why we should privilege a certain weighting scheme. What I have been suggesting is that our weights really correspond to our preconceived notions about what the world is like. If we believe in a knowledge base that is undergirded by unity and intelligible structure, then of course we should privilege scope and consistency above all. But is there an ex ante reason to believe this? How would we come to understand the fundamental structure of knowledge without resorting to the very tools that we are trying to justify? If we do not expect reality to conform to a unified structure—say, if we believe it possible that different branches of knowledge extract different kinds of truths that fit together like a puzzle, rather than directly overlapping—then we should privilege empirical accuracy and predictive power instead. This reminds me of important distinctions that are made when choosing to use a causal model or machine learning in econometrics. If you understand the phenomena beforehand, then you can set up a theoretical model for how the actors should work to achieve expected outcomes, and then set up your equations such that you can test the model using data. This is kind of the essence of the natural experimentalist paradigm that I was describing to you on Thursday. Instead of using kitchen-sink regressions, you do research to understand the question at hand, and then design your equations so that they specifically follow what you hypothesize as the model of the scenario. But if you don't have a model, then you may be justified in taking an ML approach, which as you obviously know privileges prediction from the optimal sample of regressors no matter what your story about the phenomena is. Trouble is, of course, that you no longer have the causal model, and that you are forced to take whatever results you find and make a story about them.
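To make the two approaches concrete, here is a minimal sketch in Python on simulated data; every variable name and number is invented for illustration and not drawn from any actual study. The theory-driven specification includes exactly the regressors the hypothesized model calls for, while the ML approach selects regressors purely to minimize prediction error:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(0)
n = 1000

# Simulated world (assumed, for illustration): schooling raises wages,
# and ability raises both schooling and wages.
ability = rng.normal(size=n)
schooling = 0.5 * ability + rng.normal(size=n)
junk = rng.normal(size=(n, 20))  # irrelevant "kitchen sink" regressors
wage = 1.0 * schooling + 2.0 * ability + rng.normal(size=n)

# Theory-driven approach: specify the hypothesized model wage ~ schooling + ability
# and read off an interpretable coefficient on schooling.
ols = LinearRegression().fit(np.column_stack([schooling, ability]), wage)
print("OLS coefficient on schooling (true value 1.0):", ols.coef_[0])

# ML approach: throw everything in and let cross-validated Lasso choose regressors
# for predictive fit; the fit is good, but the coefficients carry no causal story.
X_all = np.column_stack([schooling, ability, junk])
lasso = LassoCV(cv=5).fit(X_all, wage)
print("Lasso in-sample R^2:", lasso.score(X_all, wage))
print("Lasso coefficient on schooling:", lasso.coef_[0])
```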

D: I think that I inherently hold sort of Platonist assumptions. It's hard to get rid of them, if it is even necessary to do so. But let me unbundle your definition of scientific reality. I worry that we are facing something circular here. If reality corresponds to the theory that best fits the facts, then our definition of reality is theory-dependent. Then, by endogenizing reality to our best theory, we are no longer in a position to test the theory, because we already know that what is real is what fits the facts.

R: To your point about weighing the various theoretical virtues, I'm tempted to say that these weights are always changing based on what proves to be most productive in scientific investigation, sort of like feedback mechanisms. For example, someone who overvalues simplicity might propose very simple theories which have difficulty accounting for some of the known facts, and so he continues to make minor adjustments, but new difficulties keep arising, until he finally gives up and admits that reality is more complex than he had hoped for; this would be a case where the theoretical virtue of empirical accuracy is providing feedback to the theoretical virtue of simplicity. We could also imagine an opposite case, where a very conservative scientist is totally reluctant to consider new theories which can't immediately account for all of the known facts, until a new theory is proposed which is marvelously simple and over time all of the known facts are accommodated, so that this conservative scientist is forced to concede. The hope is that, although different scientists may weigh the various theoretical virtues differently, the true theory will exemplify sufficiently many of the virtues to a sufficient extent that a consensus will eventually emerge.

R: As for circularity, I think we can avoid this charge by noting that reality is defined not with respect to our current theories, but rather with respect to an ideal theory which exemplifies all of the theoretical virtues (weighted according to our standards) and is based on all the knowable facts (not just those which are currently known). This shouldn't affect our ability to test theories since we can still follow the ordinary procedures of comparing our observations with the predictions of the theory. All that my definition of reality achieves is the elimination of the concern as to whether reality is rationally discernible by us given our cognitive capacities, since I'm defining reality according to our cognitive capacities and procedures for rational inquiry (thereby sidestepping the worry about whether this corresponds to some mind-independent reality).

D: An issue with deriving our weighting scheme from empirical data fit is that we do not know which part of the theory is causing the error—which virtue is overvalued—and cannot test the virtues in isolation. Thus you are left with an ML approach where you move to the point with minimum loss, but don't really learn anything that generalizes. And is this even possible?

D: We must also consider what the virtues are doing in the theory. Feyerabend might say that simplicity/neatness is coming at a cost to correctness, which is why we juxtapose internal consistency with data fit in the first place. So often we cannot just adjust the parameters on the different theoretical virtues, because each of them serves a different goal in theory development.

D: A little bit confused by your last comment. Are you assuming that there is an ideal theory that corresponds perfectly to reality, embodies all the theoretical virtues, and is perfectly intelligible by us through some combination of deduction and induction with sufficiently capable instruments?

D: BTW, I found a good definition of "causal" in an econ sense for you: A causes B if A is the only difference between groups T and C and B is the average outcome for T.

R: I think my point about the weighting scheme is that minor disagreements about the weighting scheme only matter during an early exploratory period prior to the emergence of a consensus. However, as groups of scientists pursue what they perceive to be the best candidate theory (according to their own weighting scheme), more and more evidence will accumulate until one theory emerges which exemplifies all the theoretical virtues better than the alternatives; at which point a consensus will develop. The only people who will refuse to accept this consensus are those who have very unusual weighting schemes where some of the theoretical virtues are prioritized way more than others, and so I'm assuming that most scientists consider all the theoretical virtues to be significant, but just have minor disagreements about their order of importance.
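R's convergence claim can be put in toy-model terms: if one candidate theory scores at least as well as its rivals on every virtue, it maximizes any weighting scheme with non-negative weights, so scientists with different (but reasonable) weights still agree. A minimal sketch, with all scores and weights invented for illustration:

```python
import numpy as np

# Virtue scores per theory: [consistency, empirical accuracy, scope, simplicity, fertility]
theories = {
    "T1": np.array([0.9, 0.9, 0.8, 0.7, 0.8]),  # weakly dominates on every virtue
    "T2": np.array([0.8, 0.9, 0.6, 0.7, 0.5]),
    "T3": np.array([0.7, 0.8, 0.8, 0.6, 0.6]),
}

# Two scientists with different non-negative weighting schemes.
weights = {
    "conservative": np.array([0.4, 0.4, 0.1, 0.05, 0.05]),
    "ambitious":    np.array([0.1, 0.2, 0.3, 0.1, 0.3]),
}

# A weakly dominant theory wins under any non-negative weighting,
# so the disagreement about weights never surfaces in the final choice.
for name, w in weights.items():
    best = max(theories, key=lambda t: float(w @ theories[t]))
    print(f"{name} scientist picks {best}")
```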

R: Regarding the ideal theory, all I'm saying is that, were all the observable facts to be known (observable being defined in the broadest sense, limited only by our human cognitive capacities, not by any technological advancements), then there would be various theories which explain these facts (due to underdetermination), and at least one of them would best exemplify the chosen theoretical virtues according to a given weighting scheme (the existence of a global maximum is guaranteed under the assumption that the candidate theories' scores are bounded above; uniqueness is likely but not guaranteed). Importantly, this ideal theory is intelligible by definition, so if there happens to be a better theory which is not knowable via observation, then it won't be considered. Additionally, I'm not making any claim about whether this ideal theory perfectly corresponds to mind-independent reality, because I have no idea what mind-independent reality is actually like or whether our best theories reveal it. Instead, I'm defining "reality" in science according to what this ideal theory says.

R: I'm assuming that A and B are events, T and C are collections of events, and that by the average outcome for T you're saying that if we introduce the event A to the collection C (thereby creating T), then the expected outcome is event B. If I'm understanding you correctly, then wouldn't it follow that "buying a lottery ticket" (event A) causes me to "lose the lottery" (event B) since that's the expected outcome?

D: Will get to other comments later but T and C are groups, treatment and control. Suppose an RCT where you randomly assign people (unwittingly) to treatment and control. On average both groups have the same characteristics. A is a treatment (a drug), B is the outcome observed. In this scenario A is the only difference between T and C, thus A is said to have caused B, which obtains for T and not C.
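A minimal simulation of the setup D describes, with all numbers invented: random assignment balances characteristics across T and C in expectation, so the difference in average outcomes estimates the effect of the treatment A on the outcome B:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Each person carries an unobserved characteristic that affects the outcome.
health = rng.normal(size=n)

# Random (unwitting) assignment: on average, T and C differ only by the drug.
treated = rng.random(n) < 0.5
true_effect = 2.0
outcome = health + true_effect * treated + rng.normal(size=n)

# Randomization equalizes characteristics in expectation, not exactly.
print("mean health, T vs C:", health[treated].mean(), health[~treated].mean())

# Difference in mean outcomes: the average effect of A on B.
ate = outcome[treated].mean() - outcome[~treated].mean()
print("estimated effect (true value 2.0):", ate)
```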

R: That makes more sense. RCTs are obviously the gold standard when it comes to establishing causality in the social sciences, but I think it might be mistaken to interpret them as defining causal relationships, rather than as generally useful tools for determining causal relationships. Firstly, it's important to distinguish between perfect control (no differences between the control and treatment groups prior to intervention) and control in expectation (no difference between the members of each group on average, which is what randomization achieves). Secondly, even supposing perfect control, this test would only establish a necessary but not sufficient relationship between the treatment and the outcome; so it could be misleading to call A the cause of B since this relationship may depend on other conditions (C1,C2,C3,…) without which we would not observe the association between A and B. This is what happens in my lottery counterexample, since purchasing a ticket (A) is a necessary but not sufficient condition for losing the lottery (B), but since the remaining necessary condition (i.e. that the purchased ticket is a losing ticket) is so overwhelmingly likely, we may fail to realize that B is causally dependent upon it just as much as B is causally dependent upon A.

R: By the way, for some interesting criticism of RCTs, see this paper (https://www.sciencedirect.com/science/article/pii/S0277953617307359) by economist Angus Deaton and philosopher of science Nancy Cartwright.

D: I agree from a philosophical point of view that the RCT does not define the causal relationship. Indeed, randomization often obscures causes. For example, one natural experiment used quasi-random variation in subscriptions to Diderot's Encyclopedie to proxy for local levels of upper-tail human capital in French industrialization. Nobody thinks that industrialization was actually caused by blind parachute drops of book subscribers into various French cities. All you get to see is the average effect of the presence of upper-tail human capital. But I think your second point is slightly confused. In order to identify a causal relationship, those C factors that create the relationship between A and B would be called confounders, and randomization should equalize these across groups. It may well be that A does not cause B after controlling for C1, C2, and C3. But the purpose of an RCT is to make these controls happen.

D: Yes, Deaton is a famous anti-RCT prophet. Especially in development economics, RCTs were and still are kind of annoyingly dominant. You should talk to my friend Oliver if you want to hear stories about stupid RCTs. But by and large, the natural-experiment framework that I have repeatedly described to you is, I believe, a powerful and, more importantly, easily accessible tool for experimentalists to identify causal relationships between variables.

D: On weighting schemes, it seems plausible to me that some theories will not, in their best form, embody all of the theoretical virtues better than all of the options. Furthermore, there is the risk that the kind of evidence produced by the present theory will, as Feyerabend argues, be unable to offer evidence against said theory. I think the biggest concern with this entire discussion is the presumption of the ability to freely move around parameters to favor one theoretical virtue over another. I think combinations are sticky, such that you can only have a couple of them at a time, and must regress on one axis in order to advance on another. In machine learning terms, you must at some point suffer calculable losses on model complexity in order to achieve near-perfect fit. And worse, you cannot simply smoothly trade off between the two. You can just select a couple of different combinations. I think this possibly leads to bad equilibria.

R: I was using C1,C2,C3,… not to refer to confounders but rather conditions which were in fact held constant across the two groups, but which were causally relevant in bringing about the outcome. So we would say that A,C1,C2,C3,… were collectively necessary and sufficient for causing B, but neither A by itself nor C1,C2,C3,… by themselves (like in the control group) are sufficient for causing B. So the criticism was that the RCT (even when modified to assume perfect control) would establish that A is a necessary but potentially grossly insufficient cause for B, and we don't get any information about just how insufficient A is for causing B from this kind of test.

D: Ah, I see. Yes, that is true. Killing Franz Ferdinand may have precipitated the July Crisis but if Germany didn't exist then Austria wasn't invading Serbia.

R: Yes, exactly. I'm still thinking about your comments on the difficulties with weighting schemes. I'll probably respond to it in person during our meeting today

18. Free Will, Moral Responsibility, History of Science, More IBE

D: Are you a determinist?

D: We definitely talked about this but I forgot.

R: Yes, I would say so. I believe that humans are animals, and so we are ultimately governed by natural laws (some of which may be irreducibly stochastic, like the quantum laws). In this sense, I believe that we are totally determined by our initial conditions acting according to natural laws. But I also think that our conscious deliberation (i.e. decision) can strongly influence our actions (basically just another form of determinism, i.e., our thoughts can determine our actions), and so I tend to preserve a compatibilist notion of free-will in this sense.

R: Also, when it comes to social analysis, I also tend to be strongly determinist, but this is partially for practical reasons. That is, when it comes to social policy, it seems that we have to believe that these policies function like natural laws which determine human behavior, so that individual instances of human failure (like crime) can be understood as failures of social policy. I wouldn't necessarily interpret this kind of determinism in a metaphysically significant sense, as opposed to a general attitude adopted for the purposes of analysis

D: On point one, that's almost essentially my point of view.

D: On the second, are you describing statistical "laws"—i.e. that most people because of their innate tendencies will, ceteris paribus, respond in certain ways to a policy change based on their underlying tendencies?

D: Anyway, I heard a nice thought experiment posed by a skeptic: if you are a determinist, you must suppose that, given proper measurement, you could deliver an unconditional prediction about whether your interlocutor will lift his arm in five seconds. What stops the interlocutor from doing the opposite?

R: I certainly accept that these statistical generalizations are applicable and useful for social analysis, but I'm actually saying that I just never really consider free will when thinking about social outcomes (perhaps to my detriment). So when I look at an individual who's performing either well or poorly in society, I always tend to attribute their outcome to their nature and environment, not their will. This is more of a confession about my own psychology than a philosophical stance

D: Oh, I see.

R: About the thought experiment, I wouldn't say that your knowledge of what the interlocutor will do prevents the interlocutor from doing the opposite any more than I would say that my knowledge of a prospective lunar eclipse prevents the lunar eclipse from happening one day later. Rather, what will happen will happen, and in this hypothetical, we're stipulating that I will know what will happen, but this knowledge is causally independent from the thing that is known

R: I think the real issue is what kinds of things I would need to know in order to make that prediction in the first place. If we believe that we're just animals governed by natural laws, then what are the relevant natural facts? Can they be spelled out in the language of fundamental physics, or are psychological facts (which will no doubt be relevant) irreducible to physics? This is a much harder question about which I don't have any confident opinions. I'm tempted by reductionism, but it's extremely difficult to defend

D: Can you explain the last sentence of the first message?

R: Yes, sorry for my poor wording originally. To answer your question directly: what prevents the person from doing other than what I know they will do are the natural laws. That is, if we believe that human action is totally governed by natural laws, then they will do what is prescribed by the natural laws. Independent of this fact, we may stipulate that I have total knowledge of the natural laws, in which case I can predict how you will act. But this knowledge has no causal influence on how you will act; only the natural laws are causally determining how you will act, so there's no real mystery here, even though the skeptical thought experiment makes it seem like my knowledge is somehow causally restricting the person from acting otherwise.

D: So basically the unconditional prediction that you make about the action that they will take incorporates their reaction to the prediction that you make, assuming that, deterministically speaking, you are able to make the prediction.

R: Yes, all of that would have to be accounted for before making the prediction, like you suggest

R: One lingering concern with making this kind of prediction is a potential vicious regress. This relates to the question of what information is relevant for making the prediction. From classical physics, we're accustomed to thinking that systems are determined by the natural laws and their conditions at some point in time. But human systems are clearly more complex than simple pendulums, and so it's at least conceivable that more information will be relevant for determining the system. If, for example, multiple time points are required for determining the evolution of the system, then won't the act of making the prediction itself need to be accounted for in the prediction? Then this accounting for will itself need to be accounted for, and so on…If so, then it seems like there's a fundamental difficulty with making a prediction about a system from within the system, such that predictions require a kind of "outside point of view" wherein the relevant facts can be held constant, without being disturbed by the act of making the prediction

D: I think the issue here is with the assumption that determinism implies the ability to make the prediction, as you seem to suggest. I'm actually kind of struggling to wrap my head around this right now. But I also have the difficulty with saying that prediction is impossible in a deterministic system because if it were possible the system wouldn't make sense. That's kind of tautological.

R: I think that, in principle, determinism does imply the ability to make predictions in the following way: if a system is truly deterministic, then it's governed by laws which act on the relevant facts, and so, in principle, anybody who has a full understanding of the relevant laws and facts should be able to predict the evolution of the system. The problematic term here is "relevant"; what if the relevant facts include your prediction, and what if the corresponding law is that the system's behavior will always defy your prediction? Then surely this system, even though it's technically deterministic, will be unpredictable (at least by you).

R: So the concern is whether the case of telling someone else your prediction about their behavior is sort of like this case where the system always defies your prediction. Maybe laws about psychology mimic this property
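The self-defeating case R describes can be made concrete with a toy system whose single "law" is to do the opposite of whatever is announced to it. The system is perfectly deterministic, and a silent observer who knows the law can predict it, yet any announced prediction is wrong by construction (names and behaviors here are invented for illustration):

```python
def contrarian_system(announced):
    """A deterministic 'law': do the opposite of any announced prediction."""
    if announced is None:  # nothing announced: default behavior
        return "raise arm"
    return "keep still" if announced == "raise arm" else "raise arm"

# A silent observer predicts perfectly: they know the law and the (null) input.
assert contrarian_system(None) == "raise arm"

# But every announced prediction is defeated, whichever one is made.
for prediction in ["raise arm", "keep still"]:
    print(f"announced {prediction!r} -> system does {contrarian_system(prediction)!r}")
```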

R: I'm reminded of Newcomb's paradox, where there are two boxes, a mystery box and one containing $1000; you can choose either just the mystery box or both boxes. The contents of the mystery box are determined by the prediction of a superintelligent being; if this being predicts that you will choose just the mystery box, then it contains $1 million, otherwise it contains nothing. Do you choose just the mystery box, or both boxes? Besides the paradox in decision theory, this thought experiment raises the question of whether such a superintelligent being could exist. Surely your psychological deliberation after learning about your options will be part of the relevant facts when it comes to predicting your decision, but the question is whether this psychological deliberation could have been predicted beforehand by the superintelligent being. On a reductionist account, these psychological responses are nothing more than matter in motion (put crudely), and so of course they could have been predicted beforehand, because all of the relevant facts about the laws governing the motion of matter and the physical states of the matter could have been known prior to your psychological deliberation. However, if psychology is irreducible to physics, then it seems that new relevant facts are actually introduced only at the point of psychological deliberation, so that, in principle, they could not have been predicted beforehand; and this is entirely consistent with determinism.
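For reference, the standard expected-value comparison for Newcomb's problem, assuming (as a modeling choice, not part of the original puzzle) that the being predicts correctly with some probability p:

```python
def expected_value(strategy, p):
    """Expected payoff given predictor accuracy p (a toy-model assumption)."""
    if strategy == "one-box":
        # With probability p the being foresaw one-boxing: mystery box holds $1,000,000.
        return p * 1_000_000
    # With probability p the being foresaw two-boxing: mystery box is empty.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

for p in (0.5, 0.9, 0.99):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
# For any predictor accurate beyond roughly p = 0.5005, one-boxing has the
# higher expected value; at p = 0.5 (a coin flip), two-boxing still wins.
```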

D: I like this first formulation. It is impossible for a being to acquire the information needed to make the prediction because a critical event in the causal chain leading to the subject of prediction cannot be observed by the individual making the prediction until after the prediction has been made. This, of course, renders it impossible for you to make this prediction. The question is whether a third observer watching through a telescope or monitoring every physical component of the room could make the prediction while being unable to communicate it to the participants. What if he has a button next to him that flashes the prediction onto a screen within the room, or can text the hand-raiser a second before he acts?

D: In this latter case, having the means to communicate removes one relevant fact from the predictor's hands?

D: On the second point, I sense that the person making the claim (Bryan Caplan) was not really arguing on a particularly sophisticated level. He privileged his psychological perception of having free will over its theoretical improbability, saying that "science is observation." That's a big question, for sure. What is this thing called science? This is all well and good, but we are not entitled to believe our sense perceptions if our instruments are clearly faulty. Why should we think that we are correctly perceiving free will when we have good theoretical reasons why this is not the case? We think we perceive lots of things that aren't there: colors, horizons, etc.

R: On the first point, I think that a third observer would face the same issues if we believe that the facts of psychology are irreducible, such that new relevant facts are introduced only at the point of deliberation. In this case, the third observer also wouldn't be able to make a prediction until after all the relevant facts involved in the process of deliberation were completed, at which point the thing being predicted would have already occurred. But if we believe that psychology is reducible to physics, then I think there's no need to introduce a third observer since there's no more vicious regress, since the only relevant facts are about the laws governing motion and the physical states of all matter prior to the point of deliberation.

R: On the second point, I think the "science is observation" idea is important to keep in mind. We shouldn't blindly privilege the theoretical postulates of science over ordinary sense perception, especially when the justification of science depends on the reliability of ordinary sense perception; and so it's a real question as to how we should weigh these sources of evidence. For questions like "Is the earth in motion?", it seems clear that we should privilege science over so-called common sense; but this may only be the case after we've been able to make sense of our everyday experience of motionlessness with heliocentrism by pointing out that the sensation of being in motion is actually a result of acceleration, and so can't be felt when moving at a constant speed (like on Earth). In the case of free-will, I think that we don't have such a straightforward and compelling explanation for the ordinary and direct experience of choosing to act one way while retaining the ability to act another way. Additionally, the theoretical grounds on which to dismiss free-will typically rely on the tenuous reduction of all explanations to the language of physics, for which there is currently no even remotely plausible candidate or suggestion for a way forward. Once we realize that science is still in a primitive stage when it comes to understanding psychology or anything related to consciousness, it's not difficult to imagine that our mental states actually play some fundamental (i.e. irreducible) role in nature which accounts for our experience of choice. Physics seems so all-encompassing just because it's the only aspect of nature which is simple enough for us to have gained some insight about it; from this point of view, the dismissal of our ordinary experience of free-will on the grounds that it's incompatible with the theoretical postulates of physics seems premature.

D: I'm curious about this "all propositions are true" dictum. You'll have to explain it sometime. But that definitely does capture why I'm interested in Feyerabend. His madness sheds light on my more conventional views, as I am sure he intended.

D: I think I see where you've gone on the first point. If psychological factors matter, then only at the moment of action are all the facts in. But suppose you could monitor the cognitive state of the other individual and had finely-grained information on all sensory inputs as well as his neurological reactions. Suppose you could predict all things that would be said to the individual in the next five seconds, all possible "random" exogenous events that might change his deliberation, etc. I guess it still might be impossible because your prediction would not contain information about whether or not you would make the prediction?

D: On the second point, I think that my reluctance to accept free will is, as the interviewer suggests, founded not in any decisive theoretical justification for determinism, but rather in the absence of either evidence or a coherent theoretical framework that would explain why free will should exist. To me, free will seems like a convenient "miracle"-style explanation for the yet unexplained: the experience of decision making. But just as in the flat Earth example, we have no reason to believe that the inference from our sense perceptions to free will and choice is justified, i.e., that the physical evidence that appears to exist in favor of free will really constitutes what we think it does. If you walk into a laboratory and stare at some of the petri dishes without a microscope, you are not justified in making any generalizations about the behavior of the protozoa in the dishes, no matter if everyone in the room is absolutely agreed that the protozoa look a certain way.

D: So it's less the fact that physics seems really compelling as a world system than that a free will-based universe just doesn't make a lot of sense compared with a compatibilist universe.

R: The view that all propositions are true is called "trivialism". Paul Kabay completed his PhD dissertation on it, called "A Defense of Trivialism". From a dialectical point of view, this position is interesting because nothing can be said to the trivialist which would motivate them to change their mind since they will simply accept everything you say as true, including that "trivialism is false".

R: I think that if you do not tell the other person your prediction, then it should be in principle possible (under the assumption that neurological states determine the psychological states), since your prediction is presumably irrelevant to the other person's decision. But if you tell them your prediction, then it does become relevant, and so we run into this problem of vicious regress once again.

R: I tend to agree that libertarian free will (the kind which includes the ability to have chosen otherwise) strikes me as difficult to understand, and often motivated by vague and confused understandings of common terms like "choice" and "decision". But it seems that we might disagree a little on epistemology. In fact, I would say that ordinary experience does support a flat Earth, especially when combined with a pre-modern (Aristotelian) understanding of gravity as simply a tendency for heavy objects to fall "down" (understood as a universal direction), in which case, if we lived on a globe, everybody outside of the Arctic Circle would be expected to "fall away"; so I think that this initial impression should be regarded as probative until some alternative and more plausible theory is suggested which explains this experience away. I would say the same thing about the ordinary experience of motionlessness supporting geocentrism, which was legitimate until it was understood that the feeling of motion results from acceleration. So, given that we shouldn't expect current physics to be able to make sense of consciousness or free will, it seems that the ordinary experience of choice should be regarded as probative and without a straightforward defeater as well; then I think philosophical reflection can help clarify whether our understanding of free will should be libertarian or compatibilist.

D: Let me backtrack. I think that in the early stages of the Middle Ages, it was fully justified to believe in a flat earth at the center of a geocentric universe. We had little sense, prior to the voyages of Columbus, that what was outside the realm of visual experience might contain information that could refute our theories. Interestingly, the history of science book that I'm currently reading suggests that few learned people actually thought that the world was flat in 1492, but rather that it was misshapen by the confluence of differently shaped spheres. Anyway, that's beside the point. What I'm trying to argue is that, following the scientific revolution, we have realized that taking our unvarnished sense perceptions as truths, rather than as proximate information that has some utilitarian and perhaps some informative purpose, is likely mistaken. We are rarely able to correctly assess the properties of objects without the aid of tools. So while I agree that it certainly seems like free will is possible, I also think we should subject our impressions to a healthy dose of skepticism, knowing how often similar kinds of impressions and convictions have been in error before.

D: Would be exciting if we do have a disagreement though, it's been a while

R: It's true that the early astronomers (starting at least around the 5th century BC) already conceived of the Earth and the planets as spheres and even made surprisingly accurate calculations of the Earth's circumference. I think that part of the motivation for this was the commitment that the "heavenly bodies" were perfect, and so of course they would exhibit only perfect geometries consisting of uniformly circular orbits and spherical bodies. (Although, when you actually look at their planetary models, they all made clever deviations from this ideal in order to fit the observations. Eudoxus employed multiple concentric spheres centered at the Earth, with each inner sphere attached to its outer sphere at its poles. Ptolemy of course employed epicycles, but he also cheated by setting the center (a.k.a. "eccentric") of the deferent circle somewhere other than Earth, and he allowed the orbital body to move in uniform circular motion not with respect to either the Earth or the eccentric, but with respect to a third point called the "equant". Very clever, but hardly "heavenly" in my opinion.)

R: Anyways, I'm still sort of undecided on how much weight to place on the ordinary experience of choice as evidence of free will, so let me try to defend the view that it provides strong evidence, and hopefully through our back-and-forth, I'll learn what I should actually believe. First, we should note that the experience of choice and its natural interpretation in terms of free will is widespread, persistent, and robust to variation in historical era, language, worldview, etc. (note that this claim is slightly presumptive, since I haven't done a thorough comparative analysis of the concept of free will across cultures, but the persistence of free will as a philosophical problem since antiquity seems to support my assumption); therefore it would be inappropriate to dismiss this experience as a hallucination or idiosyncrasy, unlike typical examples of the unreliability of sense perception. Second, we currently have no scientific theory of human choice which "explains away" our natural interpretation of this experience in terms of free will; I also explained earlier why the inconsistency between this natural interpretation and the theoretical postulates of physics shouldn't cause much concern. This makes the experience of choice unlike the experience of motionlessness, for example. Finally, you suggest that the history of science renders our natural interpretations of common experiences unreliable, and so we should at least remain agnostic on the question of free will until an adequate scientific theory of human choice is proposed. I think that this is the most salient objection, but I worry that it says too much. Wouldn't it also follow that we should be skeptical of our scientific theories of motion and atoms, for example, both of which have undergone major alterations even after the Scientific Revolution, and so should be regarded as unreliable? Perhaps we can appeal to a distinction between folk theories and scientific theories, where the latter can be justifiably regarded as tentatively true, whereas the former should simply be discarded until they have been replaced by a proper scientific account. Perhaps this distinction is legitimate, although it's difficult to identify a clear point of delineation, and the distinction can't be made in terms of reliability (since even scientific theories have undergone significant modifications), but instead probably in terms of mechanistic detail (i.e. folk theories do little more than label phenomena and provide only a thin layer of explanation, whereas scientific theories provide much more detailed mechanisms for explaining the phenomena). Nevertheless, why shouldn't we regard folk theories as tentatively true just like scientific theories, though perhaps with lower credence? It seems that a justification is still needed for treating folk theories as categorically distinct in terms of reliability.

D: Yes, I hope to learn what I should believe as well. I am at least partially swayed by your arguments, but I still feel that there is a distinction to be drawn. First, I will start by arguing that the universality of a particular observation does not mean that it is correct. We could all be prey to the same illusion—the flat earth—or we could be caught within a culture that causes us to observe something in the same way. We could imagine, however, a society in which we placed greater emphasis on the frequent feeling of powerlessness that we feel in the face of history and events outside our control—hence the Stoic philosophy of Marcus Aurelius. What if, instead of privileging our observation of choice, we found ourselves privileging our observations of the many instances in which we did things that we did not intend to do, and later regretted? Of course, these instances may not be of the physically reducible character, but such a society might not necessarily observe choice as unfettered. I like the folk-scientific theory distinction. Perhaps it is the correct one here. I am not saying that the folk theory should be discarded—merely that, since it is not rigorously tested or formulated, we should instead treat it as a heuristic guide to our behavior rather than an equivalent method of interpreting our reality from an epistemic point of view. We should always qualify "folk theories" by (internally) saying, "well, this is just how it seems to me, whether or not it's true," recognizing that there is a certain inescapable utility in acting as though it is, at least until something makes you see the Rorschach blots in a different way. So we can regard the folk theory as tentatively true in one sense, but not in an epistemologically significant way. We know that our scientific theories change, but to the best of our knowledge, because we have tested them, they should not be expected to. Whereas folk theories always hang in the balance because the evidence isn't there either way.

R: Certainly I agree that the universality of a belief does not entail its veracity, but the specific features of the free will belief which I identified (universality, persistence, and robustness) do at least seem to rule out its dismissal as a mere hallucination or idiosyncrasy. I like your suggestion of a nihilistic society in which people are raised to believe that everything is governed by fate and that our "choices" are mere illusions, no more free than the decisions of either puppets or a story's characters. What I wonder is whether such a philosophy could exist without an accompanying belief in free will for at least some instances. Indeed, these two beliefs often seem to coincide, so that the presence of one highlights the absence of the other, and these contrasts are mutually reinforcing since each belief takes turns being either present or absent. For example, could anyone really lack the (pre-theoretic) belief that deciding to speak or move one's arm (under ordinary circumstances) involves the exercise of free will? And so, the worries about one's destiny in some circumstances, like one's financial situation, seem to depend upon a contrast with a belief in free will in other circumstances. If so, then while fatalistic philosophies might diminish the scope of our free will, they don't seem to fully eliminate free will and in some sense they actually reinforce it. I see the intuition behind regarding folk theories as mere heuristics and scientific theories as actually (tentatively) true, but I'm not sure to what extent this applies to the case of free will. Here are some potential reasons for drawing the distinction: [1] folk theories tend to be immersed in culturally-specific concepts and beliefs, making them less likely to be actually true as stated, even if they capture some real phenomenon; but the apparent robustness of the free will folk concept seems to elude this criticism, though of course some actual empirical analysis would bolster this point [2] scientific theories are typically situated within a broader system of coherent and independently verified beliefs, which makes them mutually reinforcing, whereas folk theories tend to exist on their own (although this wasn't true for primitive religious worldviews, which have since been superseded by scientific explanations, so maybe the surviving folk theories are disconnected just because they are the remnants of a discarded worldview persisting only within the gaps of our scientific knowledge?) 
[3] folk theories tend to posit new laws or substances which are sometimes radically different from or even incompatible with the current scientific worldview, which lowers their intrinsic probability, and often these radical ideas are determined to be obsolete once a scientific account is developed, and so there is an inductive case against believing folk theories; but in the case of free will, we have nothing even approaching a scientific account of human choice or related phenomena like consciousness, and so the postulation of new ideas seems warranted, and the notion of "will" already coheres with many of our current psychological theories; therefore our nascent attempts at making scientific sense of human choice have actually corroborated rather than discarded our folk theory, though perhaps this will change as our scientific understanding advances [4] folk theories often exemplify the characteristics of a bad explanation, like being ad-hoc, involving unnecessary complexity, lacking predictive power, and being fragile in the face of scrutiny due to retaining its widespread acceptance on the basis of tradition rather than evidence; but the folk theory of free will is actually quite simple, has remarkable predictive power, and has survived despite much scrutiny, probably due to being grounded in a universal psychological instinct rather than tradition. As a final consideration, one feature of folk theories might actually make them more likely to be true than scientific theories: folk theories are less detailed and so involve fewer theoretical postulates, which reduces the number of possible refutations; after all, a highly specific theory is likely to be wrong in at least some respect. Overall, I recognize the case against trusting folk theories, but it's not clear whether all of these concerns apply to the case of free will, and I'm also just trying to play devil's advocate ;)

D: On the first point, I would say that universality, persistence, and robustness are all characteristics of the flat earth worldview in the absence of theory to the contrary. One may be justified in subscribing to it without it actually being correct. As for the nihilistic society, I would say it is equally possible to imagine that this group has a baseline assumption that determinism prevails, while recognizing that sometimes it seems like they have free will, which is the inverse of our current situation, in which we assume that we have free will, but recognize that in some situations it feels like our actions are predetermined. I am not sure whether fatalism requires a belief in free will being true, but I do believe that it probably requires some concept of it? Perhaps. If, however, the society decides that free will is merely an inevitable imagining in certain circumstances, then I think the juxtaposition will work.

D: On the subject of folk theories, I do think you bring up some interesting points. I will start, however, by saying that I'm not sure in this case that we should use the concept if we must apply your definitions rigorously. If we must accept free will because we have defined it out of being a folk theory, then the problem seems to be with our schema, and not with our argument about free will. For example, I believe that the universality of the experience of free will is irrelevant in this context. As my postmodern friend argued to me, the belief in the existence of God, a creator, or something divine is near universal in human societies—shall we ascribe credence to this too? If we are to tie folk theory to cultural practices, then I will simply say that there are transcendent cultures that span the planet, or at least have variants existing in every society. On point three, I would argue that free will adds a new concept or substance, perhaps not completely incompatible with present science, but certainly one which defies explanation. Where would the will come from? I see this as akin to the ascription of miracles or of creation to some divine being—as anthropomorphic beings, it just makes intuitive sense to us, regardless of whether or not it is true. So perhaps we should move past the discussion of folk theory and treat the perception of free will as a universal basic instinct akin to the belief in some kind of divinity, whether it is Allah or the great Juju up the mountain.

D: I'm not trying to dunk on your use of the term "folk theory." Perhaps I do not understand it properly. I just want to avoid arguing over semantics, at least while we have a more important adversary in front of us. ;-)

R: I agree that a belief can be rationally justified without it actually being true, but it seems that when we're asking whether some claim <p> is true, what we're really seeking are reasons for and against believing that <p> is true so that we can form a rational judgment about whether it's justified to believe <p>. This is evidenced by the fact that if we ask whether <p> is true, and somebody responds by answering either "yes" or "no" without elaborating, then we won't think that they've really answered our question, since what we really wanted were reasons for or against believing <p>. Therefore, I take our question to be "Is it rationally justified to believe in free will?" If we can answer this question in the affirmative in the same sense that it was rationally justified to believe in geocentrism prior to Galileo, then that seems to be all we can reasonably hope for, and I would hold the same standard for all of our current scientific theories, namely that they are rationally justified according to what we currently know, not certainly true.

R: By folk theory, I just mean an explanation for some phenomenon which is natural (i.e. easily understood by and compelling to ordinary people without requiring much reflection) and isn't sufficiently detailed/rigorous in its mechanisms to be considered a scientific theory. I consider the folk theory of free will to be the natural interpretation of our ordinary experience of human choice, namely that "there is such a thing as a 'will' which is possessed by humans and allows them to choose to act in one way while retaining the ability to have acted another way". This theory is 'folkish' because it's natural and because neither the concept of 'will' nor the mechanisms by which it facilitates human choice are spelled out in detail. So I think it makes sense to consider free will a folk theory, although I'm not married to the terminology, and I would still want to distinguish it from other folk theories in the ways that I suggested in my previous comments. Regarding your comparison between free will and God, I would want to clarify that what is claimed to be universal is the experience of free will, not the belief in free will, and I claim that everybody (except possibly those with severe disabilities) has experienced free will, whereas there are a significant number of people (including myself) who have not experienced anything like a divine being. So the two are not universal in the same sense, since the experience of a divine being is only universal at the level of societies, not individuals; and so I wouldn't regard the experience of a divine being as a "universal basic instinct" like free will, rather it's more like a natural tendency which has found its expression in most societies, but neither invariably nor inevitably in every individual. More importantly, I'm not wanting to argue directly from the universality of an experience to its veracity; this would obviously be fallacious. The point about universality was just one aspect of one argument intended to preemptively rebut the claim that free will is a hallucination or idiosyncrasy.

R: Regarding the claim that free will defies explanation, I think we should turn our attention to trying to understand just what is meant by free will, because I too am often confused by what is being claimed. One suggestion is that a 'will' is a fundamental entity (or maybe an emergent phenomenon from some more fundamental entity like a 'mind') which has its own causal power that's not reducible to the causal interactions of the objects studied in physics. Then, to exercise free will is simply to utilize this causal power of one's own 'will' when making decisions. But how is the decision of this 'will' made? It can't be determined by the objects of physics, since it's postulated to be causally independent, and that would hardly be 'free will' anyways. It also can't be determined by the person's character traits and environment, since that would also undermine the notion that this will is 'free' in the sense that it could have acted otherwise. Maybe one's character traits and environment determine the viable choices, from which the final decision is made at random, but then this still doesn't seem quite 'free' since the decision is ultimately outside of our control, and this also doesn't account for the fact that we can train our will, and so the final decision is not totally arbitrary; perhaps we can alleviate this latter concern by suggesting that 'training' involves refining the probability distribution according to which the final decision is made. This suggestion is interesting, but the metaphysical postulates are quite extravagant and speculative. Finally, if we suggest something other than determinacy or randomness for the operation of the 'will', then its behavior seems unintelligible; perhaps it's really so, but I regard this as a last-resort explanation. Personally, I lean towards a compatibilist account of free will, since it retains the familiarity of deterministic explanations by appealing to so-called natural laws which govern the behavior and interaction of all natural things, including the 'will'. The main challenge for this account is to make sense of the apparent ability to have chosen to act otherwise, which presumably has to be "explained away".

D: On your first point, I think the crucial distinction to make between pre-Galileo geocentrism and free will is that the former is genuinely pre-science, and the latter is not. The cosmology into which geocentrism fitted was also founded on what I guess we may term folk scientific observations: religious belief on the one hand, and the visible appearance of all the stars and planets at various distances from the earth on all sides. Whereas now, we do have a cosmology that is based upon rigorous empiricism and testable theories. Does free will really fit into that? Geocentrism was as good as the other theories of its day. Free will is of a lower order than the postulates that we accept today. I would agree that it is reasonable for children to believe in free will. I would agree that it is reasonable for a 15th-century European to believe in free will (but did he?). I am not sure that it is reasonable for us, even in the geocentrism-equivalence case.

D: I see. So long as points one to four are modifications of the folk theory concept required to make it useful for us as an analytical device, I am fully agreed that we can use it. On the comparison between free will and religiosity (because it is the experience of religion, à la William James, and not the belief in some anthropomorphic creator being that I really refer to), I do think religious experiences are probably near universal in individuals. And if they are not, I resist saying that they are less universal than the experience of free will. Here is an interesting question: are pre-literate individuals capable of experiencing the concept of free will? When I am not deliberating consciously, I presume that I am acting on instincts and impulses most of the time. Weighing the balances, so to speak, is something I do in language. But pre-literate people, or babies, must either not have this experience or have it to a much more limited extent. So in this case the experience of free will would be a cultural phenomenon, just like religiosity.

D: Yes, I haven't really got a great idea of what a free will advocate would demand. From a quick scan of the Stanford Encyclopedia of Philosophy and Wikipedia, it seems to me that free will, at least to an incompatibilist, involves being able to freely choose among alternative actions, such that the non-chosen actions were real counterfactual possibilities. Futures of many forking paths. From what I've read, the compatibilist account doesn't really explain very much, which is fine with me, as it really just offers a stylized description of deterministic behavior, with free will operating on a more abstract level—the absence of physical constraints—while, as you say, natural laws govern the operation of what we call the will. Indeed, the main defect you mention indicates that the compatibilist account I have considered hardly does anything at all, because we are still wondering why we think we have free will even when we don't.

R: It seems that part of the disagreement will come down to our view of the history of science, since I'm not sure that I believe in a categorical difference between what Ptolemy, Galileo, and Hubble were doing (e.g. pre-science, nascent science, mature science). Rather, I'm inclined to believe that their differences involve the amount of available information (largely restricted by the contemporary technology) and only slight variation in methodology. The common strand among these periods of science is the pursuit of the best explanation as supported by the known facts, a standard according to which the postulation of free will as an interpretation of our experience of choice appears to be legitimate and not of a lower order than competing theoretical postulates in (say) physics, which provides no account of this experience of choice, even though it has the merit of being much more mechanistically detailed.

R: Your consideration about free will in babies is very interesting, and I find your suggestion compelling that they probably have at least a severely diminished experience of free choice. As for pre-literate adults, I'm inclined to think that they would have at least a rudimentary experience of choice, of the variety which doesn't involve much deliberation. For example, so long as they are presumed to be self-conscious, it seems that they could still have the experience of controlling the movement of their own limbs and eyes, and of making simple choices about which rock to pick up from a large pile, and so on; so they probably have a more sophisticated experience of free will than a baby, but less than that of a literate adult.

R: I think you're right that a compatibilist account of free will can seem underwhelming, which is sometimes why opponents will criticize it as merely changing the subject to talk about something other than what is typically understood as free will, which presumably requires real counterfactual possibility, which is plainly inconsistent with determinism. Personally, I felt this way initially about compatibilism, but have grown to see it not as a cheap trick which merely labels some thin notion of choice as "free will", but rather as a reconciliation of two competing intuitions: a belief in a law-abiding universe, on the one hand, and our ordinary, pre-theoretic understanding of free will, on the other hand. The lesson to be learned is that, despite first appearances, a substantive understanding of free will does not require real counterfactual possibility and can be made consistent with determinism, hence compatibilism. What's retained in this revised understanding of free will is moral responsibility, a (causal) relation between desires/motivations and actions, and a substantive distinction between the ordinary murderer and a person who commits murder because of a brain tumor (so it's not "tumors all the way down" contra Sam Harris); so the thesis of compatibilism isn't insignificant, and we'll need to spell out exactly how these features can be retained given determinism.

D: I would disagree with your account, on the first comment, of the history of science. As David Wootton remarks in The Invention of Science, the methods are different. The fact is itself invented during the scientific revolution, and this implies a fundamental difference in how scholars compile evidence and argue. And even if they're along more of a continuum, the ends are still far enough apart to make what we do now qualitatively different from what was done then. I do not believe that the common strand that you pick out alone constitutes rigorous inquiry today. The fact that we do not possess knowledge of a higher order does not mean we are entitled to take for granted that our lower order assumptions are true.

D: Pre-literate adults may have a greater experience of control, just as they have a greater experience of life in general. However, that does not entail that they actually experience free will. For example, I have a greater experience of my physical movements, say, habitually picking my nose, than a baby does. However, most of the time I will not consciously decide that I am going to pick my nose, but rather do it instinctively. I'm fully aware that I am doing this sometimes, but no deliberation is necessarily required to bring about the act. I imagine that pre-literate people are acting in this way much, if not all, of the time.

D: The last point you bring up about moral responsibility is very interesting to me. I'm currently listening to another episode of that 80,000 Hours podcast, in which the host talks with the philosopher David Chalmers. The host advocates a position similar to yours: compatibilism with moral responsibility. I take issue with the host's position, however. How can we be morally responsible for having the impulses that we have? That doesn't really seem fair at all. If a man chooses to murder for personal gain because it is in his nature, he did not decide that this was going to be his nature, and consequently I am hesitant to assign him moral responsibility for it. Obviously, in a practical sense and an emotional one as well, I will certainly assign him responsibility, because he is a murderer and his nature compels him to be murderous. But philosophically speaking I find this to be much weaker.

R: Regarding my view on the history of science, the common strand which I tried to identify was simply IBE (Inference to the Best Explanation), which I earlier held as the standard of rational inquiry in all domains, though the criteria for "best explanation" take on a particular form in scientific inquiry. I'm curious which of the following best describes your view: (a) IBE is not an appropriate standard for rational inquiry; (b) IBE doesn't appropriately describe what the pre-modern scientists were doing; (c) IBE in its particular scientific formulation doesn't describe what the pre-modern scientists were doing; or (d) all of these are in fact true, but the pre-modern scientist simply didn't have enough information (e.g. factual evidence about the solar system) to warrant much confidence in their application of IBE in a scientific context; nevertheless, they can be excused for assigning more confidence to their inference than really warranted because they didn't know any better, whereas we (existing centuries after the advent of modern science) should know better. And how does your view justify the claims about lower-order and higher-order knowledge to which you allude?

R: I'll also respond to your point on moral responsibility, but I need more time to think about it

D: On your first comment, I think I believe that: 1) IBE does not describe what pre-modern scientists were doing because their evidentiary standards and methods were different from our own. For example, references to irrelevant theories and postulates and appeals to authority were accepted as valid argumentative practices, which might carry greater import than the (in)ability to reproduce experimental evidence. 2) The pre-modern scientist did not have enough evidence to warrant confidence in their application of IBE, but can partially be excused for this. We, on the other hand, know how strong our evidence is in other domains, such that we can say that an individual today having great confidence in a belief held on similar standards is unwarranted. "Higher-order" knowledge would be any scientific theory that has been strongly empirically validated or has been derived deductively from sure and valid principles. "Lower-order" knowledge would consist of postulates derived from commonsense notions (or any other similar criteria) but which are not backed by empirical evidence and theoretical frameworks. You can also add speculative scientific theories to this latter category, such as Gould's punctuated equilibrium or Dawkins's primordial soup. As a rationalist would say, you may be entitled to believe these things, but your epistemic status should be very low.

R: Your explication is very helpful, and I think I mostly agree with you, but here are a few comments: 1) Why wouldn't we say that they were still employing IBE but employed a slightly different weighting scheme for their explanatory virtues? Also, couldn't we say that it was reasonable for them to assign a lower weight to empirical evidence since the available evidence at the time was often of poor quality or open to multiple interpretations? I want to hold that the pre-modern natural philosophers were not doing anything fundamentally different in terms of seeking the best explanation for their observations; but they came to radically different conclusions largely because they had different (often insufficient) evidence available to them and because their theoretical assumptions about the nature of the world were different, but not irrational, even when influenced by religious precepts, which I don't believe are categorically unfit for motivating theoretical postulates, so long as they are subjected to empirical testing. 2) This interpretation is, in my view, the strongest. I think I agree with you that we should have less confidence in our natural interpretations of experiences by themselves compared to when they are accompanied/refined by a much more detailed, mechanistic, scientific account of these experiences which exhibits the typical theoretical virtues. I would caution that the mere possibility of a better explanation should not influence our confidence in a worse explanation, but I don't think you are saying that; rather, I understand you to mean that the pre-modern scientists were just as unwarranted in believing their "lower-order" inferences as we are, but that their irrationality was more forgivable due to ignorance, in which case I agree. As for free will, do we agree that it's justified to regard our natural interpretation of the experience of choice as legitimate evidence for the existence of free will, so long as we don't believe it with greater confidence than the laws of physics? If so, would you further grant that this evidence, if supplemented by compelling philosophical analysis (whose existence I'm not meaning to suggest), can outweigh the fact that free will is inconsistent with the laws of physics, even though there's currently no even remotely plausible reduction from psychology to physics? Or would you hold that the theoretical postulates of science should always override philosophical analysis which isn't (currently) understood in scientific terms? If the latter, then doesn't that unreasonably presume that the current understanding of science is exhaustive?

R: Returning to the question of moral responsibility in a deterministic world, it seems that the big question is whether moral responsibility requires that a person retains the ability to have acted otherwise than they in fact did. If they had no ability to act otherwise, then they can't really be deemed responsible, so the argument goes. I'm undecided on the success of this reasoning, but I'll try to give the argument against it. To do so, consider the following thought experiment by Harry Frankfurt: "Jones has resolved to shoot Smith. Black has learned of Jones’s plan and wants Jones to shoot Smith. But Black would prefer that Jones shoot Smith on his own. However, concerned that Jones might waver in his resolve to shoot Smith, Black secretly arranges things so that, if Jones should show any sign at all that he will not shoot Smith (something Black has the resources to detect), Black will be able to manipulate Jones in such a way that Jones will shoot Smith. As things transpire, Jones follows through with his plans and shoots Smith for his own reasons. No one else in any way threatened or coerced Jones, offered Jones a bribe, or even suggested that he shoot Smith. Jones shot Smith under his own steam. Black never intervened." (SEP: Compatibilism) The suggestion is that Jones clearly acted freely, since, after all, Black never interfered with or even influenced Jones's actions. Nevertheless, Jones lacked the ability to have not shot Smith. So it seems that the aspect of free will which confers moral responsibility doesn't really require the ability to have acted otherwise; rather, what's important seems to be that a person acted according to their will. At this point, I think it's helpful to distinguish between two types of responsibility, what we might call "proximate responsibility" and "ultimate responsibility". The former merely requires that a person's will was the proximate cause of their decision, granting the possibility that this will was completely determined by the person's nature and environment. The latter requires that a person has complete control over their own will, independently of their nature and environment. I think we both regard ultimate responsibility as an illusion, and accordingly reject it. What Frankfurt's thought experiment suggests is that proximate responsibility is sufficient for grounding moral responsibility. Notably, proximate responsibility is consistent with determinism, hence so is moral responsibility. A final observation is that this notion of "proximate responsibility" allows us to distinguish between the case where someone commits murder because it was in accordance with their will (morally responsible) and the case where someone commits murder because they were physically forced to do so (morally innocent). It's tempting to say that, in either case, the person's actions were determined by factors outside of their control, and so it's silly to talk about responsibility; but analyzing things in terms of proximate responsibility explains our intuition that the two cases are significantly different in terms of moral responsibility. Finally, the case of a person who commits murder due to a tumor is more complicated, because presumably they did act according to their will, but the will was corrupted by the tumor. So is the person responsible once they've had the tumor removed, and thereby the urge to kill alleviated? 
I'm inclined to suggest that once the person has the tumor removed, they become a fundamentally different person, so that they can't be held morally responsible for the actions of what's effectively another person; but this raises the question of why any criminal who has a subsequent change of heart doesn't thereby become a different person who's no longer morally responsible, and it seems that the difference here is that their change was more gradual whereas the change in the case of the tumor was sudden, but this is a pretty weak justification.

D: 1) Not every weighting scheme is consistent with IBE. Certain weighting schemes might be consistent with rational behavior across multiple objective functions—social, economic, and political considerations—but inconsistent with IBE. I could be persuaded to grant that most of the deviation in conclusions stems from a different evidentiary basis, but to me the different results of the scientific revolution stem not from path dependence on previously-achieved findings but from… a fundamental change in methodology! People were willing to test, replicate, and debate in ways that were novel. I think it is unlikely that IBE considerations were the primary force impelling thinkers to incorporate religion. 2) No, you're right. I am saying that we should be aware of the methods by which we attain knowledge and accord it due status based on the rigor used, irrespective of whether we believe that better explanations are possible. With my Portugal paper, for example, I know that better explanations would be possible with more advanced methods, which would in turn be enabled if better data were available; consequently my conclusions remain tentative. It's not an exact analogy, but I think it illustrates my point. Since I don't see a black-and-white distinction between "the laws of physics" and "philosophical analysis," I would accept that the latter in combination with evidence could override the former. It would have to be very compelling, simply because the analysis underlying physics is very compelling. But I would caution that we do not yet understand what our observation of free will is evidence of.

D: I think I inherently understood the difference between proximate and ultimate responsibility, but your example is illustrative and may hopefully give us some grounds for discussion. In the Frankfurt scenario, Jones shoots Smith free of sociological constraints on his behavior. Supposing that Jones was not the son of a homicidal maniac with a Napoleon complex, he felt no social pressure to commit the crime and, more likely, was biased against the act by our peaceable society, which judges murder harshly. In this scenario, free will seems only to indicate that our individual historical trajectories are able to develop according to our natures. But then proximate responsibility, to my mind, is little more than a description of the sociological factors underlying the crime rather than a moral statement, because you are simply ignoring the fact that Jones did not have any ultimate control over his nature. I even struggle with the tumor example, because I tend to see mental illness as a continuum, with some people being more touched in the head than others, and in different ways. Having the tumor removed is akin to changing the nature of the individual. Presumably he will no longer commit crimes. Either way, assigning him moral responsibility or not just seems totally inappropriate. On the one hand, he did commit the crime. The family of the victim will always see him as the perpetrator. On the other hand, he is unlikely to commit any more murders in his life, may sincerely repent of his act, and may be a fundamentally kind person from now on. I don't even know what to do with moral responsibility here. If you screamed "murderer" at Jones, he would say that he was deeply sorry and that he also condemns murder. If you asked him whether he had committed the crime, he would say that he had. If moral responsibility just describes the fact, then he is morally responsible, I guess.

R: I would say that IBE as a method is independent of the weighting scheme. So long as one's weighting scheme looks roughly like an ordered list of theoretical virtues, then IBE simply dictates that they should believe the "best" explanation according to their own weighting scheme. It seems like you're suggesting that some weighting schemes don't even really capture the concept of the "best" explanation, presumably because they're so different from our own weighting scheme. If that's your interpretation, then while technically the method of the pre-modern scientists could still be classified as IBE, I would agree that their approach would be so fundamentally different as to warrant a different name. More significantly, however, I'm actually inclined to believe that their weighting scheme was not so radically different from our own, only slightly different, and that the radical difference in application is mostly due to differences in the available evidence, not to differences in methodology. But I acknowledge that I may inadvertently be projecting my modern epistemological sensibilities onto what I read about pre-modern science, so I'm going to reflect more carefully on this. On the remaining points, it seems that we're pretty much in agreement.

R: On moral responsibility, I think we need to get clear on what moral responsibility entails. It seems that your current understanding requires what I called ultimate responsibility, in which case you rightly reject its existence. Frankfurt's thought experiment is meant to evoke the intuition that Jones is morally responsible, despite the fact that he isn't ultimately responsible. It seems to me that you share this immediate intuition, but then reject it because, upon reflection, you realize that Jones isn't ultimately responsible, and so can't really be morally responsible either. What I want to suggest, based on the thought experiment, is that there is some understanding of moral responsibility which doesn't require ultimate responsibility in order to capture cases like those of Jones while distinguishing them from cases where (for example) someone commits a murder because they were physically compelled to, in which case the same analysis that this person was ultimately a product of their nature and environment applies, but we have a different intuition regarding their moral responsibility. Namely, Jones acted according to his will when committing murder (hence morally responsible), whereas this other person acted against their will (hence morally innocent). Given this modified understanding of moral responsibility (based on proximate rather than ultimate responsibility), what do we lose, retain, or potentially even gain? Well, it seems that we lose the concept of "evil people" whose corrupted desire to do bad things transcends both their nature and environment; to this, I say good riddance! There are no evil people, just people who do bad things, for reasons ultimately outside of their own control. What we retain is the justification behind scolding a child who willingly steals candy, because we recognize the good that our scolding will achieve in terms of improving the will of the child for the next time he has the desire to steal; this justification would not apply in the case where a child is forced to steal candy, and so corroborates our intuitions about the point of moral responsibility. I think we also retain some justification for punishing people who willingly commit crimes (which is not to say that our current criminal justice system is ideal) for a similar reason that our punishment may reform their will but also, and more importantly, because sequestering them will protect other innocent people. So this seems to me to capture the essence of moral responsibility, not some cosmic notion of guilt or evil, but something like blameworthiness, i.e., the notion that it makes sense to scold or punish someone who willingly and knowingly does something bad, precisely because of the effect that our blame will have on reforming their will. Understood in this way, it allows us to see why the person who commits murder because of a tumor is not morally responsible, because his will was able to be reformed without the need to blame him. As a final intermediate case, consider the bully who preys on weaker children at school because his own father abuses him at home. Is the bully morally responsible? The bully's will is indeed the proximate cause of his decision to harass his peers, and so in that sense he is morally responsible. Yet I suspect that this comports only partially with our intuitions, since it's also tempting to look at the father's abusive behavior like a tumor corrupting the bully's will, especially since he's just a child. 
Since our refined understanding of moral responsibility is grounded in the utility of blaming the responsible party, we might say that a more effective response to the bully would involve a combination of punishment and compassion to the extent that this will reform his will. This example seems to suggest that our understanding of moral responsibility needs to exist on a spectrum, and perhaps this can be accommodated by examining the number of levels for which a person's will was the proximate cause of their action, as in the sketch below. For example, in the case of the bully, his will was the proximate cause of his behavior, but his father's will was the proximate cause of the child's corrupted will, and so he's only responsible for one level. In the case of the person who willingly commits murder, his will was the proximate cause of the murder, and perhaps his decision to murder was proximately caused by his decision to join a gang, and this decision was proximately caused by his decision to do drugs, but this decision was proximately caused by peer pressure; and so he's responsible for three levels. The main issue with this analysis seems to be a kind of arbitrariness in how we determine the proximate cause, but maybe there's a way to make it more systematic.
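Here is a toy programming sketch of that counting scheme. It is purely a hypothetical illustration: the causal chains are simply stipulated by hand as lists ordered from the action backwards, and we count the agent's own willed choices before hitting the first exogenous cause.

```python
# Toy model of "levels of proximate responsibility": walk backwards along
# a stipulated causal chain and count how many consecutive links were the
# agent's own willed choices before an exogenous cause appears.
# The chains below are hypothetical illustrations from the discussion.

def willed_levels(chain):
    """Count the leading links in `chain` (ordered from the action
    backwards) that were the agent's own willed choices."""
    levels = 0
    for own_choice, _description in chain:
        if not own_choice:
            break  # reached an exogenous cause (another's will, etc.)
        levels += 1
    return levels

# The bully: his will caused the bullying, but his father's abuse
# shaped that will, so he is responsible for only one level.
bully = [
    (True, "chooses to bully his peers"),
    (False, "father's abuse corrupted his will"),
]

# The gang member: three willed choices precede an exogenous cause.
murderer = [
    (True, "chooses to murder"),
    (True, "chose to join a gang"),
    (True, "chose to do drugs"),
    (False, "peer pressure"),
]

print(willed_levels(bully))     # 1
print(willed_levels(murderer))  # 3
```

Notice how the sketch makes the arbitrariness vivid: the result depends entirely on how the chain is stipulated and on where one declares a cause "exogenous".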

D: On IBE, I agree that this is worth further reflection. As I continue to listen to that history of the scientific revolution, my conviction that this period represents a seminal methodological break with the past is deepening. The very notion of experimental proof had to be invented, as something that was different from, and just as acceptable as, witness testimony. The concept of natural laws as regularities in the ordering of the cosmos had to be invented. A relationship between those laws and the divinity had to be established that circumscribed the role of the divinity in altering them. Authors like William Gilbert, for example, had to write justifications in their works for why they had cited the Greeks so infrequently: the answer, of course, was that he (controversially) had found their knowledge of magnetics to be bunk! But experimental proof was not then perfectly trustworthy.

D: That said, I generally believe in a Smithian kind of morality in which we are guided toward moral acts by an internal "spectator" who imagines the responses of our peers to what we are doing. That spectator is in turn cultivated through socialization, i.e. actual experience of others' behavior. So perhaps "blameworthiness" can also have some connotation here.

D: I apologize for my crude interpretation of Frankfurt's parable. Perhaps my intuitions about the meaning of morality, emotional as they are, serve me badly here. I agree that we can make a factual distinction between instances in which bad acts are coerced and when they are not. I also agree that if we redefine moral responsibility to mean that "an individual has acted according to his will and thus should accept the consequences," and if we make free will "acting free of physical (at the macro level, say, if someone is holding a gun to your head) or sociological coercion," then free will and moral responsibility may be applied to the analysis of human behavior and make these kinds of distinctions. However, I am not convinced that this responsibility is really moral at all. Morality encompasses actions of good and bad. But "blameworthiness" in this formulation encompasses only acts that help or harm society. Why shouldn't a child steal a candy bar? Not because it isn't fair, because we have no transcendent notion of fairness, but because the preservation of an orderly, harmonious society requires the protection of private property. The enforcement of property relations is better served when individuals are restrained by their belief in justice, rather than simply by rules. We seek to reform or imprison men because their freedom endangers us, and blaming and scolding are similarly utilitarian tools for self-defense. Where is morality? I guess the only role I can see for blame is literally in saying, "you did this." Finally, I am unconvinced of the true relevance of proximate-cause considerations here. What difference does it make what the proximate cause of the most proximate cause (the will) is, so long as the will is what it is? How is moral responsibility different between a bully whose will is shaped by an evil father and a bully who was born nasty despite his parents' efforts to reform him? I agree that we may reform the former and struggle to redeem the latter, but I don't see where morality comes in. Help, enlighten me!

R: I wonder whether our discussion about the compatibility between determinism and a notion of free will which supports moral responsibility is being confounded by our discussion about moral realism. If you reject moral facts—say, because you think morality is just a useful fiction to motivate people to behave in pro-social ways—then naturally you will also reject facts about moral responsibility, but this point is independent of the question of whether determinism is compatible with moral responsibility. If you grant moral facts (at least for the sake of this discussion), then this would seem to include facts like "it's (generally) wrong to steal", about which we can go on to ask whether a person is morally responsible for stealing, especially given determinism; I'm suggesting that the answer to this latter question exists on a spectrum from "yes" to "no", depending on the extent of the person's agency, i.e. the extent to which the reason for the person's act of stealing can be attributed to their personal free choices. Crucially, this definition rests upon a notion of "free choice" despite determinism, which is what Frankfurt's thought experiment is intended to demonstrate, since Jones apparently freely chose to shoot Smith, even though this outcome was technically unavoidable. Do you grant that a meaningful notion of "free choice" or "agency" can be established even on determinism, namely one which says that a person acts freely if they act according to their own reasons without interference from others? If not, how do you make sense of the apparent difference in terms of agency between the person who acts according to his own reasons—even granting that those reasons are determined by factors completely outside of his control—and the person who acts under duress? If you grant that people sometimes have agency, then we can move on to discussing moral responsibility, whose existence I think stems quite naturally from this ability. We simply say that a person is morally responsible when they freely choose to act in a way that is considered morally good or bad (we shouldn't forget that moral responsibility entails both moral praise and moral blame, though the focus understandably tends to be on blame). Notably, agency exists on a spectrum just like moral responsibility; and so a young child who steals candy is less morally responsible than a typical adult, who has more agency in virtue of a greater ability to deliberate and act according to reasons rather than mere instincts. As for the relevance of proximate-cause considerations, I tentatively proposed this as a possible way to assess a person's agency with respect to a particular action. The guiding intuition was that if a person acts according to their will, but their will is directly corrupted by some exogenous force such as a tumor or abusive father, then they don't have nearly as much agency as someone whose will is the result of a series of free choices. But, upon reflection, I don't think this analysis is sensible, both because of the apparent arbitrariness of identifying proximate causes and also because, as you point out, if a person's will is corrupted by nature, then their "series of free choices" is hardly "free" after all, even if each choice is in accordance with their will. Instead, I suggest we assess a person's agency in terms of their capacity to act according to reasons, as I hinted at above.
On this view, the person whose will is corrupted by nature, such that they are instinctively violent and unamenable to reasons, would have minimal agency, hence minimal moral responsibility; they would resemble a lion more so than a typical adult human in terms of agency. Finally, there's a lingering question about the purpose of labeling someone "morally responsible"; what can be concluded from the fact that someone is morally responsible, and what should be done in response? I had suggested that the primary purpose of labeling someone "morally responsible" isn't to consider them either evil or saintly, but rather to deem their action blameworthy or praiseworthy. This can be spelled out in a few different ways, each of which captures some aspect of moral responsibility: [1] we praise/blame moral agents in order to encourage/discourage their future actions; [2] we praise/blame as a reaction to the inferred attitudes of a moral agent who acts well (reflecting goodwill, affection, or esteem) or badly (reflecting contempt, indifference, or malevolence) towards us; [3] we praise/blame moral agents in recognition of their responsiveness to reasons (in virtue of being moral agents), and so we're attempting to positively influence their process of rational deliberation on moral issues; this last is closest to the view which I was describing in my previous message, when I talked about "reforming a person's will". To summarize: We first establish the existence of agency, despite determinism, as motivated by Frankfurt cases. Then we define moral responsibility in terms of agency with respect to morally significant actions. Then we assess a person's agency on a spectrum based on the extent to which they acted on the basis of personal reasons. Finally, we understand the practical function/meaning of "moral responsibility" in terms of incentives, reactions to attitudes, and responsiveness to reasons.

D: I see. I think you are right that I have confounded the moral realism discussion with the moral responsibility one, and for that I apologize. However, I am not certain that one can simply grant moral facts for the purpose of this discussion, because it then impinges upon the purpose of asserting or not asserting moral responsibility. I will certainly grant that whether or not one is definitionally responsible according to your formulation is independent of moral fact. What I deeply question is the distinction you are making between the active influence of others and the natural will itself, whose action you are deeming free. Is there really a great distinction, epistemologically, between he who steals because there was a gun to his head, he who steals because his father normalized the behavior in his childhood, and he who is naturally sociopathic and acquisitive? I will grant that there is a social function, [1] and [3], in asserting responsibility for performing an act. I will also grant that we often do [2] in practice; as I have hinted, this is the Smithian moral sentimental system in action. But to call this "moral responsibility" rather than an artificially defined "responsibility" requires the assertion of some kind of moral fact or rationale for deeming certain acts moral or immoral. So maybe we can attribute this parcellized responsibility without discussing moral realism. But I'm not sure that we can justify doing this without either moral facts (it is good to make people better reasoners about moral behavior—but is that not simply self-justification??) or appealing to the social function of reform. To summarize: I am not certain that the kind of agency you describe is meaningfully distinct from cases of lacking agency. I can see the utilitarian objective of separating the potentially reformable from the irredeemable, irrespective of whether we think both are worth our charity. So in calling this agent-responsibility moral, you are assuming what you are trying to prove, in my opinion.

D: Moreover, if you accept my critique that identifying proximate causes is unworkable, then I think your distinction between individuals who are amenable to reason and those who are not just means that moral people are those who listen to you when you say no and immoral people are those who don't. We're not even talking about responsibility in the same sense any more—an abused child who sins and won't reform is worse off than someone who sins of "their own volition" and will reform. Say both of them killed a man. You'd deem the latter more morally responsible, but you would also call him "more amenable" to reason.

D: So, I think it is incumbent on you to help us understand 1) how we can make better distinctions about responsibility, which I still think are incredibly fuzzy, and 2) why we should add the word "moral" to the general term responsibility—i.e. beyond our collective acceptance that they performed an act without "coercion".

D: Sorry if my argument meanders a bit. Hopefully the last summary paragraph clears things up and extricates me somewhat from the confounding.

D: I should also clarify that I am not simply biased against your arguments on either moral responsibility OR realism because I act as though they are true in my daily life!

R: I think we'll be able to make more progress during our live discussion today since it should allow us to more swiftly and directly engage in a back-and-forth on each particular point. For now, I'll just make a few quick responses to your comments: On the prospect of granting moral facts, my question would be whether you would believe in moral responsibility if (ceteris paribus) you were to believe in full-on free will. If not, presumably because you'd still be a moral anti-realist, then that would seem to confound our discussion about compatibilism, since no articulation of the compatibility between determinism and a kind of free will which supports moral responsibility will seem genuinely moral, since that would require there to be moral facts! Since I get that it's a bit unfair to ask you to just concede moral realism for the sake of this discussion, I think we should just try to keep this distinction in mind going forward. On distinguishing between the natural will and the influence of others, I think there's one important case which you're not considering, namely the person who stole simply because the opportunity presented itself and, upon consideration of the reasons, he thought he could probably get away with it and didn't really care about the stranger he was stealing from. Such a person is perfectly normal, not sociopathic or traumatized or acting under duress. He doesn't fail to realize that stealing is considered wrong, he just either rationalizes this particular instance or ignores morality (again, not in general, just in this instance). Such a person I would describe as having agency (with respect to this particular act of stealing), since he is capable of deliberating upon the reasons for stealing, understanding their consequences, and ultimately making a decision about whether or not to steal, independently of the interference of others; this is unlike the other cases which you mentioned, where I would say their agency is diminished to varying extents by the influence of sociopathy or childhood trauma or the threat of a gun. The flaw in your reasoning, it seems to me, is in (tacitly) suggesting that because a person's decision is ultimately determined by factors outside of his control (i.e. nature and environment), his will is therefore nothing more than an epiphenomenon, i.e., an impotent byproduct of circumstances decided long ago whose apparent causal power is merely illusory. What I want to suggest is that willful deliberation on reasons is not a mere epiphenomenon, but a process which is genuinely causally efficacious in facilitating a person's decision. Once this is granted, the difference between the case I described of a normal person who steals opportunistically and the cases of sociopathy, childhood trauma, and stealing under duress should seem more significant. As for why we should consider this form of responsibility "moral", I think the confounding influence of our discussion on moral realism is reappearing. But even as a moral anti-realist, shouldn't you be willing to grant that some acts are morally significant (like stealing or killing) whereas others are amoral (such as reading a book)? If so, then what's the difficulty with calling the attribution of responsibility for morally significant actions "moral responsibility"? I think I may be partially missing your point. Finally, the point of determining whether someone is amenable to reasons is to assess their agency, i.e.
ability to make genuine decisions, not to say that "moral people are those who listen to you when you say no and immoral people are those who don't", as admittedly funny as that does sound ;) Likewise, concerning the abused child vs. the ordinary sinner, I would agree that the former is worse off because his moral agency has been diminished; he's no longer as capable of making choices due to childhood trauma, instead having been relegated to instinctively reproducing the actions of his abuser. Thus, the abused child is less morally responsible (if at all) than the ordinary sinner, who is capable of making choices with respect to morally significant actions in virtue of retaining rational capacities unhindered by any childhood trauma. So, I agree with what you're saying, but why is this conclusion supposed to be problematic?

R: Right, if anything, the bias is in my favor since we both intuitively believe in moral responsibility. Although, it sometimes feels like my position is more difficult since we both also agree that our decisions are governed by laws, and so I'm having to make a positive case for the compatibility of determinism with moral responsibility, whereas it's usually easier to poke holes as a skeptic ;) Regardless, I'm really enjoying this back-and-forth, and I already feel that I've learned a lot in attempting to defend the compatibilist's position.

D: I may not be able to answer these questions before we meet. I will just reply to your last remark: I wasn't saying that I thought your position was easy, but rather I was attempting to show that I am neither nitpicking nor fundamentally opposed to what you're saying. I agree that my skepticism is probably the easier position, and I regret that the positive system that I would put into place is actually loathsome enough that I don't want to argue for it. But I'm also really enjoying this back and forth and I think that it may lead us to some real growth.

D: Emotionally we are in accord, but rationally I despair.

R: These are some summary notes from today's discussion so I don't forget what we learned. Let me know if I'm missing anything

R: (1) The notion of moral responsibility can't be disentangled from a moral system, and so insofar as one's moral system is conflicted, say due to competing intuitions about particular cases and general views about moral anti-realism, we can't really resolve the question of moral responsibility before settling the moral system.
(2) Agency consists of two individually necessary and jointly sufficient components: acting on the basis of deliberation upon reasons and acting in accordance with one's will. The former is necessary in order to exclude cases where one's will is corrupted by nature, as in the tumor case; the latter is necessary in order to exclude cases where someone is forced to act against their will under duress.
(3) Moral responsibility stems naturally from agency in the following way: a person is morally responsible only to the extent that he acts as an agent with respect to some morally significant action.
(4) There are at least three commonly acknowledged functions/aspects of moral responsibility which appear to be supported by this definition: incentivizing future behavior, reacting to inferred attitudes, and influencing the agent's will by providing them with reasons.
(5) Reflective equilibrium is the process of rational inquiry by which we develop our moral beliefs. We start with concrete moral intuitions, attempt to explain them by appealing to general principles which exemplify theoretical virtues, and continue to refine both our intuitions and principles through an ongoing process of rational deliberation, which consists in reconciling our intuitions with our general principles (a toy sketch follows below).
(6) Moral realism posits that there are moral facts in the sense that we can be wrong about our moral beliefs. The moral truths are said to be that collection of moral intuitions and principles arrived at by a perfectly rational person who has deliberated upon all possible moral scenarios and is comparable to us in the morally relevant respects (e.g. human nature, social circumstances, etc.).
(7) The scope of a moral truth depends upon its basis. A moral truth grounded in human nature applies to all humans. A moral truth grounded in a particular society's convention applies only to that society. And so on…
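
R: As an addendum to point (5), here is a toy programming sketch of the reflective equilibrium loop. It is a hypothetical illustration only: the cases, judgments, and confidence numbers are invented stand-ins for the real work of moral deliberation, and the "clash" test is deliberately crude.

```python
# Toy model of reflective equilibrium: test a general principle against
# concrete intuitions, and whenever they clash, amend whichever side we
# hold with less confidence, until the two are stable (equilibrium).
# All cases and confidence values here are hypothetical.

intuitions = {  # case -> (judgment, confidence)
    "steal candy": ("wrong", 0.9),
    "steal bread to survive": ("permissible", 0.8),
}
principle = ("stealing is wrong", 0.6)  # (statement, confidence)

def clashes(intuitions, principle):
    """Cases of stealing the principle condemns but intuition permits,
    ignoring cases already carved out as exceptions in the principle."""
    statement, _ = principle
    return [case for case, (judgment, _) in intuitions.items()
            if "steal" in case and judgment != "wrong"
            and case not in statement]

while True:
    conflict_cases = clashes(intuitions, principle)
    if not conflict_cases:
        break  # equilibrium: principle and intuitions are reconciled
    case = conflict_cases[0]
    statement, principle_conf = principle
    _judgment, intuition_conf = intuitions[case]
    if intuition_conf > principle_conf:
        # the intuition wins: amend the principle with an exception
        principle = (f"{statement}, except: {case}", principle_conf)
    else:
        # the principle wins: discard the conflicting intuition
        intuitions[case] = ("wrong", principle_conf)

print(principle)  # ('stealing is wrong, except: steal bread to survive', 0.6)
```

One design note: in this toy, which side yields is decided purely by comparing confidence values, echoing the idea in point (5) that both intuitions and principles are candidates for revision.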

D: I have no objection to our moral responsibility conclusion. It appears to be an eminently reasonable line and seems to cover any objection that I would muster. Shockingly, allowing for more than a single dimension makes our definition much more flexible! Is reflective equilibrium your coinage? If so, I strongly approve. The process seems like a very intuitive description of how we come to settle upon ideas in practically all fields of knowledge. For we do not lightly discard our intuitions and preconceptions, but rather strive to incorporate them into our logical structures. The question remains, however, why we should believe that our intuitions about morality reflect things that we should do. Moreover, I am reluctant to accept that the moral truths are those that would be attained by a perfectly rational individual. Perfectly rational with respect to what? Homo economicus is perfectly rational with respect to his lifetime consumption—he maximizes an objective function according to a clear metric. How is our moral philosopher to be perfectly rational? If he is epistemically rational, he systematically works to expand his knowledge; if he is instrumentally rational, he works to achieve his values. But can he be rational in attempting to glean these values? You might say that he works from his intuitions. But why should he believe that they are the correct intuitions? We make mistakes, have prejudices and biases. Is he rational in that he has no biases? I am uncomfortable with this assertion.

R: I wish I had coined "reflective equilibrium", but I actually stole it from John Rawls. I think my usage pretty closely corresponds to his usage, but I don't claim to be representing his views perfectly when I borrow the term for myself. Also, by perfectly rational, I just meant that this person didn't make any logical errors when deducing prescriptions in concrete cases from general principles or when identifying contradictions and so on; so rational in this sense doesn't have any normative connotations, except in the sense that the rationality is being applied to moral propositions. As for why the moral beliefs arrived at through reflective equilibrium correspond to what we really should do, I think it's because we should do what we're justified in believing we should do, which is precisely what reflective equilibrium gives us. Let me restate that more generally: Reflective equilibrium is a method which tailors our beliefs to justifications, so that, when followed properly, we end up believing only that which we are justified in believing. Justification is achieved because our considered judgments are continually subjected to scrutiny, forcing those which fail this test to be either discarded or amended, leaving behind only those which pass this test, and hence are justified. More can certainly be said about this point of whether reflective equilibrium achieves justification, but I'll leave it at that for now; if you want more details, I'd recommend the essay "Is Reflective Equilibrium Enough?" by Princeton philosophers Kelly and McGrath. At this point, one might still object: Just because a belief is justified doesn't mean it's true! Certainly, I agree. But we should act according to what we're justified in believing, not according to what is true. For example, when considering whether to purchase a lottery ticket, we're only justified in believing that the ticket is probably a loser. Nevertheless, the ticket may in fact be a winner. But we should act according to what we're justified in believing, and so abstain from purchasing the ticket (unless we don't care about most likely losing). If some special reason can be given for thinking that this ticket in particular is probably a winner, then the validity of that special reason should be assessed precisely according to the method of reflective equilibrium. For example, suppose the special reason is that "I have a good feeling about this one!" Then a sober analysis of this principle would reveal numerous cases where such an inference was fallacious. Perhaps those specific instances could be discarded, but their prevalence and coherence with other general principles would make this extremely difficult according to the method of reflective equilibrium. So perhaps the special reason could be amended instead, and these attempts would continue, just as prescribed by the method, until at last either a legitimate special reason could be discovered or else the search was given up due to a string of failures. The point of this example is that concerns about potentially "missing out" on the truth due to restricting the guidance for our actions only to justified beliefs are unwarranted since any legitimate exception is able to be incorporated into our beliefs precisely via the method of reflective equilibrium. Finally, there may be some lingering concerns about the gap between justified beliefs and the truth. 
What about cases where the truth is simply unattainable via reasoned reflection, perhaps because our cognitive faculties are fundamentally limited? What I want to suggest is that these concerns are irrelevant, especially in the case of ethical beliefs. The question of how we should act is an unmistakably practical one. Therefore, questions about how to act seem to be inextricable from questions about justification in a way that questions about the nature of the world are not, since the nature of the world might well be quite different from what we're justified in believing about it. Indeed, it's difficult to imagine how it could be true that we ought to do something without also imagining that we could be rationally convinced of acting this way; after all, what would be the source of the obligation? That's not to say that reason alone grounds morality, since it may need to rely upon foundational value judgments. Rather, what I'm saying is that if I'm told that I really should act one way, then I should be able to ask why, and if a reason can't be provided, then that would seem to undermine the claim that I really should act that way.

R: By the way, the philosopher and legal scholar John Mikhail gave a wonderful response here (59:10 - 1:10:18) to a question very much like yours where he explains why nothing more than reflective equilibrium should sensibly be desired in order to determine how we should act. Here's a transcript: Question: In moral philosophy, a lot of philosophers talk about the notion of normativity. Can we and how should we bridge the gap between moral psychology and normative ethics? Answer: A substantial part of my book was devoted to answering that question, and the basic idea which I tried to elaborate was one which Rawls first articulated, which attempts to bridge the gap between descriptive moral psychology and normative moral philosophy. Rawls' notion is what he called reflective equilibrium. I tried to argue in my book that many philosophers got Rawls wrong when they failed to see the substantial role that descriptive moral psychology plays in his notion of reflective equilibrium. Rawls coined the term as a way to describe what an older literature, going back to Nelson Goodman and others, had written about the problem of induction. The same kind of normativity question can arise in the problem of induction. When we theorize about induction, are we trying to justify inductive inferences or just describe the way that people actually make inductive inferences? Goodman argued in his book "Fact, Fiction, and Forecast" that although the tradition had cast David Hume in a bad light for confusing the two questions, in fact Goodman's view was that philosophers owed what he called a "belated apology" to Hume. This is because in describing instances of sound inductive inference, Hume was in fact explicating what sound inductive inference ought to be, in the normative sense. Rawls was quite right to see that something similar goes on in the case of moral philosophy. He understood that in fact it's true that virtually every moral philosopher who engages in normative theorizing at one point or another relies upon moral intuitions to either validate or falsify the normative theory. In other words, there's no moral theorizing without moral intuitions. In light of that, we ought to go ahead and describe and explain those moral intuitions to the best of our abilities. When done in the right way—and this is the connection to the case of induction—that is, if we explicate what Rawls called "considered judgments" (as opposed to just any old moral judgments), the result of our efforts will be a theory that is simultaneously descriptively adequate and normatively adequate. Why? Partly because these "considered judgments" are the ones that pre-theoretically we think are normatively sound. We begin with judgments in which we have a high degree of confidence. Rawls used as an example the judgments that exemplify the principle that racial discrimination is unjust. We then try to explicate the deeper principles behind such a judgment. So, if the set of judgments which we're trying to explain is itself already normatively laden, and we're able to state in an adequate way the principles from which those judgments derive, and, upon reflection, once we've pruned both the principles and the judgments, we arrive at a stable state of affairs (i.e. where we feel that both the principles and the judgments are correct), then we will have reached what Rawls called a state of reflective equilibrium.
Having achieved this point—after having scrutinized these principles and considered all the criticism from the social sciences, history, economics, and any number of other critical endeavors—if we still affirm the judgments and the principles even after this process, the question presents itself: What more do you want? You now have a theory of moral judgment that (1) is descriptively adequate, in the sense that it purports to describe the actual operation of moral competence; (2) explains what you already took to be the sound moral judgments; and (3) has been subjected to the maximum criticism of facts and logic. The result seems to be a theory which is normatively adequate. In other words, you are justified in holding those judgments and principles. Additionally, you might link all of this to a theory of human nature, so that the principles are not just adventitious (i.e. happened to have been internalized from your environment) but are actually a deep reflection of innate human nature. At that point, moral skepticism and the perennial worries which beset moral theorizing begin to lose their force. If you can show that this system is both descriptively and normatively adequate and can also justify commonly held human rights norms which transcend cultural relativism, the question persists: What are you seeking that this theory doesn't provide? One potentially lingering desideratum is for this theory to describe objective mind-independent moral reality. The entire theory is located within the head whilst buttressed by an external theory of justification for why these are sound moral principles to maintain. However, the theory is not metaphysically ambitious in the way that a robust metaphysical realist might demand. As such, that may be a cost of the theory in some people's eyes. In my view, it's not a cost but rather a virtue. I think history should have persuaded us by now that the more robust theory is probably not achievable but that this alternative may satisfy any reasonable demand of a normative ethical theory. Notably, we are not relegated to total subjectivity. We do still maintain a kind of mind-dependent objectivity. As an analogy, consider the Euclidean character of physical space as we perceive it through our visual apparatus. That's not metaphysically real in the sense that scientists don't consider it to be actually true of the nature of physical space. It's mind-dependent, but it's substantially objective with regard to how we perceive space. That's nontrivial and actually rather ambitious in relation to the philosophical tradition. It just refuses to transcend the limits of scientific epistemology. As another example, take color perception. It's not clear what it would even mean to say that colors are an objectively mind-independent facet of reality. Nevertheless, we don't groan over the "illusory" status of colors. They are taken to be real in a fundamental and important way. Indeed, everyone universally agrees that a shirt is "blue" or "red", and this characteristic is taken to be real in a meaningful sense. If moral perceptions and the properties that we intuitively and spontaneously posit regarding human actions are mind-dependent in this same way as colors, then what is the problem? It doesn't make moral facts any less real than facts regarding color perception.
If moral knowledge fails to satisfy some epistemological standard by which even something as basic as color is unknowable, then that's reason to doubt the validity of the standard rather than to persist in our untenable skepticism. We should be happy, not regretful, if moral facts and properties and notions of justice turn out to be as real as colors or as real as the apparently Euclidean nature of physical space.

19. Metaphysics of Identity

R: As for being skeptical about answers to those questions like whether identity is preserved over digitization, I share your skepticism and tend to go further by suggesting that it's a philosophical error to treat our ordinary concepts (like "personal identity") as rigorously defined by applying them to contexts far beyond what they were developed for. Another example is the ship of Theseus, where a wooden ship is gradually replaced board by board, until every single board has been replaced. By the end, is it still the same ship? I say that our concept of "the same ship" was not developed for such cases, and so there's no meaning to the question; rather, we have a decision to make about whether we want to extend the usage of this concept "the same ship" to include or exclude the ship of Theseus case. But it's ultimately a choice that we make to broaden the meaning of this ordinary concept; there's no truth of the matter about how the concept as currently defined applies to these marginal cases, since the concept is simply insufficiently defined for such cases.

D: For me, there can actually be a meaningful concept of identity that helps us to solve the practical problem: whether the "thinking machine" that is producing your thought proceeds continuously into the new digital body, or whether it dies and is replaced by a duplicate with your memories. Thus there would exist a digital individual who thinks exactly like you and believes that it has survived digitization, but you will either be alive in the copied brain, or dead, if the organic you was discarded. The hosts did make an interesting case, however, that if 1% of your brain were replaced by silicon each day, this continuity would be preserved.

R: About personal identity, I'm still skeptical of the notion that there is a real "you" about which we can meaningfully wonder whether it survives different methods of digitization. It seems to me that the notion of "you" doesn't correspond to an actually existing entity (unless we posit the existence of souls, which I regard as unwarranted), and so its meaning is derived from its usage in ordinary contexts (e.g. "you are the same person as when you were a child"). Therefore, once we use this notion outside of its ordinary context (such as in this hypothetical case of digitization), it doesn't have a definite meaning (until we choose to assign one to it), and since the notion doesn't correspond to an actually existing entity, there's no "true definition" to which we can anchor its meaning. So, the question of whether personal identity survives different methods of digitization reduces (in my view) to the question of how we should choose to define personal identity in these contexts, which is not trivial, but (in my opinion) a little uninteresting since the speculative scenarios appear to lack practical relevance.

D: I guess you are not as interested as I am in the practical question that I highlighted (can we survive being uploaded?). In this case, the only self that matters is what is perceived to be the self. If that can be preserved, then perhaps there's a chance of cognitive immortality!

R: I think the possibility of uploading our consciousness onto a computer in order to effectively grant us immortality is fascinating, but the semantic game about whether this is the real you or just a simulation or whatever is what I consider to be uninteresting and actually misguided.

D: I don't care about the semantic game. My analogy would be whether the Ship of Theseus feels like it's the same ship ;-)

D: If you feel like you have been uploaded onto a computer, then for all intents and purposes you have been.

D: It's more silly sci-fi speculation than semantic games to me.

R: I agree then that this is a legitimate and interesting question, about whether the experience of psychic continuity could be preserved across a process of digitization. I must have misunderstood your earlier comments, since I'm used to hearing people quibble about the semantics, and that's usually what's discussed in the Ship of Theseus example

20. Lakatos-Feyerabend Correspondence, Reactions to Against Method

D: A point for Feyerabend: https://www.johndcook.com/blog/2010/01/05/how-the-central-limit-theorem-began/

R: Interesting example. In fact, I think it's not atypical in the history of either mathematics or science. For example, although the development of calculus began in the 17th century, it wasn't until Bolzano's work in 1817 that the epsilon-delta definition of a limit was established. Yet, students today typically learn about limits in precalculus and then define the derivative and integral in terms of a limit. If I had to guess, I think this pattern where the order of teaching reverses the order of actual historical development occurs because later developments have the benefit of hindsight, which allows them to formulate the early developments in the context of the whole subject as well as its various applications; and so, the later terminology tends to be better overall, including for the purposes of learning.
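For reference, here's the standard epsilon-delta definition at issue, along with the derivative defined in terms of it (textbook statements, nothing specific to the linked post):

```latex
% The epsilon-delta definition of a limit:
\lim_{x \to a} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0,\ \exists \delta > 0 :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

% The derivative, then defined in terms of this limit:
f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}
```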

R: Just finished reading through the Lakatos-Feyerabend correspondence. It's full of wit and playfulness. They frequently joke about "annihilating" each other or else being "annihilated" by the other when exchanging ideas. It was regrettable to read about their frequent allusions to MAM (Lakatos' planned response to Against Method, presumably "Methodology Against Method") and even AMAM (Feyerabend's further reply, presumably "Against Methodology Against Method"). These plans were continually delayed for years until Lakatos' untimely death in 1974. Unfortunately, the correspondence didn't contain much in the way of Lakatos' criticisms or Feyerabend's defense. Rather, it was filled with funny anecdotes (like Feyerabend reluctantly praising bureaucracy for saving his job despite the repeated attempts of John Searle to get him fired); each person lamenting their seemingly endless bouts of illness; their reveling in their mutual disdain for aspects of academic life; their occasional misunderstandings due to the layers of sarcasm in their writing; their reactions to political issues (such as the "New Left" student activism, especially in response to the war in Vietnam, which caused a stir at both LSE and Berkeley, where Lakatos and Feyerabend were, respectively); some commentary on technical matters; and, often, closing lewd remarks about "girls" which would seem inappropriate today. Overall, it was an interesting read which provided some insight into the life of a professional academic and, more importantly, an endearing friendship. Nevertheless, the correspondence failed to clarify the substantive details of their disagreement. Indeed, despite the fact that Feyerabend referred to himself as Lucifer and Lakatos as God, they themselves seemed confused about the extent of their disagreement. At one point, Lakatos suggested that Feyerabend had given up anarchism and so they were finally in agreement, to which Feyerabend insisted that he was as radical as ever! Yet, at other points, they seemed to agree that they were largely in agreement; Lakatos even suggested that they should give up the "for method" and "against method" labels, since they falsely indicated a schism which no longer existed. Lakatos' final letter to Feyerabend is from the hospital, just two days before he would die. It's filled with the kind of humor that always characterized their correspondence, such as "…I fainted and knocked my forehead into two, and now at least I know that the two hemispheres, contrary to recent journalism, are equal," and "I am now waiting for a Top Conference to decide whether a patient who, though clearly dying, terrorises the ward, should be discharged for Reason Unknown, or whether they should put me under glass and observation. Little do they know that this is a free and antinductivist country and I shall leave tomorrow on foot. But tonight I wanted to see three more chaps die. Very funny how people come in and die," and finally "The doctors were very amused by my statement that beyond electrocardiogram and X-rays I prefer witchcraft. They will all now read Against Method."

D: That's a beautiful story you've told me. It is sad that some of the most intellectually gifted minds tend to be placed in physically unstable bodies (and also mentally unstable brains), but I'm happy to hear about their friendship. Two people for whom life and work were synonymous in the best of ways.

D: I am increasingly thinking, after reading the Chalmers book, that Lakatos and Feyerabend did not have a deeply substantial disagreement. What I assume PF was reacting to in AM was the assertion that dogmatic adherence to the hard core of a research program was necessary to make progress in science. But what Lakatos is really saying is not that we must force everyone to act according to the research program, but that we should use adherence as an organizing device for adjudicating between science and non-science (not non-sense). If methods from outside the research program prove to be effective, then they are allowed to outcompete the existing research program. But you must acknowledge that program A and alternative B are taking entirely different angles to the construction of knowledge. I think Feyerabend's point would be that institutions can hijack research programs and use them as true demarcation criteria for the kinds of knowledge that can be considered—or worse, funded. But I don't see any true alternative being offered by PF in AM.

D: An open market for scientific ideas will still contain large institutions who choose, rightly, to fund the agendas that they find most promising. I don't think that the organization that we see in terms of groups working with similar methods toward similar goals is the result of propaganda, but rather of the limited cognitive power possessed by individual practitioners and a recognition that the methods on offer seem to produce results (well, and careers, too, but just imagine if you couldn't get a career in the sciences at all!).

D: Perhaps there is some definite historical debate between the two about whether, say, there was a Copernican and an Aristotelian-Ptolemaic research program. I don't know whether PF would have contested the latter point.

D: AC argued that IL would consider the "hard core" to be heliocentrism, but that the other testable parts (say, the orbital paths) could be modified according to theory and evidence. Incidentally, he also makes the interesting point that good empirical results meant different things for Ptolemy and Copernicus. For Ptolemy, empirical accuracy is an inevitable outcome of the fact that he modified his models with epicycles of varying degrees and sizes to get the best fit. Whereas when Copernicus's conjecture reasonably fit the data, this seemed remarkable because the theory had actually been validated.

R: I largely agree with your comments. I don't think there's a substantial disagreement between Lakatos and Feyerabend, rather a disagreement in terms of attitude. Lakatos wants to save reason in science, whereas Feyerabend wants to exorcise reason from science; but both agree that the relationship between reason and scientific practice is complicated.

R: I definitely plan to read Chalmers soon. I also picked up a book on Darwin recently, "Darwin and His Critics: The Reception of Darwin's Theory of Evolution by the Scientific Community", which goes through the detailed responses of contemporary scientists to Darwin's theory of evolution. It should make for a good case study regarding the so-called "scientific method"

D: I'm not totally sure that PF is against reason? What would that even mean?

R: By reason, I mean rationalism, i.e., the view that science is guided by standards of rationality, hence NOT "anything goes"

D: Ah, I see.

R: We can celebrate your birthday by discussing (annihilating?) Feyerabend tomorrow, or maybe on Tuesday

D: Let's try to keep those pitchforks blunt. Although I can't help not taking PF entirely seriously at times. Chalmers doesn't really, either.

R: PF is definitely a provocateur, but that's why I love him. There's also a famous incident where he publicly defended an effort to introduce Creationism into public schools. He confessed that, while his intentions were to stick it to the dogmatic scientists, he knew that the Christians would be just as dogmatic if they were to succeed.

R: I take him seriously enough to believe that he means what he says, even if he intentionally distorts his message by phrasing it provocatively. Also, I thought that he made some compelling points in these last few chapters, though, as usual, I think he took it too far.

D: I am still reading so I must suspend my judgment for now. While I believe that he's mostly being serious, I can't help but wonder if his critique is unserious. But now that you've read his correspondence, perhaps you can stick up for him a bit!

R: After reading his correspondence, I'm actually more inclined to take him less seriously! He's always joking and seems to relish the fact that his ideas are controversial. Maybe provocateur is too harsh, but he's definitely a contrarian; not in the sense that he doesn't believe what he says, but in the sense that he intentionally seeks to defend positions which are controversial. I think he even said something to this effect, that if everybody agreed with him, he'd have to find another position to defend. I guess it's all in the name of counterinduction!

R: [Figure 3.3 in "For And Against Method" by Matteo Motterlini. Image of a postcard sent by Feyerabend to Lakatos. It contains an intimidating picture of a tiger with the caption: "Are you bold enough to fight AM?"]

R: One of PF's many humorous postcards sent to IL. Here, he's challenging him to complete his reply to Against Method

D: I kind of mean in the Chalmers sense of unserious, in that I'm not sure whether his fundamental critique is actually either useful or practical in any way, apart from the principle of counterinduction, which I do think is quite valuable. That postcard is utterly adorable. They must have really loved one another.

R: Ah, in that sense, I do think there's still some value in Feyerabend's critique insofar as it challenges scientists and philosophers of science to be more skeptical about the so-called "scientific method". Of course, this criticism isn't original; as Feyerabend acknowledges, it's the only thing on which he agreed with Popper: that there is no scientific method. But Feyerabend plays a useful role as a philosophical gadfly, like all skeptics; his conclusions are false, but he's worth taking seriously if only in order to explain where he goes wrong.

D: The problem I have with PF is that I'm not sure that he is dangerously wrong. If you were debating with a naive falsificationist, say, then you can pinpoint the distortionary effects of his thinking. But with PF, a lot of his points actually seem to be kind of silly, even when you have reduced the exaggeration to its essence. I do think that the historical critiques are worth engaging with. We shall have to do our damndest to fend him off. However, I've long since learned that a smart man makes you question your beliefs, but a great one actually replaces them.

D: Strong program thinkers like David Bloor are dangerously wrong. Stopping them is worth doing.

D: That said, I really enjoy AM as a book and have not regretted choosing to read it.

D: I take back half of what I said after reading last night. Challenging, fun, and kind of scary at times.

R: I think that basically sums up my reaction to Against Method throughout. I'm curious to know which parts you take back, but it's probably better to just wait until our discussion.

D: I take back my assertion that he's not dangerously wrong. I think he could be dangerously wrong, or persuasively right.

D: Not trivially. Chapter 11 was pretty killer for me.

R: Yeah, Chapters 11 and 15 were some of the clearest statements of Feyerabend's anarchism in practice. I agree that his ideas are potentially dangerous, but there's also some truth to them, and so it will be fun to tease these apart

D: Anamnesis alert!

R: Haha, indeed, didn't you know that this distinction between the context of discovery and justification was bogus all along? St. Feyerabend is merely recalling what you already knew

D: And then he just sets Popper on fire. Only the more backward regions of knowledge, apparently, continue to take him seriously.

R: Oh, I remember pausing to note down this quote too! In fact, I saw two versions of it. My digital copy has what you say: "let us look at the standards of the Popperian school, which are still being taken seriously in the more backward regions of knowledge", but my physical copy uses an even more acid phrase: "let us look at the standards of the Popperian school with whose ratiomania we are here mainly concerned"

R: In fact, in the correspondence, I remember Feyerabend complaining to Lakatos that the editors had removed this specific term "ratiomania" (along with other not-so-academic flourishes throughout the book); he flew into a rage over it

R: Actually the phrase which was deleted by the editors wasn't "ratiomania" but something even better, "Truth-freak"

D: Ratiomania is funny, but truth-freak actually seems a little bit gauche.

D: Rather awkward and preachy, no?

R: If it were written by someone who took themselves more seriously than Feyerabend, I would probably agree with you. But I know that Feyerabend must have been laughing when he wrote "Truth-freak", and so I can't help but laugh along

D: I hope so. Because otherwise it just seems like he sits in a dark room seething with hatred against Popper

R: To be fair, it's probably a bit of both. I think he likes to tease Popper by constantly singling him out, but I also think he really vehemently disagrees with Popper

21. Divine Command Theory

D: https://www.worksinprogress.co/issue/the-decline-and-fall-of-britain/

D: By the way, how did your discussion with the divine command theorist go? Just reminded of this because there's a great discussion of command theory in the book that I'm currently listening to on the Enlightenment, in the context of Kierkegaard's parable about Abraham and his son.

R: Nice article. As you anticipated, I was thinking about China during the introductory paragraph, but you make a compelling comparison to the American economy.

D: That's the biggest compliment you could ever pay me: that my rhetorical trick worked!

R: Unfortunately, he ended up needing to reschedule the discussion because of some emergency. I think it'll happen sometime next week. What conclusion does your book draw about the story of Abraham in relation to divine command theory?

D: Abraham realizes that if a being is telling him to kill his son, it is not God, because God would not deliver commands that are morally repugnant. The thinkers of the enlightenment used this to separate morality from divine prescription, just as you did. It was actually kind of unnerving.

R: Interesting. In that case, Abraham would reject divine command theory, since he's affirming an independent standard of morality. Although some divine command theorists attempt to get around it by either saying that the real God would never make such a command or by suggesting that he would only make such a command if he knew that Abraham would not commit to it, as a test/lesson

D: The second is definitely plausible. I think this is actually what happens in the story? Because if we are saying that the real God would never make such a command, then we are defining a set of things that are moral which delimits what God can command.

D: Think you made this point to me the other day, so if that's true I agree with you now.

R: Yes, I think the test/lesson point is the usual interpretation of the actual story in Genesis. As for limiting God's actions, divine command theorists will usually argue that this limitation follows from his essential features of being loving, fair, and compassionate. In this way, there's no need to appeal to an independent standard of morality. But this response suffers from the objections I gave the other day, and also from the following natural question: Why are these characteristics essential features of God? The usual argument is that God is perfect, and so he must be perfectly good, and so that's why he's loving, fair, compassionate, and so on. But this response isn't available to the divine command theorist, since it posits an independent standard of goodness in relation to which God's essential features are derived. Instead, the divine command theorist is forced to concede that these are just brute (i.e. unexplained) features of God, from which we derive the standards of morality. This response is obviously unsatisfying and mysterious.

D: Right, we're on the same page here. If divine command theory were true, then God could make anything good, regardless of whether it was in his nature.

22. Davis' Five Books Interview

R: Just finished reading through your Five Books interview. As someone with minimal exposure to this topic, I found it to be accessible and helpful as an introduction to its history and guiding questions as well as the main attempts at answering these questions. Personally, I was surprised to learn how much emphasis has been placed on geography as a factor behind this Great Divergence, since I tended to think that geographical advantages were largely overcome by technological developments. Additionally, I was surprised to see comparatively little emphasis on the history of imperialism and conquest, since this is what I usually hear talked about with this topic. But I think that both of these reactions are attributable to my limited and biased familiarity with the topic, so I was glad to have my preconceptions challenged.

D: Thanks for reading! Your comments are very astute. I would not say that geography is overplayed in the Great Divergence debate; that's actually just my emphasis. Today, institutions are probably the most privileged factor in the debate, with some economic historians calling them ultimate causes. Personally I think this is misguided, but there's no doubt that the division between North and South Korea in terms of income has everything to do with institutions. I personally don't emphasize colonialism because the contribution of the periphery to industrialization is controversial and the probability that colonialism significantly held back China is low. But Pomeranz does talk about imperialism and conquest.

D: The problem is that imperialism and colonialism are basically just buzzwords that are used by leftists to indict the West. The channels through which these forces would operate are underspecified.

R: Yes, that makes a lot of sense about imperialism or colonialism being buzzwords, and I started to realize it too when I tried to think about how they would play a role in the Great Divergence

23. Brief Tangent on The Repugnant Conclusion

D: I've been wrestling with this.

D: Debate over the repugnant conclusion seems horribly misguided and goes against all my moral intuitions.

R: I'm sympathetic to Scott Alexander's response to the repugnant conclusion of "not playing the philosophy game" and just choosing to live in World A. However, I think this card has to be played cautiously. Like we've discussed before, I think that ethical reasoning is just about bringing our moral intuitions into reflective equilibrium. That means that it's not necessarily irrational to dismiss a counterintuitive conclusion derived from intuitive principles, because the intuitiveness of the motivating principles needs to be balanced against the counterintuitiveness of the conclusion. In other words, I'm not a foundationalist about ethics, so I don't believe in any incorrigible ethical principles whose corollaries we are forced by the light of reason to admit. However, this move works both ways, and so we must also be willing to accept a counterintuitive conclusion if the motivating principles are sufficiently compelling and if there does not appear to be a rational move which preserves both the motivating principles (perhaps slightly modified) and the wrongness of the counterintuitive conclusion.

R: As for my own response to the repugnant conclusion, I think it's an interesting argument, and I do find the premises intuitively plausible (by the way, Scott Alexander didn't do a good job of presenting the argument in my opinion; I suggest you read the wiki "Mere addition paradox", which lays out the argument more formally), but my main objection is that it seems like another example of the Sorites paradox. In other words, I only find the reasoning intuitively plausible when applied to a single iteration, but I'm not convinced that the conclusion holds when applied over several iterations. I also have broader methodological complaints with the style of reasoning employed by Parfit and co. in the domain of ethical reasoning. I'm not convinced that we can have fruitful intuitions about very abstract ethical principles of the sort involved in deriving the repugnant conclusion. Very often, I find myself thinking, "that's just the wrong question to ask" or "that's not the right way to think about ethics". G.E.M. Anscombe wrote an interesting, but controversial, essay called "Modern Moral Philosophy" which echoed some of these concerns. I agree with her basic point that we have forgotten the ancient style of ethical reasoning about the so-called Good life, and that this is a shame. So I think these more general issues need to be resolved before a particular situation like the repugnant conclusion can be properly assessed.

24. Reflections on Academia, Replication Crisis

R: An interesting article, "The Scientific Paper Is Obsolete", on the growing inadequacy of PDFs as a form of disseminating scientific results in an increasingly computational / data-centric age. His comments about the role of Jupyter notebooks have certainly been corroborated by the classes I've taken at Berkeley, and I wonder whether you've noticed a similar trend in actual journal articles, or whether they have yet to catch up.

D: Economics is so far away from this world, I think, that getting there would be like traveling to the Oort Cloud. The incentive structure is even more warped than in other scientific disciplines; not only must you submit a PDF paper—you must submit a specific format of PDF paper with the exact same sections, methodology, and language. While I've used Jupyter frequently, I've been encouraged not to because replication packages are usually in R or Stata scripts.

D: But the paper is really a dead form. Nobody really believes what the papers say, nobody's interested in reading them, and so it's less about producing research than signalling competence at using certain kinds of methods. Whereas I could see that using notebooks as tools to show how the exploration of concepts works, allied with greater computing power, could actually be helpful in getting more engagement and more honest papers.

R: It's too bad to hear about your experiences with and reflections on economics papers, but of course I'm not surprised. In mathematics, there is an analogous issue, where people are encouraged to write proofs as succinctly as possible and to avoid any details which aren't strictly necessary, even if these details would make the proof clearer, i.e. both easier to understand and more transparent with regard to how the proof was discovered. Also, there's a small number of mathematicians advocating for all proofs to be verified by computers in order to ensure absolute rigor, but most mathematicians dismiss this proposal because of the incredible amount of extra work it would involve for (allegedly) little to no value. In practice, most published mathematical results depend upon unpublished lemmas, either said to be forthcoming or deemed to be clearly true and not worth the effort of a rigorous demonstration (i.e. let a grad student pick up the slack). Also, many published mathematical proofs contain errors, even though they're rarely fatal, but they hardly ever result in retractions, usually just an errata addendum (if even that!). For these reasons and more, some have suggested that there is a replication crisis in mathematics too (see "A Replication Crisis in Mathematics?" by Anthony Bordg)

D: The same issue with proof writing prevails in economics—parts of proofs will be outsourced to other papers such that you can't actually follow the math at hand. Why would verifying proofs take too much effort? And how do these lemmas come about if they are not actually known to be true—why is their postulation acceptable within a proof? And how do proofs come to have errors; what is their nature? How do people get through to the correct conclusion with a fault in the logic? And how would we know if the proof has been fatally damaged or not?

R: Formally verifying proofs using a computer is a monumental task because it requires translating all of the fancy reasoning (e.g. argument via symmetry, drawing diagrams, complex geometric / topological / algebraic manipulation, etc.) that mathematicians commonly employ into the language of a very limited collection of axioms and rules of inference which are understood by the computer. Every step, definition, and assumption must be translated in this meticulous way, which is not only tedious but requires lots of creativity, because there is no straightforward algorithm for performing the translation. As an example, the famous mathematician Peter Scholze worked out an important proof in just 5 days, but it took approximately 1.5 years and extensive collaboration to formally verify this proof using the computer program Lean. And unfortunately, most mathematicians just shrugged in response: "We already knew that the proof was correct, Scholze said so!" So there's little motivation to put so much effort into formally verifying proofs. I wouldn't quite say that postulation is allowed; rather, if a lemma is seen as obviously true, or if a mere sketch of the proof is convincing enough that it's obvious how to fill in the details, then most mathematicians won't bother with total rigor. In fact, most of the errors discovered in mathematical papers belong to these kinds of details, which is why they are typically not fatal, i.e., the major result still holds true and the proof can be corrected with only minor revisions. This also explains how mathematicians can arrive at the correct result despite flaws in the details of their logic: the overarching narrative of the proof remains correct, but possibly a minor edge case was not originally accounted for or perhaps a tacit assumption was made but not explicitly justified.
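To give a flavor of what this translation involves, here's a toy example in Lean 4 (my own contrived illustration, nothing remotely like the scale of Scholze's proof). Because Nat.add is defined by recursion on its second argument, "n + 0 = n" holds by sheer computation, but the symmetric fact "0 + n = n" already requires an explicit induction that no working mathematician would bother writing out:

```lean
-- Toy illustration of machine-checked proof in Lean 4 (standalone; no imports needed).
-- Nat.add recurses on its second argument, so `n + 0 = n` is true by computation:
theorem add_zero' (n : Nat) : n + 0 = n := rfl

-- ...but `0 + n = n` must be proved by induction on n:
theorem zero_add' : ∀ n : Nat, 0 + n = n
  | 0          => rfl                              -- base case: 0 + 0 = 0
  | Nat.succ m => congrArg Nat.succ (zero_add' m)  -- step: 0 + (m+1) is definitionally (0 + m) + 1
```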

25. Ethics of Working in Defense

R: I couldn't remember where I had read about it ["The War Prayer" by Mark Twain] until I checked the date when I downloaded the file on my computer, and it's right around when I got the job offer from Raytheon…I had been considering the ethics of the decision, and came across this story as a result. On the one hand, the story obviously makes one want to distance oneself from anything to do with war; but on the other hand, to the extent that my work minimizes collateral damage or aids defensive efforts against aggressors, the story lays bare the visceral importance of the work being done. Of course, there are many more factors to consider, especially regarding the tension between my roles as employee and democratic citizen, and I'm not sure I've quite resolved the question to my own satisfaction yet.

D: I'm very curious about your reasoning for the above. I believe that I've said that I've got no objection in principle to the idea of working in the defense sector, as my general historical view is that the US military is a necessary counterweight against anti-democratic and aggressive regimes and that affairs would be worse without its existence/strength. But I have a feeling that your justification would be different from mine

R: I think that what you've said is probably the most compelling justification for working in defense, but it might overdetermine the case. That is, if one believes that the US military is a force for good, no moral dilemma even arises with supporting their efforts through one's own labor, and so no justification is needed; indeed, one may be obligated to support such efforts. But I'm inclined to think that even if one doubts the beneficial effect of US military involvement (as is no doubt warranted in at least some cases), a justification for working in that industry may still be offered along the following lines: War is a messy business, almost always a mixture of just and unjust, and one cannot exonerate oneself from the unjust aspects by simply choosing to work in a different industry, as almost all work is related to the war effort in at least some way (e.g. any contribution to the country's GDP also contributes to the country's geopolitical power; technological and scientific developments in almost any sector are likely to have some military application, be it telecommunications, biotechnology, materials science, medicine, computing hardware, robotics, image processing, machine learning, etc.; and even mundane, seemingly unrelated, work such as working in a restaurant or on a farm or in a factory ultimately keeps the country running, and thereby supports the war effort). Thus, if an engineer chooses to work in some other sector than defense, his decision has a negligible impact on the war effort, as not only will he be replaced by somebody else, but almost certainly his work in the alternative sector will nevertheless support the war effort in one way or another (not to mention any other potential negative applications unique to the alternative sector, such as surveillance, eugenics, job-theft, etc.). Furthermore, had he chosen to involve himself in the defense sector, he would not only directly support those military efforts which are just (thereby contributing to the saving of lives and/or protection of beneficial geopolitical institutions), but, in the case of those which are unjust, he might have the opportunity to attenuate their negative impact by, for instance, contributing to technologies which minimize collateral damage or reduce the overall destructiveness/violence of the affair (e.g. by making the weapons more precise, or by improving protective equipment, or by offloading some of the destruction of humans to the destruction of machines instead). Moreover, whether the engineer decides to work in defense or not, he still has a civic duty to oppose an unjust war via actually effective means, such as political demonstration, canvassing, voting, and anti-war propaganda (music, film, writing, etc.); and indeed, by distancing himself from the reality of war, he may have dulled his sense of this personal responsibility, as it is no longer something which he confronts day-to-day, and he may even fail to realize his role in supporting the war effort, since it's no longer as obvious.

Note that I think the force of my argument would be diminished in a situation where either: (1) My involvement were much more direct and significant, so that my personal decision might conceivably alter the trajectory of the military effort, such as in the case of Oppenheimer or a military general, in which case I think the decision would have to be considered much more carefully, or (2) The ratio of unjust to just military actions were much higher, so that I might realistically expect my involvement to have a net-negative effect overall, such as in the case of Nazi Germany, in which case morality would seem to require that I take much more drastic measures than simply choosing to work in a different industry.

I'm curious to know what objections you might have to my perspective. Allow me to anticipate a few: (1) [Objection:] While almost all work might contribute to the war effort, working in defense more directly contributes to it than working in other sectors. [Response:] In fact, I'm not so sure about that, or at least not when directness is related to significance rather than proximity. Is an engineer who contributed to the development of a signal processing algorithm used in the Iron Dome's radar system really more directly responsible for Israel's invasion of Gaza than, say, the doctor who developed the medical treatments used to aid the IDF soldiers? Or what about the doctor who takes care of Netanyahu, ensuring his good health? You might suggest that the medical treatments were not designed with the intention of supporting the war effort, but surely also the engineer's intentions were to, say, enhance clutter mitigation and target discrimination, not to maximize the destructiveness of Gaza's invasion. In both cases, technologies were developed which had both military and non-military applications (e.g. the radar signal processing algorithm is used in civilian aviation, wireless telecommunication, and studies of atmospheric composition), and neither developer directly intended to support an unjust military action, although both in fact did, and the significance of their contributions does not seem to be related to the proximity between the industry in which they worked and the military, since the medical aid was probably more directly and significantly helpful than some obscure algorithm. (2) [Objection:] The military industrial complex produces perverse incentives where war becomes financially desirable, irrespective of considerations of justice. [Response:] Firstly, my personal participation in the defense sector has no impact on the existence of such incentives. Moreover, there are many industries which profit from unfortunate circumstances (e.g. insurance, pharmaceuticals, all kinds of doctors) and they generally find a way to counteract such perverse incentives, at least for the most part. Of course, there are imperfections, and defense contractors lobbying the government may indeed penetrate the barrier which otherwise separates the war-profiteers from the war-declarers. But at the end of the day, these are issues which are solvable and which are neither inherent nor unique to the defense industry. Markets, it seems, are simply efficient means of organizing labor towards productive ends, and they operate on the basis of the profit motive. While this motive may introduce the possibility of externalities (such as via perverse incentives), market corrections may be implemented to address these issues, and there is also no guarantee that non-market based solutions will be free from corruption. As you are the economist, please let me know if I'm simply laboring under the misapprehensions of an econ 101 fantasy world. (3) [Objection:] The Jevons paradox invalidates the possibility of reducing the destructiveness of war via technological advancements which aim to reduce collateral damage.
For instance, if drone strikes are made more precise via technological advancements, then the threshold for their usage is correspondingly lowered, and so the frequency of their usage will increase (since there are now more scenarios in which the cost-benefit analysis will deem drone strikes favorable, since their "costs" have been lowered), resulting in a greater total damage attributable to drone strikes, even though the damage per drone strike has been reduced. [Response:] I don't have a good response to this objection, only the speculation/hope that either the frequency will increase less than the marginal damage has decreased, or that the increase in total damage attributable to drone strikes will be offset by a decrease in total damage attributable to some other weapon which will have been effectively replaced by drone strikes. I mention this objection mostly in case you have a better response, or at least in case you corroborate my lack of a response.
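To make the arithmetic of the worry explicit (numbers invented purely for illustration), total damage is frequency times per-strike damage, so everything turns on whether usage grows faster than per-strike damage shrinks:

```latex
% Total damage = (number of strikes) x (collateral damage per strike).
D_{\text{total}} = (\text{strikes}) \times (\text{damage per strike})
% Hypothetical before/after comparison:
D_{\text{before}} = 10 \times 8 = 80, \qquad
D_{\text{after}} = 50 \times 2 = 100
% Per-strike damage fell by 75%, but the fivefold rise in frequency means total
% damage still rises; it falls only if frequency grows by less than a factor of 4.
```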

D: Let's start with some objections. I will add more if I think of any that seem especially salient. Note that I am not objecting to your choice of occupation—I've actually encouraged my younger brother to work in the army or intelligence services—but to your justification of serving an imperfect military.

Objection (D.1): I think your objection (1) is close to what I would say here, which is that your case misses out on counterfactual reasoning: that is, the fact that nearly all occupations contribute in some way to war efforts (say, you paying taxes, at a minimum) does not mean that all occupational choices have equal moral salience with respect to those efforts. Suppose your aim is to minimize your responsibility for the military prosecuting an unjust war. While some jobs—say, making some essential contribution to Israel's highway system that permits easier movement of tanks toward Gaza—make a more obvious contribution to war than others (e.g. sweeping the floor at a high school gymnasium in Tel Aviv), it seems clear that working overtly in the military-industrial sector is not the minimum-responsibility occupation for you, probably even at most income levels. It would not seem justified to build missiles rather than build cars on the grounds that, since you would pay taxes anyway, the fact that you're contributing directly to the destruction of enemy cities doesn't matter. At the very least, I think it is incumbent on you to do the calculation trading off the true costs of your contribution to the military-industrial complex against the benefits to yourself.

Objection (D.2): Perhaps the military does some good things (provides disaster relief, fights pirates, tackles terrorists) and some bad things (civilian casualties, foreign adventurism/interventionism, surveillance). I am skeptical that for a given defense-sector occupation, you can realistically choose which parts you can 'support' and which parts you can 'abstain from' or 'attenuate.' Suppose your job is, again, building missiles. You have no control over whether the missiles will be used against Somali pirates or against Gazan civilians. You cannot, in your position, choose to build only parts of the missile that are solely useful against pirates, and not against civilians.

Objection (D.3): Relatedly, I believe that the same logic about your dispensability in producing weaponry invalidates your argument that you would be best-placed to help shape the direction of military technology in a positive direction, unless you did hold that Oppenheimer-esque level of influence.

Objection (D.4): That everyone has the responsibility as a citizen to fulminate against adverse actions by the military does not constitute an excuse for working for the military itself. Indeed, one might argue in the opposite direction, that serving the state may compromise one's ability to advocate independently against the government's errant or unjust policies. I think it more probable that this effect would outweigh the positive information effect of 'proximity' to military issues, and you might even say that there is the possibility of bias arising therefrom.

Now, to your own objections. I don't think (2) is particularly important today, but I also don't really feel that your reasoning is sound. First, by working for a defense contractor, you improve the quality of labor available, increase the firm's productivity, and thus contribute to the augmentation of its profits in the event of a war (also, by applying for the job, you reduce the labor market power of the average applicant and increase that of the firm, further strengthening its business model). Second, it again seems apparent to me that defense contractors would stand to benefit more from the outbreak of war than other firms (which still might benefit)—witness the checkered history of arms merchants like Basil Zaharoff or Krupp in the lead-up to World War I. Third, one of the major issues with the defense sector is that it is not as insulated from the government and subject to market mechanisms as other industries. As an arms dealer (well, maybe small-arms makers make more off red-blooded Americans in Ford F150s), your main customer is going to be the state, and thus your trade lives and dies based on your ability to lobby. The allocation of resources to the military is not primarily driven by markets but by states, and states can be coerced by individuals' corruption. I would also add that industries whose revenues are generated more by non-state customers have proven equally maladroit at remaining incorruptible—doctors and pharma in the US being a paramount example of how these problems have not been solved, at immense cost to the consumer.

The standard objection to (3) has, I think, been a sort of MAD logic—dating back at least to H. G. Wells in the '30s. The idea would be that improving weapons would make war so terrible that it would be less resorted to. This may or may not be true, but certainly war was a bit less dangerous with bows and spears than with jets and nukes. Perhaps a slightly disingenuous tack might be to say that the best-case state of military technology is that nations—as is much the case with the US—wield expensive, effective, and relatively few weapons. Missiles, for example, cost millions of dollars each. We've only got a couple hundred anti-ship missiles in total, I believe. Increasing the efficacy and complexity of technology might limit use to the most necessary scenarios and also limit the collateral damage. This also seems… true-ish? Bad as the Israeli bombardment of Gaza has been, presumably a night assault by hundreds of Lancaster bombers dropping thousands of tons of incendiaries would have been far worse than the present use of expensive, imperfect guided weapons. So far, I don't think that the direction of technological change has been Jevons-esque—i.e. that killing is getting more efficient. Probably much cheaper to kill with an AK-47 than with the full kit of an American marine.

R: Thanks for your response, as it helps me think more carefully about this topic! I think your economic analysis of my objection (2) is compelling, certainly better formulated than anything I said/could say, and so I have nothing to add. Your MAD response to my objection (3) is interesting, as it would seem to suggest that the conscientious engineer should actually focus on maximizing, rather than minimizing, the destructiveness of the weapons which he develops, so as to accelerate our approach towards a MAD end-state of warfare. Unfortunately, this gamble, if it doesn't pay off, has the side effect of producing weapons which maximize collateral damage. Regarding Jevons, I didn't mean to refer to efficiency in a monetary sense (i.e. destructiveness per dollar spent), since I agree we are probably less efficient in that sense (which might be a good thing, insofar as it corresponds to reduced collateral damage). Rather, the dilemma was that developments which lower the collateral damage of a weapon thereby lower the "cost" in terms of human life, and so the weapon is more likely to be used in cases where previously its high collateral damage served as a deterrent to its deployment; so, ironically, those who are now victimized by the weapon's collateral damage would have otherwise been spared prior to this "optimization" which supposedly "reduced" the collateral damage of the weapon. I've heard this compared to the Jevons paradox, though perhaps my version does not match the standard usage in economics.

I think your other objections may rest on a misunderstanding of my argument (my fault for being unclear, or perhaps the misunderstanding is my own), so let me try (briefly) to clarify. Let an individual's overall impact be modeled by 'r * (g - b)', where 'r' refers to one's responsibility for some action, and 'g' and 'b' refer to the goodness and badness, respectively, of that action; I take it that the maximally moral person maximizes their individual impact, and vice versa. In your objection (D.1), you correctly point out that people's individual responsibilities 'r' with respect to military actions vary. I certainly agree, and I only meant to point out that I think the difference in 'r' between a low-level engineer working in the defense sector and, say, an engineer working in a not-directly-related industry (or even a doctor developing a medicine / therapeutic regimen) is not necessarily as significant as one might initially expect, for the reasons in my original response. Moreover, I think the difference in 'g - b' between these two occupations is almost negligible, since the military action would proceed despite the individual's occupational choice, and so I think the primary means by which an individual can impact military actions is through political activism, which is equally available to the individual in either occupation. (You rightly note in your objection (D.4) that working in defense could produce a bias in political leanings, or even limit the practicality of anti-war political activism, but neither influence is a deterrent for a sufficiently conscientious individual, which also applies to my positive information effect; so while all of these effects are interesting to consider psychologically, I think they don't really factor into the moral calculus.) Regarding your objections (D.2) and (D.3), I mostly agree that an engineer has a limited influence on how his weapons are used, but I was trying to make three points: (1) If this influence is at least non-zero, then it is still that much more influence than what is available to the individual working a different occupation, who has zero such influence. (2) While the engineer likely has zero direct influence on whether the missiles are used on pirates or civilians, he does have the ability to design weapons such as precision strike missiles, which can reduce the collateral damage incurred by innocent civilians, although I admit that this reasoning is complicated by considerations like my Jevons-esque paradox. (3) These two points produce an asymmetry, or at least achieve a symmetry, under the assumption that the US military's actions involve some good 'g' and some bad 'b', since the responsibility for the good 'g' is potentially higher for the defense sector engineer than for the other occupation (in the case where his involvement contributed to the development of a less destructive weapon, for example), whereas the responsibility for the bad 'b' is either approximately equal for both individuals or slightly more for the defense sector engineer, for the reasons noted above. Thus, there's the possibility that either the two inequalities roughly balance out, or that the engineer even has a more positive impact in the defense sector than in another occupation.
This analysis breaks down in the two cases I noted: (1) When 'r' is very large, in which case the individual has to seriously consider whether the action will contribute more to 'g' or to 'b', since if it's the latter, then he will have much more responsibility than the individual in another occupation (2) When 'g << b', in which case the opportunities for exploiting the asymmetry are minimal, and the total differential impact resulting from any discrepancy in responsibility for the bad 'b' accumulates to a significant value since it is not counterbalanced by enough 'g'.
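To make the comparison concrete, here's a throwaway numerical sketch of the model (all figures invented purely to illustrate the asymmetry; splitting 'r' into separate responsibilities for the good and the bad components is my own generalization of the 'r * (g - b)' formula above):

```python
# Toy sketch of the impact model discussed above. All numbers are
# hypothetical, chosen only to illustrate the asymmetry between
# occupations; they are not estimates of anything real.

def impact(r_good: float, g: float, r_bad: float, b: float) -> float:
    """Generalize impact = r * (g - b) by letting responsibility differ
    for the good and bad components: impact = r_good * g - r_bad * b."""
    return r_good * g - r_bad * b

# Case where g > b (the military's actions are net good): the engineer's
# small extra responsibility for the good can outweigh his slightly
# higher responsibility for the bad.
engineer = impact(r_good=0.010, g=100.0, r_bad=0.012, b=60.0)  # +0.280
other    = impact(r_good=0.000, g=100.0, r_bad=0.010, b=60.0)  # -0.600

# Breakdown case (2), where g << b: the engineer's extra r_bad dominates.
engineer_bad_war = impact(r_good=0.010, g=10.0, r_bad=0.012, b=200.0)  # -2.300
other_bad_war    = impact(r_good=0.000, g=10.0, r_bad=0.010, b=200.0)  # -2.000

print(f"net-good war: engineer {engineer:+.3f} vs other {other:+.3f}")
print(f"net-bad war:  engineer {engineer_bad_war:+.3f} vs other {other_bad_war:+.3f}")
```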
