Table of Contents
- 1. Ethics of Nuclear Family
- 2. Ethics Intro
- 3. Thought Experiment on Capital Punishment, Two Societies
- 4. Wittgenstein
- 5. Truth in Ethics
- 6. British Imperialism in South Asia
- 7. The American Historical Review
- 8. Economic History
- 9. Jared Diamond on The Agricultural Revolution: "The Worst Mistake in the History of the Human Race"
- 10. Rigor vs. Flexibility in Definitions (Natural vs. Social Sciences)
- 11. Democracy
- 12. William MacAskill - Are We Living At The Hinge Of History?
- 13. Historical Economics, Mathematical Notation, Teaching History of Science
- 14. Canadian fur trade
- 15. Theoretical Virtues, IBE
- 16. Brief Return to Moral Realism
- 17. Return to IBE
1. Ethics of Nuclear Family
D: On a recent podcast, I asserted that my politics today begin and end with the question: “But is it good for the nuclear family?” I am not interested in a long philosophical inquiry into this question. Society is perpetuated by people who have children. Humans are a good. They are the ends. Not the means.
D: How do you feel about this?
R: My immediate reaction is to agree with the importance of the family and to regret the many discontents of the sexual revolution. That being said, many families today eschew this standard (especially with the growing prevalence of homosexuality and non-traditional gender identities). As such, it's simply not helpful to assert that their families should be disregarded in broader political discourse, even if such a concession is suboptimal. Ultimately, it's probably for the better to encourage social change which values the importance of the family. Note, this doesn't necessarily mean the nuclear family, which is a relatively recent construction. Valuing the family transcends this suburban, middle-class characterization.
D: https://logarithmichistory.wordpress.com/2020/07/18/the-goodness-paradox-2/ This is also quite interesting, apropos of Friday's chat.
R: Also, I can understand not wanting to entertain merely philosophical objections to valuing humans. In the same way, I don't entertain defenses of slavery or rape outside of purely intellectual curiosity. However, I don't think valuing family falls under this category. It needs to be established that valuing the family actually best achieves the aims of humanity
D: It's doable, though, if humans are assumed A) good and B) means, not ends.
R: It still needs to be shown that procreation specifically within the context of marriage is in the best interests of humans. For example, your argument is susceptible to anti-natalist objections
D: Not my argument—I disagree. I was curious about your take on the kind of reasoning involved.
R: Also, the goodness paradox article reminds me of some of the conclusions of Michael Tomasello in his investigations into the anthropological origins of human morality (he focuses on human cooperation and moral psychology)
R: Understood. I think the conclusions are probably accurate but certainly not self evident given such reasoning
D: The author (of the post) adopts what I conceive to be an intermediate position between our views: justice is evolved, but on a universal biological basis that takes similar forms—collective action against social dissidence—across human societies. The remaining question, however, is whether there exists a conceivable environment (perhaps off-planet) that would cause a different evolutionary result, thus negating the universality of the "community-first" moral disposition. Otherwise, the impulse of group defense (while social) might be said to be rooted in one of the basic ends/desires/impulses under discussion.
D: My assumption, of course, would be that universality would essentially amount to necessity/categoricity.
R: I don't think universality entails necessity. It's conceivable that we may have taken a different evolutionary path such that our basic moral desires might be different. This wouldn't refute the universality of moral desires today among those on our evolutionary path (i.e. all humans)
D: Conceivably, however, humanity could pursue different future paths if this were not a universally necessary proclivity, in which case we might reconsider the ubiquity of such desires in the present.
R: It depends on the level of contingency. If it's contingent at the level of evolution, then they're not going to change anytime soon… Though possibly at some point millions of years down the road
D: I'm not sure time is the relevant factor in absolutes, unless we posit that even at the level of cultural evolution humans experience sufficient biological change to be regarded as a different species altogether. I do not.
2. Ethics Intro
R: My point was that I'm quite skeptical of overarching normative theories as well and tend toward a more situational ethics, since that is the basis on which we evaluate normative ethical frameworks anyway
D: I think there are actually pretty broad implications for the practical/pragmatic approach to ethics, though, which we should probably consider on the next occasion. They're related, in some respects, to both epistemology and a sort of Sartrean/Dostoevskian existential morality.
R: No worries about that. I agree with you on the consequences of a purely individualistic and pragmatic ethics and its formulations in existential moral psychology, which emphasizes human freedom. Interestingly, Sartre affirms this outcome, whereas Dostoevsky abhors it and attempts to demonstrate its consequences in his novels. Instead, he affirms a historical, traditional basis of morality, such as that found in Christianity. This is not necessarily in opposition to a situational, individualistic foundation, as it may simply appeal to the wisdom of generations over any individual. I'm personally more inclined towards Dostoevsky's conclusion, as analyses of Sartre's conclusion as exercised during the twentieth century are at least grounds for skepticism regarding the reliability of a purely individualistic ethics.
D: I actually incline toward the Dostoevskian view as well, particularly my own interpretation: not Christianity but a specific form of Christian behavior, that of love/benevolence, is the only justifiable kind of action. That sort of move which does good and (at the very least) no harm is the closest thing to morality that we can understand, even in light of the radical epistemic skepticism of the Underground Man. Selflessness cannot be immoral, and therefore the behavior can be, if not systematically understood, legitimately justified and acted upon. I am sympathetic to Sartre's analysis of freedom and responsibility, and largely believe this to be representative of our experience of moral decision, but being a determinist cannot accord much truth to the perception beyond a superficial account.
3. Thought Experiment on Capital Punishment, Two Societies
D: In the interest of clarification: consider the case of two hypothetical societies, one which has institutionalized the harsh treatment (torture, death penalties, etc.) of POWs, terrorists, and genuine political dissidents, and one which condones only imprisonment (and is, moreover, disgusted by capital punishment). These differences can be traced to historical origins—say, specific discussions among leadership groups, court cases, and treaties. How are these differences to be accounted for? The view of the latter state excludes the behavior endorsed by the former, yet the differences are not rooted in human nature, even though both societies respond seriously and reflectively to crimes recognized as such in both regions.
R: Your question is intriguing and it has inspired some pause in my confidence regarding moral realism. However, I think it's ultimately answerable. Denote the two societies 1 and 2 respectively. In order to defend moral realism, I have to contend that at least one of the societies is wrong. Intuitively, 1 seems to be the obvious candidate. This immediate response should be illuminating already. Admittedly, we live in a society more like 2 than 1, but we are still (presumably) opposed to the lingering harsh, barbaric justice found in our own society insofar as it resembles 1. Going further, consider the thought experiment where the people of 1 are switched with the people of 2 but the rules are maintained. How would the societies change over time? It seems quite plausible that the new 1 would quickly revolt against the barbarism of their justice system and restore a system much like that of old 2. On the other hand, the people of old 1 might largely react negatively to the unjustness of the lukewarm treatment of criminals and prisoners, but reverting to their old system would require them to come to terms with their former harshness and apparent barbarism. It seems plausible that at least some of the people would express disapproval when voting to change the system back to its old form. Perhaps not enough people would dissent, so the system would revert and the societies would effectively have switched numbers. Nevertheless, that new 2 would likely hesitate when reconsidering their system, whereas maintaining the status quo in new 1 seems unthinkable, lends some credence to the innate existence of certain moral principles of equality and fairness, exemplified by the humane system and bulldozed over by propaganda under the harsh one. In other words, even though both societies were maintained by propaganda rather than genuine moral introspection, only one system seemed reasonable once scrutinized by these innate moral principles. Admittedly, all of this is speculation, though not totally baseless. Someone more knowledgeable could probably point to historical and sociological sources to corroborate (or maybe refute) my argument. We might consider societies today more like 1 than 2. With widespread globalization, many of the people in those societies are dissenting against their social customs and adopting more Western standards of justice, whereas the opposite is hardly happening anywhere to my knowledge. Additionally, major moral progress, such as the abolition of slavery, appears to be irreversible. If each side were truly equally legitimate, simply maintained by social institutions, then we would probably see more flip-flopping on these issues. Ultimately, my analysis faces one crucial flaw: I'm simply not knowledgeable enough to accurately disentangle overwhelming social transformations from realizations of innate principles. For example, the historical development of Christianity has played an indispensable role in the formulation of human rights and the eventual abolition of slavery in the United States. Was this shift merely an alternative propaganda which gained widespread approval, or actually a deeper realization of innate moral principles regarding universal human worth? If we could develop a systematic defense of this transformation outside of historical contingencies, one which accurately predicts future social progress, then the latter can be deemed plausible. In the absence of such an analysis (as far as I've provided thus far), we are relegated to agnosticism on this matter.
It then becomes my task to provide such a system, corroborated by scientific experiments and historical analysis, in order to defend moral realism.
D: I appreciate the directness of your approach, though I did not anticipate your chosen tack. I have numerous qualms, however. You seem to characterize the "Westernization" of morality as a relatively linear historical development, sweeping the world alongside neoliberal governance and economic capitalism. As Williams (who I finished last night, perhaps for later discussion) writes, "[t]here is no route back from reflectiveness"; thus antiquity, barbarism, and savagery are perhaps no longer "real alternatives" for countries touched by globalization. One could argue, as Benedict Anderson does, that similar waves have occurred throughout history—say, during the Protestant Reformation, the first-wave Republican revolutions of the eighteenth century, and the postwar democratic/anticolonial movements in the Third World. You would then have to claim, as a realist, that these represent "discoveries" of moral knowledge. The "WEIRD" package of sociopolitical norms gradually accumulates as Western (and eventually global) citizens recognize it as representative of basic universal human dispositions. This is a compelling narrative, buttressed by two notions: 1) that the world has become healthier, wealthier, and happier as a result and 2) that irrationalist movements either fail to upset the hegemonic ideology (in Gramscian terms) or do so at fabulous material costs, perhaps unsustainably. Still, there have been irrationalist phases. The fascist wave of the '30s met the emotional demands of economic depression, for example, and Communism—though worn down by Western affluence—has not altogether disappeared, and may be merging with fascism through Chinese statism in an age when democratic liberal values have never been more pervasive. These movements were not just coups reinforced by reverse-propaganda; rather, mass movements were necessary to place each ruling cadre in power. Propaganda stoked and fanned blazes already burning brightly—blazes rooted in dispositions both fundamental and at odds with liberal values. Why, indeed, does Polanyi's critique of "fictitious commodities" seem so trenchant today? Whence the popularity and endurance of Marxian alienation? Given the choice to independently determine their moral lives, millions have revolted, seeking the warmth of populist, religious, and identitarian ideologies. Fukuyama's proclamation of the "End of History" was provocative three decades ago, but is ludicrous today. It's not at all clear, in short, that we necessarily prefer the set of values and norms that would tend to favor society 2. Indeed, the reflection that you attribute to the group transplanted from 1 to 2 might never conceive of their barbaric values as barbaric, but simply necessary—we otherwise assume, rather than prove, the universality of our Western ethical dispositions. These might never occur in society 1, which could see the world through a completely different lens.
R: Given your response, it seems I failed to either properly identify or convey the intended distinction between societies 1 and 2. In your initial inquiry, it seemed you were intentionally distinguishing between an obviously barbaric and unreflective penal system on the one hand and a comparatively self-critical, developed system on the other. Under this apprehension, I endeavored to explain how the latter society could claim right to this "moral reality" whereas the former had in some way erred. One way I attempted to do this was by examining the likely means by which each society had hitherto come to be and the principles by which their system was maintained. Under such an analysis, society 1 was revealed to be unstable and maintained only by social customs whereas society 2 appeared to be stable as it was grounded in a realization of innate moral principles. My basis for this speculation was an appeal to mere human dignity, not some contentious liberal virtue. This is where I think some confusion originates between our responses. My regrettable use of the word "Western" in referring to what I really understood as basic moral truisms appears to have conveyed the false perception that I affirm a linear progression of morality centered in the West and eventually adopted by "those less developed nations". Quite the contrary, I think we've headed largely in the wrong direction over the last century, as exemplified by some of the "irrationalist phases" you identified. Similarly, I reject the now common Pinker-esque dogmatic affirmation of liberalism and its dubious objectives of liberty and equality. It was not at all my intention to suppose that THESE are the principles which are somehow realizations of innate moral principles against the barbarism of conservatism or even monarchy. Rather, as clarified above, I meant to appeal to much more basic moral principles such as human dignity which are largely non-partisan. If you wish to still challenge the universality of such basic principles, that's another matter which I think can be adequately addressed through anthropological considerations. Dealing with your actual point, the moral rightness of either society becomes a much more complex consideration when we no longer assume the simplistic dichotomy which I supposed above. For example, making a proper recommendation between a society dominated by liberal ethics of individualism and freedom and a traditional society guided by religious customs and a strict moral authority is not far from "solving politics" in some sense. As such, I won't even attempt to answer that question. Nevertheless, I don't think the existence of such hard problems constitutes a refutation of moral realism (and I'm sure you don't either). Despite each society holding drastically different values and making mutually inconsistent recommendations, it's still perfectly feasible that a right answer exists and that careful deliberation and study can lead us closer to this truth. Though this might seem a bit hopeful in the face of perennial disagreement, the negation (that moral and political statements have no truth value) has just as (if not more) preposterous consequences. There's certainly much more to be discussed though it would be better suited to a real-time conversation in which basic clarifications can be made on the spot. 
As for Williams, I've not actually read his book in its entirety (my recommendation list was largely based off of classics within the field and my understanding of them based on secondary sources) but I'd be happy to study up a little after my finals and discuss it with you next time.
D: I do think that your response is still at least slightly conditioned by the hegemonic Pinkeresque culture—as is admittedly inescapable for me—but I'll accept your claim that this is neither intended nor essential. The purpose of the example was to present a case where, for intrinsically similar concepts of a crime, socially-determined punishments have arisen which represent diametrically opposed principles of justice and legitimate action. These two moral systems are incompatible, having grown to be so. My larger aim was to illustrate my concern that whatever gains we have made in terms of reality could be eclipsed by the demands of relativistic interpretation. Unlike Shafer-Landau, I do think that the origins of moral thought are significant, especially with respect to the "nature/nurture" question. To the extent that a hierarchical view of their approaches is possible, I must accept that my puzzle is flawed.
D: Yes, we can certainly wait to discuss further. I didn't mean to embroil you in a lengthy Socratic dialogue (and I admit to cheating by using my laptop). Perhaps you can give his work a skim over the weekend or something, if you have the time.
4. Wittgenstein
R: What book is this from? Wittgenstein's comments are interesting, in that he seems to reject a necessary correspondence between mathematics and reality, thereby ruling out the presumed fatality of contradictions. If a bridge falls, it is only because we either made a computational mistake or our mathematics fails to model reality. Neither case depends upon a contradiction (as I understand Wittgenstein) and so contradictions don't seem as intractable as supposed. In one sense, I think Wittgenstein's response resembles the one I offered during our discussion, which was to ask: so what if there's a contradiction? On the other hand, Wittgenstein seems comfortable in attributing this indifference to the immateriality of mathematics, whereas I would want to attribute it to the imprecision of natural language.
D: I think Wittgenstein views mathematics and language as games where the rules permit contradiction because they are errant and poorly formulated. He (at least initially) believed that all philosophical problems were by nature linguistic. There is no consequence, because mathematics is an ideal construct. That ideal, part discovery and part creation, can be mistaken. I'm curious about your imprecision argument. Can we overcome this barrier?
R: Yes, I think that is his position. Although if he thinks the folly of language transfers over to endeavors which use language, such as mathematics, it would seem that he must be open to contradiction even in physics and any other subject. This is not necessarily an objection, but it draws out the radical extent of his conclusions. As for overcoming the barrier, I think it's simply a matter of recognizing that not all statements are propositions, i.e. truth-apt. If we generate some paradox in language (like the liar's paradox or the omnipotence paradox with a rock so heavy…), it's not clear to me why we should lend much significance to it. Why not just conclude that we've constructed a sentence which on the surface appears well-formulated but in fact just exploits some quirks of grammar? I imagine Wittgenstein attributes much more significance to such paradoxes since he thinks that language is in some intimate sense tied to reality.
D: Well-argued. The important question to ask, however, is a modification of Wittgenstein's: why should there be no contradiction? In human intellectual systems, the answer lies in the fact that the rules, axioms, and statements are formulated by conscious actors and can be shorn of error. If a move breaks the rules, the rules can be changed to incorporate the move or the move can be withdrawn as invalid (normatively). I read about a famous example of the latter last night. At a 1952 conference on decision theory in Paris, Leonard Savage, a UChicago colleague of Friedman's, was caught displaying preferences inconsistent with rational-choice EU theory (of which he was a primary developer) in response to the notorious Allais paradox. He later wrote that his choice of lottery was an error and switched to the option designated by his theory. Interestingly, he switched from a correspondence/simplicity defense to a normative one soon afterward. In nature, the answer is less obvious. What does a "natural" contradiction mean? If we claim that logical systems are discovered, then perhaps logical contradictions might be construed as "real." It is unclear that this is the case in the sense that, say, the laws of physics or thermodynamics obtain and can be revealed—and here, the notion of contradiction seems out of place. The sciences reflect and describe things that actually happen; if something is predicted that does not occur, either the theory is incorrect or the evidentiary apparatus is flawed. Nature is not "wrong," I think—processes are (and are therefore "true," "factual," etc) or are not.
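For concreteness, here is the paradox in its standard textbook form (a sketch; the exact lotteries posed at the 1952 conference may have differed in detail). Take a utility function $u$ over payoffs in millions and four lotteries: $A$ gives 1 with certainty; $B$ gives 5 with probability 0.10, 1 with probability 0.89, and 0 with probability 0.01; $C$ gives 1 with probability 0.11 and 0 otherwise; $D$ gives 5 with probability 0.10 and 0 otherwise. Most people choose $A$ over $B$ and $D$ over $C$, but expected utility cannot accommodate both:
\[
\begin{aligned}
A \succ B &\iff u(1) > 0.10\,u(5) + 0.89\,u(1) + 0.01\,u(0) \iff 0.11\,u(1) > 0.10\,u(5) + 0.01\,u(0),\\
D \succ C &\iff 0.10\,u(5) + 0.90\,u(0) > 0.11\,u(1) + 0.89\,u(0) \iff 0.10\,u(5) + 0.01\,u(0) > 0.11\,u(1).
\end{aligned}
\]
The two right-hand inequalities contradict each other, and this is the inconsistency Savage resolved by declaring his initial choice an error rather than abandoning the theory.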
R: Apologies for the late response. I got caught up with finals. Hopefully yours went well. Regarding your points, I found the economic example very interesting and I agree with your analysis in the last sentence that nature is not "wrong," but rather our descriptions of nature. I think the crucial point is about how we evaluate the truth / "reality" of statements. The economic and scientific examples rely upon a correspondence with reality for verification, which is why contradictions can't be entertained with any seriousness. On the other hand, linguistic and (arguably) mathematical statements reside in the realm of pure thought. So, truth means something like "in accordance with the rules of the language game." As such, Wittgenstein rightfully contends that contradictions shouldn't be feared. Though, I'm struggling to see what relevance this caveat really has. Isn't Turing right to say that any game which entails contradictions has no business describing reality? If so, what's Wittgenstein's point?
D: I think Wittgenstein might be arguing that, since all "contradictions" are linguistic problems and all languages are limited and constructed, there are no real errors in philosophy—simply failures to properly describe what is being discussed. That accords with his early, Russell-adjacent thinking, I believe. I'm not sure why he'd take issue with Turing on this count, though—seems like he's playing devil's advocate, or simply being unserious.
D: This is the central hypothesis of all linguistic philosophy: that philosophical problems are in fact "unreal," and will be immediately resolved once the proper means of communication is devised (whether in English, "natural language," or some other symbolic form).
R: It seems like this linguistic diagnosis might face a problem with infinite recursion if it is to be communicated through language. Of course, this may very well be the necessary bullet to be bitten.
5. Truth in Ethics
R: For our next discussion, I'd like to better understand what you mean for something to be true. For example, presumably you believe "A war happened in 1812" is true. In what sense? I'm trying to argue that moral statements can be true in the same sense. I'll try to think of better ways to communicate this (as well as just understand it for myself) next time. Should we meet next week same time?
D: When I say that "a war happened in 1812," I mean primarily that past human behavior resulted in the state of organized violence that we call war 208 years ago. I would also accept the implicit claim that I'm asserting that reasonable people should believe this statement.
R: I would agree with your interpretation, but only because we (putatively) have some shared understanding of the terms you used. For moral statements, their interpretation usually comes down to the semantics of "good"/"bad". I would argue that similarly "straightforward" semantics can be applied to these terms as to "human", "behavior", "violence", etc. Of course carefully explicating these meanings is never simple, but I'm arguing that such a meaning "exists", independent of what words one may use to refer to such a notion or whether or not one agrees that this is the "correct" interpretation. Do you accept this as the task of the (minimal) moral realist? I think our primary disagreement on Tuesday came down to oscillating between two different arguments. On the one hand, you seemed to be giving what's called an "evolutionary debunking argument" against moral realism. That is, if a satisfactory evolutionary account can be given for the origins of human morality, then we should not regard these moral statements as "true" since natural selection is not truth-tracking. I gave a few responses to this. Firstly, I argued against the idea that such an evolutionary account exists, since many of our common moral intuitions are in opposition to evolutionarily derived instincts. Furthermore, the capability of robust moral deliberation seems again to resist this truncation to mere evolutionary instinct. Finally, I argued that truth should not be understood independently of our human nature (which presumably has a biological origin). I defended this point by noting a parity between moral and ordinary perceptual beliefs regarding their mutual susceptibility to an "evolutionary debunking." Ultimately, it seems to me, our understanding of truth is always relative to some mode of experience, and so I don't consider it appropriate to judge the truth of some belief outside of this experiential framework (i.e. from "the view from nowhere"). I don't consider this to be a concession or redefinition, but rather a clarification of what I (we?) mean by truth. More can certainly be said about this, but I'm optimistic that we can reach some sort of agreement on this point. On the other hand, you seemed to respond to this with a separate point about the social construction of morality. I think some of the confusion on Tuesday came down to not carefully distinguishing these two arguments. IF a fully adequate account of metaethics can be given by appealing to social construction, THEN I should concede my moral realism and join you instead. If, for example, it can be shown (with reasonable confidence) that there is nothing more to our moral beliefs than what we're taught (in the broadest sense, including propaganda, upbringing, surrounding cultural norms, etc.), in the sense that is most likely true of our clothing preferences, then again I should reject moral realism. So I don't at all see a parity between a kind of social construction and biological contextualization. That is, if my human nature grounds certain moral prescriptions, I don't see this as analogous to my circumstantial, cultural setting grounding my moral beliefs. I think this may have been a point of confusion during our discussion, leading you to call my stance a Pyrrhic victory. If, according to you, a genuine victory for me means grounding morality outside of human nature, as part of the sterile universe, then I concede that endeavor. I don't even think ordinary perceptual reality can be grounded in that way.
So once more, our dispute seems to rely on a nuanced understanding of truth and reality. For our next discussion, I think we should focus on this latter point, since it seems to be where the bulk of our substantive disagreement lies: I don't think morality is socially constructed, you do (I think).
D: This is why I prefer writing over spoken debate: talking past your conversation partner is more difficult when the words are set in type. Yes, I do accept this as the task of the moral realist. I would grant you whatever task you set yourself, of course, but this is what I conceived your immediate objective to be. To your first point, I return that the "purpose" of those behaviors that apparently run contrary to "instinct" is to fill the situational gaps where those innate tendencies fall short, as in the effort to coordinate in collective-action problems. Thus an evolutionary origin appears to be even more probable, as groups and individuals that cultivate moral sentiments—reciprocity, fairness, etc.—are able to form larger and more successful societies. Such beliefs have founded the rise of "Western Civilization," the most economically and militarily dynamic entity in history; moreover, when these notions break down, so do nations and social systems (Venezuela, Libya, Syria, Russia, etc.). Moral deliberation cannot be separated from evolution, being an extension of our biologically-developed capacity for problem-solving and "rationality," so I reject your second claim as well. As to your comparison of moral and perceptual beliefs, I discarded the analogy on the grounds that while sensation collects information of an exterior entity (of which we can obviously understand little), our ethical faculties create information internally, or rather discover deeply ingrained feelings. The two behaviors, while similarly susceptible to doubt, are not parallel acts. Is morality found within "true," or truth-tracking? I say not, given my previous discussion of what you label "evolutionary debunking." Social construction and evolution are closely parallel processes, and sometimes are one and the same. We act creatively in response to our environments in a manner much like our genes (see Dawkins on memetics), adapting to challenges that ultimately send successful variants of both beliefs and biologies to the top. "Human nature" and moral beliefs are both influenced by the surrounding environment; the latter is affected directly and indirectly, through our changing neurologies. If the establishment of a connection between biological factors (concrete psychological tendencies, from hawkishness/aggression to dovishness) and moral beliefs constitutes a victory for you, then I concede—moral beliefs would indeed be real, in the way that socialism, utilitarianism, and paranoia are real. We could say that certain acts are good or bad TO certain individuals or WITHIN a certain society, and this would actually be the case. I return briefly to the analogy of the game: a player's strategy in response to perceived payoffs is a real pattern, mixing biology and environment, which can be evaluated only within the context of the game—as hawkish, dovish, probing, neutral, suicidal, etc. I'm prepared to discuss the socially-constructed nature of morality, if you wish. As with the "nature-vs-nurture" debate, the answer's probably "a bit of both," but as I've argued above, this makes little difference to me.
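A minimal formal version of the game analogy, assuming D has in mind the standard Hawk-Dove model (the payoffs are not specified in the exchange): let $V$ be the value of a contested resource and $C > V$ the cost of losing an escalated fight, with row-player payoffs
\[
\begin{array}{c|cc}
 & H & D \\\hline
H & (V-C)/2 & V \\
D & 0 & V/2
\end{array}
\]
Since $C > V$, neither pure strategy is evolutionarily stable; the stable population plays Hawk with probability $V/C$. On this sketch, "hawkish" and "dovish" are perfectly real, evaluable properties of a strategy, but only relative to the payoff structure of this particular game, which appears to be the sense of context-bound reality being claimed.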
R: Although I had hoped we would agree on my first point regarding the meaning of "truth" and "reality" and its being inextricable from biology, it seems this is still a major point of contention. I don't think it would be fruitful to discuss the purported social construction of morality before agreeing on this point, since even if I could convince you that there is an intrinsic, biological basis for our moral beliefs, it seems you would not regard that with much significance. Thus, my point is two-fold: (1) To demonstrate the insufficiency of your restricted understanding of truth in accounting for both ordinary perceptual and logico-mathematical knowledge (2) To argue that my understanding of truth satisfies the usual desiderata (e.g. grounding knowledge, deeming others to be in error, being "objective") and so should be accorded as much significance as any other notion of truth. From what I gathered reading your response, it seems like your understanding of truth is heavily tied to ontology. That is, it's true that "you're holding a phone right now" because there are two entities "you" and "phone" which PHYSICALLY exist in the mind-independent world, in the spatio-temporal relation specified by the proposition. It was on these grounds that you distinguished ordinary perceptual knowledge from moral beliefs, since the former (unlike the latter) are caused by entities in the external world. As such, there's something "real" about ordinary perceptual beliefs whereas moral beliefs are merely constructed by the mind. Unfortunately, in addition to the usual external world skepticism, I think this understanding of truth fails both in actually grounding ordinary perceptual knowledge and in accounting for conceptual knowledge (such as logic and mathematics). Even though our perceptual beliefs may be caused by external objects, the concepts within which these experiences are cognitively situated are relative to the mind. As an example, consider a different species with a distinct perceptual apparatus, such that they have no concept of color. Is it the case that this species is unable to see the world as it is, or simply that their mental experience of external objects is different, hence colors are "real" for us and not for them? If the former, then what privileges the biologically developed perceptual apparatus of humans? Why doesn't your evolutionary debunking apply to the reliability of our perceptual beliefs? Just noting that these beliefs are caused by external objects is insufficient for grounding knowledge. Instead, I argue that truth must be understood in relation to a mode of experience. We can't talk about "the way things are" independent of some conceptual structure; nevertheless, groups which share such a structure (such as humans, for the most part) may speak of "the truth" as a contraction, since specifying "relative to the human conceptual structure" would be redundant in communication between humans. Without understanding truth/reality in this way, I don't think you can make sense of even ordinary perceptual knowledge. If you still find trouble with comparing ordinary perceptual beliefs to moral beliefs, consider purely conceptual beliefs such as in logic and mathematics. In this case, our beliefs are not caused by external (material) objects, but instead are byproducts of the structure of the brain/mind. We come to know modus ponens or the law of noncontradiction by investigating the contents of our mind, not some empirical experiment.
Any attempt at the latter would necessarily presume the reliability of the former. Nevertheless, we have no trouble speaking of truth in these domains. The fact that our beliefs are the result of an intrinsic biological structure does not preclude knowledge. Yet, if we take your skeptical argument seriously, we should have no confidence that logico-mathematical beliefs reflect reality. They are just products of an evolutionary history, so in what sense are they real/true? Once more, I think both these cases help to reveal what is actually meant by "truth"/"reality": something is true/real relative to a given mode of experience and conceptual structure. It's precisely in this sense that I understand there to be moral truths. I think there's a (biological) structure to our understanding of what is right/wrong which can't be reduced to social construction. If you consider this a Pyrrhic victory, then, as I've argued above, so too should you cast all conceptual knowledge (and even sensory perceptual knowledge) to the flames. Now, I'd like to address some worries regarding the potential implications of this understanding of truth. In your response, you suggest that there's not a meaningful difference between the social / biological construction of concepts. And so, to "establish a connection between biological factors and [truth]" would be to make socialism, utilitarianism, and paranoia all true in the same sense. I would like to clarify that I'm not simply establishing a correspondence between biological tendencies / instincts and truth. Rather, I'm claiming that truth has to be contextualized within a biological / conceptual framework, as argued for earlier. This framework includes reason, logic, and evidence, all of which will necessarily have some biological origin (at least in part) but which cannot be reduced to evolutionary instinct. This is why I distinguished moral deliberation from instinct with regard to having evolutionary origins. Otherwise, your debunking would apply to reason as well. So, I don't think my understanding of truth leads to absurd conclusions where anything with some biological component becomes "real". Additionally, my understanding of truth is perfectly capable of grounding moral knowledge in the "objective sense" that you agreed is the task of the moral realist. If there is some complex, intrinsic, biological basis for our moral beliefs, then it follows that some moral beliefs can be out of keeping with this foundation due to any number of external perturbing factors (e.g. culture, propaganda, ignorance). The ethical imperative is then to investigate these moral beliefs and determine which are true and which aren't. This process will undoubtedly be complex (like any search for truth), but it will likely follow some combination of empirical studies, consistency testing, introspection, model construction, etc. There's nothing uniquely impotent about these methods in investigating the truth. So, once again, I think what we should really focus on is whether moral beliefs are socially constructed or have some intrinsic (biological) foundation. I contest your indifference to this distinction for the reasons above. As I see it, if you think the latter is of little more significance than the former, then you do away with a coherent notion of truth altogether.
R: In thinking more about our disagreement and my attempt at refining a notion of objective truth which is not tied to ontology, I've come across some terms in the philosophical literature which seem to generally reflect my (admittedly underdeveloped) views. Here's two specific resources: https://plato.stanford.edu/entries/constructivism-metaethics https://www.jstor.org/stable/20012351?seq=1 The general thrust of these views is to admit a notion of objectivity / "realism" which doesn't entail mind-independent objects, existing in some ontological sense like Plato's forms. Instead, truth obtains as the natural culmination of a rational procedure, limited by the cognitive apparatus of the (human) mind. In this way, truth is "constructed" by the mind. This is quite an involved thesis, but I think it helpfully delineates between my view about ethics (and probably also mathematics) versus error-theory (which I take to be your view) and non-cognitivism.
6. British Imperialism in South Asia
D: Do you have any strong opinions about the rule of the British empire in South Asia? Asking for a friend.
R: It's certainly not something I know much about, so I can't claim to have any particularly strong opinions about it either. As I understand it, the general narrative is drastically split—on the one hand, British (more generally European) powers exploited Southeast Asian colonies by robbing them of their natural resources, persecuting them (often on racial grounds), and establishing oppressive institutions and self-perceptions which continue to hamper them, thereby directing the fruits of the colonial labor primarily towards the interests of the imperial motherland; on the other hand, this interference did directly motivate the rapid modernization of these countries through the introduction of improved science, technology, education, infrastructure, labor opportunities, etc. Absent this colonial intervention, it's unclear how the under-developed nations would have fared in the contemporary environment. Nevertheless, I don't think the latter can be used as a justification for the former any more than the trans-Atlantic slave trade is justified by the present status of African Americans compared to many African countries today. Likewise, the poor working conditions in many of these Southeast Asian countries are not justified by their being preferable to subsistence farming. The basic ethical realization is that there are no extenuating goods, only pros and cons; even in cases where the pros outweigh the cons, the pros do not therefore justify the cons. Sometimes it's retorted that if the pros are dependent upon the cons, then the pros also justify the cons. Yet not many would argue that the reuniting of a broken family following the death of a mutual loved one thereby justifies the death; or that the strengthening of a soldier's mental fortitude following a traumatizing experience as a POW thereby justifies his torture; and so on. So, as a matter of ethical principle, I abstain from participating in the debate about whether the atrocities of that period of history were 'justified', though I do think that there are genuinely interesting and difficult questions to be answered. For instance, do you think this period of history was an inevitable consequence of divergent rates of industrialization combined with increasingly global politics, as facilitated by developments in technology and transportation? Or was there a more peaceful alternative, where less-developed countries had the opportunity to modernize at their own pace? The answer to this question, I think, determines whether those events should be properly regarded as morally equivalent to a genocide. What I suspect is that the historical period in consideration is simply too broad to characterize in general terms; perhaps the Bengal famine was morally equivalent to a genocide, whereas the missionary efforts of the Spanish in the Philippines weren't. Additionally, to what extent were the purportedly humanitarian (e.g., "White Man's Burden") motivations of the colonizers genuine? To be clear, I'm not so concerned with intention as I am with reasonable expectation: Did the imperial powers not foresee the consequences of their actions on the colonial economies? Do their actions reflect, as their statements attest, an honest effort to prevent making the colonies economically dependent on the imperial countries? Can the function of the British Empire in Southeast Asia really be regarded as symbiotic, and not simply parasitic? As is usual, I'm left with far more questions than answers.
Although it's politically popular to denounce colonialism and imperialism these days (and about as brave as condemning racism or sexism), I remain unsure regarding some key historical facts about British rule in Southeast Asia and the corresponding moral judgments. I'd appreciate your own perspective, especially given your greater knowledge of the relevant history.
D: A number of points: 1) I hesitate to draw parallels between imperialism and the slave trade, especially because the British Empire was an entity that mostly arose after Britain had itself committed to ending human trafficking. There is an extent to which the Empire was intended as a benevolent force for education, development, and commerce that is not reflected in the (private, non-administrative) removal of Africans to New World plantations. We need not seek extenuating goods in the case of the Empire; rather, we can simply evaluate it by the success with which its aims were achieved. 2) Imperialism was inevitable; if Britain had not conquered, others would have—the Raj was established on the ruins of French attempts to conquer the subcontinent during the Seven Years' War, for example, while every major European power tried to get in on the Scramble for Africa. The converse would have occurred if Asia had had the West's technological supremacy—witness the policies of Japan in Korea or China in Tibet. But I'm not sure that the answer to this question has much moral relevance. 3) The Bengal Famine was not a genocide; there are many plausible competing explanations, but "destruction of a pre-existing relief system" is not one of them. 4) The nineteenth-century Empire was not intended to create dependence, but rather to establish open channels of free commerce between complementary states—specialization according to comparative advantage. Whether or not this was beneficial for the colonial states remains controversial, but according to the economic science of the day, unrestricted trade (per Ricardo) could only benefit all participants. What is certainly true is that most colonies were money-losing operations; so if the British were plunderers, they were clearly either insane or incompetent. 5) The central problems remain Indian deindustrialization and stagnant growth, and whether the no-empire counterfactual would have resulted in the same degree of immiseration. I'm inclined to say that the situation would have been the same or worse in the absence of the Empire—without technology transfer, legal reorganization, and education, would India really have been better equipped to develop a world-leading textile sector behind tariff walls?
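For reference, the Ricardian claim in point 4, worked through with the numbers from Ricardo's own 1817 example (labour-hours per unit of output; a sketch of the principle, not a gloss on the actual Anglo-Indian trade data):
\[
\begin{array}{l|cc|c}
 & \text{Cloth} & \text{Wine} & \text{Cloth's cost in wine} \\\hline
\text{England} & 100 & 120 & 100/120 = 5/6 \\
\text{Portugal} & 90 & 80 & 90/80 = 9/8
\end{array}
\]
Portugal is absolutely more efficient at both goods, yet cloth is comparatively cheaper in England ($5/6 < 9/8$ units of wine forgone per unit of cloth). Both countries therefore gain if England specializes in cloth and trades it for Portuguese wine at any ratio between 5/6 and 9/8, which is why the economic science of the day held that unrestricted trade could only benefit all participants.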
R: Thanks for providing your perspective. I've given a few quick point-by-point responses below: 1) To be clear, I wasn't drawing a moral equivalence between the actions of the British Empire in Southeast Asia and the slave trade. My intent was to illustrate what I consider to be a basic moral point: that the positive consequences of some actions, even when they outweigh the negative consequences, don't thereby justify the negative outcomes. I agree with your last sentence that we should not seek extenuating goods; rather, we should assess what I called the 'pros and cons' of any action (whether measured in outcomes or something else). 2) Would imperialism have been inevitable had there been strong international organizations to mediate competing interests, thereby giving voice to the interests of the Southeast Asian countries and possibly allowing them to develop independently and at their own pace, motivated primarily by internal rather than external pressures? Obviously nothing of the sort existed at the time, but I want to distinguish between inevitability due to 'others would have done it' vs. inevitability due to the necessity of worldwide industrialization simultaneously in response to an increasingly global market, for instance. As for moral relevance, if imperialism wasn't inevitable, then it was avoidable. Insofar as it was condemnable, the imperial powers had a moral duty to alter their behavior. If, however, imperialism was inevitable, then there can be no duty to do that which is impossible. 3) I think genocide was the wrong term to use on my part; I should have said something like political negligence / apathy resulting in avoidable mass starvation. My original point stands: that what took place under what is called 'imperialism' must be assessed individually, not as some unified effort / set of outcomes. 4) I tend to believe that expected outcomes should take precedence over actual outcomes in any moral evaluation. So, if the British Empire's expectation for their colonial pursuits in India was to establish free commerce according to the principle of comparative advantage, and not to create dependence, then that should be seriously taken into consideration. However, doesn't the actual behavior of the British Empire in Bengal, for instance—where they destroyed its manufacturing system and imposed harsh tariffs on its textile industry—contradict this narrative and even the principle of comparative advantage? If so, then the Empire was not merely incompetent but actually had ulterior motivations, and hence expectations, not just beyond but actively against establishing free commerce. 5) I agree that the counterfactual question is of great historical significance, but not as much moral relevance. This is the point I was trying to make in response to your first bullet point.
D: I'll respond at greater length to these points tomorrow, except for 4): the textile industry fell not through government agency or tariffs but the lack thereof: free trade led to the destruction of the hand-weaver by fair competition from Lancashire mills.
D: The historiographical controversy is whether Britain should have allowed Indian industrialists to set up tariffs, which they eventually received (through dubious political pressure) during the Depression.
D: Furthermore, much of the decline of Indian industry occurred before the rise of the British factory—amid drought and the disintegration of the Mughal empire during the eighteenth century.
D: 1) Right, I understand. My point was simply that our question should be "how benevolent was the Empire" rather than "did imperial parasitism have any positive consequences." Nevertheless, I do believe that there could be extenuating goods worth considering, especially given the agency of Africans in the slave trade. Countries make welfare tradeoffs every day (less defense spending might allow poverty relief in the South); if an odious evil were traded for a great good, would this not be justified (I don't believe that the slave trade made this happen)? 2) I should clarify that imperialism is not an inevitable result of capitalism, but rather of technological imbalances between core/hegemon states in a unipolar world. International organizations have emerged when power disparities have been leveled—the League of Nations when Germany and America caught up with Britain, and the UN when the USSR did the same to the USA. The contingent aspect is the form that hegemony takes—the liberal world orders of the Belle Époque and the postwar miracle, or the absolute dominions of Rome, Ottoman Turkey, and Qing China. To be clear, I am attempting to justify British imperialism by asserting that others would have done it anyway. What is inevitable for a nation-state is not for an individual. One can rebel against the times and preserve one's virtue, if this is necessary. 3) Aggregation to some degree is certainly necessary—do we assess presidencies by individual events, or by the state of the country left behind (or the tenor of the reign)?
R: I'm not sure what 'extenuating goods' you're referring to 'given the agency of Africans in the slave trade.' Could you clarify? As for making trade-offs, I certainly don't object to the principle of weighing pros against cons, as in the case of deciding to save five people rather than one person from a burning building. But in this instance the consequences of inaction would have been six dead, and the decision to act was unilaterally preferable (one dead or five dead > six dead). It is much more difficult to justify this kind of trade-off when the consequences of inaction are neutral, as in the case of deciding to kill one person and harvest their organs in order to save five others. The case of British imperialism in India is of this latter kind, where inaction would have cost no lives (at the hands of the British), and so it's much more difficult to justify any resulting goods given the requisite atrocities. I'm surprised by your characterization of the decline of the Indian textile industry, where the British simply won out in a 'fair competition' facilitated by 'free trade', and that it began during the eighteenth century for reasons unrelated to Great Britain. Indrajit Ray, writing in The Economic History Review, concludes that "Bengal's export market for cotton textiles started to decay after 1825." And in his survey of the relevant literature, even the earliest proposed start dates for the decline are right around the turn of the century. Furthermore, Ray identifies two factors, widely agreed upon by economic historians, pertaining to the decline of Bengal's cotton textile industry: prohibitive tariffs & technological innovations. These tariffs were instituted by Great Britain in accordance with infant industry protectionism, directly against Ricardo's principle of comparative advantage. The British Parliament raised tariffs on Indian textiles 12 times between 1797 and 1819; this directly created adverse market conditions for Bengal cotton textiles in Great Britain, contributing significantly to the decline. Whatever your assessment of this approach, it certainly cannot be characterized as 'fair competition' or 'free trade' and the 'lack' of tariffs. Please note that I don't intend to suggest that prohibitive tariffs were the sole factor in the decline of Bengal's cotton textile industry, as I of course acknowledge the role of technological innovations during the Industrial Revolution; I'm merely contesting your assertion that in fact the lack of tariffs led to the decline. Anyways, thanks for engaging my initial (far too long) response to your simple question. I didn't expect to go back-and-forth so many times but I certainly think that it's been a fruitful exchange so far. By the way, I'm looking forward to speaking with you about Carnap's book on Monday. What did you think of it?
D: I assumed that the irrelevance of "extenuating goods" referred to a situation where slavery (or any negative policy) was imposed unilaterally, so I was proposing a hypothetical—along the same lines as "conscription is immoral, but American victory over Nazi Germany and Japan was imperative." I see your point better now; in the case of the Raj, however, poverty was already endemic and—as I'll note later—industry was doomed both by global economic forces and internal structural factors. British failure had negative consequences, but so did British inaction. The question is whether British rule exacerbated or ameliorated India's economic woes. I am disappointed by your decision to cite a single paper as a "widely agreed upon" explanation of Indian decline, which compresses multiple distinct periods of history. The free trade era, for example, lasted from the 1830s at the earliest until the late 1870s, and during this time British tariffs on all goods fell to nearly zero. At the time, moreover, India was controlled by the East India Company, which would only lose its monopoly rights in 1833 and full government after 1857. The British Empire had not taken an interest in development. Furthermore, Ray's survey (2009, I presume) in that respect plays on Indian nationalist mythos. Indian exports were minuscule, so losing the British market cannot explain absolute decline. Per Tirthankar Roy (2002), India's premier economic historian: "The export trade in itself was tiny. The proportion of textile export to total textile production was very small, at its peak not more than 1 to 2%. To give a sense of scale, around 1795, India's net export of cotton cloth was 22 million yards, and domestic production was 1102 million yards." Worse, Ray himself attributes the decline to primarily technological reasons! A wealth of research, moreover, cites other internal factors, from labor market frictions to droughts and Mughal decline, which were beyond the reach of any government, let alone a nineteenth-century state with low fiscal capacity. See Wolcott (1997), Clingingsmith and Williamson (2008), and Williamson (2011) on these various points. Indian real wages, per the most recent empirical work by Pim de Zwart, had been falling since at least 1720, and were locked at subsistence long before British textiles became competitive in the Indian market after 1820. What primarily damaged Indian industry was not declining terms of trade but poor agricultural output, which raised the price of food and thus nominal wages relative to the world price of textiles. Nor did the tariffs actually help British industry; instead, they slowed the diffusion of technology and prolonged the existence of the traditional hand-loom sector that would in any event be destroyed by the emergence of the power-loom. Indian yarn, meanwhile, was not competitive with British factories during this period, so free trade would merely have led to the temporary replacement of British cottage industry by Indian low-wage labor until machinery annihilated both anyway. It's funny that you're attacking my "assertion" that the lack of tariffs destroyed Indian industry, because I don't think that this is true—this is the orthodox Indian nationalist critique of British imperialism which I seek to rebut. The fact that an article is published in a field journal does not make it representative of the current state of the literature, or even the consensus of the time.
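A stylized version of the grain-wage mechanism invoked here, with invented numbers for illustration (the papers cited above contain the actual estimates): if weavers must earn a subsistence bundle of $b$ units of grain at price $p_g$, and a yard of cloth takes $\ell$ hours of labour, then the nominal wage and the unit labour cost of cloth are
\[
w = p_g\,b, \qquad c_{\text{cloth}} = w\,\ell = p_g\,b\,\ell,
\]
so a drought that raises $p_g$ by, say, 50% raises the nominal cost of Indian cloth by 50% against an unchanged world price of cloth, eroding competitiveness even with tariffs and technology held fixed.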
D: Ray does not even say what you think he does: "First, although the British tariff policy depressed Bengal's cotton textile exports to that country until the mid-1820s, it did not seem to be a factor after 1826, when the tariff rates were drastically curtailed. The British policy cannot, therefore, explain the industry's decline in Bengal that started in the mid-1820s and continued through 1860. Secondly, there was no discriminatory British bounty policy to promote the import of her cotton textiles into India, which, as we have pointed out above, actually devastated the industry. Moreover, unlike the case of Bengal's salt industry or her indigo dye manufacturing, the cotton textile industry was never subject to severe policy discrimination in Bengal."
R: I cited just one study only because I took it to be representative of my own perspective and in order to explain why I was surprised by your characterization of the decline of Bengal's cotton textile industry. I didn't want to give the impression of having authoritatively refuted your position (since I obviously don't think I have done that, nor do I think that my perspective is entirely right and yours entirely wrong), hence why I simply gave some facts with citations in order to substantiate my earlier claims and counter some of your objections. Note that I was primarily contesting your claim that "the textile industry fell not through government agency or tariffs but the lack thereof." In doing so, I didn't need to show that prohibitive tariffs were the primary or even a substantial cause of the decline, only that they existed and were impactful. It was for this reason that I was surprised by your claim. Reading your response, it seems that everything you say is perfectly consistent with my own stance as characterized above. Where do you show that not only did the British Empire NOT establish prohibitive tariffs but that the "lack thereof" led to the decline of Bengal's cotton textile industry? You cite many alternative factors, none of which I contested, and point out that Ray considered technological innovations during the Industrial Revolution to be the primary cause of the decline, as if I didn't explicitly acknowledge this within my previous response! I think much of this disagreement actually comes down to an unfortunate misunderstanding. In both of my previous responses, I was speaking of prohibitive tariffs instituted by the British Empire in order to weaken Bengal's textile exports. I've now realized that you seem to have meant protective tariffs instituted by India in order to protect its own textile industry. Hence why I attributed the claim that the "lack of tariffs destroyed Indian industry" to you based on your earlier statement about the textile industry having fallen "not through government agency or tariffs but the lack thereof"; and hence why you disavowed the attribution, since you interpreted 'tariffs' in the other sense, which would make the claim not yours but that of an Indian nationalist, as you say. At the end, you characterize my response as an "attack", which reveals a failure on my part to communicate my intentions clearly. Obviously I've given the unintended impression of hostility in my previous responses, which seems to be a repeated issue in our discussions over text. I try seriously to formulate my words carefully and not to be combative during discussions, out of a recognition of my own ignorance in many areas and respect for my interlocutors in believing that they may have something to teach me. If you're able to highlight which aspects of my previous responses failed to convey this attitude, then I would genuinely appreciate it, so that I may prevent future misinterpretations. Hopefully this message clarifies my prior intentions and current positions.
D: In short: I disagree with the perspective (a century-old one at least) that India's inability to erect tariff barriers against British manufactures was the cause of some spectacular collapse of Indian industry. Such a claim is ideologically motivated and contradicted by a swathe of literature in economic history.
British tariffs on Indian goods are irrelevant here, because the loss of a tiny portion of total production cannot explain more than a fraction of a percent of India's decline! No Indian nationalist—or any sane human—would argue that India would have prospered if Britain had not raised tariffs against Indian goods. Ray himself concludes that these tariffs were barely impactful on the economy—they only damaged the irrelevant export sector. I do not believe that you have been hostile, only that you have waded too swiftly into a literature that demands careful thought and multidisciplinary reading.
D: The standard, incorrect nationalist argument (repeated and resuscitated since the 1880s) is that the Empire was bad because the British wouldn't protect Indian manufacturers with infant industry tariffs. This is wrong. Industry declined as a result of, among other things, 1) rising grain wages eroding competitiveness, 2) disruptions caused by turbulent Mughal politics, 3) labor market frictions, and 4) a wage and price structure that disincentivized the adoption of machinery.
R: Your position is much clearer to me now. As I said before, I think the earlier disagreement arose out of a mutual misunderstanding about what the other meant by 'tariffs'. I don't believe what you call the Indian nationalist position, and it's now clear to me that you don't believe what I thought you had said. I did believe that prohibitive tariffs instituted by Britain had more of a negative impact on Bengal's textile industry than you believe, based on what I considered to be a general consensus among those who have researched the history, but I've now lowered my conviction in that belief. It appears that alternative factors played a much more substantial role in the decline of India's manufacturing.
7. The American Historical Review
R: Skimming through the March 2021 release of The American Historical Review, I definitely see your point about not feeling as if you've learned anything. The first thing which struck me was the sheer number of book reviews, which constitute the vast majority of the journal; the rest includes some video game and film reviews (for some reason) and finally a handful of articles of varying significance. (Perhaps this is normal in journals for the humanities, but I've never seen so many (or any) reviews in academic journals for mathematics, computer science, or physics.) The defining characteristic of these articles (as well as the books which were reviewed) seems to be a focus on telling a story, backed up with some citations and corresponding argumentation. There are no formal research questions, independent and dependent variables, statements of methodology, data analysis, discussions of bias, literature reviews, areas for further development, etc. In summary, there was no research, just story-telling. That's a bit of an exaggeration, but it accurately conveys my impression after having skimmed the journal and read through some of the articles in greater depth. Common approaches included "how our contemporary ideological biases influence our historical perspective on …" or "here's something that I've been thinking about, now let me tie it into a broader lesson about 'Science, Empire, and Capitalism'". While there's nothing necessarily wrong with these topics, they seem better suited to a blog post than an academic research journal. More specifically, this style of writing is anything but conducive to critical engagement, since much of it holds the tacit perspective of "this is just one possible point of view among many". As such, the principal motivation appears to be not one of interrogating the evidence in order to reveal the truth but of gathering evidence in support of one perspective on some historical matter. I suspect that this is due to different foundational assumptions about the value and efficacy of studying history. The basic (somewhat naive) vision of history is one founded in a search for the truth (about what happened in the past and the relevant causal factors) and an attempt to navigate the various obstacles along the way, whereas this journal seems generally uninterested in such endeavors and instead preoccupies itself with quirky new modes of analysis and ways of thinking about things. Take one of the featured articles, 'Sounds of February, Smells of October: The Russian Revolution as Sensory Experience' by Jan Plamper, for instance. What exactly is that article's thesis? Is it arguing in favor of an auditory and olfactory approach to historical analysis regarding the Russian Revolution of 1917? Not quite, since that would obviously be absurd. Instead it's arguing that the experiences of both sound and smell shaped the reality of that historical moment in a way which is not captured by traditional historical discourse. OK, fair enough. How does this analysis contribute to my understanding of history in a way that will allow me to make predictions about the future? Or guide policy decisions? Or even personal decisions? It doesn't, and it doesn't intend to. It merely intends to provide yet another lens through which to view history. There is no attempt to proffer an explanation of history in a way that might be challenged; and so whatever value it has must be radically different from what I was expecting from the study of history.
In an attempt to be fair, I don't think my characterization so far is accurate for every single article / book reviewed in the journal. In fact, the books reviewed (from what I can tell) generally seem to adhere more closely to the traditional mode of historical analysis with which I'm familiar. But it is indeed worrying that a top journal in this field would be filled with so much writing of such little value.
D: Your reaction, to my mind, appears both fair and entirely warranted. Historians, ever the officious gatekeepers, will retort that "truth" is neither a meaningful nor productive end of historical inquiry, but the impossibility of objectivity does not give sanction to frivolous, nonrigorous modes of research. Oddly, I would be less aggrieved if there were more microhistory—empirical findings from archeological sites, discussions of textual sources, etc. Then we could give credence to the claim that history has abandoned theory on truly intellectual grounds. We read an archeological text in my Medieval History class last year, for example, which contained a scattered series of reports on the items and structures found at various sites, with only limited speculation about the social functions of these artifacts (though they couldn't help themselves from a little). You can use this! It's at least a coherent picture of life in Northwest Europe after the fall of Rome. But to abandon causal claims and reject the solidity of fact-finding is to lose sight of the mission of historical analysis.
D: I don't mean to proselytize, but have a look at the table of contents of the May 2021 issue of the Economic History Review, one of our premier publications: https://onlinelibrary.wiley.com/toc/14680289/2021/74/2. You'll find a much more satisfactory range of topics—"How fast did the British economy grow during the Industrial Revolution?" "What were rural wages in pre-industrial Southern Europe?" "Why were Spanish immigrants to Argentina poorer than others?"
D: Even the single tokenist article tries to answer a valid question: in what partnership forms (if any) did women invest in British railway companies?
D: (I do mean to proselytize, actually. I'd selfishly love to have economic history discussions).
R: I can immediately tell the difference when looking through the Economic History Review. Each article has a clear question with a refutable thesis. No story-telling, just actual research. I'm somewhat embarrassed not to have realized how poor historical scholarship is outside of certain subfields, since I used to maintain a de facto respect for academia based on my experiences in STEM. Also, I certainly wouldn't be against more economic history discussions. But they might resemble lectures more than discussions given our relative familiarity with the subject.
8. Economic History
D: "Do economic laws explain why civilizations rise and fall?"
D: That's the question that made me an economic historian.
D: My current paper effectively does this, proposing an economic-theoretical explanation for Portugal's decline.
R: That's a very interesting question. My immediate reaction is to say, "While it might be part of the explanation, it will never explain the decline on its own." Though I'm open to being challenged on that. It seems like if economic fortunes/challenges lead to the rise/fall of a civilization, I would expect there to be external considerations driving that rapid change, which might be modeled by but never (completely) explained by economic laws. Did you end up receiving helpful comments on that essay?
D: You'll make a wonderful historian ("it's part but not all of the answer" is our boilerplate statement for everything). I agree, though: genetic and climatic factors operate outside of economic laws and tend to alter them, though I think that most of these effects can be expressed in economic theory.
9. Jared Diamond on The Agricultural Revolution: "The Worst Mistake in the History of the Human Race"
D: On a completely different note, I wonder what you make of this essay by Jared Diamond: https://www.discovermagazine.com/planet-earth/the-worst-mistake-in-the-history-of-the-human-race. He reproduces it in The Third Chimpanzee, and I find that I still do not have an effective response after several years of consternation.
R: In response to Diamond, I agree with much of what he has to say in that essay, but I disagree with the overall thesis. I believe he begins to undermine his own argument when he acknowledges the intertwined relationship between agriculture and crowding. The fact is that everything which makes us special as humans relies upon the growth of societies: culture, science, art, technology, and so on; and societies require agriculture, as Diamond acknowledges. So the renunciation of agriculture must come alongside a renunciation of everything which makes us distinct from other animal species. These things weren't available to the hunter-gatherer "society", not because of a lack of leisure time, but because of an inability or lack of incentive to transfer knowledge across hundreds of generations. Why bother developing a sophisticated writing system or beginning to study arithmetic when your day-to-day life is preoccupied with survival? The shift towards agriculture, on the other hand, motivated both of these developments and facilitated their gradual advancement through population growth and the preservation of past knowledge. Only as a consequence have we been able to take advantage of our heightened potentialities as humans through the development of incredible cultures, civilizations, and knowledge. Diamond points out that this transformation also brought about great inequality, disease, and despair. (He didn't talk too much about despair, although I think it's one of the worst consequences of the Agricultural Revolution.) Nevertheless, all the other developments of societies (medicine, politics, technology, psychology), predicated upon the growth of agriculture, also offer solutions to all of these problems. It's also not as if hunter-gatherer societies were immune to terrible fighting and social hierarchy—consider the documentation of war between chimpanzee populations as well as infanticide, murder, rape, domination, and chronic stress among several primate species. Changes over the last 10,000 years may have exacerbated some of these problems, but they have certainly improved many of them significantly as well (and not just for the rich elite societies). We should therefore not view pre-agricultural societies with rose-tinted glasses as obviously preferable to modern society, the way I think Diamond's analysis occasionally does (mostly through what he leaves out, rather than through active misrepresentation). How many of us, after all, would give up our current lifestyles to become hunter-gatherers? It's notable that Diamond largely compares pre-agricultural societies to immediately post-agricultural societies, rather than modern-day societies. Although some of the trends, such as those concerning height, persist, the trends regarding nutrition and health have been vastly counterbalanced by developments in medicine. (The hardest problem now is getting people to actually listen to the advice of doctors.) While this isn't quite true globally, that is slowly changing, and the reasons for this disparity lie in the realm of politics; they are by no means predestined by the advent of agriculture. Diamond's claims about the relationship between agriculture and the subjugation of women seem similarly narrow-sighted. Once we recognize that the growth of societies (hence culture, science, art, and technology) was predicated upon the Agricultural Revolution, I think we begin to see that transformative period as a necessary hurdle rather than the egregious mistake which Diamond insists it was.
In the end, I agree with much of Diamond's argument, and I think his point would be made even more compelling by focusing on the current prevalence of "diseases of despair", but I think it's a mistake to use his analysis in order to portray hunter-gatherer lives as preferable. As a final consideration, I'm intrigued by the impact Diamond's argument, if successful, would have on the frequent allusions to animal suffering within ethical philosophy. It's currently popular to paint animal life as "nasty, brutish, and short" in the manner that we would typically portray the hunter-gatherer lifestyle; their lives are dominated by the biological imperatives to survive and reproduce, resulting in a life of constant anxiety and terror. If we should instead view this life as preferable to post-agricultural life, shouldn't we then envy the lives of animals, rather than despair over their suffering? That seems wildly implausible to me, and I'm not sure that it's even psychologically possible.
D: One issue that I have with the piece is that I think that Diamond agrees with you, and has merely adopted his "worst mistake" typology in order to attract attention. What he means to say is something less controversial: that adopting farming is not rational for the individual hunter-gatherer, such that those bands which have missed the Agricultural Revolution cannot be accused of "backwardness." I don't think he really believes that this short-run mistake can be extended into the long run. Even Diamond, who has lived among various tribal peoples, has not chosen to remain—he came back to his cushy home, opera booth, and UCLA lecture theater. By practically every metric that he examines, humanity has either improved or gained the ability to improve—heights and nutrition are obviously better than at any point in our history (as are life expectancies); gender equality is a widespread ideal, if often unpracticed; economic inequality can be rectified through democratic action (as in Europe, or the postwar US), and is in any case founded on path dependence, not violence; leisure time, meanwhile, would be abundant if we chose to consume at lower levels. Work has for many acquired a social purpose that actually prevents despair and makes leisure just one of many commodities that we purchase. The nascent technological potential of agricultural societies makes the long-run calculus obvious: we were better off for whatever sacrifices (if any) our ancestors made. The more interesting question is the one he actually considers: whether the Agricultural Revolution truly worsened the lives of those who participated, at least in the short run. I am highly ambivalent on this question. Agricultural societies were not so dominant as to be able to smother all hunter-gatherer tribes until at least the late classical era, if not much later—many of the barbarians that toppled Rome, for example, were only barely farmers, and the famed Scythians and Huns were mobile pastoral peoples. Who would have stopped defection by families who chose to live in the vast interstices between civilizations? There were gaps between and within all kingdoms until well into the Middle Ages. I struggle to believe, in short, that mankind was bullied into such a drastic shift. On the other hand, the biological evidence is unequivocal—average living standards must have declined. Did this mask some other changes—were there perhaps fewer children in pre-agricultural societies, forcing a "quality over quantity" focus in child-rearing? I am not familiar enough with the research to know.
D: I recognize that my analysis is colored by my assumptions about human rationality, but if any situation warranted paying the information costs, surely whether or not to make (or remain in) the transition was one—the price of failure was imminent starvation and death.
D: Hunter-gatherer societies, by the way, were not necessarily sustainable on a global scale—the history of mass macro-fauna die-offs closely follows the history of human population growth and geographical expansion. Marvin Harris, in his book Cannibals and Kings, argues that overhunting was one potential cause of the transition: in short, our destructively aggrandizing impulses were no weaker in the distant past.
D: Incidentally, my father believes that Diamond has subsequently adopted a degrowth perspective, arguing that the transition was an error from the perspective of the present as well.
R: Your restricted interpretation of Diamond's thesis is indeed more plausible. I find myself in the same boat as you in terms of not being familiar enough with the research to make a determination one way or the other. I do think that we might view the transition as a simple numbers game, though. If agricultural societies facilitated higher rates of reproduction than hunter-gatherer societies, then they would prosper in the long run despite any decline in living standards, so long as this didn't significantly affect the disparate rates of reproduction. That kind of explanation, if true, would eliminate any mystery about why a worse lifestyle would be selected for in the short term, though it leaves open the question of why agricultural societies originated in the first place.
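A rough way to formalize this "numbers game" (an illustrative sketch, not part of the original exchange; it assumes constant exponential growth rates $g_a$ for agriculturalists and $g_h$ for hunter-gatherers):

$$\frac{N_a(t)}{N_h(t)} = \frac{N_a(0)}{N_h(0)}\, e^{(g_a - g_h)\, t}$$

If $g_a > g_h$ by even a small margin, the ratio grows without bound, so agriculturalists come to dominate the total population regardless of initial numbers or per-capita welfare, which is the sense in which a worse lifestyle can nonetheless be selected for.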
D: I wonder if Diamond would support a top-down interpretation: tribal chiefs, noting the surpluses derivable from agriculture, force underlings to grow crops and use warriors to defeat nearby clans and enslave the subjects as peasant cultivators. This creates a larger surplus to feed a warrior class, satiate the leaders, and continue the process of expansion.
D: Of course, this presupposes the inequality that agrarian society apparently produced!
R: That part of his argument confused me, since there were certainly social hierarchies even in hunter-gatherer societies. This should be obvious given the extreme social disparities witnessed even in other primate populations (although those tend to fall along sex divisions, though not exclusively, since sexual dimorphism is far more pronounced among non-human primate species). I see how greater population sizes would result in greater inequality, but then agriculture is just an accidental rather than an actual cause.
D: There are also within-sex differences in height, strength, and intelligence that would tend to be exacerbated/perpetuated by the ability to gain access to better nutrition.
D: I don't quite buy the accidental cause distinction, though. If the path is taken, surely the consequences alone warrant a mistake/benefit judgement?
R: If an action A leads to a consequence C, it's not necessarily the case that A caused C. Consider, for example, the action of throwing a rock through a window, thereby shattering the glass and allowing the outside air to enter the home. Consider also that a second person happens to burst a container filled with toxic gas nearby, which finds its way through the broken window and into the respiratory system of the old lady sitting inside, ultimately killing her. In this case, A is the action of throwing the rock through the window, and C is the old lady dying. I don't believe that A caused C; rather, the toxic gas entering the old lady's lungs caused C. Action A is merely an accidental cause of C, since A could have happened without C given different external circumstances.
R: I suppose my claim is analogous for agriculture and inequality, if we accept that the increase in inequality was actually due to greater population sizes: agriculture didn't necessitate greater population sizes, and greater population sizes could have been achieved even without agriculture.
D: You contest the notion that agriculture led to greater population sizes? Growth is the primary consequence of the agricultural revolution—hence quality vs quantity.
R: No, I don't contest that. I merely contested that as a necessary relationship between the two. Clearly we can imagine a society which practices agriculture but maintains a small population. As long as there's no necessary relationship between the two, I think my term 'accidental' applies per my analysis above.
D: I agree. Would you accept, however, that the possibility for inequality is necessarily increased by agriculture?
D: Obviously (you know me) I don't tend to focus on inequality as a paramount social problem; just want to give Diamond his due here.
R: I don't think so; in fact, I believe that would be an even harder claim to defend, since it relies implicitly not only upon a necessary relationship between agriculture and inequality, but also upon the claim that the corresponding degree of inequality will always be greater than the extent of inequality under the preceding pre-agricultural society. That seems like a hefty burden to take on, though I don't claim to have refuted it. Moreover, can't we imagine a hunter-gatherer society with great inequality which is succeeded by an agricultural society which is more egalitarian? Where is the contradiction? (I understand that you don't necessarily believe this, so pose my question to Diamond instead.)
D: The claim is not that inequality is necessarily increased by agriculture, but that the scope for inequality is widened (better clothes/food vs mansions/yachts/jets). I could envision an agricultural society as egalitarian as an HG band, but probably not more—stationary production increases the chances of elite expropriation.
D: The last point is speculative, obviously, and I was on a very boring phone call, which muddled my thoughts.
R: Ok, now I understand your claim. In that case, I'm inclined to agree that agriculture will widen the scope of potential inequality insofar as it extends the possible social strata, in ways unavailable to the hunter-gatherers. However, I can still imagine a group of ruthless hunter-gatherers who operate like a tyranny where one male rules the rest and obedience is maintained via the threat of mutual extinction and power imbalance (e.g. in terms of strength and loyalty). This would seem to me to be less egalitarian than many possible agrarian societies.
D: Less egalitarian than many possible agrarian societies, no doubt. But could that group transition into a more egalitarian agrarian society? I think not. The tyranny of the male would be all the more powerful for the inability of his followers to move their food production and property elsewhere; rents can easily be calculated and extracted; and potential defectors monitored.
R: Hmm, I think I can agree with that. In that case I see your point about the capacity for agricultural societies to exacerbate inequality.
D: Cool. I don't necessarily believe that it'll exacerbate inequality, but merely that it's possible for such a situation to occur.
10. Rigor vs. Flexibility in Definitions (Natural vs. Social Sciences)
R: Yeah, I find his deflationary approach to defining capitalism compelling. Although I wouldn't say that definitions are merely arbitrary, I often find that conversation is more hindered than helped by debating the definitions of terms, rather than just agreeing upon an interpretation for the sake of moving forward. A notable exception is in mathematics, where definitions are actually extremely important and can be the deciding factor in whether or not some theorem is true. For example, Lakatos' "Proofs and Refutations" provides an entertaining and insightful reconstruction of the historical debates surrounding the definition of a polyhedron, and its significance regarding Euler's "theorem" about polyhedra. The definition of a "hole" in topology has a similarly contentious but illuminating history. What's most interesting to me is how such debates engage substantive and meaningful considerations, whereas semantic disagreement in philosophy often feels like an exercise in futility.
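To make the definitional sensitivity concrete (a standard illustration from the Lakatos example, not part of the original exchange): Euler's formula for polyhedra states that

$$V - E + F = 2$$

where $V$, $E$, and $F$ are the numbers of vertices, edges, and faces. It holds for the cube ($8 - 12 + 6 = 2$) and for all convex polyhedra, but fails for a "picture frame" solid with a hole through it, where $V - E + F = 0$. Whether Euler's "theorem" is true thus hinges entirely on whether the definition of "polyhedron" admits such figures.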
D: That's an excellent point. I'm actually trying to ascertain just what the difference is between the social and natural sciences in the respective utility of agreeing to definitions. I think it must have something to do with the fact that definitions are transient and easily undermined for the sake of one's own alternative theory in the former.
R: I think part of the problem might be that precision and rigor with respect to definitions exist on a spectrum, with logic and mathematics on one end, through physics, chemistry, and biology, up to the social sciences on the other end. The function of rigorous definitions is to enable equally precise statements about the things being defined, so that even a slight change in the definition may upset the truth of some theorem. In other words, statements in fields like mathematics are incredibly sensitive to our definitions. This seems to be less true for psychology, for example, where a general statement like "people act in accordance with their desires" does not rely upon a specific interpretation of "people" or "desires" or their conceptual relationship. Whether or not children or women or other races are included under "people" doesn't necessarily impact the truth of the statement, since the concept of a "person" is flexible (i.e. imprecise) enough to accommodate such clarifications. The overall relevance of this for having conversations is that we can (usually) afford to postpone a detailed elaboration of the meanings of our terms in fields like psychology or economics, whereas mathematics relies upon precise definitions in order to even understand any statements made about the things being defined, and so it can't afford to simply set aside questions of semantics.
D: The problem with your interpretation, though, is that social sciences should be sensitive to definitions. The question remains: why do definitions remain open to debate in some places but not others?
D: Just because you can get a general idea about what is being said without a precise definition does not mean that the argument would not be enhanced by the acquisition of one.
D: Flexibility, to my mind, implies a kind of conceptual laxness or weakness which need not be indulged.
R: As for your claim that definitions should be precise in all fields, and that flexible definitions could always be replaced by more precise technical definitions, I'm not quite sure. There's a definite downside to rigorous definitions, which is that concepts become intimately related to the theories/theorems which utilize them, in a way that obscures the presence of the same or similar concepts in different theories. This problem is widely appreciated by mathematicians, for example, who often attempt to mitigate it by initiating programs specifically designed to unite disparate subfields such as "algebraic geometry" or "arithmetic dynamics". To take an example from the latter field, this paper (https://annals.math.princeton.edu/2020/191-3/p05) made a significant discovery by establishing and then utilizing an analogous relationship between "torsion points" (number theory) and "finite orbit points" (dynamical systems) in order to transform a longstanding problem in number theory into the language of dynamical systems. The conceptual machinery which was ultimately necessary to tackle the problem was not available to the subfield of mathematics in which it was originally stated. That's an example where very precise terminology, which is demanded by mathematics, actually obscures deeper conceptual relationships and fosters insular thinking, hence why mathematicians sometimes feel the need to consolidate different tools from various sub-disciplines. Another point is that I'm not sure whether some fields are actually capable of making more precise statements without turning into different fields. Since the precision of one's terminology restricts the precision of one's statements, it follows that limited precision in one's terminology may actually be preferable or even necessary. Consider, for example, the concept of "homeostasis" in biology. The basic idea is imprecise: a process by which a living organism maintains stable equilibrium conditions. It allows us to make statements like "an example of homeostasis in humans is the thermoregulatory system, which maintains an internal temperature around 98.6 degrees Fahrenheit"; but if we attempt to give a more precise description of this concept, then we quickly go beyond the language of biology. For example, explicating the specific chemical pathways by which thermoregulation operates brings us into chemistry. Or detailing the subatomic structure of chemicals, cellular structures, tissues, organs, and organ systems brings us into physics. Homeostasis, therefore, with its built-in level of imprecision (hence flexibility), is the proper level of description for making statements within biology. If we insist on providing a more specific description, then we leave the realm of biology and lose the conceptual generality of "homeostasis" which allows us to unite thermoregulation with the baroreflex and bone remodeling. So it's not clear to me whether more definition-sensitive statements are always preferable or even possible. I definitely see your point about how some bad actors will hide behind ambiguity in order to deflect criticism as mere misinterpretation. However, I think the solution is not to make the social sciences more like mathematics, which honestly I would expect to exacerbate the problem; for then each researcher will adopt their own idiosyncratic definitions of common concepts and restrict their usage to their own particular pet theories.
("Oh I'm not talking about that kind of capitalism, I mean post-hypermega-lydian capitalism as elaborated in Lundenshellenberg's classic 1879 treatise and interpreted by Holmor in his …") Instead, it's important that researchers acknowledge a shared understanding of the essential features of some common concept and then engage in mutual criticism in order to determine its proper interpretation. So, for example, every psychologist should comprehend a shared (somewhat vague) notion of "the unconscious"; they should agree, for instance, that "the unconscious" operates without being mediated via thoughts and is responsible for regulating body temperature and breathing. When two psychologists disagree about a more precise characterization of "the unconscious" (say one is a Freudian and the other is a Jungian), I don't think they should respond like mathematicians by adopting two different concepts and simply elaborating the corollaries of each view. Instead, I think they should interrogate the deficiencies in each interpretation (by, for example, testing the respective predictions of each theory via experimentation) in light of their foundational agreement about the essential features of "the unconscious" in order to arrive at the "true" interpretation. And so it's important that psychological statements about "the unconscious" remain flexible enough to accommodate various further elaborations lest competing researchers convince themselves that they're simply speaking of different concepts and thereby abandon a common scientific enterprise.
D: While I was not necessarily calling for terminological specificity, I think that the scenario that you've so cheerfully outlined—"I mean post-hyper-mega-lydian capitalism"—is actually better than the one that exists at present. We can actually evaluate what you mean, in that case, by going to the work that you've cited or demanding the straightforward definition of the term, and the preconditions for its fulfillment. Does HML capitalism include the capitalist-worker relationship and subsist on surplus value? If not, well then we can differentiate it from Marxian capitalism in this respect—in short, clarity in terms forces one to say what one means. This doesn't entail mathematicization—far from it. But we can separate general from specific concepts and recognize their respective utilities in discussion; we know that the former is a convenient reference point without much analytical power, while the latter attempts to describe a concept as exactly as possible. We need not even attempt a hierarchical family/genus/species structure with our terms—but we do need to know when, where, and why it's worth bickering about them.
D: When our terms cease to carry descriptive meaning as a result of changes in the field, then our "flexibility" should consist of a willingness to openly debate whether the terms and their definitions remain useful. If we think it necessary, we should absolutely change them to adapt to new circumstances. But flexibility and mutability (or even generality) need not imply uncertainty, which in the social sciences has too easily empowered the "bad actors" that you've described. Indeed, it's probably inhibited the formation of paradigms and research programmes altogether (beyond popular zeitgeists).
D: Summary: not calling for all terms to be made more specific and inflexible, but rather for the existence of a broader arsenal of both specific and general terms for the more accurate and comprehensible discussion of complex topics. Greater conceptual clarity is the basis for research, and for discussion that is anything but individuals talking past one another. "Capitalism" is not a useless word at all, but it would behoove everyone to realize the limits of what can be conveyed by using it unaccompanied.
R: Then it seems we mostly agree. I certainly acknowledge that precision about our terminology sometimes facilitates clarity in discussion, in which case I'm totally in favor of it. Though I dispute the claim that it is always a virtue (which I wouldn't attribute to you). Hence why I gave the example of "Post-super-ultra-hyper-mega-meta-lydian capitalism" (that's the full form). My intention was to present an extreme example where researchers adopt highly idiosyncratic terminology to the effect of obscuring a common subject of research, namely capitalism; I tried to illustrate the consequences of this with the "arithmetic dynamics" example. Though I can see why you actually interpreted my example favorably, since I didn't provide enough context within that quote itself. I wholeheartedly agree with your last statement about preserving general concepts and providing elaboration when necessary for analysis. My fear is that the unequivocal privileging of precision leads to the mathematicization of terminology, which suits neither mathematics nor science.
D: I think I understand your mathematics example; I was simply pointing out how PSUHMML Capitalism might actually help discussions by forcing people in the social sciences to be clear about what they mean, because I find that the problem isn't idiosyncratic terminology—as long as people then explain themselves—but the use of vague terms, masquerading as common ground, as shields for obscurantism.
11. Democracy
D: By the way, I'm sorry if I came off aggressive at all yesterday re: democracy. I felt that you were arguing for the sake of disagreement, but I should have taken you more sincerely.
R: Sounds good. No worries about yesterday, I didn't feel that you were aggressive. I think we may have talked past each other a bit, though. At some point in our discussion, the relationship between democracy, science, and freedom got confused and that led to miscommunication. My point was just that democracy does not necessarily restrict liberty insofar as it also facilitates liberty (both through preventing its legal suppression as well as actively promoting the exercise of those freedoms, e.g., speech and religion). But then the relationship of this analysis to science was muddled since Feyerabend was advocating "democracy" in science explicitly in order to promote freedom; and so we had concluded that "pluralism" may be a better description of his view.
D: Right, I think I understood that. I was arguing that democracy necessarily constrains some liberties in order to achieve other ends—sometimes freedoms, but on other occasions safety, public order, or redistribution. Whether the balance of liberties distributed across the mass of the citizenry increases is, to my mind, irrelevant: certain behaviors have been blocked.
D: That may be an intrinsic good. Most or all citizens may believe that it is so. But the point is that the goal of political formation is to regulate the organization of society. It may be right that we restrict the ability of firms to emit CO2, and that in so doing we open up possibilities for young children to live longer, healthier lives. But this boon has come at the cost of some economic liberty.
D: Which, in the end, is why—I think—we returned to the notion of pluralism, which does not have the same regulatory mechanism.
R: I agree with all of that. I think I would be inclined to say that the ability for young children to live longer is not merely a good, but a fundamental liberty; that's why I was objecting to the characterization that democracy limits freedom in order to promote greater goods, since I think many of those greater goods reduce to freedoms themselves, in which case we may actually be promoting greater freedom. But this is, I think, not a particularly important semantic disagreement.
D: I actually think that this distribution is important, but that's talking as an ex-classical-liberal/libertarian. I think the reduction is, well, reductionistic. I also resist this notion of "aggregate freedom" as inimical to the human spirit.
D: But I agree that we don't need to settle this to resolve the scientific quandary.
R: Indeed, the scientific question is quite separate. The first point is whether greater "freedom" (pluralism) in science is actually efficacious with regard to discovering the truth. Beyond that, we may ask whether it's nevertheless morally desirable (for the sake of social harmony, for instance). It seems that Feyerabend would respond affirmatively to both, whereas both of us (I think) are at least skeptical about the former and therefore deem the latter to be beside the point, insofar as we declare the pursuit of knowledge to be the primary goal of science (in which case any auxiliary benefits of pluralism would seem irrelevant).
12. William MacAskill - Are We Living At The Hinge Of History?
R: Thanks for sharing. I thought the article was interesting and generally compelling with regard to its appraisal of HH as characterized in the essay. My primary reaction, though, is to object that (I suspect) most people who make a claim about us living "at the hinge of history" are typically comparing the present to the past, not the present to the future (or even to all points in time). If that's true, then the scope of the claim is far more restricted (only the past 6,000 years or so of human history) and consequently our Bayesian priors should be much higher than if we were comparing our current "influentialness" to the potential influentialness of all points in the future as well. The author objects to my presentation of HH on the basis that if there were some more influential time in the future, then we should actually be investing our current resources into that future decision instead. However, proponents of HH typically contend that we will not ever get to that future more influential time if we don't first overcome our current hurdles (e.g. climate change and the threat of nuclear war), in which case "investing in the future" and "tackling our current obstacles" are identical. The author also objects to these restricted interpretations of HH on the basis that they are incompatible with what he terms the "Bostrom-Yudkowsky view on superintelligence." Without having read their books, I'm skeptical that they would argue that our response to the threat of superintelligence will constitute the most decisive moment in human history EVEN IF we succeed in eliminating it, in which case there seems plausibly to be room for more pressing threats in the future. This asymmetry seems to defuse a lot of the skepticism, since the grandiose claims are restricted to the case where the threat isn't eliminated, in which case there is no future human history and so our claim of maximal influentialness concerns only a marginal period of time compared to the indefinitely long span of human history should we succeed in overcoming the threat in question. My final consideration is just to point out that in addition to the salience bias associated with determining the influentialness of our current time, there is a competing bias which naturally disposes us towards not wanting to accept that we're living in the most influential (or even just an enormously influential) time, because that scenario immediately imposes a high degree of responsibility upon us—especially upon the kinds of people who have the luxury of reading that article in the first place.
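To put rough numbers on the point about priors (a back-of-the-envelope illustration with stipulated figures, not drawn from the essay): under a uniform prior, the probability that our century is the most influential among $N$ candidate centuries is $1/N$. Restricting the comparison class to the roughly 60 centuries of recorded history gives a prior of about

$$\frac{1}{60} \approx 1.7\%$$

whereas also including, say, a million-year future (another 10,000 centuries) collapses it to roughly $1/10{,}060 \approx 0.01\%$. The restricted reading of HH thus starts from a prior about two orders of magnitude more generous.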
13. Historical Economics, Mathematical Notation, Teaching History of Science
R: Just read your latest post, "The New Historical Economics is Self-Aware". I found your distinction between economic history and historical economics to be enlightening and, once laid out, quite clear. It's one of those concepts which seems obvious once you encounter it, but never previously occurred to you. I'm reminded of the remarkable invention of analytic geometry (think graphs on the Cartesian plane), which connects algebra (relationships between variables) to geometry (shapes and curves) in a way that now seems to me obvious, but at the time must have required a great spark of ingenuity. Another such innovation is modern algebraic notation; the use of "x" to denote an unknown variable and the use of superscripts for exponentiation were not introduced until 1637, by Descartes in his work "La Géométrie". For your amusement (and perhaps to inspire some gratitude for the conveniences of modern mathematical notation), here's a collection of polynomials, first in modern notation, and then in the notation of Diophantus (c. 150 AD):
[TODO: insert images]
R: Imagine trying to solve even simple quadratics, let alone complicated problems in econometrics, using the notation of Diophantus!
D: Hey Rajat, good to hear from you. I'm very glad that you enjoyed the latest blog post, though I do have to admit that it's sort of a poorly edited ramble. But that's all that I have time for these days, and I assume that some people will like that better than nothing at all. In any event, it's something that I had to get off my chest. No, that revelation was not obvious to me either. Someone later accused me of stealing an idea they had been trying to push on me for months; and I said, well, if you were saying that, I really had no idea, and couldn't have realized it until I was more of a practitioner. They thought that this was fair.
D: I really have no idea what Diophantus is doing there. Reminds me of an old Chinese parable about a young child learning the first four numbers as characters, and then trying to count to one billion.
R: Personally, I don't mind the somewhat conversational tone of your posts. It makes them more accessible for laymen like myself, whilst retaining higher standards for rigor and documentation than are typically found in popular articles. Regarding Diophantus, I'm as lost as you. I'm always impressed by how much progress was made in mathematics (and the sciences) using the rhetorical style of Euclid, especially when compared to modern texts, which are filled with symbols. If you've ever tried to read documents from those periods, like Kepler's Astronomia Nova or even Newton's Principia, you'll recognize it as a daunting task; it's telling that modern renditions of these topics deviate so significantly in style. For example, the famous "F = ma" never actually appears in Newton's writing; instead you find "A change in motion is proportional to the motive force impressed and takes place along the straight line in which that force is impressed." It's obvious which form is more convenient for trying to solve projectile motion problems.
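For comparison, Newton's verbal statement corresponds, in a standard modern rendering (the symbols are ours, not Newton's), to a law about momentum:

$$\vec{F} = \frac{d\vec{p}}{dt} = \frac{d(m\vec{v})}{dt}$$

which reduces to the familiar $\vec{F} = m\vec{a}$ when mass is constant.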
D: Oddly enough, there are close analogs of this in the history of economic thought. There is a big debate about whether we should interpret Adam Smith, David Ricardo, and Karl Marx as they wrote, in terms of the concepts that they were probably thinking of, or in terms of modern economic models.
D: One compendium, for example, explains Capital and The Wealth of Nations using calculus and various supply-demand graphs. Having read the former, I think you can probably guess that this is somewhat anachronistic.
D: This also reminds me of some interpretations of Greek and medieval science and philosophy as well.
D: Say, the treatment of atomism.
R: I remember Kuhn also talking about this in The Structure of Scientific Revolutions. He mentioned how textbooks usually present the history of their science as a linear accumulation of knowledge, with key figures making their particular contributions and later scientists simply adding on to this work. Atomism is a notable example, since our concept of what an atom is has changed drastically since it was first suggested by pre-Socratic thinkers like Democritus, who understood the atom as something indivisible; today, by contrast, we speak of sub-atomic particles. People like Boyle (c. 1661) later spoke of corpuscles, like atoms in the sense that they are fundamental constituents of matter, but not necessarily indivisible. Dalton (c. 1808) approached our modern understanding of atoms with his law of multiple proportions, but it solved a very different problem than the one addressed by the Bohr model (c. 1913). This is to say nothing of the post-quantum understanding of the atom. So one might justifiably wonder why we use the same word "atom" for this concept which has taken on such disparate meanings over time. Are we faithfully reinterpreting the work of past scientists updated via modern observations, or are we anachronistically imposing our modern predilections onto past science in order to present a simple narrative?
R: I don't have a clear answer to this question, but here are a few thoughts: The big problem, it seems to me, is when we reinterpret the work of past thinkers using modern terminology, but then retain their conclusions stated in this new language without appropriate modification. My suspicion is that this is not uncommon with reinterpretations of Marx, for example. In science, however, I think we tend to update our terminology / understanding of concepts only when they're accompanied by new conclusions, so this is not as much of a concern. For instance, with advancements in our understanding of the basis of heredity (i.e. genetics), we updated our understanding of evolution to mean "change in the frequency of alleles in a population", and so we accordingly gave up Darwin's theory of pangenesis as an account of heredity, but retained the word "evolution".
R: Another concern is whether we are misleading new students, when we teach them about old concepts using new terminology, about the kinds of questions with which these thinkers were concerned and the way in which they approached them. This is almost definitely the case, but the tradeoff is that we can learn from past thinkers with much less effort. I'm glad that I can learn about Diophantine equations without having to learn the notation of Diophantus! So it's not clear that this is a practice which we should change, but maybe it would be good to at least point out what we're doing.
D: I tend to agree with the notion of a simple narrative, but it's certainly a complex question. The analogy between physics and economics in this case is imperfect, because in the first instance antiquated terms are consciously preserved across history by the users who adopt them to describe several phenomena. Atomos, for example, is Democritus' word, and subsequent thinkers have borrowed it for their own purposes. It has the meaning that each of them has endowed it with. Whereas something like general equilibrium, say, is a term developed in a more recent economic era (the late 19th century) and then grafted onto the works of Smith.
D: Conceptually speaking, I do think there is a simple narrative aspect to it. Using the same words doesn't help, but I would say that the narrative in this case is a consequence of language and not the reason for the shared terminology. I think we have spoken before and largely agreed about this myth of cumulative progress.
D: As for educating students, I am generally of the opinion that the marginal return to education in the history of science is very low. I think that in general, people tend to compartmentalize what they know about history from what they do in practice. I probably know more than the median economist about the history of economic thought, but it doesn't help me do anything on a daily basis. Now, as a historian the question is different; interpretations from over a century ago are, if not completely valid, still provocative talking points today. Max Weber still gets papers arguing that he's wrong.
R: I agree that it's important to distinguish between those concepts which have been preserved across time organically by the practitioners themselves, such as with atom, and those concepts which are artificially introduced into the work of past thinkers, like your example of general equilibrium. That being said, the effect is the same: we're constructing a simple narrative whereby we pretend that people have been dealing with essentially the same concepts for hundreds of years, just slightly modified to reflect developments in our knowledge and understanding. As for whether the narrative is a consequence of the language, or the language is a consequence of the narrative, I'm sure that in practice they develop alongside each other. Regarding education, I think that studying the history and philosophy of science generally has some immediate practical value, but with strongly diminishing returns. So it's probably good for scientists to understand why the notion of a singular scientific method is so misleading, and to realize that the history of science is not a story of gradual accumulation of knowledge, and also to appreciate that there is no definite, unchanging concept of what is "natural/material", rather this notion is constantly evolving with the latest developments in scientific theory. But all of this can be learned in a couple of lectures, and additional details don't add much value.
D: On the subject of education, I think it can be useful to spend a little bit of time talking about the history of mistakes, and ensuring that students know that we are, if not equally, still prone to them now. For example, the replication crisis in the behavioral sciences should, if we are serious about getting better at applied statistics and experimentation, be taught to every future generation of empirical economists, sociologists, and social scientists in general until the end of eternity.
D: I wonder how much we are really susceptible to this simple narrative. Obviously it is true that we believe there to be greater similarities between the Greek version of the atom and our own than actually exist. But I can't think of many contexts in which this issue is taught where the nuance is not discussed. You would really only learn about Greek atomism in a philosophy or philosophy of science class, and I cannot think of a reputable professor who would not draw the distinction.
D: I'm not really arguing with you here; I'm just musing.
R: I agree that a physics professor probably wouldn't mention Greek atomism, and if he did, he would surely point out its differences from the modern conception. But there are also significant differences between our modern conception and Dalton's conception, or Bohr's conception, potentially enough to warrant a different classification scheme. Yet I think the practical value of retaining the same name for the sake of simplicity in teaching probably outweighs the dangers of overloading terminology.
R: A similar point can be made about the concept of the gene in biology, by the way. Its postulation, based on the observations of Gregor Mendel, was highly inferential and vague. Since then, we've gained insights into the physical structure of DNA which allow us to more precisely define genes, yet this introduced difficulties in reconciling our new understanding with Mendel's work, so we amended the concept but retained the term. Since then, even more difficulties have been revealed with existing definitions, leading to further modification (e.g. the one gene-one enzyme hypothesis was challenged by the discovery of genes which can encode multiple proteins, and there were further challenges with the discovery of overlapping genes), whilst retaining the term. I think a similar case can be made for the concept of species too. Overall, this process seems to be nearly ubiquitous in scientific practice, even though I think it receives little attention in most textbooks, which are silently updated with the newest definitions.
D: I think I may be slightly missing your point: are you saying that our concepts need to be flexible, and thus our terms will be changing constantly if we are not flexible with their definitions?
D: i.e. we may be making a definition in the present that may be invalidated by future knowledge, so discarding a term every time you're forced to refine a concept is a recipe for chaos?
R: Yes, that's basically what I'm saying. The consequence of this is that we are presented a misleadingly simple narrative of the history of science, since the same term is used to describe sometimes wildly different concepts; the benefit is that we don't need to learn a new language every time we wish to learn from past thinkers. So long as the differences between the modern conception and previous conceptions of these terms, such as atoms or genes, are made clear, it seems to me quite useful to simply update the meanings of these terms as our knowledge advances, rather than to invent new terms, even if this involves a bit of misrepresentation of the history (e.g. I was surprised to learn that Darwin's conception of natural selection was intimately related to a physical struggle for existence and the elimination of the weak, because this interpretation is explicitly cautioned against in contemporary treatments of natural selection following the modern evolutionary synthesis).
D: My general view, I think, is that the history of science must be useful to the scientist. That is, the true history of science is for historians, but whatever is useful to the modern scientist, who is ill-equipped to understand the nuances of history, should be kept and everything else scrapped. If a term is misleading but useful it should be kept, so long as it doesn't mislead from a scientific point of view.
D: I'm not suggesting that a science should necessarily forget its founders, although I think the loss from doing so is small. Instead, I am suggesting that a science should remember its founders selectively. We should remember a canon of heroes and villains who serve as instructive parables, like the American founding fathers, Churchill, or Napoleon.
R: I think I'm mostly in agreement with you. As much as I regret the misrepresentation of the history of science involved in the overloading of scientific concepts, it's clearly very useful for the working scientist, and the difficulties only come into play when asking questions about the history of science (e.g. what was Darwin's conception of natural selection?), not questions about science itself. As for whether the teaching of science should focus its attention on a small selection of its founders, I'm more conflicted about this… Presumably you're suggesting that we should just remember those founders who were mostly right (and pretend that they had it completely correct, in terms of the modern understanding), and not mention the names of those who made valiant and potentially important contributions which were later overturned. But what about Lamarck? Isn't it quite instructive to know his name and theory, especially as contrasted with that of Darwin? I'm tempted also to suggest that we should forgo mentioning names altogether, and just talk about the theories as understood currently, so that we don't distort the history of science; but it's clearly useful to associate theories with names and historical landmarks, so I don't think this suggestion is tenable.
D: I wasn't suggesting that we only remember like three people for each discipline, but rather that we be selective about what we do remember. We shouldn't be nagging our scientists about the extent to which previous discoverers who were wrong were actually a little bit right, or about those whose contributions were flawed or sent us down dead ends. An accumulative picture of science is useful, as is the notion of debunking, because it encourages cooperation between researchers and a healthy dose of skepticism about existing theories which motivates people to test them empirically. I do not think that we should focus on names. However, you must acknowledge that having some great biographical figures is a motivation for entering and being more passionate about the discipline. Emulating your heroes is a great reason to do science.
D: As regards Lamarck, we should remember him as a failure, no matter how close he got to being right, unless we rebound and restore Lamarckism someday. I don't know how prone biology is to quackery, but any encouragement of heterodoxy of this kind in economics is only an encouragement of wasted time.
R: I agree with what you say about not nagging scientists with historical details, and also about the psychological importance of associating scientific achievements with persons (or small groups of people), even when this is somewhat misleading (as it usually is).
R: Regarding Lamarck, I think your assessment is too harsh. After all, the neo-Lamarckians played an important role in contributing to the modern evolutionary synthesis, and don't forget that Darwin also believed in the inheritance of acquired characters (it was considered an obvious deduction from the observed interplay between structure, function, and environment which couldn't really be accounted for by natural selection until it was supplemented and refined by an understanding of the basis of heredity and variation, i.e. genetics). It's also worth pointing out that epigenetics now supports the inheritance of acquired characters, although only in restricted contexts; and I don't want to be misinterpreted as saying that epigenetics vindicates Lamarckism, because it doesn't (Lamarckism requires its peculiar form of inheritance to be an inherent feature of all living things, and for it to be a self-sufficient cause of evolution, neither of which is provided by epigenetics).
R: So Lamarck was wrong in some pretty fundamental ways, but it seems clearly wrong to call him a failure, given that he was the first to take the leap of really defending evolution as a fact, and his suggestions were important counterbalances to the excesses of the neo-Darwinians, who attempted to reduce all adaptation to the mechanism of natural selection as understood by Darwin.
R: By the way, I may have slightly misinterpreted the point of your remarks about Lamarck. If all you're saying is that it's not fruitful to continue the project of neo-Lamarckism today, and that it should be regarded as a historical curiosity of some significance, then I'm in agreement with you. But I would still object to calling Lamarck a failure
D: I'll respond to the rest tomorrow, but on Lamarck I was sort of saying the last point. I do not regard him as a failure, and I was too strong in suggesting that we should regard him as a failure. What I meant is that we should take him as an example of honest mistakes that made sense at the time.
R: Yes, that makes sense. Probably I reacted too strongly to the word failure in your earlier remarks
D: Looking back, I think I was speaking a bit too strongly and trying to be a bit provocative. Darwin shouldn't be called a failure because his ideas were flawed, and he shouldn't be called the greatest hero because our modern evolutionary tradition is drawn from his heritage. But I am rather calling for us to remember the best parts of Darwin as an inspiration to future generations of biologists. Reading The Voyage of the Beagle or The Origin of Species is a transformative experience, kind of like how watching Jacques Cousteau on television made my dad want to be a marine biologist.
R: Fair enough. I agree with the importance of having heroes in the history of science. As long as we fairly acknowledge their flaws as well, then we escape the trap of idolatry. I wonder whether identifying such heroes will become increasingly difficult as science (as well as mathematics) becomes more collaborative. I know that this is something which awards committees have already encountered, such as with the discovery of the Higgs boson. The original paper had 5,154 authors, but of course the Nobel Prize only went to two people.
D: Didn't know that. That's insane. I was thinking about the collaboration issue and whether that sets a bad precedent to speak of heroes. I think it's probably useful to emphasize that heroes did cooperate, like Darwin with Wallace.
R: Right, collaboration has always existed, but just not at the current scale. It's probably more accurate to speak of lone scientific geniuses in the past like Galileo, Newton, Maxwell, and Darwin. Even though they did collaborate with others, it's not usually too misleading to say that they made incredible advances by themselves. It's difficult to think of anybody like that in the last century
R: Even Einstein, though he made great advances, particularly with general relativity, is somewhat unfairly singled out among his colleagues upon whose work he heavily depended, such as Bohr, Heisenberg, and Schrödinger.
D: Now, I think we've discussed enough inspiration. What remains to be settled is the history of science that is most productive for achieving results. Now, an inspirational history of science may achieve better results through increasing the mass of practitioners. But the question remains whether increased mass actually produces negative returns at some point. I think the answer is no, so long as there are high enough quality standards for entry. But that is no guarantee.
R: My impression is that the distorted presentation of the history of science given in most science textbooks is not that directly related to issues with scientific practice. I think most scientific work involves working within a paradigm and elaborating its details, which requires careful experimentation and strong analytical skills (both of which are emphasized in textbooks), but not necessarily groundbreaking ingenuity or willingness to challenge prevailing dogmas (precisely those things which are underemphasized in the textbook's history of science). I'm sure that someone like Feyerabend would strongly disagree with me, since he actively encourages iconoclasm and heterodoxy (he calls it "democracy") in science. But I see it like this: if we emphasize agreement in the history of science, then we are sometimes delayed in adopting correct theories which challenge the existing paradigm, but we retain a general order (i.e. consensus) on basic points which facilitates the discovery of new facts, which are then amenable to further refinement or later dismissal in light of new facts; if we emphasize disagreement and revolution in the history of science, then it's difficult to see how we would proceed past the stage of protoscience.
R: So the history of science which best supports scientific achievement seems to be that which emphasizes agreement and conservatism (sometimes even dogmatism), but which allows for the introduction of new ideas provided enough prodding. I'm reminded of the period around the turn of the 20th century when evolutionary thought was in a state of turmoil, which Huxley memorably termed "the eclipse of Darwinism". There were the neo-Darwinists vs. neo-Lamarckians vs. orthogenesists vs. mutationists vs. finalists, all disagreeing quite adamantly and attempting to gather evidence for their preferred theory. The disputes were eventually resolved (mostly) with the modern evolutionary synthesis, facilitated by advancements in the understanding of the basis of heredity and variation (i.e. genetics). But until then, many scientists working in that field expressed defeatism regarding the hope of ever deciding on the right theory of evolution. I fear that a history of science which emphasizes disagreement would render ordinary periods of science like that of "the eclipse of Darwinism".
R: On the other hand, it's quite true that the modern evolutionary synthesis would not have occurred if we didn't tolerate the uncomfortable period of "the eclipse of Darwinism". Instead, we might have comfortably accepted one of the competing (and wrong) theories and proceeded until we ran into a brick wall and were forced to start anew. So the important distinction seems to be between artificial and natural disagreement among scientists, i.e., is the disagreement due to a genuine uncertainty regarding the interpretation of the facts (natural) or is the disagreement due to contrarians who want to challenge the accepted beliefs (artificial)? The eclipse of Darwinism reflects a period of natural disagreement, whereas the state of science between Aristotle and Galileo reflects something more like artificial agreement (and so could have benefited from some disagreement). I guess the difficult question is what social conditions facilitate natural disagreement (which may begin as artificially stimulated) whilst allowing for a return to general agreement once the natural disagreement is resolved and all that remain are contrarians. I don't think that our current institutions do such a bad job at this, but they probably lean towards shouting down disagreers a little too much.
D: I'm struggling with your distinction between natural and artificial disagreement, because to my mind it seems to imply a sort of outside view which you've rightly criticized me for holding. That we can only know post hoc that one disagreement was productive, and another was not, and even then we are not sure. We can perhaps look to the motives of the people disagreeing, and to the basis for disagreement. If the basis for disagreement is an absence of knowledge, then perhaps this is to be considered natural? Whereas attempts to challenge established theory without adequate evidence are not?
D: I agree about institutions, however I worry that this is perhaps because of my own personal status quo bias. I freely admit to that. But I think that institutions that make people pay a price for attempting to disrupt consensus, and raise the bar for admission to the consensus, are generally a good thing. To make a comparison to American politics, it appears that the bar is simply too low for participation in both discourse and choice. You can vote from the comfort of your own home, spew ridiculous opinions from the comfort of your own home, and do so in anonymity, such that you never pay a price for failing to collect sufficient information on the subjects about which you opine. But universities don't let you do that.
R: By natural disagreement, I mean to refer to cases where the known facts are legitimately ambiguous with respect to a range of plausible theories which explain them. In such cases, I think we should encourage the disagreement in the hopes of shedding light on new information which will ultimately allow us to decide on the right theory (which may end up being entirely different from the currently available proposals). By artificial disagreement, I mean to refer to cases where the known facts approach something like a convergence on a single well-accepted theory, yet, inevitably, due to the underdetermination of empirical theories by facts, some people insist on proposing alternative theories which are substantially incompatible with the prevailing theory (i.e. positing totally different objects and/or natural laws) yet explain the known facts equally well. In such cases, in the interest of advancing normal science, I think we should ignore these alternative theories unless and until they are shown to be (significantly) more empirically adequate. Let there be the few outliers who persist with their pet theories, and maybe they will die off, or maybe they will eventually prove successful, in which case my conservatism would have delayed its acceptance, but I'm alright with that tradeoff. In retrospect, perhaps natural and artificial are not the best descriptors for my concepts. My thinking was that natural disagreement stems "naturally" from the ambiguity in the facts, whereas artificial disagreement is created "artificially" by the contrarian tendencies of some scientists. Let me know if you can think of better labels.
R: About institutions, we seem to share a general conservatism, and I also acknowledge that this is a value judgment, not plainly decided by the facts alone. As for the analogy to politics, I'm always hesitant to speak of a competency to vote, because I think that some of the most important areas in which the "voice of the people" is needed are not those which require any particular expertise, but simply a certain experience / perspective. For instance, I don't think that being able to recognize the deprivation of civil rights in the US during the last century required any expertise in social science; it just required you to be black. Obviously the solution to these problems usually requires lots of careful weighing of considerations, which laypeople motivated by passions are terrible at. But that's why we don't have a direct democracy, which would be a travesty. Yet, it's still important for those laypeople to be heard, lest we risk ignoring issues which are pertinent to them. So the "voice of the people" seems to be important for pointing out problems which would otherwise be ignored, but not so good at providing solutions to these or other problems.
D: My point is not exactly about the education required to be a voter, although I was musing this morning about making college free and then mandating that voters be college graduates, and wondering what kinds of political economy results that might bring. I was also considering how you could measure the economic effects of franchise extensions, and try to identify causally whether economic policy improved when women got the vote, or when poor people got the vote.
D: I was talking more about making voting costly in some way such that people took the time to invest in their decisions, acquiring information about the issues at hand and the politicians such that they had a better opportunity of voting for the common good. Because most people vote for the common good as they perceive it, and not for themselves.
D: See Bryan Caplan, The Myth of the Rational Voter.
D: As for your distinction between natural and artificial disagreement, I remain uncomfortable, but I think that you may have a distinction in your head that is stronger than the one that I'm seeing on paper, and at the very least I do understand the intuitive logic. Natural and artificial kind of work as terms, especially the latter in its most literal sense, as artifice meaning fabrication or construction.
D: But perhaps the right words are simply ambiguity versus contrarianism. One stems from the absence of knowledge, the other from a desire to contradict received wisdom.
14. Canadian fur trade
R: Just finished reading your paper on the Canadian fur trade. It's well written, with a clear and interesting thesis and methodology, and remarkably suggestive findings (despite the many appropriate qualifications you make in the paper). I thought that "distance to nearest enemy post" was a clever and surprisingly simple proxy measure of competition, and Section 2 did a good job of summarizing the relevant historical context for someone with minimal background information. I was also surprised to learn how accommodating the Europeans were with native traders and the active role that native traders played in negotiation, since it challenges the characterization of them as merely passive victims of European colonialism which I'm more used to. I have just a few lingering questions: Can your conclusion be interpreted to suggest that HBC and NWC would have mutually benefited from maintaining independent monopolies over distinct regions of Canada rather than engaging in competition? If so, is this an example of the Prisoner's Dilemma? Since you measure prices as a percentage of the company's comparative standard, does this control for fluctuations in demand from Europe for native goods, which might otherwise influence the relative prices paid to natives (apologies if this question is malformed, I'm not sure that I have an adequate grasp of the economic concepts)?
D: Just briefly on the separate monopoly areas question: Yes, this would have been the optimal outcome, and both companies tried to negotiate a solution in this form. One of the puzzles to be explained is why they failed to come to an agreement.
D: I don't think it would be a prisoner's dilemma, however, because I'm not sure that the Nash equilibrium here is actually defect.
D: On the question of fluctuations in European demand, no, we do not control for that. I am in the process of collecting data on European prices, which have been used by other authors for the pre-1763 period but have not been transcribed subsequently. That could definitely have an effect on the relative price paid, although it's not clear in which direction the effect should go.
D: The willingness of the European traders to accommodate Native Americans is one of the motivating forces behind this paper, because it stands in such stark contrast to the evils perpetrated on natives in the United States. Or, indeed, upon natives by imperial regimes or concessionary companies around the world.
R: That makes sense about fluctuations in European demand. About the accommodations, you're right that it's a stark contrast to the treatment of natives elsewhere. However, if the interpretation of A.J. Ray is correct, that excessive gift-giving, especially in the form of alcohol, had the effect of making natives dependent on European trade, then the contrast becomes less extreme.
D: Still fairly extreme, in my opinion. The natives accepted the alcohol of their own free will—along with tobacco, it was one of their favorite products. There's also evidence to suggest that contact with the Europeans changed living standards for the better, for example by supplying metal tools and cooking equipment for the heating of food. Whereas the damage in the United States was done intentionally, for the sake of acquiring land, the damage that was done in Canada was done accidentally, and only resulted because of the incompatible natures of Western European and native society.
D: Even the competitive provision of alcohol/tobacco was of another order of magnitude.
R: Yes, that's true. By the way, the damage done by the introduction of European pathogens in Canada only really took effect after the fur trade had died down, right? That's why it wasn't relevant for your analysis?
D: Well, more like before and after. I think most of the die off occurred prior to the advent of the fur trade, but around the early 19th century there was a pandemic of smallpox that proved quite destructive. However, it was mitigated by the company's efforts to vaccinate.
D: To be honest, it's also not that important for our analysis because I just don't have the data right now.
R: I see. It's interesting how business interests seemed to facilitate positive relations between European and native traders, to the extent of motivating the company's efforts to vaccinate. Do you know why this didn't occur in the American colonies? Were business incentives not so strong there?
D: You may have missed this part in the paper, but in the discussion section we talk about a model whereby participants in unequal trade fare better when they provide a service that cannot easily be replicated by another party. In this case, they are much less likely to be expropriated. I think in Canada, the natives' ability to navigate inland and make contact with other tribes, as well as their long expertise in beaver trapping, gave them a comparative advantage that could not be replicated by the small number of European traders, who were basically terrified of moving inland anyway.
R: Yes, that makes sense how the principle of comparative advantage made trade mutually advantageous in Canada. What I'm wondering is why a similar explanation didn't apply to the American colonies.
D: Ah. Easier conditions for settlement, different reasons for settlement (i.e. building a new England vs. getting furs), and different economic bases (trade vs. agriculture) would seem to be at the heart of it. The HBC was never trying to settle, so it never really had any land hunger.
15. Theoretical Virtues, IBE
D: The conversation was I think about the different ways to do historical research, comparing Marxist and economic history paradigms (or rather not being able to compare them).
D: Ruling out programs based on their results rather than pre-existing premises about why they should/shouldn't work.
R: Interesting. That's definitely an important question in the philosophy of science: What role should theoretical virtues, like simplicity or parsimony, play in our evaluation of theories? Typically, I see them used as tie-breakers for theories which explain the known observations roughly equally well. But, theoretical virtues can also play the role of motivating scientists to consider theories worth pursuing, in order to ultimately demonstrate their empirical adequacy/superiority. This was the case with heliocentrism, of course
D: I think I completely agree? I was making a similar sort of argument to a friend the other day, who was asserting that messianism was rational. Sure, we have as much evidence for a world with a messiah as for one without one, and maybe messianism would make us happier/behave better, but that's the least economical explanation for the universe—you introduce additional mechanisms and assumptions to the fabric of reality for which we have no evidence, not even observed patterns.
D: The "tie-breaker" formulation makes a lot of practical sense to me.
R: I agree with your messiah example. Another way of thinking about it is in Bayesian terms. Let E be all the available evidence to be accounted for, let M be the messiah hypothesis, and let N be some incompatible hypothesis, e.g. naturalism. Then even though P(E | M) = P(E | N)—meaning that each hypothesis explains the evidence E equally well—we should side with whichever hypothesis has the higher prior / intrinsic probability, P(M) or P(N), since that would maximize our posterior probability, P(M | E) or P(N | E), according to Bayes' rule.
R: The main difficulty, of course, is when we have competing hypotheses where one explains some piece of evidence better than the other, but its prior probability is lower. So we have to try and balance these points when evaluating the posterior probability of each hypothesis, which is very difficult without an explicit quantitative model for determining the probabilities. Additionally, these difficulties are compounded when we incorporate multiple pieces of evidence, which may favor various competing hypotheses or have subtle relationships which make it difficult to identify probabilistic independence
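A minimal numerical sketch of this two-hypothesis comparison (the function name and all probabilities are invented purely for illustration):

```python
# Sketch of the two-hypothesis Bayesian comparison described above.
# All numbers are illustrative assumptions, not estimates.

def posteriors(prior_m, prior_n, like_m, like_n):
    """Return P(M|E), P(N|E) for two mutually exclusive, exhaustive
    hypotheses M and N, given priors and likelihoods, via Bayes' rule."""
    joint_m = prior_m * like_m   # P(M) * P(E|M)
    joint_n = prior_n * like_n   # P(N) * P(E|N)
    total = joint_m + joint_n    # P(E), by the law of total probability
    return joint_m / total, joint_n / total

# Case 1: equal likelihoods, so the prior acts as the tie-breaker.
print(posteriors(0.2, 0.8, 0.5, 0.5))  # -> (0.2, 0.8)

# Case 2: the hard case -- M explains E better but has the lower prior;
# the verdict depends on how the two factors trade off.
print(posteriors(0.2, 0.8, 0.9, 0.4))  # -> (~0.36, ~0.64)
```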
D: That's a really interesting perspective. However, my interlocutor would probably say that we cannot determine the prior probabilities of messianism and non-messianism.
D: In effect, he would probably claim that we've just assumed what we are trying to prove.
R: Hmm, then you would probably have to make the case for a low posterior probability on messianism after starting with a prior probability equal to that of some competing hypothesis. So you would have to find pieces of evidence which favor the competing hypothesis, say naturalism. This could be things like the gratuitous suffering which we observe ubiquitously and can even infer about the past given the violent mechanisms of evolution, which presumably would be unexpected given a loving messiah but is totally expected on naturalism.
R: But, if your interlocutor insists that we cannot even begin to calculate the posterior probabilities because we can have no knowledge whatsoever about the prior probabilities, then I would challenge him on this point. For example, it seems that we can know that for two independent hypotheses, A and B, we have P(A & B) < P(A) as long as P(A) > 0 and P(B) < 1, the latter of which should always be true if we are (as we should be) fallibilists about knowledge.
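A worked instance of that inequality, with illustrative numbers:

```latex
P(A \cap B) = P(A)\,P(B) < P(A)
\quad \text{whenever } P(A) > 0 \text{ and } P(B) < 1,
\quad \text{e.g. } 0.9 \times 0.9 = 0.81 < 0.9 .
```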
R: Additionally, it seems like we are reasonable in lowering the prior probability of hypotheses which lack theoretical virtues. For example, if they are unnecessarily complicated, or ad-hoc, or conform to some psychological bias, like motivated reasoning or wishful thinking, then it seems reasonable to lower our prior probability in these hypotheses. If we can't, then it would be impossible for us to distinguish between reasonable scientific hypotheses and crazy ones which were maliciously contrived in order to fit all the known data but lack an overall cogency, predictive utility, simplicity, parsimony, intuitive plausibility, and so on.
D: Interesting to frame this discussion in terms of evidence for naturalism as opposed to against —insert bad philosophy—. Although suffering really is just evidence against a messiah—one has to make greater logical leaps and contortions to make the theory and 'reality' fit together. I like your probability argument, but I think we may have to flesh it out some more. I presume you are arguing that A = seeing the evidence of the world and B = the existence of a messiah/creator. But in this case no creationist would accept the independence of A and B—the only reason we observe A is because of B, so P(A & B) isn't necessarily less than P(A). So my friend is constantly concerned with "complicating" (blasted deconstructionist literary major) our notions of rationality and bias—he sees value in the illogic of thought given the holes in other parts of logical reasoning. Indeed, the notion of the word "reasonable" disgusts him, in a semi-Kuhnian (he hasn't read it) sense that research programmes are incommensurable and self-consistent from the inside. I don't think he's satisfied with our discussion of practical approaches to understanding the world and being "less wrong," as he doesn't see that as being any different from the impossible effort to become "more right."
R: Regarding the distinction between arguing for naturalism vs. arguing against some incompatible hypothesis, as long as we are following the probabilistic style of reasoning which I've described, there's no real difference between the two. That's because any evidence which favors one of two competing hypotheses will (by definition) oppose the alternative hypothesis, and vice versa; so how we choose to describe it doesn't change the calculation. For A and B, I actually wasn't thinking of any specific hypotheses. Rather, I was just using those letters to denote two hypotheses which are assumed to be independent. Under that assumption (plus fallibilism about knowledge), it follows that P(A & B) < P(A), which proves that we can (at least sometimes) in fact have knowledge about the relative prior probabilities of two competing hypotheses; in this case, we've shown that more complex hypotheses have lower intrinsic probability than simpler hypotheses. We could try to apply it to the naturalism vs. messianism case by arguing that messianism is more complex since it involves positing the existence of a messiah in addition to the natural laws, whereas naturalism only requires positing the existence of the natural laws. But this formulation is quite crude, and so it's usually more fruitful to just argue about posterior probabilities and not quibble over intrinsic probabilities. As for deconstructing rationality, I'm not quite sure what to say… I'd be interested to ask him directly whether he believes in such a notion as truth. If yes, then I would just say that rationality is a process of thinking which is aimed towards the truth. If no, then I would ask him whether he believes that the things that he's saying have meaning. If yes, then I would argue that meaning presupposes a theory of truth, since to assert <p> is simply to assert <p is true>, so that as long as <p> is meaningful, so too must <p is true> be meaningful, hence there must be some notion of truth. If no, then I guess I would just ignore anything he says since, by self-admission, it has no meaning.
D: Certainly it is not fruitful to quibble over intrinsic probabilities if you do not believe such things exist. I find your refutation extremely compelling, but I remain unconvinced that one can really halt a skeptic. You could probably deal with this particular skeptic, because he's not particularly fixed in his opinions and, like most people of his type, prefers to poke holes rather than stand on any kind of ground.
D: He's particularly interested in Derrida, which leads him to question the intrinsic meaning of language altogether. The last line kind of seems like a bit of a gotcha, though. If language really did not have a meaning but someone was trying to convey this impression to you, then you would be unjustified in ignoring what he said.
D: I guess the question might be to try to understand, moving away from my crazy friend, whether it matters that language which is perhaps inherently meaningless can have meaningful intent, such that multiple different expressions might be used to impart the same central concept. Whatever we argue, is that real? What are our concepts?
D: I guess my suggestion, ill-informed as it is, would be that an inherently meaningless language can still be meaningful if it is paired with meaningful intentions. But can we separate intentions from language?
R: Typically, I think of meaning being determined by usage. On that view, all language is inherently meaningless, as it depends for its meaning on our decision to use it in certain ways. But I think that you're suggesting something more concerning. What if two people appear to use the same term in the same way, but internally attach different meanings to it? Here's a classic example: What if, due to some optical quirk of mine, my internal representation of "red" corresponds to your internal representation of "blue" and vice versa? So when we point to the same object and agree that it's "red", or agree that it's "blue", it seems that we're using the terms in the same way, and hence they mean the same thing, but the mental experiences to which these terms correspond are actually the opposite between us.
R: I think that such scenarios are theoretically possible but in actuality quite implausible. That's because of the interrelated meanings of terms. For example, when you eventually ask me whether some object is closer to red or orange, I'll look at you with confusion, insisting that it looks nothing like "red" (really blue) and so is clearly closer to orange. We'll probably quibble back and forth, comparing different colors (e.g. I say "red" is similar to violet) until we realize that we've been using "red" and "blue" in opposite ways. To avoid this scenario, we would need to suppose that my optical quirks actually reverse the entire spectrum of visual light, so that we will agree in our statements comparing colors. But now our scenario involves many more postulates, and so it's much less likely. Therefore, these kinds of hidden miscommunications (which aren't immediately revealed through ordinary language use) are either unstable or unlikely.
R: There's a final, more damning, interpretation of your worry, which is that language does not gain its meaning through a correspondence with certain concepts (as established through usage); rather, language gains its meaning through coherence alone. On this view, there is no sense in asking which internal mental representations correspond to the terms "red" or "blue", because these terms are only defined in relation to other terms, which are themselves defined in relation to still more terms, and so on. We're left with a complex web of associations which form an internal semantic structure through formal relations and rules of inference. For example, the terms "1", "2", "3", and "more than" are defined in such a way that <2 is more than 1> and <3 is more than 2> are "true" (within this language), and we can infer that <3 is more than 1> is "true" whereas <1 is more than 3> is "false" via the transitivity of "more than" in this language, which effectively establishes its "meaning". Importantly, on this view, "true" or "false" don't (necessarily) correspond to any experiences or facts about the world; a proposition is "true" simply if it's coherent with respect to the individual "meanings" (i.e. usages) of the constituent terms, and "false" if it's incoherent.
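A toy sketch of this purely relational picture, using the "more than" example above (the representation is an invented illustration): the "meaning" of the terms is exhausted by the stipulated facts plus their closure under the transitivity rule.

```python
# Toy coherence-style semantics: terms have no content, only stipulated
# relations plus an inference rule (transitivity of "more than").

facts = {("2", "1"), ("3", "2")}  # stipulated: <2 more than 1>, <3 more than 2>

def transitive_closure(pairs):
    """Close the relation under transitivity: if aRb and bRc, then aRc."""
    closed = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

MORE_THAN = transitive_closure(facts)

# "True" = coherent with the stipulated usage; "false" = incoherent.
print(("3", "1") in MORE_THAN)  # True:  <3 is more than 1> is derivable
print(("1", "3") in MORE_THAN)  # False: <1 is more than 3> is not
```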
R: Now, I think that this view of language is at least plausible, but it has the potentially worrying consequence that language has no definite meaning, even after fixing its usage (i.e. specifying the meaningful terms and their formal relations/rules of inference). That's because the usage is determined only by the formal structure of the language, which is independent of any correspondence between the terms and certain experiences/mental representations. At this point, my comments are very speculative, but I think that we may be able to overcome this worry by noting that our experiences/mental representations have a structure themselves too (e.g. many experiences are hierarchical, like the increasing experience of heaviness when carrying 1 lb, 2 lbs, 3 lbs, etc.); therefore, any language which attempts to describe these experiences must share that structure in the relevant terms. Presumably, the unimaginable complexity of our experiences precludes nearly all possible languages from being adequate for the purpose of describing our experiences, so that, in effect, there's only one language whose structure perfectly corresponds to the structure of our experiences, in which case we can fix its meaning accordingly. Thus, we've rescued the definite meaning of language.
D: If two people label an object red, but the sensory experience corresponding to "red" for one individual is the analog of "blue" for the other person, then I would say that the red-blue internal distinction is… potentially meaningless? Unless the only attribute possessed by color is our perception of it, which could be fair. Otherwise, however, what seems to be important is that we can have a consistent naming scheme upon which everyone can agree. I think it's probably obvious that my sensory experiences are different from yours, even abstracting from how well our respective glasses work.
D: Let's talk about this "ordinal" theory of language—i.e. that the meanings of words lie in their relations with each other. What is orange? That which is some mixture of red and yellow. What is red? Or blue? Nothing more definite than "one." I think you may be close to rescuing us with your "hierarchical experiences." But we know that observation is theory-dependent, and that our words for things give structure to our concepts. I will often have an epiphany when someone "tells me the word I was looking for," because now I can see and understand clearly. Because the word structured my thoughts and experiences in a way that makes sense. In response to the weights example, I think people would fail to notice the difference between 1/2/3 lbs if there were no labels on the objects themselves. Certainly this would be the case for objects of 1.3, 1.5, and 1.25 lbs. This experience would be undifferentiated, but introducing the weights would create an artificial hierarchy. Just as people overrate the quality of paintings made by acknowledged masters if they know this to be the case beforehand.
D: As to the "single language" postulate with which you close, I am sympathetic but want to dive deeper. There are hundreds of languages that (have) capture(d) human experience. Languages are shaped by eccentricities of geography, random chance, cultural patterns, etc., and they lead different people to think differently about the same things. Was it necessary that dinosaurs should have been called lizards, not birds? And our perception of the former case leads us to specific pictures of allosaurs, T-rexes, etc. that may not correspond to actual reality. Perhaps different languages presently have different competencies in different situations, such that in combination and compromise they more accurately describe the world. Think of the French and Latin expressions that have been brought into English for which there is just no translation. Maybe some intersection of (modified) languages gives a better correspondence. But is there a necessary one?
R: You make a very interesting point about the active role that language sometimes plays in structuring our experiences. I hadn't considered it before, but it seems true that our experiences are sometimes vague until they've been described in language, which in effect resolves the ambiguity in our experience. But, there are many possible ways to resolve this ambiguity, so that whichever terms we end up choosing to describe our experience don't merely reflect some structure already present in the vague experience, but in fact also introduce some new structure into our final, disambiguated experience. If this is true, then my simple model where the structure of language merely reflects the structure of experience won't work, but it will still be the case that the structure of (vague) experience limits the possible languages which are capable of (unambiguously) describing that experience.
R: I think we can resolve the issue where the differences in weight are too granular to be noticeable, which appears to illustrate an incongruence between the structure of our experience (i.e. undifferentiated) and the structure of the corresponding language (i.e. hierarchical). We simply need to realize that the numerical descriptions of the weights aren't meant to describe our experience of their heaviness, but rather a fact about the physical constitution of the weights. As for describing our experience of their heaviness, the description "equally heavy" will suffice, so the language is indeed equipped with the necessary structure to describe our experiences, so long as we use the right terms.
R: Regarding the "single language" postulate, which I readily admit is speculative, it seems that we might be able to hold on to it by suggesting that there really is just one language (understood as a collection of meaningful terms equipped with semantic relations and rules of inference) which perfectly captures the structure of reality, and that our human languages are approximations of this one true language. Some human languages, like French and Latin, agree with this perfect language better than other human languages for some descriptions, and so we recognize this by borrowing those terms when needed.
D: I think this idea of a two-tiered experience of the natural world is interesting. We should pursue it further. I think the implications, however, will depend upon the extent to which the first tier, primary unfiltered experience, resembles the structured reality that comes later. If the unfiltered experience is just the perceptual equivalent of a big jumble of pixels, blurry and unfocused, then language is doing a lot of work. Indeed, we might say that the first-level experience is just raw data, and not experience at all. Then our words might actually be reflecting nothing.
D: I agree that numerical weights are describing facts. My issue here is that we are trying to escape the problem of purely relative terminology, and attempting to ascribe real properties to actual objects. I agree that we have the linguistic capability to describe the sensation; indeed, the fact that we have this capability is why we are able to discuss this problem. But sufficiency is very different from accuracy. And it raises the question of what words are attempting to describe: real things, or psychological perceptions?
D: I am enthusiastic about the single language concept. I've actually long hoped that this might be possible, and looked once to linguistic philosophy to provide the answer. Maybe I should delve into that again.
D: Two more thoughts on that head, though. One, we should see words as approximations of the "correct" terminology in the single descriptive language, because we should probably despair as much of knowing true words as of knowing true things. And second, there is the issue that language formation is context dependent. So even bundling together many different languages from many different areas will encompass a greater diversity of experiences—but will it be right? Or will it be encouraging a slew of slightly variegated images of things that are actually the same?
R: I agree with the need to clarify the extent to which the unfiltered experience is structured. The main difficulty in investigating this question is that language and experience are quite deeply entangled, so that it's not easy to tear away the language and look at the experience itself. We've tried to do this by looking at cases where we have some vague sensation which becomes focused once the appropriate description is given to us, but these events need not suggest that the final, focused experience REQUIRED the language in order to be experienced. It could just be that the description was the stimulus which evoked the experience but didn't supplement it with any new structure which wasn't already present in the final experience. So we need to know whether there are any experiences which actually depend upon language for their full expression.
R: I'm not sure that I fully understand the issue that you describe with the weights example. I think that language can describe whatever we want, including both mind-independent things as well as psychological states. I've suggested that the reason why language has the ability to describe anything, even in the case where language is understood as purely relational (i.e. without content), is that language has structure, and that this can be used to describe things which have an analogous structure. It doesn't matter whether the thing being described is mind-independently real, merely psychological, or even fictional, so long as it embodies the appropriate structure.
R: I agree with your hesitations about the single language concept. Our human languages will always be mere approximations, though we are able to refine them as we discover more about the world. Additionally, there may be structures in reality which our minds are not capable of comprehending, in which case there may be large chunks of meaning which will never be accessible to us as humans. As for how we come to know whether our language is correct, that's a difficult question. One clue would seem to be when several languages have their own words for the same concepts, like numbers or shapes. Of course, this universality might be artificial if we think that it's the result of conquest, in which case we will require some historical analysis to establish that these concepts were developed truly independently. In general, determining whether a language is correct or not will depend on its domain of application. So to determine whether a scientific language is correct is no different than determining whether the corresponding scientific theory is correct, which is done via appealing to the standards of scientific evaluation (e.g. explanatory power, simplicity, predictive utility, etc.). To determine whether a description of our experience is correct, we may appeal to the standard of communicability, i.e. if I were to provide this description to someone else, would they experience the same thing?
D: The first part of our analysis revolves around the extent to which prelinguistic people have fully realized experiences of the world, rather than a vague haze of muddled sensations. So I guess the proper question might be, what kinds of experiences don't rely on language for their full expression?
D: I think my objection to the weights example was that in our discussions, we usually don't presuppose the ability to describe or properly comprehend mind-independent things, do we? I know that you have often reprimanded me for taking a false "outside" perspective on the world. And so even if we reduce the question to that of analogous structures, how are we to know what the true structure of the mind-independent things is? We still don't know what the structure of our language is, given that what we perceive is a relational structure.
R: About the weights example, I accept that it's difficult (maybe impossible) to gain knowledge about the mind-independent world. (Science seems to be the most plausible candidate for this kind of knowledge, but there are still scientific anti-realists.) However, I think that this is independent of the points I was making about language, since those points only hinge on a possible correspondence between the structure of language and the structure of the thing being described, irrespective of its ontological status. I don't think that worries about gaining knowledge of the mind-independent world have much, if anything, to do with language; rather, they typically have to do with the fact that we seem to gain knowledge through experience, yet the link between our experiences and the mind-independent reality is unclear at best and nonexistent at worst.
D: I think the notion of independent language evolutions is critical. Indeed, this extends to our concepts. If we could possibly identify instances in which entire fields of study evolved independently of one another in different regions, their convergence upon similar conceptual structures would be an indication that there is something inherently accurate in our research programs. Or, at the very least, that we are doing our best given the limitations of our cognitive abilities.
D: So maybe I do not see that there can really be a distinction between the ontological status of a thing being described and this perceived structure? Isn't the model that we use to describe, say, a particular kind of molecule tied to the very existence of the molecule? In which case it will matter a lot about the kind of language that we use to describe it, because that in turn can shape our model.
R: Hmm, I guess I don't see the link between the ontological status of a thing being described and its description. Can't we just describe the thing in question and then worry about its existence later? For example, I can describe the concept of a God (omniscient, omnibenevolent, omnipotent, mind, timeless, spaceless, etc.) without needing to know whether God actually exists. Even in your molecule example, wouldn't it be the case that I can describe a hypothetical molecule or particle whose existence is uncertain? Wasn't this the case with the Higgs boson, whose field was described long before its existence was experimentally verified? It seems that a description just consists in a specification of all the properties of a thing, and since existence isn't a predicate (otherwise we could define things into existence by specifying existence as one of their properties), we need not know whether something exists in order to provide its full description. And once we have a full description, we have a full account of the structure of the thing being described. Then it's just an empirical question whether that thing actually exists.
D: In theory, our description of the thing should reflect its structure. I could theorize a relationship between objects X, Y, and Z without observing them, but it is probable that my description would be altered if I did observe how they relate in a structure. Our description is also complicated by the fact that our words for describing the hypothetical molecule are derived (IMO) from other observations in less relevant fields.
D: We cannot just say anything
D: So the two points would be A) our theoretical description may not correspond with our empirical description and B) our theoretical description is circumscribed by the language that we have eked out from previous experience.
R: I think I misinterpreted your earlier comments. When you said that the description of a thing is tied to its existence, I understood you to mean that we can't describe something without thereby commenting on its existence. I now understand you to have meant that when we formulate a description (D), there are really two objects: the actually existing thing which we are attempting to describe (X), and the hypothetical object which fully matches our description (Y). The point is that X is the actual thing, whereas Y is merely our model of X based on our description D. Relating this to your second point about language, it seems that our observations of X will permit a range of various possible descriptions (D1,D2,D3,…) each of which agree on the current observations but disagree about potential further observations. Ultimately, the description (Dn) which we end up choosing will be determined in part by the fact that some of the possible descriptions are not meaningful within our current language, since those descriptions may contain terms which don't have an analogous concept in our language. Thus our model (Yn) of X will be determined not just by our observations of X but also our language, which limits the available concepts. The recommendation, then, seems to be that we should attempt to distinguish between those aspects of our description of X which are unique to Dn versus those which are common to all possible descriptions (D1,D2,D3,…). The latter is what we are truly justified in ascribing to X based on our observations, whereas the former are merely artifacts of our choice among the possible descriptions, which was determined by irrelevant things like what concepts were available to our language. In the end, it seems that our model of X shouldn't be Yn but rather the collection of features which are common to all the possible models (Y1,Y2,Y3,…) which are consistent with our observations. Two worries: (1) How can we, as it were, transcend our own language in order to identify those aspects which are unique to our description of X? (2) What is the role of theoretical virtues (simplicity, coherence, falsifiability, etc.) in adjudicating among the possible descriptions? Why should we treat all descriptions as equally plausible by only admitting those features which are common to all the possible models?
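A minimal sketch of this "common core" proposal, with invented placeholder features standing in for the candidate models Y1, Y2, Y3 (every name below is hypothetical):

```python
# The warranted model of X is whatever is common to all candidate models
# that fit the current observations equally well; everything else is an
# artifact of the particular description chosen.

candidate_models = {
    "Y1": {"charged", "massive", "point-like"},
    "Y2": {"charged", "massive", "composite"},
    "Y3": {"charged", "massive", "extended"},
}

# Features we are strictly entitled to ascribe to X:
common_core = set.intersection(*candidate_models.values())
print(common_core)  # {'charged', 'massive'} (order may vary)

# Features that are artifacts of one particular choice of description:
for name, features in candidate_models.items():
    print(name, features - common_core)
```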
D: Your elaboration of my argument is fascinating and provocative. I think your weighting scheme in averaging models has a lot to do with your pre-commitments. You and I would probably use priors to weight highly unevenly. Feyerabend would argue for an even weighting. Further, it is possible that searching for commonalities is incorrect. If all descriptions are formed using an inadequate observational apparatus, then all may share wrong features. If some use adequate tools, by contrast, then we may have complete disagreement on salient issues.
D: The further complication that I had originally intended by my point was that our language itself is limited by the set of objects already observed. Having seen object set X at time t, we are limited to vocabulary set V in modeling potentially observable outcomes in t+1. So we cannot have just any description or model in a given period.
D: The "gene" or the "bacterium" were not terms available to Aristotle, for example, so he cannot have devised the genome or germ theory.
R: I think you're right about priors coming into play when weighing the possible descriptions (hence possible models). If Feyerabend is to truly live up to his defense of counterinduction, I think he might actually go beyond just an even weighting to saying that those descriptions/models which are more different from our current models should be given a higher priority than those which are less different!
R: I also appreciate your point that the true description may not (most likely won't?) reside in the commonalities among all possible descriptions, since it might be that the true concepts only belong to a proper subset of the possible languages. However, what this commonality approach guarantees is that we do not draw any conclusions beyond those which are strictly supported by our observations; that's because all of the possible descriptions are assumed to be equally supported by the current observations. So, if our observational capacities are too limited to home in on the one true description, then too bad! We'll have to settle for a weaker, more general description, unless we allow for evidence beyond observations (like the theoretical virtues of simplicity, coherence, falsifiability, etc.) which would allow us to further distinguish among the possible descriptions. If we do allow for such further considerations, it raises the question of why such theoretical virtues are aimed towards the truth rather than mere convenience/intelligibility or something else. At least in the case of observation, its status as evidence seems to be supported by a causal theory of knowledge, namely that our observations are causally related to what's "really out there", hence why our observations are (generally) taken to be a reliable source of evidence.
R: Your last point really highlights a fundamental limitation with actually implementing my "possible descriptions" scheme. We can't possibly know all the possible descriptions consistent with some observations, especially those which utilize concepts from languages which have yet to be invented; and so, in practice, we'll be limited by our imagination. Additionally, the theory-ladenness of observation makes it practically difficult (maybe not theoretically impossible) to disentangle the aspects of our observation which are tied to our current conceptual schemes from those which are inherent in the thing being observed. As some consolation, I do think that my "possible descriptions" scheme succeeds in avoiding the pitfalls of language in relation to observation and evidence in principle, just not in practice.
D: Great point on Feyerabend. While I think you're right, it does seem to highlight the absurdity of his position. Kind of how a literary studies scholar might insist that we "complicate" our position by considering incorrect views. We don't need to do that!
D: On your second point, I do see the underlying precautionary logic. However, I am not certain that this is actually how science occurs. Rather than settling for vague heuristics in describing phenomena, we propose our best, most ambitious theory supported by your "external" criteria. It is not clear why we should favor these criteria—should we expect the world to be simple and intelligible rather than bizarre and complex? This is why I say "criteria" and not "evidence": we cannot know whether a theory "seeming" right does so because it reflects the "real world" best or because we are cognitively designed to prefer the structure of the argument. What I do agree on is that we have no reason to believe at any point in time that our observations are an unreliable source of information about the real world. Wouldn't it be fair to say that incorrect views (geocentrism) were formed not because our observations were bad but because we interpreted existing facts poorly? Worryingly, however, you could say that we pursued… external criteria like simplicity and intelligibility!
R: I agree that science as actually practiced goes beyond the "mere commonalities" approach which I outlined. Generally, it follows IBE (inference to the best explanation). I think that all rational inquiry should follow this approach, but the criteria which determine the BE will depend upon the domain of inquiry. For science, a standard list of theoretical virtues (i.e. those criteria which determine the best explanation) includes consistency (internal and external, meaning with itself and with other theories), empirical accuracy, unifying power (a.k.a. "scope"), simplicity, and predictive power (a.k.a. "fertility"). Note that, on this view, empirical accuracy (i.e. agreement with the observed facts) is just one criterion among many. Some will amend this by considering full empirical accuracy a necessity and then treating the remaining theoretical virtues as useful for further adjudication, whereas others will choose to just weigh empirical accuracy very prominently without treating it as fundamentally different; I think that the former is more popular during "normal science", whereas the latter is more acceptable during "paradigm shifts", when nascent theories are permitted to "fix up" the discrepancies with observation later on.
R: Now we get to the hard question: Why should IBE be taken to reveal the true nature of reality? Much ink has been spilled on this question, and it's fundamental to the scientific realism/antirealism debate. I'll just say that my philosophical studies have disillusioned me with the very concept of the "real"; I think that it takes on various meanings in different contexts and we simply need to be cognizant of its meaning in any particular context of usage. Colloquially, "real" tends to mean correspondence to the mind-independent world, i.e. the world "as it really is". I don't know of any context in which this understanding of "real" is defensible or actually adhered to, and (in my opinion) attempts to preserve this understanding in all contexts have led to much pointless philosophical speculation. In science, "real" just means in accordance with the best explanation once all the observable facts are known (where "best explanation" and "observable" are measured according to human standards, so that the world as seen by super-intelligent bats might yield a different scientific "reality"). In mathematics, an object is "real" just in case it's coherent and productive for mathematical investigation as practiced by humans (and so aliens might have different psychologies within which the so-called natural numbers are quite unnatural, in which case they would not be "real" according to alien mathematics). In literary criticism, "real" is defined by the facts of the story. And so on…
R: If we insist on maintaining a mind-independent reality for all of these subjects, then we're forced into a strange kind of Platonism, where numbers, sets, and fictional characters all have some queer mind-independent existence; but I think that there are much more natural interpretations of "truth" and "reality" which are context-sensitive and don't rely on these speculative philosophical postulates. You might remember that this deflationary attitude towards "real" is the basis upon which I argued for moral realism.
16. Brief Return to Moral Realism
D: What is truth and how can we know it? What is the good life? Without answering these questions, it's hard to do or say anything.
R: Agreed, and I actually believe that we've made progress in answering these questions, even though we keep coming back to them.
D: We should try to assess where we've moved at some point
D: I know that I was at least a little bit immature when we were talking about moral realism. Too emotional and prideful.
R: Yes, I think that would be helpful. As a starting point, I'll note that I've softened in my stance on moral realism. At first, I would have said something like "to do what is good is no more than to do what is rational", but I no longer believe that. I still believe that we can be incorrect in our moral judgements, and that's the core sense in which I still hold to a kind of realism in ethics
D: From reading the sequences, I have been toying with a somewhat different view, namely, that it is rational to do what we believe is moral.
D: Yudkowsky terms this "instrumental rationality"—systematically working to achieve our ends.
R: I agree with that, and I think it's a lot easier to believe than full blown "categorical rationality", the idea that we have reasons to do/believe things even when they don't align with our conscious desires
R: But it's difficult to let go of the idea that someone who simultaneously believes <p> and that <p implies q> SHOULD also believe <q>, whether or not this final belief aligns with his desires. This is called an "epistemic norm"
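For concreteness, the epistemic norm R describes can be written as an inference rule (a standard rendering, supplied here rather than taken from the conversation):

```latex
% Modus ponens as a norm of belief: whoever believes both premises
% ought (rationally) to believe the conclusion, desires notwithstanding.
\[
  \frac{p \qquad p \rightarrow q}{\therefore\ q}
\]
```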
D: Right, I agree with that too. The question would be whether this sort of logic can be applied to all of our beliefs. If you believe in equality of opportunity, for example, what empirical facts about the world can we tie this to?
D: Perhaps some psychological findings about the effects of striving on human well-being? But most people can't become Gates/Obama/Musk/Dylan etc.
D: And many don't even want to.
R: Well, I take it that to "believe in equality of opportunity" is to believe that "equality of opportunity" is a good thing. Since this is a moral proposition, we can only deduce other moral propositions, such as that "John should be given a tractor instead of a shovel so that he has an equal opportunity to Jack for digging the hole."
R: Then I think empirical findings about human psychology would help us determine whether we actually believe that equality of opportunity is valuable, and studies in sociology and economics would help us to determine how to achieve this if so.
D: Yes, I slightly misinterpreted your previous comment. My issue is this: does John really need a tractor to garden in his backyard? We know that humans have "satiation points" and that there are diminishing returns to capital in any project. Yet there are those who believe that even those who can flourish need more opportunity—or even guaranteed equality. Are they wrong? What are they basing their views on?
D: To rephrase: many acts that are not psychologically, socially, or materially beneficial (even some that are harmful) are deemed moral. What does it say that we must use noninstrumental criteria for evaluating the goodness of these behaviors?
D: And of course there is the deeper question of why we should pursue these ends anyway.
R: I approach it like this: We observe in humans this peculiar concept of normativity/reasons, i.e. the notion that we should do some things. Some reasons are purely logical (like the epistemic norms), and others are moral, which I'll focus on. How do we determine what the correct moral beliefs are? The same way we determine what the correct logical beliefs are. We start with certain compelling intuitions and attempt to formulate a system of principles within which these concrete intuitions are explained. During the process, we'll likely come across conflicting intuitions, or a conflict between certain concrete intuitions and general principles which we've formulated. In such cases, we must either discard the conflicting intuition or refine our principle so as to accommodate the apparently conflicting intuition. There are no simple rules to follow, and so we must rely on good judgment.
R: Applying this to the moral sphere, I think we realize that we have many intuitions, only some of which say that it's good to do what is psychologically/materially beneficial for ourselves. For example, many people believe that they have an obligation to do what's best for their child, but this often conflicts with what is in the direct psychological interests of the parent, who has to make all sorts of personal sacrifices in order to care for their child. With this simple illustration, we should see that the starting point of morality shouldn't necessarily be taken to be self-interest. Rather, we should start with those moral intuitions which are most compelling; and then the task of moral philosophy is to construct the system which best reconciles, explains, and predicts these intuitions. This answers your question about non-instrumental criteria, since they shouldn't be treated as fundamentally different from any other moral intuitions.
R: As for why we should pursue those ends which systematically make sense of our intuitions upon careful reflection, I think that the question is misguided for the same reasons that I gave when talking about the pointlessness of talking about "reality". These are simply the ends which we have as humans, and so they define for us what we have reasons to do. So we should pursue the human ends because we are humans.
17. Return to IBE
D: Let's go back to IBE for a bit. I think that we're actually closing in on something important. Suppose there is a world in which we have a weighting scheme over the different theoretical virtues in which all of the factors you mentioned have non-zero weights. Before we even start to discuss why IBE reveals the true nature of reality, we need to figure out why we should privilege a certain weighting scheme. What I have been suggesting is that our weights really correspond to our preconceived notions about what the world is like. If we believe in a knowledge base that is undergirded by unity and intelligible structure, then of course we should privilege scope and consistency above all. But is there an ex ante reason to believe this? How would we come to understand the fundamental structure of knowledge without resorting to the very tools that we are trying to justify? If we do not expect reality to conform to a unified structure—say, if we believe it possible that different branches of knowledge extract different kinds of truths that fit together like a puzzle, rather than directly overlapping—then we should privilege empirical accuracy and predictive power instead. This reminds me of important distinctions that are made when choosing between a causal model and machine learning in econometrics. If you understand the phenomena beforehand, then you can set up a theoretical model for how the actors should work to achieve expected outcomes, and then set up your equations such that you can test the model using data. This is kind of the essence of the natural experimentalist paradigm that I was describing to you on Thursday. Instead of using kitchen-sink regressions, you do research to understand the question at hand, and then design your equations so that they specifically follow what you hypothesize as the model of the scenario. But if you don't have a model, then you may be justified in taking an ML approach, which as you obviously know privileges prediction from the optimal set of regressors no matter what your story about the phenomena is. Trouble is, of course, that you no longer have the causal model, and you are forced to take whatever results you find and make a story about them.
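A minimal sketch of the econometric contrast D draws; the data-generating process, variable names, and coefficients are all invented for illustration:

```python
# Theory-driven specification vs. "kitchen sink" ML, on simulated data.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical structural story: ability drives both schooling and wages.
ability = rng.normal(size=n)
schooling = 0.5 * ability + rng.normal(size=n)
wage = 2.0 * schooling + 1.0 * ability + rng.normal(size=n)

# Theory-driven approach: we hypothesize ability as a confounder, so we
# include it explicitly and read the schooling coefficient causally.
X_designed = np.column_stack([schooling, ability])
ols = LinearRegression().fit(X_designed, wage)
print("designed OLS, coefficient on schooling:", round(ols.coef_[0], 2))  # ~2.0

# ML approach: add 20 irrelevant regressors and let the penalty choose.
# Good for prediction, but the coefficients carry no causal story.
junk = rng.normal(size=(n, 20))
X_sink = np.column_stack([schooling, ability, junk])
lasso = Lasso(alpha=0.1).fit(X_sink, wage)
print("lasso, coefficient on schooling:", round(lasso.coef_[0], 2))  # shrunk
```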
D: I think that I inherently hold sort-of Platonist assumptions. It's hard to get rid of them, if it is even necessary to do so. But let's unbundle your definition of scientific reality. I worry that we are facing something circular here. If reality corresponds to the theory that best fits the facts, then our definition of reality is theory-dependent. By endogenizing reality to our best theory, we are no longer in a position to test the theory, because we already know that what is real is what fits the facts.
R: To your point about weighing the various theoretical virtues, I'm tempted to say that these weights are always changing based on what proves to be most productive in scientific investigation, sort of like feedback mechanisms. For example, someone who overvalues simplicity might propose very simple theories which have difficulty accounting for some of the known facts, and so he continues to make minor adjustments, but new difficulties keep arising, until he finally gives up and admits that reality is more complex than he had hoped for; this would be a case where the theoretical virtue of empirical accuracy is providing feedback to the theoretical virtue of simplicity. We could also imagine an opposite case, where a very conservative scientist is totally reluctant to consider new theories which can't immediately account for all of the known facts, until a new theory is proposed which is marvelously simple and over time all of the known facts are accommodated, so that this conservative scientist is forced to concede. The hope is that, although different scientists may weigh the various theoretical virtues differently, the true theory will exemplify sufficiently many of the virtues to a sufficient extent that a consensus will eventually emerge.
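A toy rendering of the feedback dynamic R describes; the virtue scores, weights, and update rule are all made up, and this is only one way to mechanize the idea:

```python
# Each theory gets virtue scores in [0, 1]; a scientist ranks theories
# by a weighted sum. Persistent empirical failure of the favored theory
# feeds back into the weighting scheme.
theories = {
    "T_simple":  {"accuracy": 0.4, "simplicity": 0.9, "scope": 0.6},
    "T_complex": {"accuracy": 0.9, "simplicity": 0.3, "scope": 0.7},
}

def score(theory, weights):
    return sum(weights[v] * theory[v] for v in weights)

# A scientist who overvalues simplicity at the start.
weights = {"accuracy": 0.2, "simplicity": 0.6, "scope": 0.2}

for _ in range(10):
    best = max(theories, key=lambda name: score(theories[name], weights))
    if theories[best]["accuracy"] >= 0.5:
        break  # favored theory is empirically adequate; stop adjusting
    # Empirical accuracy provides feedback to the simplicity weight.
    weights["simplicity"] -= 0.1
    weights["accuracy"] += 0.1

print(best, weights)  # the ranking flips to T_complex after a few rounds
```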
R: As for circularity, I think we can avoid this charge by noting that reality is defined not with respect to our current theories, but rather with respect to an ideal theory which exemplifies all of the theoretical virtues (weighted according to our standards) and is based on all the knowable facts (not just those which are currently known). This shouldn't affect our ability to test theories since we can still follow the ordinary procedures of comparing our observations with the predictions of the theory. All that my definition of reality achieves is the elimination of the concern as to whether reality is rationally discernible by us given our cognitive capacities, since I'm defining reality according to our cognitive capacities and procedures for rational inquiry (thereby sidestepping the worry about whether this corresponds to some mind-independent reality).
D: An issue with deriving our weighting scheme from empirical data fit is that we do not know which part of the theory is causing the error—which virtue is overvalued—and cannot test them in isolation. Thus you are left with an ML approach where you move to the point of minimum loss but don't really learn anything that generalizes. And is this even possible?
D: We must also consider what the virtues are doing in the theory. Feyerabend might say that simplicity/neatness comes at a cost to correctness, which is why we juxtapose internal consistency with data fit in the first place. So we often cannot just adjust the parameters on the different theoretical virtues, because each of them serves a different goal in theory development.
D: A little bit confused by your last comment. Are you assuming that there is an ideal theory that corresponds perfectly to reality, then embodies all the theoretical virtues, and is perfectly intelligible by us through some combination of deduction and induction with sufficiently capable instruments?
D: BTW, I found a good definition of "causal" in an econ sense for you: A causes B if A is the only difference between groups T and C and B is the average outcome for T.
R: I think my point about the weighting scheme is that minor disagreements about the weighting scheme only matter during an early exploratory period prior to the emergence of a consensus. However, as groups of scientists pursue what they perceive to be the best candidate theory (according to their own weighting scheme), more and more evidence will accumulate until one theory emerges which exemplifies all the theoretical virtues better than the alternatives; at which point a consensus will develop. The only people who will refuse to accept this consensus are those who have very unusual weighting schemes where some of the theoretical virtues are prioritized way more than others, and so I'm assuming that most scientists consider all the theoretical virtues to be significant, but just have minor disagreements about their order of importance.
R: Regarding the ideal theory, all I'm saying is that, were all the observable facts to be known (observable being defined in the broadest sense, limited only by our human cognitive capacities, not by any technological advancements), then there would be various theories which explain these facts (due to underdetermination), and at least one of them would best exemplify the chosen theoretical virtues according to a given weighting scheme (a global maximum exists provided the weighted virtue scores are bounded above and the supremum is attained, e.g. over a finite set of candidate theories; uniqueness is likely but not guaranteed). Importantly, this ideal theory is intelligible by definition, so if there happens to be a better theory which is not knowable via observation, then it won't be considered. Additionally, I'm not making any claim about whether this ideal theory perfectly corresponds to mind-independent reality, because I have no idea what mind-independent reality is actually like or whether our best theories reveal it. Instead, I'm defining "reality" in science according to what this ideal theory says.
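One way to make the "ideal theory" talk precise, under R's stated assumptions (the notation is supplied here, not R's):

```latex
% Assumes amsmath. \mathcal{T}: theories consistent with the totality of
% knowable observations; v_1,\dots,v_k: virtue scores; w_1,\dots,w_k:
% the chosen weights. The ideal theory is any maximizer of the score:
\[
  T^{*} \in \operatorname*{arg\,max}_{T \in \mathcal{T}}
    \sum_{i=1}^{k} w_i \, v_i(T)
\]
% A maximizer exists when the weighted score is bounded above and the
% supremum is attained (e.g. when \mathcal{T} is finite); ties among
% theories make T^{*} non-unique, matching the parenthetical caveat.
```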
R: I'm assuming that A and B are events, T and C are collections of events, and that by the average outcome for T you're saying that if we introduce the event A to the collection C (thereby creating T), then the expected outcome is event B. If I'm understanding you correctly, then wouldn't it follow that "buying a lottery ticket" (event A) causes me to "lose the lottery" (event B) since that's the expected outcome?
D: Will get to the other comments later, but T and C are groups: treatment and control. Suppose an RCT where you randomly assign people (unwittingly) to treatment and control. On average both groups have the same characteristics. A is a treatment (a drug), B is the outcome observed. In this scenario A is the only difference between T and C, thus A is said to have caused B, which obtains for T and not C.
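A minimal simulation of the definition D gives, with invented numbers: randomization makes the treatment the only systematic difference between the groups, so the gap in average outcomes recovers the effect.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

baseline = rng.normal(50.0, 10.0, size=n)   # pre-treatment characteristics
treated = rng.random(n) < 0.5               # unwitting random assignment
true_effect = 5.0                           # effect of the drug (A)

# Outcome B: baseline plus the treatment effect for the treated group.
outcome = baseline + true_effect * treated + rng.normal(0.0, 5.0, size=n)

ate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect of A on B: {ate:.2f}")  # ~5.0 up to sampling noise
```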
R: That makes more sense. RCTs are obviously the gold standard when it comes to establishing causality in the social sciences, but I think it might be mistaken to interpret them as defining causal relationships, rather than as generally useful tools for determining causal relationships. Firstly, it's important to distinguish between perfect control (no differences between the control and treatment groups prior to intervention) and control in expectation (no difference between the members of each group on average, which is what randomization achieves). Secondly, even supposing perfect control, this test would only establish a necessary but not sufficient relationship between the treatment and the outcome; so it could be misleading to call A the cause of B since this relationship may depend on other conditions (C1,C2,C3,…) without which we would not observe the association between A and B. This is what happens in my lottery counterexample, since purchasing a ticket (A) is a necessary but not sufficient condition for losing the lottery (B), but since the remaining necessary condition (i.e. that the purchased ticket is a losing ticket) is so overwhelmingly likely, we may fail to realize that B is causally dependent upon it just as much as B is causally dependent upon A.
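Putting hypothetical numbers on the lottery counterexample (the odds here are made up):

```python
# Buying a ticket (A) plus holding a losing ticket (the background
# condition) are jointly sufficient for losing (B); A alone is merely
# necessary. Because the background condition is near-certain, a test
# that reports only average outcomes attributes B entirely to A.
p_win = 1 / 300_000_000        # hypothetical chance the ticket wins
p_lose_given_buy = 1 - p_win   # the "expected outcome" of buying
print(p_lose_given_buy)        # ~0.9999999967
```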
R: By the way, for some interesting criticism of RCTs, see this paper (https://www.sciencedirect.com/science/article/pii/S0277953617307359) by economist Angus Deaton and philosopher of science Nancy Cartwright.
D: I agree from a philosophical point of view that the RCT does not define the causal relationship. Indeed, randomization often obscures causes. For example, one natural experiment used quasi-random variation in subscriptions to Diderot's Encyclopédie to proxy for local levels of upper-tail human capital in French industrialization. Nobody thinks that industrialization was actually caused by blind parachute drops of book subscribers into various French cities. All you get to see is the average effect of the presence of upper-tail human capital. But I think your second point is slightly confused. In order to identify a causal relationship, those C factors that create the relationship between A and B would be called confounders, and randomization should equalize these across groups. It may well be that A does not cause B after controlling for C1, C2, and C3. But the purpose of an RCT is to make these controls happen.
D: Yes, Deaton is a famous anti-RCT prophet. Especially in development economics, RCTs were and still are kind of annoyingly dominant. You should talk to my friend Oliver if you want to hear stories about stupid RCTs. But by and large, the natural-experiment framework that I have repeatedly described to you is, I believe, a powerful and, more importantly, easily accessible tool for experimentalists to identify causal relationships between variables.
D: On weighting schemes, it seems plausible to me that some theories will not, even in their best form, embody all of the theoretical virtues better than all of the alternatives. Furthermore, there is the risk that the kind of evidence produced by the present theory will, as Feyerabend argues, be unable to offer evidence against said theory. I think the biggest concern with this entire discussion is the presumption that we can freely move parameters around to favor one theoretical virtue over another. I think combinations are sticky, such that you can only have a couple of them at a time and must regress on one axis in order to advance on another. In machine learning terms, you must at some point accept calculable losses on model complexity in order to achieve near-perfect fit. And worse, you cannot smoothly trade off between the two; you can only select among a few discrete combinations. I think this possibly leads to bad equilibria.
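The standard penalized-loss formalization of the fit/complexity trade-off D invokes; the discrete penalty below is an interpretive choice made to capture his "stickiness" point, not something he specifies:

```latex
% Assumes amsmath. Fit and complexity enter a single objective; the
% penalty weight \lambda sets the exchange rate between them.
\[
  \hat{f}_{\lambda} \in \operatorname*{arg\,min}_{f \in \mathcal{F}}
    \sum_{j=1}^{n} \bigl( y_j - f(x_j) \bigr)^{2}
    + \lambda \,\mathrm{complexity}(f)
\]
% With a discrete complexity measure (say, the count of nonzero
% coefficients), sweeping \lambda moves the solution in jumps between
% distinct models rather than smoothly, which is one reading of the
% claim that you "can only select a couple of different combinations".
```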
R: I was using C1,C2,C3,… not to refer to confounders but rather conditions which were in fact held constant across the two groups, but which were causally relevant in bringing about the outcome. So we would say that A,C1,C2,C3,… were collectively necessary and sufficient for causing B, but neither A by itself nor C1,C2,C3,… by themselves (like in the control group) are sufficient for causing B. So the criticism was that the RCT (even when modified to assume perfect control) would establish that A is a necessary but potentially grossly insufficient cause for B, and we don't get any information about just how insufficient A is for causing B from this kind of test.
D: Ah, I see. Yes, that is true. Killing Franz Ferdinand may have precipitated the July Crisis but if Germany didn't exist then Austria wasn't invading Serbia.
R: Yes, exactly. I'm still thinking about your comments on the difficulties with weighting schemes; I'll probably respond to them in person during our meeting today.