Follow-On Thoughts to The Upsilon Factor: Doxastic Plasticity, Parsimony, and a Critique of Brian Tomasik’s Critique of Moral Nihilism

Posted Dec 27, 2025


This is a slight update following the publication of my book, The Upsilon Factor, that also forms a challenge/response to Brian Tomasik’s critique of moral nihilism. This update does not change the core point of The Upsilon Factor, but it couches it in more context around the parsimony inherent in the book, and presents more color on what it means to abandon moral beliefs. For Brian’s sake, I’ll suggest that this 10-minute video may be the best introduction to the core principles of The Upsilon Factor, so one doesn’t have to read the book to have the foundations needed to interpret this response to Brian’s critique.


At the highest level, I will say that there are many options for worldviews that exist in a space of simultaneous non-provability and non-disprovability. One of the most common examples of such a pair of contradicting worldviews, wherein neither can prove nor disprove the other, is atheism vs. deism. One can choose to believe that there is a creator of the universe, and it would be impossible for modern science to disprove that theory. One can alternatively choose to believe that the ultimate “force” behind existence is either unknown or happenstance, but is not a willful deity acting as creator. Once again, we lack the tools to prove or disprove that theory. When an axiomatic assumption can neither be proven nor disproven, it can’t really be called a fact by either believers or deniers of that assumption. So in a sense, such assumptions are essentially “opinions”: e.g., “I am of the opinion that there is / is not an omnipotent deity overseeing the universe.” The Greek-derived word doxasty refers to the formation of such opinions. And we have the plasticity to choose which opinion to hold on any such potential assumption. Thus, humans have what I call “doxastic plasticity”.


In the realm of such available worldview doxasties, a few include:


Now, specific to Brian’s points, I feel he conflates (a) and (b) a little. For example, he states that a moral nihilist believes that morals are essentially “pointless,” and then he says that nihilists (without the qualifier “moral”) “contradict themselves” (because asserting that everything is pointless would itself be pointless).


I would like to address this divergence in perspective from a neurological standpoint first. As most non-human animals demonstrate, especially those without highly developed prefrontal cortices, the “reptilian” brain stem, cerebellum, amygdala, insula, etc. have enabled many animals to “encode” implicit learnings that benefit their survival and replication. Among humans, we might call these subconscious instincts, or “gut” feelings. The avoidance of fire is likely one of these non-cognitive inherent protections. Then there are other “gut-like” processes that I would surmise are slightly distinct from the purely survivalist gut. These include emotion (emotional feelings like sadness, jealousy, desire), belief in God (to the degree there’s a vmPFC-connected analog of the fusiform gyrus specially optimized for prioritizing one or more omniscient beings, allowing for the rapid adoption of deist beliefs among humans, which evolved for obvious social-binding and rule-following benefits), belief in deontic morality (perhaps dlPFC-mediated), empathy (mostly TPJ-mediated), and heuristic racial (and other) stereotypes. While the PFC is implicated in a few of these processes, and these processes are likely interconnected via the anterior cingulate cortex and other neuromodulatory interactions rather than being fully modular, they still seem to be “native” and commonly repeated throughout cultures and even, to a certain degree, among non-human primates.


Now there comes the question, using Brian’s language, of which of these we “turn off”. I argue in The Upsilon Factor that one circuit many intellectuals choose to turn off (or attempt to attenuate as much as possible, though, as implicit association testing shows, often can’t perfectly turn off) is the racial-stereotyping circuit. Oversimplistically put: babies are racist; modern intellectual adults in developed democracies try not to be.


Brian suggests that the moral center “ought” to be employed rather than “turned off”. And the argument that moral nihilists (whom he equates with nihilists generally) are self-contradictory in a sense adds fuel to this argumentative fire. I’d like to highlight what The Upsilon Factor presents as an alternative.


I should caveat that The Upsilon Factor does not “recommend” a “should” or an “ought”. Rather, it states that if the KPI being optimized for is consensus, then consensus could be reached if all decision-making participants in a societal design were to turn off their moral systems. The main crux of the argument is that altruistic, society-preserving, and suffering-reductive behavior evolved not exclusively through moral reasoning. In fact, the relationship between morality and empathy can largely be seen as analogous to the relationship between racial stereotyping and analytical reasoning. In the case of murder, before options like abortion were even a possibility, it of course would have reduced unnecessary mental distraction to simply genetically hard-code predispositions to beliefs like “murder is wrong,” rather than force the human to think through their empathetic feelings for the pain a murdered individual might suffer, the pain their family and friends might suffer upon their death, and the potential pain others (possibly including the murderer themselves, such as through revenge actions) might suffer as a result of the loss of that life in the long term. But it is possible to employ more sophisticated, empathy-modulated reasoning to come to the same conclusion not to murder someone, and in a modern world where options like abortion exist, this more sophisticated view might allow for optimizations like selective abortion where it benefits all involved.
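The contrast between a hard-coded prohibition and empathy-modulated reasoning can be sketched as a toy model. Everything below (the function names, the suffering weights, the affected parties) is my own hypothetical illustration, not anything from the book; the point is only that the two procedures can reach the same verdict, with the empathic route computing it from aggregated expected suffering rather than from a fixed rule.

```python
# Toy contrast between two decision procedures that reach the same
# verdict on an action. All names and numbers here are hypothetical
# illustrations, not a real model of moral cognition.

def deontic_verdict(action: str) -> bool:
    """Hard-coded rule: permitted unless the action is on a fixed list."""
    forbidden = {"murder"}  # a genetically 'pre-baked' prohibition
    return action not in forbidden

def empathic_verdict(expected_suffering: dict) -> bool:
    """Permitted only if the action adds no net expected suffering."""
    return sum(expected_suffering.values()) <= 0

# Hypothetical suffering estimates for a murder (arbitrary units),
# aggregated across the affected parties named in the text above.
murder_costs = {
    "victim": 100.0,
    "family_and_friends": 40.0,
    "wider_society_incl_perpetrator": 15.0,
}

# Both routes forbid the act; the empathic one does more computation.
assert deontic_verdict("murder") is False
assert empathic_verdict(murder_costs) is False
```

The hard-coded lookup is cheap and distraction-free, which is the evolutionary advantage the paragraph above describes; the empathic sum is costlier but can re-weigh cases (like the abortion example) that a fixed list cannot.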


Essentially, I distinguish a moral nihilist from a nihilist. I agree with Brian that a nihilist has no interest in engaging in any of these conversations, to the degree they question all of existence itself, and would share none of this argument’s goals, making it a lost cause to convince a nihilist of any view on morality one way or the other. However, a moral nihilist can abandon (practically, if not completely; see below) belief in morals and still have an interest in shared human cooperation and even suffering reduction, because their interest in reducing both their own suffering and their empathetic suffering for other sentient beings is preserved even in the absence of moral truth. And it’s not merely an “academic” interest; it’s an actual biological impulse mediated by the TPJ. One benefit of abandoning these moral centers is that it removes one more “assumption” that needs to be made to seek agreement/alignment on altruistic policies (or, failing agreement, alignment on which variables lead to disagreement) between people. Namely, the assumption that “suffering is wrong,” which Brian likes because of the “magic,” but which is technically unnecessary to come to similar end conclusions about what policies might be preferable to people. A second benefit is that this purely “observative” approach (i.e., negating the need for an “ought” or “should” anywhere in the realm of policy decision-making or discussions of altruism) can also have the fidelity to account for individual differences in weighted empathy levels (which can change like any other neuroplastic dynamic variable, but which I would argue are less “fickle” than constructed “moral valences” that require a house of cards of assumptions built on top of one another about “rights” and “wrongs”: empathy is just a channeling of raw pain neurons via the TPJ mirror neurons).
I’ll even go so far as to say that it is really this empathetic circuit that drives Brian’s purported “moral” belief, rather than a true belief in an arbitrary right/wrong deontic absolute, when he says: “in my own case, the core of my moral view (namely, the overwhelming importance of reducing extreme suffering) was set by age 19.”


Now, I caveated the word “abandon,” because realistically, humans can’t just fully turn off any mental circuit that over millions of years has become interconnected with all our other mental circuits. More generally, many intelligent people would claim that a reliance on various forms of “gut” instinct has actually helped them avoid the pitfalls of purely cognitive rationalism. It’s not hard to see why this might be the case. If the reptilian parts of the mind have had many more millions of years to develop and incorporate learnings from the environment, there could certainly be instincts that “sniff out” dangers or opportunities that the more recently evolved cognitive cortex misses. This is similar to how complex neural-net machine learning algorithms can identify patterns that might not be cognitively obvious, because they (like our pre-cognitive brains) can process more signals, faster, with the benefit of far more accumulated learning. The wisdom-of-crowds examples (guessing the number of jelly beans in a jar, guessing the fraction of a square’s surface area covered by an inscribed circle) all demonstrate how these “pre-cognitive” guesses take advantage of our much more developed sub-cognitive brain capabilities. Moreover, because we have multiple instincts, many of which can exert a stronger influence over our behavior than sheer cognitive will (as anyone who’s failed the marshmallow test or otherwise failed to delay gratification can confirm), having “balancing gut instincts” may be the only way to refrain from irrational, self-harming behavior. In this way, “gut feelings” in the form of emotion, risk/paranoia, morality, and empathy can all help humans “stick” to their defined objectives even when their brain is behaving irrationally or simply missing cues.

So what might allow a human to maintain logical consistency with more doxastic parsimony (i.e. limiting the doxastic assumptions in their logical framework, such as assigning objective moral truths) would be to allow these instinctual centers to provide “signal” to the human, and then for the human to employ cognitive capabilities to assess whether that signal is “onto something” and warrants a double check of the surroundings. There are many situations where humans have imperfect information, or are knowingly in conflicted mental states (under the influence of alcohol, or tempted by a nearby marshmallow, for example). In these situations, a reflexive obedience to instinctual “gut” feelings, including generalized “moral wisdom,” can be beneficial. But when it comes to the studied design of policies and collaboration with others on developing joint agreements on resource allocation, moral nihilism can allow such discourse to proceed without the self-contradiction that Brian Tomasik claims must exist.
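The wisdom-of-crowds claim about the inscribed circle can be checked with a quick simulation. This is a hedged sketch under an assumption the text doesn’t state: that individual guesses are unbiased but noisy (the 0.15 spread is arbitrary). Under that assumption, the average of many guesses of the circle-in-square area fraction (true value π/4 ≈ 0.785) lands far closer to the truth than the typical individual guess.

```python
import math
import random
import statistics

# Simulate a crowd guessing what fraction of a square is covered by
# an inscribed circle. True answer: pi/4. Assumption: each guesser is
# unbiased but noisy (Gaussian noise, std dev 0.15, chosen arbitrarily).
random.seed(0)
TRUE_FRACTION = math.pi / 4

guesses = [random.gauss(TRUE_FRACTION, 0.15) for _ in range(2000)]

crowd_error = abs(statistics.mean(guesses) - TRUE_FRACTION)
mean_individual_error = statistics.mean(abs(g - TRUE_FRACTION) for g in guesses)

print(f"crowd error:           {crowd_error:.4f}")
print(f"mean individual error: {mean_individual_error:.4f}")
assert crowd_error < mean_individual_error
```

Note the hedge: averaging only wins like this when errors are roughly independent and unbiased; if every guesser shared the same systematic bias, the crowd mean would inherit it, which mirrors the point above about instincts sometimes misfiring in unfamiliar environments.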


I want to be clear that The Upsilon Factor is not saying moral nihilism is more “true” or what we “ought” to believe, merely that it does not negate the possibility of (even altruistically driven) societal reasoning about policy, and may enhance consensus-building. The fundamental premise of the work is that parsimony of belief (i.e. getting closer to [a] above and further from [d]) stands to minimize the surface area of potentially controversial assumptions that could jeopardize consensus. In theory, a person could be a “doxastic maximalist” and, in that case, might believe in God, an organized religion, moral truth, and potentially even a host of superstitions that can’t be disproven, or that, even if disproven, are taken to represent faults in the premises of logic itself rather than cracks in the superstitious beliefs. Perhaps this person believes that one “must” wear red on Wednesdays, and only Wednesdays, and that to do anything different is cursed and will lead to torture in the afterlife. However, if another person comes along who believes, under the same superstitious auspices, that one must wear blue on Wednesdays and can wear red any other day of the week, then these two people would come into conflict. Were they to attempt to agree on policies for a society, they would struggle to overcome these conflicts. And there might well be good reasons for each to hold these beliefs. Perhaps a random genetic mutation centuries ago, on an island where birds had a weekly rhythm of attacking anything not red, led to a human being predisposed to adopt the belief that red was a protective color to wear once a week, and that belief led to the survival of his extended family and their genes on that island. And similarly for a different climate where blue was protective.
Modern humans have no idea why these beliefs evolved, because those bird species are long gone and the humans have migrated elsewhere, but there was an implicit evolutionary bottleneck that led a path-dependent cultural “gene” to evolve and take hold by way of ritual and religion. Parsimony is essentially a doxastic minimalism. It’s an attempt to limit one’s professed beliefs to only those that can be verified, such that they can be denied only by non-believers in logic and/or existence. And given The Upsilon Factor’s stated aim of achieving consensus, which depends on beliefs in conditional logic and existence (as described in the book), the fact that no further unverifiable beliefs are asserted, and thus none could be subject to controversy and disagreement, maximizes the likelihood of consensus on the framework. Now, it could of course be argued that parsimony does not maximize consensus, because for the doxastically maximal, the exclusion of their beliefs from the framework imparts an immediate obstacle to acceptance. However, my counter-argument is that it is less of an obstacle to overcome the non-inclusion of unproven beliefs than it is to overcome conflicts between competing unproven beliefs. It may thus be possible to convince all members of a society that, for the sake of consensus, it is best to form societal rules premised only on those beliefs which can be proven and verified by all decision-making members, aka parsimony (doxastic minimalism).
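The parsimony argument can be made concrete with a toy probabilistic sketch. The model and numbers are my own assumptions, not from the book: suppose each unverifiable belief a person holds is drawn independently from k equally likely variants (red-on-Wednesdays vs. blue-on-Wednesdays, and so on). Two people reach full consensus only if they happen to agree on every such belief, so each additional unverifiable belief shrinks the chance of consensus geometrically, while a framework with zero unverifiable beliefs leaves nothing to disagree about.

```python
# Toy model of consensus under doxastic maximalism vs. minimalism.
# Assumption (mine, for illustration): each of n unverifiable beliefs
# is drawn independently and uniformly from k possible variants, and
# full consensus requires two agents to match on all n beliefs.

def consensus_probability(n_beliefs: int, n_variants: int) -> float:
    """P(two independent agents match on all n unverifiable beliefs)."""
    return (1.0 / n_variants) ** n_beliefs

# Doxastic minimalism: zero unverifiable beliefs, consensus guaranteed.
assert consensus_probability(0, 2) == 1.0

# Each added binary belief (k = 2) halves the chance of full agreement.
for n in (0, 1, 3, 5):
    print(n, consensus_probability(n, 2))
```

The model is obviously crude (real beliefs are correlated, negotiable, and unevenly weighted), but it captures the counter-argument in the text: omitting an unproven belief costs one person some acceptance once, whereas including competing unproven beliefs creates a conflict on every mismatched dimension.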