
Religion and Humility: Rationality, Diagonalization, and the Hardness Criterion

This summer I had a good old-fashioned Crisis of Faith.

It became apparent that I’ve let myself go a little in terms of having a ready retort on hand for spontaneous atheist arguments. I spent some time this summer at a conservative think tank, full of minds like mine (if significantly more libertarian) and blest with a high degree of Catholic literacy. Although I was regaled there daily with requests to mathematically prove God’s existence, thankfully the majority of the religious arguments my classmates took up with me ran along the lines of “Is the seat of Peter empty?” rather than “How can you believe in miracles?”

My return to New Haven engulfed me in the world of mathematics, leaving little time for theological debate. Mathematics departments nationwide run the gamut from very religious to Dawkinsian (it’s hard to be a Humean mathematician, and impossible to be a Humean statistician). Ours enjoys a variety of religious viewpoints, with the majority falling somewhere in the secular-agnostic camp. Thus, when a mentor posed a new and unusual atheist argument to me, I was caught unprepared.

The Problem

I’ve seen all the inane, readily neutralized atheist claims–“Do you really believe in virgin birth?,” “Haven’t terrible things been done in the name of Christ?,” or “Religion was established to keep citizens compliant.” SSC raises a hilarious one about whales not being fish.

Nonetheless, arguments from epistemology are more compelling. Not the “argument from unknowability,” per se. I’ve long considered the existence-of-God problem undecidable. This doesn’t bother me, because I’m not a logical positivist; physical facts are not the only important components of a system. I don’t care that atheists and I don’t disagree on any physical matters that can be finite-time decided, and I don’t think the criterion of falsifiability is useful.

The counterargument with which I was presented was much slicker, and imbued with all that meta-level, logically contradictory, late-Inferno-style contrapasso of which I am so fond: “Throughout history, people have realized how much they don’t know. The more we learn, the more there is to learn. Religion, in presupposing the ultimate answers, is the Platonic form of hubris.” Steelmanned: “Religion is prideful, but prides itself in being humble.”

That got to me. My discipline of choice is a field in which we constantly know less than we did before, in a certain sense, because every answer prompts questions that didn’t previously occur to us. We learn “calculus” in high school and think we know what integration is, then learn vector analysis in college and think this time we really know what integration is, then learn Lebesgue theory and realize we’ll never know what integration is. Humility is both necessary and proper to the discipline of mathematics, as it is to the discipline of theology. But mathematicians don’t claim to have solved the (perhaps undecidable) Collatz conjecture, whereas theologians do claim to have solved the (probably undecidable) God problem.

Religious sensibilities are more insidious than religious confession. My mother, an evolutionary biologist and enthusiastic Dawkinsian atheist, is terrified by The Exorcist and has admitted to me that she’d never attend a LaVeyan meetup because she could not sign her soul over to Satan even though she believes him nonexistent. She’s one of many nonreligious people I know with religious sensibilities ranging from the theological to the social to the moral, yet I know no believers who have the faith but lack the sensibilities. I believe these inclinations precede confession; they are a necessary but not sufficient prerequisite to genuine faith. So what happens when religious sensibilities undermine religious conviction? What happens when the truth claim of religion is at least in some sense hubristic, but the sensibilities of religion are humble?

I go to a highly-regarded research university and thus constantly make use of the immediately available option to knock on the office door of one of the smartest people in the world and demand answers. My theology thesis advisor wasn’t in, so I stopped in at the first office I could find: that of a professor specializing in the intersection of religion and political theory. Perfect–exactly the kind of person who’d know all about the theory of the “opiate of the masses.” I walked in, introduced myself, and explained my problem: the priors for religion are heavily dependent on humility, but the truth claims of religion are hubristic. How can I be both Bayesian and Catholic? Help!

A Helpful Digression

The paradox with which I confronted the professor is related to signaling theory and what I’ll describe as the “hardness criterion.”

Definition 1. Hardness Criterion. A map $F$ from the set of tuples over the space of choices to the space itself, given by $F(a, b, \ldots) = \arg\max_{x \in \{a, b, \ldots\}} \mathrm{difficulty}(x)$.

In other words, the Hardness Criterion is the belief: “When presented with multiple options of action, I should do the one that is most difficult.” Naturally, “difficult” can mean a bunch of different things, some of which may be contradictory. For example, being a doctor is more technically difficult than being a garbage disposal worker, but the latter is more psychologically difficult for an Alpha on the alpha island in Brave New World.
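
For concreteness, here’s a minimal sketch of the criterion in Python, assuming (a big assumption) that “difficulty” can be collapsed into a single numeric score; the options and scores are invented purely for illustration.

    # The Hardness Criterion as argmax over a difficulty score.
    def hardness_criterion(choices, difficulty):
        """Return the choice whose difficulty score is maximal."""
        return max(choices, key=difficulty)

    # Hypothetical scores, invented purely for illustration.
    scores = {"become a doctor": 9, "collect garbage": 3, "do nothing": 1}
    print(hardness_criterion(scores, scores.get))  # -> "become a doctor"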

The Hardness Criterion seems obviously wrong at a first glance, but I urge my readers to consider it more carefully. Steelmanned, it tells us that Man has a duty to pursue the highest spheres of work, self-analysis, and the search for truth, and to reject hedonism, which seems observably true. It doesn’t beget any of the silly fallacies detractors would like–“But everyone can’t do the universal hardest thing; some people have to do something else, or else we have a society of doctors” and “If everyone does the hardest thing, no one will be good at his job”–because what’s difficult differs by person, and how hard something is, in my experience, is orthogonal to how good I am at doing it. I’ve never been able to gauge how good I am at mathematics because it seems roughly equally difficult no matter how good you are at it, like cross-country running but unlike music or politics.

Those who deride religion for providing cushiness and a “Heavenly Daddy” figure are unknowingly, implicitly employing the Hardness Criterion in a way similar to Occam’s Razor. The argument goes like this: Religion permits an emotional solace in the form of the promise of eternal life, whereas atheism does not permit such solace. Therefore atheism is more difficult and I should do it.

Of course this requires the Hardness Criterion, because there are no other grounds for rejecting religion on the basis of its provision of emotional solace. One can only reject this solace if one believes the solace to be bad, which requires the Hardness Criterion, because in theory, whether a belief provides emotional solace is orthogonal to whether it is true. Sure, emotional solace might discredit the epistemic honesty of one’s acceptance of the framework, but it bears no consequences for the truthfulness of the framework itself–unless you’re willing to categorize “things that provide emotional solace” as “things I should not believe,” which utilizes the Hardness Criterion.

To reject the Hardness Criterion properly requires diagonalization. It’s noticeable that “hardness” generalizes to the meta-level, which prompts the question, “Is the algorithm ‘do the action that is hardest’ the hardest algorithm? Doesn’t doing the easiest thing all the time place me in opposition to the Hardness Criterion, which is, if I believe in the Hardness Criterion, an intellectually difficult space in which to operate?” This counterargument works beautifully, because at the meta-level, “choose the most difficult thing all the time” is a very easy algorithm, in that there aren’t any hard choices, given that your options are well-ordered. It seems to me that one could prove the Hardness Criterion is not well-defined in much the same way one can prove the halting problem is undecidable.
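
Here is the diagonalization worry in toy form, again assuming difficulty can be scored numerically–this time for decision procedures themselves, with scores invented for illustration.

    def hardness_criterion(choices, difficulty):
        return max(choices, key=difficulty)

    # Meta-level choice set: which decision procedure should I live by?
    # "Always pick the hardest option" involves no hard deliberation at all,
    # so its own difficulty score is low.
    procedures = {
        "always pick the hardest option": 1,
        "weigh each case on its merits": 8,
    }
    print(hardness_criterion(procedures, procedures.get))
    # -> "weigh each case on its merits": the criterion declines to pick itself.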

This is the reason the Hardness Criterion argument against religion is easily deflated. On the meta-level, “believing the thing that is harder” provides its own emotional solace–the solace of finding one’s beliefs in accordance with the Hardness Criterion–which makes being religious the “harder” position in that sense. Similarly, while atheism is “harder” than religion in that it forgoes emotional solace, religion is “harder” than atheism by a meta-level hardness factor: the difficulty the religious face in rationally justifying beliefs that appear, at the first order, to reject the criterion. This ultimate point–that under the Hardness Criterion, the most contrarian position always seems to win–deals a death blow to its acceptance as a useful algorithm.

The Solution

I ended up talking to the professor for about thirty minutes, and she did not disappoint (how I love this school!). We had a fruitful discussion about the fallacy described in the digression section, and she forwarded me an article she’d written arguing that support of Islamic political parties in Muslim-majority countries is rational insofar as the emotional support provided by religion eases stress. Naturally, I and my Dante-meets-Borges-meets-Bostrom mindset loved this because of its seeming counterintuitiveness: as strange as it is to accept, the emotionally easier option is of course the more rational one, in the sense of utility maximization.

I went home and thought about this for hours. Hours turned into weeks, which turned into months. And finally I figured it out. Would it be possible to create an inverted Hardness Criterion labeled the Ease Criterion, affiliated with a straightforward Kahneman/Tversky-type utility function, yielding a bijective relationship between Ease Criterion rankings and outputs of some rational choice function? Definitely. Pick the option with minimal difficulty.

But does this Ease Criterion collapse as obviously as its negation does? In one sense, the Ease Criterion is easy on the meta-level because the choices it provides are well-defined. There’s no simple, Berry-paradox-type situation in which the Ease Criterion falls apart. For all intents and purposes, the Ease Criterion is at least as good as Occam’s Razor, because I can imagine some situation exists in which the algorithm that uses simplicity to pick a course of action is not the simplest algorithm. Does there exist one in which an algorithm that uses ease isn’t the easiest (if we admit “emotional solace” as a stand-in for “ease”)?

Indeed there does. The Ease Criterion on two variables always picks what the Hardness Criterion doesn’t pick, so the inverse diagonalization produces a contradiction. I can readily imagine somebody emotionally tortured by the notion that he’s always choosing the easiest option! A theorem of this flavor feels like it ought to follow:

Theorem 2. No operator that definitionally outputs a single choice from a choice set by a metric of difficulty or complexity is consistently defined.
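
For concreteness, here is one way I might try to state that claim–a sketch of a formalization, not a proof. Let $C$ be a finite choice set and $d$ a difficulty (or complexity) score defined both on $C$ and on selection rules themselves, and let $F(C) = \arg\max_{x \in C} d(x)$ (or $\arg\min$, for the Ease Criterion). Then whenever $F$ is admitted into its own choice set and $d(F)$ is not extremal, $F(\{F\} \cup C) \neq F$: the rule, evaluated at the meta-level, declines to endorse itself, which is the inconsistency I have in mind.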

I don’t think the generalization to multidimensional operators works, but that isn’t really relevant here, as no one claims two religions. The conclusion: if we allow difficulty, ease, simplicity, or complexity to serve as a stand-in for “rationality,” then we cannot consistently behave rationally. (Aside: I know rationality isn’t everything, but it still benefits us to create a more nuanced notion of what rationality is.)

The contradiction my mentor voiced was, as you may have by now realized, isomorphic to the problem with the univariate criteria described above. I could now see the problem he had presented: the Humility Criterion is inconsistent, and his claim was legitimate. The Humility Criterion makes truth claims! Of course, on the meta-level, it isn’t humble.

Central Question: Does Christianity actually make use of the Humility Criterion?

Naturally, the only way to disarm the paradox of the humility of faith versus the pride of faith is to reject the notion that Christianity uses a so-called “Humility Criterion”–i.e. while humility is a virtue, it is not the methodology one uses to arrive at Christian conclusions.

Virtues are not algorithms. Consider the algorithm “Do the thing that is virtuous, or if multiple virtuous options exist, the one that is most good” (so phrased because I don’t like the notion of “most virtuous”). If you’re an effective altruist, it’s clear this algorithm is virtuous, which is not self-contradictory. But performing this algorithm is not a virtue any more than entering a convent is a virtue. They’re both methodologies used to pursue virtue. (This is why I love that Christianity enumerates so specifically what the virtues actually are.)

Not convinced? Consider the following argument why virtue is not meta-level. Take the action of “cultivating an environment in which I can better pursue the virtue of almsgiving.” It’s clear that an almsgiving person who cultivated such an environment and an almsgiving person who didn’t are both almsgiving, and thus are both manifesting the virtue of charity. The person who didn’t cultivate such an environment might even be a better person, by dint of emerging triumphant against more temptation.

Similarly, I recently posed the following thought experiment to a Catholic close friend: Mr. Brown doesn’t want to give alms. Which is worse: for Mr. Brown to falsely tell mendicants he doesn’t carry his wallet, or for Mr. Brown to deliberately leave his wallet at home so he doesn’t have to lie when he tells that to mendicants? We agreed that it was the latter, because it eliminates the possibility of repentance (cf. Guido da Montefeltro in Inferno XXVII).

Humility is not an algorithm; it is consistent for Christians to use algorithms that are not themselves humble in order to maximize their humility. And because humility is not an algorithm, it is not used to discern truth, and thus it cannot be a contradiction that the “lux” part of faith is so glorious. The centrality of the nonexistence of a Humility Criterion is paramount! Without it, “do the things that are humble” does not imply “believe the things that are humble.”

“Humble yourselves before the Lord, and He will lift you up.” -James 4:10

-TEVM


Borrowing from Peter to Pay Paul: Atrocities, Guns, and the Misuse of Expected Utility Theory

“We often find, upon a thorough review, that our expedients, while they have for a time seemed to produce very valuable results, have in fact corrected one evil by creating or enhancing another. We have borrowed from Peter to pay Paul.” -Frederick Law Olmsted, A Journey through the Back Country

It is, I think, for the reason of epistemic modesty that even the most virtue-inclined of us quaver at the prospect of discussing the great atrocities of the past: slavery, genocide, mass murder. We want to believe that our modern values are correct, neither too cruel nor too enabling. But the best counterargument to our new and improved morality is a simple wave at the vast cohort of people throughout history who similarly believed themselves to be right while upholding principles we have since decided are unthinkable.

The conventional defenses of these grave affronts to human decency all ring the same. James Henry Hammond, governor of South Carolina during legalized slavery, claimed, “It is by the existence of slavery, exempting so large a portion of our citizens from labor, that we have leisure for intellectual pursuits.” Historian Tony Judt reports that in war zone surveys of German nationals during World War II, a majority of the population claimed that the killing of Jews and other non-Aryan peoples was “necessary for the security of Germans.”

The idea that citizen nationals turned a blind eye to the evils of slavery and genocide is a myth. Indeed, they recognized the shattering and sundry sufferings of their fellowmen as not only manifest, but instrumental to their own fulfillment. As Olmsted succinctly puts it, “We have borrowed from Peter to pay Paul.” In this case, the Pauls that the participants paid were themselves and their cultures, and the Peters were very unfortunate indeed.

So why, then, when I encountered the Olmsted quote above, did I think first not of atrocities but of the type of moral justification I hear daily? I and the people around me are aware of the great quantity of suffering in the world, and at least profess that we want to bring about better circumstances. In so doing, we claim that the reasons we’ve chosen to pursue a university education rather than join up with an international relief organization are ultimately altruistic. We are bettering ourselves, we declare, so that we can be of better help to others someday. But how much do we need to be “improved” before we are ready to go put such improvement to use towards the final end of reducing human suffering? We say of our meta-values that it is virtuous to pursue the kind of world-class education that will permit us to grow in virtue. But is there really nothing immoral about the kind of “virtue development” that tells us to spend four years enjoying the company of friends, languishing in well-furnished dormitories, gorging ourselves on ready-made food? We claim that the time we spend here gives us the direction we need to figure out in what way we can best help the world in the era of specialization. But how long and costly must a cost-benefit analysis be before just taking the plunge and guessing becomes a more effective mechanism?

In his famous essay “The Singer Solution to World Poverty,” Peter Singer (with whom I generally don’t agree at all) argues, “The formula is simple: whatever money you’re spending on luxuries, not necessities, should be given away.” I’ll slightly reformulate Singer’s point in a way that makes more sense for a long-term value maximizer of the type I’ve described: “Whatever time and money you’re not spending toward your own necessities should be redirected toward altruistic ends.”

But the alleged long-game virtue-seeker worms his way out of this request using an application of expected-utility theory (EUT). “If I were to right away begin a life of service,” he responds, “I’d be losing career opportunities that would later allow me to cut a fatter check for charity.” He plugs in some arbitrary values for probability and reward and breathes a sigh of relief at the implication that he is, in fact, living virtuously. I’ve seen people go ridiculously far down this line of reasoning: “I’m morally obligated to attend a party with my colleagues tonight instead of volunteering at X shelter, because that way I marginally increase my chance of getting a promotion from the boss, and thus will receive more money for my later donation.” And depending on his Bayesian priors, he can prove this to be true. Sure, “self-centered” doesn’t necessarily mean bad, but these two scenarios lie on an increasing axis of puerility. Somewhere before either of them is the point where self-improvement turns into self-indulgence and altruism becomes a weapon against itself. Either that, or letting Econ majors and politicians use EUT for effective altruism just happens to create results that seem very convenient in light of their first-order desires.
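
To make the sleight of hand concrete, here is the kind of arithmetic I mean, with every number invented: suppose attending the party raises the probability of a promotion from 0.10 to 0.11, the promotion is worth an extra $50,000 in eventual donations, and a night of volunteering is worth $300 to the shelter. Then the “expected donation” gain is 0.01 × $50,000 = $500 > $300, and our hero declares the open bar a moral obligation. Nothing in the calculation is wrong; everything in the choice of inputs is.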

We readily acknowledge the depravity of slavery and genocide, but we don’t often talk about why they are so depraved–because, as it turns out, the best defense of these unconscionable acts is also the best defense of the way many of us currently live. If we ignore for a moment the vast difference in moral charge between allowing the deaths of millions of people worldwide and literally killing them firsthand, then the perpetrators’ reasoning matches ours with uncanny accuracy, almost point for point. If they had declared that they wanted to grow in virtue for the particular end of being better toward the people whose well-being they were sacrificing in pursuit of that virtue, then the situations would be isomorphic. But they didn’t. They instead espoused a perverted brand of preferencing, e.g. “Paul’s net worth is more important than Peter’s,” to justify their moral abuses. We, living in the era of equality, have rejected this notion and replaced it with the affirming market-based rhetoric of “If he takes enough of Peter’s money to begin with, then Paul can invest it through Bain and pay Peter back tenfold in two years!” And to me, this line of thought doesn’t seem all that much safer.

I’m not claiming working in finance to “save up” for charity donations is anywhere near as awful an act as genocide. The manner in which the two are comparable is purely at the meta-level. Using EUT-based morality calculations to justify our ceaseless borrowing is actively destructive to the fabric of ethics in a way that selective usage of the concepts of freedom and nationhood wasn’t. This conclusion stands regardless of how much more pain mass atrocities caused, or how much more directly their authors were involved in their causation. How poor must Peter be before we pay him back? It doesn’t matter anymore, amidst all the dollar signs that will pull him from destitution in two years…if he survives that long. Yes, we are not responsible for Peter’s death in the way that the perpetrators of genocide were. But their meta-principles, unlike ours, didn’t validate the ability of anyone, anywhere, to use the tools of morality against morality itself. In fact, the reason we’re aware of their moral fallacy at all is that at the meta-level, their principles were not even coherent. Our successors will have a much harder time disproving our meta-principles than we did theirs.

This is all unnecessary, because one really doesn’t need to consider the two evils as comparable in order to want to avoid both. The old “killing versus letting die” distinction is toothless here. Singer is aware of how easy it is to ignore moral claims in which we are not directly involved. “To be able to consign a child to death when he is standing right in front of you takes a chilling kind of heartlessness; it is much easier to ignore an appeal for money to help children you will never meet,” he admits. But the metric of difficulty should not be confused with the metric of virtue. We do plenty of good without even thinking about it (and if you can’t recall an example, that’s equally likely an indicator that you never perform unconscious virtuous acts as it is that you often do and don’t even consider them consequential enough to remember). Anyways, even the strong version of this argument–that killing someone is, de jure, worse than letting her die–does not mean that we should a priori take to be supererogatory any moral actions toward remedying situations which we did not cause.

How virtuous should we be expected to be? My answer: virtuous. There’s no spectrum. I’ve long believed that “good” and “bad” are measured in degrees, whereas “virtuous” and “sinful” are binary. A friend asked me today, “If you’re off the mark, doesn’t it matter by how much?” I answered, “Yes, but first you check whether you hit the target, which is a Yes/No question, and then you measure the distance in terms of inches or feet.” Similarly, consider two pregnant women; one is due in 3 months, the other in 6. Would you regard one as “more pregnant” than the other? No, that’s ridiculous–“pregnant” is a binary variable, and “time until due date” is a continuous, distinct variable. Just so for morality: we should turn the “virtue” switch on, and strive to maximize the continuous variable of “good.”

We could perhaps take this blatant misuse of EUT more charitably and claim that the Paul paid in the exchange is not ourselves, sheepishly holding up an IOU, but future recipients of aid, endowed with all the good we can instantiate with this carefully augmented nest egg. In such a calculus, we’re foregoing present good for the sole purpose of maximizing future good, and we do not enter into the equation as agents. But then isn’t it a failure of epistemic modesty for us to assume we can invest these “Schrodinger’s utils” more reliably to produce a long-term sum than can the organizations and people we’d be helping if we forked it over now? It’s at least paternalistic, which is usually a warning sign. And in the case of those economic geniuses who are truly best suited to monitor the growth of the utils, I find it hard to believe that they’re doing so purely to provide invaluable, unpaid financial management to the United Way. Even if they were, how is it not completely self-defeating to take from the present to give to the future–when those from whom we are withholding real-time aid would perhaps otherwise have become the parents, educators, missionaries, and relief advocates for the agents of the future?

It doesn’t matter if killing is different from letting die, because this kind of reasoning, as it turns out, causes both. A few days ago, my nation was hit by a devastating act of violence, only the most recent in a series of cases that seem to demonstrate an upward trend in death tolls. On the afternoon of Ash Wednesday 2018, a former student opened fire in Marjory Stoneman Douglas High School, killing 17 students and teachers at the scene and injuring 15 more.

The Wikipedia articles for school shootings in all other countries of the world are consolidated into a single page. But America has perpetuated and suffered enough violence of this kind to warrant its own page, as long as the rest combined. Yet lawmakers who refuse to clamp down on gun sales attribute their reluctance to precisely the manner of thinking we’ve just explored.

Yes. Proponents of easy market access to semiautomatic weapons use exactly this method of poorly-applied EUT to justify their farcically circular reasoning. They are, quite straightforwardly, robbing innocent lives to grant what they perceive as a public right that will then theoretically be used to prevent further loss of life. After the First Baptist Church shooting, two civilians grabbed their own rifles and pursued the shooter until he crashed his car, a heroic action that launched them into the national spotlight. One of these vigilantes, Texan Johnnie Langendorff, received overwhelming praise from far-right media such as Townhall and Breitbart. Said one Townhall staff member, “In the hands of a ‘good guy,’ a gun is what finally put an end to the massacre.”

Any good NRA hack will tell you that the best defense against a mass shooting is an armed, vigilant citizen. Sure, he argues, it is a drain on the aggregate value of society for some dangerous individuals to have access to guns, but this is a necessary price in order to allow the righteous citizens of America to purchase the guns they’ll use productively–to grow in virtue to the point where they can overcome such criminals. The logic of needing any guns in the first place, if their telos is to resolve violence caused by people wielding other guns, is blatantly self-defeating, like every instantiation of this classic misuse of EUT. But no matter! We must pay Paul! It’s just too bad if we are both Peter and Paul, and we never break even.

Now you see that the line between atrocity and innocent misuse of expected-utility theory is not so stark as we thought. This Peter/Paul reasoning can feed the continuation of tragic and evil acts. It is as dangerous as they are.

Given that even I, statistically minded as I am, have a penchant for permitting gun ownership, I’ll give any pro-gun readers some epistemic credit: there are better reasons not to want stringent gun control than this paradoxical argument. Using it is an insult to our intelligence and our values. If you are pro-gun, then factors other than absolute aggregate utility are entering into your reasoning–factors like freedom as a first principle, or concerns about the effectiveness of the governmental measures themselves, or a non-actor-network-theoretic understanding of the relationship between weapons and people. For reference, nobody who believes in the legalization of heroin explains their position with “If everyone has access to heroin, we can use our experience with heroin to grow in virtue to better help addicts.” If you strongly believe that gun control legislation is wrong, then you can make a better case by avoiding such pathetically post facto justification.

The point is, there’s clearly something absurdly wrong with using EUT for post hoc justification of selfishness. This problem stems partly from the fact that it doesn’t make any sense to consider every present action we take through the lens of expected-utility theory. I posted earlier about the paralysis this approach causes. But I espoused a principle in that article that needs clarification in order to be consistent here: that we should care just as much, in certain situations, about future actors as we do about present ones. What does this mean, when the starving children represented by the $1K you didn’t pay Peter ten years ago are now dead because of your inaction, and any reparations you can make by paying forward your $10K to Peter’s sons will go to completely different people? What does it mean when the healthy children at Stoneman Douglas are now dead, rendering worthless the glorious bullets that would have been used to save them when the Civilian Hero arrived on the scene?

We should preference existent actors over nonexistent actors in cases where the existent actors are in clear and present danger, especially if helping them may avert future actors’ need for help.

To do this rigorously, we could use multivariate EUT in which probability/value pairs are Borel products with different distributions that are time-sensitive to earlier selections. Karni (1989) has already done good work on EUT over multivariate probability distributions, which could be a place to start.

In less mathematical terms, we could think of this as a binary indicator. Is there danger for existent actors that is greater than some small epsilon? Probably. There likely will be for a long time–or at least until we stop using backwards EUT to implode morality. If there is such a danger, then act now to stop it instead of saving up to help people who will be born into suffering in the interval in which your bond was doubling–people who might’ve been born into comfort if you’d helped someone else. (Aside: “those people might not have existed, exactly” isn’t a counterargument worth rebutting here; helping earlier agents may linearly help later agents that don’t yet exist, regardless of the counterfactual details.)
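
A minimal sketch of that indicator in Python, with the threshold, the danger estimate, and the option labels all standing in as placeholders–estimating them honestly is, of course, the actual hard part:

    def choose_action(present_danger, epsilon=0.01):
        """If existent actors face danger above some small epsilon, act now;
        only otherwise is it permissible to keep compounding for the future."""
        if present_danger > epsilon:
            return "donate/volunteer today"
        return "keep building the nest egg"

    print(choose_action(present_danger=0.3))  # -> "donate/volunteer today"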

That aside brings me to an important corollary. I don’t believe my metric applies to work that is done to overturn a long-term political principle that is contrary to virtue, or to instantiate one that is in accordance with virtue. In these cases, the future population in danger will likely look the same regardless of the individuals helped, and working to stop the cause of the danger rather than its effect may be more potent. If your current actions are directly and non-hypothetically correlated with a virtuous end goal that you can’t yet enact, perfect! Tell the truth when asked: that instead of joining Teach for America, you’re pursuing your law degree or self-studying machine learning in order to illegalize the death penalty or better implement AI alignment strategies, respectively. The moral threats you have diagnosed as salient–for you just can’t find everything salient–are best helped by what you are actually doing. This argument, I think, only works for systematic risks, in which one can have a good idea of what the climate of his concern of interest will look like moving forward. If you’re this kind of person, you’re probably already donating or tithing to the cause of current moral actors, so continue to do that in tandem with your altruistic project. One such altruistic project might constitute bipartisan endeavors to create a better understanding of what “guns” are and should be, who should have access to them, and what they should be used for.

So, reader, even if you’re just not going to donate to charity or volunteer for a helpful cause no matter what argument I present, stop using EUT to justify that impulse. Regardless of what Singer says, it would be unreasonable to expect you to donate all of your resources to relief, and the idea that this is the “only” moral way to live is a contributing factor in the emergence of post facto bad use of EUT. Perhaps going out for a nice dinner to clear your head will meaningfully improve the good you can do in the coming week, but this conjectural claim isn’t the way to frame your choice if someone angrily asks why you’d do such a thing. Neither is the default response that you are entitled to such days off, which harkens ominously back to the infamous atrocities of history–whose instigators claimed justification through the preferencing of their own abstruse interests over other moral actors’ very basic rights. You are not entitled to a day off from morality any more than you are entitled to a gun.

This shouldn’t be taken as a claim that there is only one model for a virtuous life, a model that always puts others before the self. When Luther wrote “On Temporal Authority,” he argued that God would not begrudge a thoughtful and principled monarch the occasional joust or hunt, but a monarch who strove to be thoughtful and principled would find little, if any, time in which to pursue such pastimes. As humans, we are constantly sinning in small ways by preferencing our immediate interests over more important concerns. The fact that it is to some degree inevitable doesn’t make it any better–but honesty does. No, it’s not moral to value your $200 fuzzy parka over the life of the child you could have saved with the money you used to buy it. In fact, it’s probably immoral. But it’s not anti-moral to admit that uncomfortable truth. It is anti-moral to say you bought the parka to keep yourself dry so that when you one day climb a mountain to save a stranded Himalayan child, you’ll have slightly less slippery skin. And it is deeply anti-moral to claim that the reason you own a gun is to someday be the hero who steps in to protect somebody from a villain wielding a gun just like your own.

If I have convinced you, call (800) 367-5437 now to donate to UNICEF. Go to http://www.gofundme.com/stonemandouglasvictimsfund to support survivors of the Stoneman Douglas tragedy. All you need is a credit card and a sense of doubt.

-TEVM

The Knowledge Problem in Soteriology: Risk-Reward Paradigms and the Montefeltro Metric

“A man…called his servants and entrusted his wealth to them. To one he gave five talents of gold, to another two talents, and to another one talent, each according to his ability. Then he went on his journey. The man who had received five talents went at once and put his money to work and gained five more. So also, the one with two talents gained two more. But the man who had received one talent went off, dug a hole in the ground and hid his master’s money. After a long time the master of those servants returned and settled accounts with them. The man who had received five talents brought the other five. “Master,” he said, “you entrusted me with five talents. See, I have gained five more.”

His master replied, “Well done, good and faithful servant! You have been faithful with a few things; I will put you in charge of many things. Come and share your master’s happiness!”

The man with two talents also came. “Master,” he said, “you entrusted me with two talents; see, I have gained two more.”

His master replied, “Well done, good and faithful servant! You have been faithful with a few things; I will put you in charge of many things. Come and share your master’s happiness!”

Then the man who had received one talent came. “Master,” he said, “I knew that you are a hard man, harvesting where you have not sown and gathering where you have not scattered seed. So I was afraid and went out and hid your gold in the ground. See, here is what belongs to you.”

His master replied, “You wicked, lazy servant! So you knew that I harvest where I have not sown and gather where I have not scattered seed? Well then, you should have put my money on deposit with the bankers, so that when I returned I would have received it back with interest. So take the talent from him and give it to the one who has ten. For whoever has will be given more, and they will have an abundance. Whoever does not have, even what they have will be taken from them. And throw that worthless servant outside, into the darkness, where there will be weeping and gnashing of teeth.”

-Matthew 25:14-30

The parable of the talents is universally recognized as one of the most famous of Jesus’ stories, and has generated commentary so exhaustive and profound that I can’t offer anything new on the topic. Echoed in every commentary I’ve read, though, is the condemnation of the final servant. The general consensus is that his attribution of selfishness and property seizure to his master was a last-minute excuse to obfuscate the true reason for his failure: a lack of motivation to serve the master.

Yet I urge my readers to consider, for the time being, compassion for the unfortunate servant. I see one argument according to which the tooth-gnashing man should be spared.

The Argument

There’s something weird about the above parable, which is that the Rule of Three is invoked and then sort of not exploited. That is to say: there are three servants, but only two meaningfully different outcomes. There is an important distinction between Servant 1 and Servant 2 in that 2 has less to invest, but behaviorally, Servants 1 and 2 are isomorphic, and teleologically, the master treats them the same.

So, what if Servant 2 had lost the money? This is seemingly a crucial counterfactual that the parable ignores. Given that the two scrupulous servants invested the talents in some economic pursuits that could have gone awry, they might have lost the talents in the process–a situation allegorical, in all ways except its conclusion, of living the pious life. I see three possible explanations for why Jesus didn’t reveal the counterfactual:

  1. The counterfactual would’ve changed nothing (the master still would’ve received the servants with rejoicing if they’d lost his money) and thus isn’t important.
  2. The counterfactual would’ve changed everything (the master would’ve rebuked the servants if they’d lost his money), which would’ve made a less compelling story.
  3. The counterfactual is an unimportant situation for the allegorical meaning of the story (it’s impossible to “lose” by investing spiritual capital).

I think Option 3 is most likely, but Options 1 and 2 still bear considering. So set aside for a moment the spiritual implications of the allegory. In 30 A.D., a single talent was worth roughly twenty years’ wages for a laborer. Investing even that much money–let alone five times that value!–was clearly risky, especially since the money did not belong to the servants themselves. In the absence of a clear heuristic or algorithm for taking risks with items that do not belong to us, burying the talents is the only “safe” choice. To work up the chutzpah to invest the money–even in a bank–we need some sort of utility theory, and some notion of safeness, and the two need to be connected.

But EUT doesn’t take us very far. We can calculate an expected payoff, but expected value is risk-neutral; a high-risk strategy and a low-risk strategy with the same average payoffs will integrate to the same expected result. We can establish an Ellsberg-type theory around salience to tell us how marginal a difference in risk must be to justify preferencing a “risky” option over a less risky one.
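
A toy illustration with invented numbers: burying two talents returns them with certainty, for an expected value of 1 × 2 = 2 talents; a venture that loses everything half the time and doubles the stake the other half yields 0.5 × 0 + 0.5 × 4 = 2 talents. Expected value alone cannot tell the servant which to prefer.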

In general, though, there are attributes of probability and risk that we have good reason to believe are conceptually orthogonal. That probably sounds crazy–you might argue that a machine that has a 50% chance of killing you and a 50% chance of giving you $1m is riskier than a machine that effects those outcomes 1% of the time each, and does nothing the other 98%. But that’s begging the question, by already taking the span of outcomes as given. Before we can decide how “risky” an outcome is based on its probability, we need to understand what the symmetric span of outcomes looks like. First, how terrible a world can obtain? Only second can we bring the likelihood of such occurrences into the calculus. So I claim there are two parts of risk: one dealing with likelihood of bad outcomes, and one, “symmetric span,” dealing with the possibility that those bad outcomes might obtain. The latter part looks unrelated to probability.

And it is–until we account for statistical entropy. Both are obviously related to it: the lower the average probability of an outcome, the higher the entropy; as entropy increases across a symmetric distribution, risk lowers (in that new, less extreme outcomes pop up on both the good and bad sides). Much like von Neumann eigenfunctions, probability and risk can be treated as independent, but they certainly aren’t unrelated.

In behavioral economics, we tend to see EUT-type preferencing as largely monotone and risk-preferencing as indicating attributes of the consumer, much like preference for one-shot versus gradual resolution of uncertainty. We have to account for these factors in our utility functions themselves. And even a bounded rationality, Kahneman/Tversky-type model does not differentiate between risks taken with things belonging to us and things of which we are merely stewards. Thus I conclude that even we, out of the Bronze Age thousands of years hence, have little machinery with which to advise the wayward servant.

Salience and Infinity: the Montefeltro Metric

Okay, so maybe the original “wayward servant problem” as I posed it isn’t as hard as I’ve made it out to be. The servant doesn’t need all that much counsel in expected utility theory. The master pretty much gets it right: “You knew that I harvest where I have not sown and gather where I have not scattered seed? Well then, you should have put my money on deposit with the bankers, so that when I returned I would have received it back with interest.” In other words, the servant’s prior that the master will eviscerate him if he doesn’t grow the investment is probably high enough to achieve the requisite activation energy for him to invest the money. But there’s a generalization of the wayward servant problem in which the answer really isn’t so simple. The talents in the parable allegorically represent an infinite investment, so let’s drop the allegory.

As my readers surely know, I’m a Catholic, combinatorialist, effective altruist Dantista with a Borgesian bent. Naturally, my utility functions look bizarre. In my paradigm, infinite risks and rewards aren’t a Dutch-book trick–they’re a constant reality. As such, I lean on salience a lot in decision-making. With all else finite, infinite things are salient. With multiple infinite things on the table, ordinal numbers and well-orderings (e.g. infinite money < infinite DALYs; quantifiable infinities < unquantifiable infinities) help somewhat, but not tremendously.

I think a lot about the Large Hadron Collider. It seems ridiculously obvious to me that it shouldn’t exist. And before you start accusing me of being a Luddite (that’s fine, I’ve been called worse), really? How are our priors so vastly disparate that you think any chance of destroying the known universe is worth taking, for the sake of knowledge alone–knowledge that is inapplicable, and will likely remain so for a long time? Why does it matter how small the odds are, when an uncountable penalty is at play? Bear with me; I’m not conservative with risk-taking in general. I think the high-risk, often life-saving surgeries my father performs every day are incredibly worthy. I think AI researchers should proceed with caution, and that the world’s generally been better since the invention of nukes. But note that in all these situations, there is a positive to balance out the danger. Successful surgery? Many DALYs. Aligned superintelligent AI? Potentially infinite DALYs. No more war with another nuclear power? Potentially infinite DALYs. But is the good that emerges from LHC research seriously unquantifiable?

It’s not that I’m a consequentialist, and it’s obviously not that I hate knowledge. There’s a kind of meta-intentionalism at work here, which sounds complicated but is actually probably the least confusing, most Kantian consistency metric I’ve ever presented on this blog. I’m going to call it the Montefeltro metric, after the unfortunate counselor of fraud: You evaluate what your intent is. If the system you’re using aligns with your intent, proceed. If it doesn’t, don’t. In this way, you can see whether the object of your will is in contradiction with the method of your will. For instance, if the world is sucked into a microscopic black hole, there will be no world to study. The ascertainment of knowledge is empirical; without data, there is no studying. This meta-operator doesn’t shut down the counterexamples I posed, because the nonexistence of the world in particular disallows the discovery of knowledge about it. It doesn’t disallow making it more peaceful, or glorifying it, because existence isn’t a predicate. Maybe this is a sketchy argument. My point is, at least the contradiction of will and outcome isn’t as obvious in the latter cases.

The Montefeltro metric has its flaws. It’s weak in the sense that it can only tell you what risks not to take. Also, ideally I’d like an operator that reflects my sensibilities more in its evaluations (e.g. would reject the LHC because it’s unreasonable, not because it’s contradictory). But the nice thing about Montefeltro is that it can tell us not to bet on a symmetrically infinite distribution: even if infinite knowledge can be gleaned, the LHC fails the Montefeltro metric.
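
Here is a crude toy encoding of the metric in Python–my sketch, not a full account of the distinctions above (in particular, it doesn’t capture why the surgery and AI cases pass): reject a method whenever one of its reachable outcomes makes the object of your intent impossible. The outcomes are invented for illustration.

    def montefeltro_check(intent_defeated_by, possible_outcomes):
        """Proceed only if no reachable outcome defeats the intent."""
        if any(intent_defeated_by(outcome) for outcome in possible_outcomes):
            return "do not proceed"
        return "proceed"

    # Intent: gain empirical knowledge about the world.
    # A destroyed world removes the very object of study, defeating the intent.
    lhc_outcomes = ["new physics", "null results", "world destroyed"]
    print(montefeltro_check(lambda o: o == "world destroyed", lhc_outcomes))
    # -> "do not proceed"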

Knowledge and Soteriology

I mentioned Dante, and EUT + infinity $\implies$ Pascal’s Wager, so you’re probably waiting for the part where I talk about Heaven and Hell in the value calculus. I posted earlier about certainty and the human condition, but again, infinity throws EUT out of whack. I’ll start with the following question, which I get asked a lot: if Heaven is an unquantifiable good, and Hell is an unquantifiable bad, how on earth do I and other Christians not spend our entire lives single-mindedly devoted to getting as many people into Heaven and out of threat of Hell as possible?

Think about it this way. For any event you have prior probability P(A obtains). Now consider the prior for your prior–P(my assessment of P(A obtains) is accurate). For instance, my prior that I will eat ramen tonight is .9, but my prior for that prior is only .1, because I literally made up the number .9 while typing, so now I’ll stick with it, but I could have picked almost any other value.

(Now I have of course thought of a question that no one has asked, and that no one will, but that I’ll answer anyway: “But Tessa, doesn’t the existence of “priors of priors” imply that probability is real, which is false?” No, it doesn’t, as long as you sum over classes of events. Just like, in theory, you could repeat the conditions for event A a bunch of times and come up with a good approximation of P(A), you could introspect a bunch of times about your priors for various similar events, run trials, and plug into Bayes to get posteriors. Oh, and also, the existence of free will means that counterfactuals can obtain in the realm of human mental state.)
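
Here’s a minimal sketch of what “running trials on my own priors” could look like, with invented outcomes: group past events by the prior I assigned them, put a uniform Beta(1, 1) meta-prior on my hit rate for that class, and update it as the events resolve.

    # Did the events to which I assigned a prior of roughly 0.9 actually obtain?
    outcomes = [True, True, False, True, True, False, True, True]  # invented

    alpha, beta = 1.0, 1.0  # Beta(1, 1): uniform meta-prior on my hit rate
    for happened in outcomes:
        if happened:
            alpha += 1
        else:
            beta += 1

    print(round(alpha / (alpha + beta), 2))
    # -> 0.7: my "0.9-ish" events land about 70% of the time, so the
    #    meta-prior on "my assessments in this class are accurate" should fall.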

As I get up into higher and higher levels of meta-priors, the one for “Heaven and Hell work the way I expect them to” shrinks faster than my priors for any other events–much faster than my priors for non-soteriological elements of Christian doctrine. I think Hell is empty. What’s my sureness that I’m right? Almost zero. (Of course, it can’t get to zero, because then all my subsequent priors would have to be 1.) Soteriology has the optimal mix of divine unknowability, human unpredictability, and a generous sprinkling of Will that makes it utterly impossible to consider in a Bayesian framework. Knowledge doesn’t make any sense when applied to soteriology–which isn’t true for other theological disciplines. 

There’s a common misconception, especially in the Age of Reason, that knowledge spells doom for religion. Detractors point to the oft-repeated aphorism that our faith should be like that of the children–wrongly, because the reason we seek to emulate children’s faith is not its blindness but its sincerity. We’re told to “know thyself,” so obviously, unless introspection is an exercise in futility, knowledge can permit us to grow in faith. Knowledge, when applied correctly, generally helps theology, just like it helps all other fields. Why, then, am I claiming this isn’t true for the theory of salvation?

Consider Guido da Montefeltro. “No power can the impenitent absolve,/Nor to repent, and will, at once consist,/By contradiction absolute forbid.” One can’t will and repent simultaneously, of course–doing so precludes one from repenting at all. But then what’s different about doing something you know you can repent for later, and counting on that contingency? You’re simultaneously intending to sin and intending to repent. You intend to sin only because you know it is feasible to repent later. But then you can’t truly repent for the sin–you can’t be sorry you did it. One must rue all the consequences of sin in order to repent. But it’s impossible to rue debauchery if you enjoyed it, when you know that it didn’t cause you any harm because you could later repent! We then have the following system:

If you do something for which you know you can later repent, and for which you intend to count on later repenting, you cannot sincerely regret it. Therefore you cannot repent.

But then you can repent if and only if you can’t repent. Is this a theological Russell’s paradox? Do we need a new ZF(+/- C to taste) for Catholic doctrine? Let’s see where we went wrong. Decision theory got us into this mess, so it has to get us out of it. Recall:

You intend to sin only because you know it is feasible to repent later.

I emphasized the wrong words in that sentence. The word that ought to have been emphasized was “know.” Clearly, though, you don’t know you can repent later, because based on the paradox, it turns out you can’t. But the only reason you intend to sin in the first place is because you can later repent. If you know–or at least very much suspect–that you can’t repent later, then you can repent later, because you can absolutely rue the consequences you wrought upon yourself as the architect of your own damnation! But this seems to have only worsened the paradox, because now neither “sins that are redeemable” nor “sins that are irredeemable” is a viable category. Which means sin doesn’t exist. I’m digging myself into a hole to China here.

Consider what the black cherub tells Francis and Guido:

  1. Absolution requires repentance.
  2. One cannot repent and will simultaneously.
  3. Therefore, Boniface’s absolution of Guido was illegitimate.

The fact that one can’t repent and will at once does not preclude further repentance after the fact. Thus this only works because Guido did not further repent for the fraudulent counsel (he thought he had already been absolved). This is an important distinction. Guido didn’t particularly enjoy counseling Boniface to trick the Colonnas. Being a Franciscan, he got pretty much nothing out of it, so he experienced no later gratification that he couldn’t rue because of its consequences for him. If, then, his sin proved not to be “redeemable,” that was only inasmuch as his reasoning (about whether he had repented) was mistaken: he erroneously assumed he’d already repented.

Redeemable isn’t a qualifier that can be applied to sin at all. It’s only one that can be applied to people, post hoc, based on whether they repent. The fact that absolution requires repentance implies that redemption is posterior to repentance, which means that considering sins repentable or not begs the question. The problem, then, isn’t that “sins for which you can repent” and “sins for which you can’t repent” cause contradictions as categories; that was a red herring all along. Of course, there’s no such thing as a sin for which one can’t repent, but not because of this “contradiction.” All sins belong to the category “sins for which one can repent,” but the tricky word “know” is a game-changer.

How can this be true? If all sins are “sins for which you can repent,” and you know that, then aren’t all sins “sins for which you know you can repent”? How can changing the location of the word “know” change a set from enormous to empty?

When will or intent is involved, knowing you can do something changes whether related things are doable (cf. Toxin Puzzle). This isn’t just about uncertainty–even thinking you can do something can have that effect. The Will plus uncertainty plus infinite risk and reward plus a notion of technical predestination that, when taken too far, spells despair, make repentance, absolution, and soteriology the kind of thing I cannot profess to know anything about. My priors that whatever methodologies I’m using to help people attain salvation are the correct ones are constantly changing. Ergo, I devote my time not to yelling at strangers on the street that they can be saved, but to yelling at strangers on the Internet that they can be saved but I don’t know if they’re saved and if I knew that they were saved, that might somehow mean they aren’t saved.

Q: “But shouldn’t you tell people to repent? It can’t possibly hurt!”

I do tell people to repent. What I can’t do is tell people, “If you repent, then you’ll be saved.” Because while from a divine perspective that must make sense, from a human perspective it’s true if and only if it’s false.

Q: “Why don’t you devote every possible free second to trying to get more people to repent, then?”

Obviously, it’s not very effective. I make a good-faith effort to convert people, but I also think it’s dangerous for me to spend too much time trying to convert atheists, because they might convert me.

Q: “Wait, what? Don’t you want to believe what’s true? If an atheist convinced you out of faith, wouldn’t that mean you came around to believe that atheism is true? Why do you want to avoid truth? Are you a Trump voter? You’re a coward! You’ll never understand the enveloping curves of sequences of ratios of 1-periodic functions, you ignorant Catholic! RELIGIOUS PEOPLE LIVE UNDER ROCKS AND AVOID INTELLIGENT DISCUSSION!” *smashes table*

So, some Freudian slips in there, but glad you asked. When I don’t sufficiently explain the answer I gave above, it provokes cries precisely to that effect. First off, I converted from atheism, and I still have some atheist sensibilities, so there are atheist arguments that strike emotional sympathy in me even though I think they’re dumb (e.g. “How can you believe in virgin birth?”). When I feel that way, I worry that I’m not being sufficiently rational in my evaluation of arguments–that arguing with atheists brings out knee-jerk tendencies with negative consequences for my analytic thinking. I’m not living under rocks in the meantime; I’m working to improve on that intellectual flaw of mine.

Secondly, there is a major, important, and tremendously culturally ignored difference between “something that someone is saying sounds believable” and “something that someone is saying is true.” When I was a freshman entering a debate society, I went back and forth between two upperclassmen for a week arguing about populism. I’d talk to the first one, who’d convince me populism is a good ideology, and then the second one would promptly talk me out of it. “While i < k, i = k + 1. While i > k, i = k - 1. End. Print i.”

I don’t want to get too deep into algorithms here (although I do feel like I’m supposed to, because my imaginary opponent questioned my mathematical abilities), but here’s an analogy: For any input I’ve tried, the Collatz process terminates. This boosts my priors that the Collatz conjecture is true, but the returns diminish as I keep trying inputs, because I’m either exhausting small inputs (which doesn’t tell me anything about the conjecture in general) or I’m trying random big inputs (which doesn’t tell me anything about the conjecture in general). It seems obvious that if I keep multiplying odd numbers by 3 and adding 1, I should eventually hit a power of 2, but that’s intuition, not truth, and truth is not inductive. This isn’t to bash inductive reasoning. Empirical evidence is helpful, but not that helpful, and the slope of the graph of prior vs. dataset size decreases as the dataset grows. Abstruse reasoning works much the same way, replacing any pretensions to universal empiricism with the anecdotal, and often justifying premises ex post facto.
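
The empirical check itself is easy to run–here’s a quick sketch, which of course proves nothing:

    def collatz_terminates(n, max_steps=10_000):
        """Return True if n reaches 1 within max_steps Collatz iterations."""
        steps = 0
        while n != 1 and steps < max_steps:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return n == 1

    print(all(collatz_terminates(n) for n in range(1, 100_000)))
    # -> True: evidence that nudges my prior, not a proof of the conjecture.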

This is a big problem, given that:

  1. We live in societies that attempt and have always attempted to systematically apprehend truth by arguing;
  2. Everyone’s a missionary, and fringe theorists abound; and
  3. The more you argue with a fringe theorist, the better he will get at winning arguments.

The fact that large swaths of people believe something is not always a good reason to believe it. Or here’s a better distinction: the fact that large swaths of people believe something that does not have a perceptible effect on their daily lives is not a good reason to believe it. (If an overwhelming majority of employees really likes its boss, that’s evidence that the boss is a good boss. If an overwhelming majority of employees believe dinosaurs are our immediate ancestors, that is not good evidence for anything except maybe that that particular company has been indoctrinating its workers with wrong ideas about evolutionary biology.) Even the fact that large swaths of smart people believe something is not a good reason to believe it. Very intelligent people are prone to different, but no less insidious, fallacies than people in general, myopia being the one that comes to mind first, and intelligence signaling second. Many apocryphal texts are very convincing, which illustrates that there’s a crucial disparity between “compelling” and “correct.”

I can't offer much by way of solution. The same notion–that charismatic argument tends to convince people–is the notion I'm using to try to convince you right now. But I do think that separation from face-time helps somewhat. While I don't argue with atheists all the time, I read a lot of Dawkins, listen to Harris, etc., and engaging with their arguments that way rather than in firsthand discussion, I think, permits me to focus more exclusively on the logic.

The Upshot

Q: “Okay, Tessa, so you went off on a ridiculously long tangent about algorithms, Bayesian priors, and rational argumentation theory. Are you ready to do that thing you do that you think is so clever where it turns out that your weird tangent is somehow related to the problem at hand?”

You got me.

As I expressed above, every empirical experience we have changes our priors ever-so-slightly toward 1 or 0 or, perchance, some limiting value bounded away from 1 and 0. Perhaps human uncertainty can actually help us here. It's impossible to repent for something we know we can repent for. But repenting for something we're only .9999 sure we can repent for neatly avoids the paradox, because not repenting never wins the day under any expected utility model. Any finite positive probability times an infinite payoff is infinite. Our repentance is sincere.
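Written out as a back-of-the-envelope expected-utility comparison (my own formalization, not anything from the catechism: let p be the credence that repentance is possible and efficacious, and c any finite cost of repenting):

\[
\mathbb{E}[\text{repent}] = p\cdot(+\infty) - c = +\infty \quad \text{for any } p > 0,
\qquad
\mathbb{E}[\text{do not repent}] \le \text{some finite value},
\]

so repenting dominates at p = .9999 just as it would at p = 1, and it does so without requiring the kind of certainty that would undo the repentance itself.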

I hope this is an unexpected upshot, because it was unexpected to me: under the lens of the knowledge-and-soteriology paradox, we can justify two things that seem unsavory in theology.

First, the fundamental uncertainty of God’s existence makes sense in light of the paradox I described. I think this examination actually presents a very elegant notion of why the unknowability of God is a good thing. In the human mind, probability 0 times infinite risk is indeterminate. But any small epsilon times infinite risk is infinite. Uncertainty is good insofar as it avoids the logical paradox of intent and repentance. Unknowability is crucial to Pascal’s wager, which is what makes it so compelling in the first place. With total faith in God, there’s no need for any analysis whatsoever. But the lack of perfect knowledge permits both the presentation of elegant mathematical ideas in the realm of theology and the avoidance of complete sureness of the doctrine that repentance is in any way related to redemption.

Second, the model of “I’m only .999 sure that I’ve repented adequately/that my repentance was sincere/that I can repent at all” provides some understanding of why human self-centeredness isn’t altogether terrible. The fear of eternal damnation isn’t a good reason to repent (having strayed from the Good is a much better reason). But the latter isn’t as easily quantified–or rather, as easily categorizable as unquantifiable–as the former. I’ve spoken before of the binary notion of virtue and sin; there isn’t a readily emergent way to order how much we’ve strayed from the Good. Say what you want about Hell, but you can’t say it’s not the salient risk in pretty much any outcome spread it finds itself a part of. There will be weeping and gnashing of teeth…

-TEVM

The Libertarian Theodicy

There is a well-known tautology that emerges when we consider where federal power ends: all powers that are not claimed by the national government are left to and reserved for the subsidiary bodies of government, especially the states. The question of where, exactly, federal power ends has been a matter of great controversy in American history, from the Virginia and Kentucky Resolutions to the Civil War to the Seventeenth Amendment to the civil rights movement.

Tremendous swaths of people have fought bitterly for legal decisions to be made closer to home, whether home was Texas or Massachusetts. Indeed, it's a common misconception that states' rights were exclusively the Southern battle cry during the Civil War era. Northerners used those arguments too, especially in rallying against the Dred Scott decision and the Fugitive Slave Act, which they saw as federal infringements upon their states' prohibitions of slavery. The claim "on our land, you follow our rules" has manifested again in recent times, most visibly surrounding the Obergefell decision. Even today, the legality of state nullification and secession is an open question.

States' rights advocates will tell you that the argument against these extensions of federal power is orthogonal to any metric of virtue. Small-government enthusiasts claim not to oppose the federal enfranchisement of minorities or gay people out of racism or fear or even the belief that these laws are wrong. They won't defend their desire to have the matters relegated to state courts for any ideologically based reason. Rather, they'll tell you they fight such measures because of their nefarious sweeping scope. Libertarians detest the notion that the national government is settling what they see as intra-state affairs on states' behalf.

1865 libertarians wanted the issue of slavery left to the states. 1965 libertarians wanted the same for civil rights. Abortion, capital punishment, gay marriage, marijuana? Advocates and detractors alike of the individual issues have argued for the legality of these actions and products to be determined on a smaller scale.

In a world where righteous indignation has always been the motivator of activism, it seems remarkable that so many have crusaded for the right to be wrong–not, in fact, believing themselves to be correct, but rather seeking the opportunity to determine their own stance; to have moral choices not prescribed unto them.

Why fight for states’ right to decide, even if that means some states won’t legislate the way I want them to? At first blush, the problem looks isomorphic to that of non-natural theodicy, the consideration of why men commit acts of evil in the world. The canonical answer is that free will and perfect goodness are mutually exclusive. God had to pick one.

While the natural world is an object-level entity, the realm of the divine ought to be considered as the meta-level whose principles manifest in the world that we see. In the eyes of the divine, when is free will better than goodness? Clearly, when there is either something good in the exercise of free will in and of itself, or something bad in permanent and immutable goodness. Object-level goodness is not the same as meta-level goodness; choosing the option that maximizes the amount of good is the correct solution, and free will both allows individual choice (a meta-level good) and results in some people who perform acts of virtue (a further instantiation of object-level good). Instilling perfect goodness allows only the latter. It is better for Man and for God's glory that humans have ownership and possession of the acts they perform.

Now is the part where I show my cards and say that there’s an enormous logical fallacy taking place in the application of human theodicy to government. The two are not isomorphic, because states’ rights constitute a case in which ownership can differ greatly from possession, involvement, and agency, all of which need to be present to justify free will.

Can a Texan claim that he has more ownership of local politics than of federal–that a court decision made locally is more his than one made by SCOTUS? Probably. He has a small chance of sitting on a Texas jury, and none of sitting on a federal one. He has some tiny margin of influence on ballot propositions in his home state, again narrowly beating out his influence vector for federal propositions. Any positive epsilon is greater than zero; given infinite time, our Texan will, sooner or later, find himself the determining vote in an issue important to him. So his state’s decisions belong to him more viscerally. So far, so good.

Is his home state more likely to side with him in terms of deciding legal matters? Definitely. Statistically, the population of Texas is likely to look more like him than the voter base of the nation at large, as there is some self-segregation due to shared state values. His ten neighbors are more likely to agree with him politically than ten people selected at random from the United States voter base, by sheer dint of the fact that they chose to live near him, which they probably wouldn’t have done if his politics were anathema to them. This also makes his state’s decisions more his, because they are more likely to resemble his own decisions.

But does the Texan have more involvement in his state's decisions than in national ones–that is, more of a say? Not really. It is more likely that the outcome will turn out in a way he'd find agreeable, but his participation had next to nothing to do with it. On ballot issues, he has narrowly more of a say in state matters, but not so in elections. For practical purposes, the influence of an individual voter within a state (even a small one) is just about as negligible as that of a voter in a federal election.
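To put a rough number on that, here's a toy calculation (my own illustration, using the standard simplification that a vote matters only when it breaks an exact tie in a dead-even race):

```python
from math import pi, sqrt

def pivot_probability(n_voters: int) -> float:
    """Approximate chance of casting the tie-breaking vote in a dead-even
    two-way race of n_voters, via the central-binomial estimate sqrt(2 / (pi * n))."""
    return sqrt(2 / (pi * n_voters))

print(pivot_probability(8_000_000))    # state-sized electorate:    ~2.8e-4
print(pivot_probability(130_000_000))  # national-sized electorate: ~7.0e-5
```

Shrinking the electorate by a factor of sixteen buys roughly a factor of four on an already tiny number, and the moment the race isn't a perfect toss-up, both probabilities collapse toward zero; hence "narrowly more of a say," but nothing resembling involvement.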

Does the Texan have more agency in his state than in the nation? No–that one’s just ridiculous, because agency is concerned more with actions than results. Not that the results of his political activities are really that different in the discrete spheres. He can write an angry letter to his Congressman, to his governor, or to the President. None of those people is significantly more likely to read the letter than the others. (One of his congressman’s staffers will probably read it, but won’t be able to do anything about it. So in each case, he took the same actions and got the same null results.)

So the Texan has more ownership of his state’s politics, but no more involvement or agency in their determination. The free will argument doesn’t hold up, because even if God predetermines our actions, they still belong to us, and the culpability is ours. Calvin, the father of determinist theology, argued in Institutes of the Christian Religion that in the predeterminist worldview, Man retains full ownership of his actions. For Man to have free will, he needs something greater than ownership–involvement and agency, which the Texan gets no more out of Texas than out of the U.S.A.

Thus I conclude that he wants the issue at hand to be left to the states not out of a want of self-sovereignty but because he concludes, not incorrectly, that the population of his state is more likely to vote his way. That seems pretty likely when you realize that none of the approximately three Green Party members in Texas wants the issue of abortion determined closer to home.

Is there a saving grace for the states’ rights argument? Yes! The empirically validated belief that communities themselves make the rules that work best for them. But this is clearly an argument from epistemic modesty, rather than from the lofty rhetoric of freedom as its own good. Groups of people should be self-governing because they are more likely to do a good job than their detached overseers, not because their freedom is viscerally important in the abstract.

This understanding, radically different in its first principles, doesn't change political results all that much. It does, however, alter the approach with which we view the negative externalities I mentioned above: that some states won't legislate the way we want them to. For issues we had treated as one-size-fits-all, we no longer need gaze upon our fellowmen in different territories with sympathy at the unfortunate political results their freedom has prompted. We might instead assume their model works better for them, and wonder whether it would for us, too.

Error, Noise, and Data: The Allegory of the Lightning Strike

The Allegory

Adam and Eve lived with their children Cain and Abel in an otherwise empty world. Cain and Abel did not go hungry, for their parents farmed the land; they were not bored, for they had each other for amusement. But they were uncertain, for, having been expelled from God’s presence, they had no heavenly voice to tell them what things they did were good or bad.

“Hey, Cain!” Abel cried. “I want to roll this marble, but I fear that if I do so, it will roll under the bed, and Mom will be annoyed.”

Cain responded, “I wonder if God could tell us whether you should roll the marble or not. Maybe he’d send a sign–a lightning strike, or a sudden gust of wind.”

“Good point.” Abel took a deep breath. Then he bellowed, “Heavenly Father! If the marble will roll under the bed, then send a lightning strike!”

No lightning strike emerged from the cerulean blue sky. It was a pleasant spring midday, after all. So Abel went ahead and rolled the marble, and it rolled under the bed.

“Huh,” said Cain. “We know God exists, is benevolent, and is listening. But he did not warn us that the marble would roll under the bed. Therefore, I conclude that the consequences of the marble’s rolling weren’t important enough to get God’s attention.”

It became something of a game between the brothers: whenever a small decision weighed on their minds, they’d ask God to send a lightning strike to elucidate their thinking. No such strikes occurred, and so they went ahead with their actions, and got the expected outcomes most of the time.

As they grew older, Cain and Abel found themselves facing more challenging and important decisions. What should they get Mother and Father for their joint birthday celebration? How much corn should they plant this year? Which of the suddenly emergent womenfolk should they marry? And it was with regard to this last question that a strange event occurred.

“Heavenly Father, send a lightning strike if you wish me to marry Wilhemina!” cried Cain, and to his immense surprise, lightning came screaming from the blue skies.

“Wow!” Abel yelled. “Heavenly Father, send me a lightning strike if you wish me to marry Philomena!” No lightning. Cain married, and Abel remained a bachelor.

Later that year, they were playing with marbles when Abel wondered if he might roll a marble under the bed. “Heavenly Father,” he cried, “send down lightning if this marble will roll under the bed!”

Lightning.

“Wait, Abel,” cautioned Cain as Abel prepared to put his marble away. “God only responds to important questions, remember? This lightning was just a fluke.”

The Interpretation

And so Abel rolled the marble. The story ends there, and it doesn't matter whether the marble rolled under the bed. Cain and Abel grew so used to the absence of lightning caterwauling down in their (comparatively many) small decisions that they came to treat it as a signal reserved for their (comparatively few) large decisions. Rendering this in probabilistic terms, we can establish the following: first, lightning is very unlikely to occur from clear skies; second, if it does occur, it's more likely that God sent it than that it was random.

  1. P(no lightning strike) >> P(lightning strike)
  2. P(God sent it|lightning strike) >> P(It was random|lightning strike)
  3. P(God sends lightning strike|important question) > P(God sends lightning strike|unimportant question)

Cain and Abel assumed this third, non-verifiable relationship at the get-go. Ultimately, out of the myriad “unimportant” questions they asked God, only one was ever answered by a lightning strike (Abel’s last marble roll). Let’s say Cain and Abel asked 1,000 unimportant questions. Out of the maybe ten “important” questions they asked God, one was also answered by a lightning strike. It’s obvious that 10% > 0.1%, so the brothers’ initial assumption that God was considerably more likely to send a lightning strike in response to an important question seems empirically verified. Nonetheless, in concluding that the final lightning strike was accidental, they make a logical error.

Cain and Abel's assumption–that the scarcity of lightning strikes for unimportant questions means the final lightning strike was a fluke–is unfounded. Their reasoning is implicitly a reductio: if God didn't discriminate between important and unimportant questions, then the observed pattern of strikes, exactly one in each category despite the categories' vastly different sizes, would be incredibly unlikely.

For instance, assume that God does not discriminate between important and unimportant queries, and use the unimportant-question set to approximate the likelihood of a response (roughly one in a thousand per question). Then the odds that one of the first ten important questions would be answered by a lightning strike are approximately 1%. This does not, however, mean that the lightning strike was a fluke; it merely means that we are faced with two unlikely-looking explanations, one of which has to be true:

  1. God responded to a question within the first ten, which is very unusual.
  2. The lightning strike was random.

How likely is a random lightning strike? If P(It was random|lightning strike) < 1%, then Option 1 is favorable.

The margin at which we decide it is no longer more likely that God engineered the lightning strike varies based on which set we use as the probability determiner. If we instead use the important-question set (which makes sense, because a lightning strike answered a question out of that set first), then we can assume that the odds are about 1/10, so assuming God is equally willing to answer unimportant questions, the probability of his not doing so for 1000 questions is a whopping 2E-44%. But the operative question, still, is not How unlikely is that? but Is that less likely than the alternative (a random lightning strike occurrence)?
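For anyone who wants to check those two figures, here's a quick script (my own arithmetic, using the counts assumed above: roughly 1,000 unimportant questions, ten important ones, one strike answered in each):

```python
# Response rate estimated from the unimportant set: ~1 in 1,000.
p_low = 1 / 1000
print(1 - (1 - p_low) ** 10)   # chance of a hit in the first 10 important questions: ~0.00995, i.e. ~1%

# Response rate estimated from the important set instead: ~1 in 10.
p_high = 1 / 10
print((1 - p_high) ** 1000)    # chance of 1,000 straight silences: ~1.7e-46, on the order of 2E-44 percent
```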

Importantly, since both possible explanations of the final lightning strike (that it was random, or that God intervened in a way the brothers' priors deem very unlikely) are fairly improbable in this framework, the situation itself–a lightning strike occurring once in the large data set and once in the small–is highly improbable. Given that it has obtained, though, the more likely explanation is probably the correct one.

Noise

Something about these calculations might feel off to you, and if it does, you're right. The problem with the above reasoning is that when considering the overall probabilities of God-sent lightning strikes via the totality of the data, I didn't factor in the possibility that one of the two lightning strikes Cain and Abel experienced might have been a random fluke.

That probability is small, but given that we have no way of knowing after the fact whether any individual lightning strike was God-sent or random (only that both have a greater-than-zero probability of occurring), it must be considered.

Random lightning strikes can here be considered a stand-in for noise in data sets, in cases where that noise could significantly confound our prior probabilities. We could liken this to an arbitrarily long Boolean string in which a 0 in the kth position is a priori highly unlikely; the string then passes through some noisy process after which the kth bit may, with some probability, have been flipped. The question that this poses to statisticians, which is more intuitively apparent in the lightning strike example, is the following:

If we attain our prior probabilities through an empirical analysis of the data that we have, knowing that noise can be a factor, and then find that our distributions are odd, in what cases does this make it likely that random noise altered some of the strings in the original set?

If noise is known to be more or less probable than the result we are looking for, we can conclude accordingly; the problem emerges in cases where we cannot see the original data set, and thus have no way of knowing what our priors should be that noise occurred. In the lightning strike allegory, there is a solution to this problem that could perhaps be generalized.

The Solution

Cain and Abel should repeat their experiment arbitrarily many times–let’s say, until they have achieved sufficiently many lightning strikes that the number of strikes is suitable for data analysis. Perhaps after one hundred years of asking questions, Cain and Abel receive one hundred lightning strikes. In this case, there is an obvious mechanism through which the brothers can discern whether lightning strikes are God-sent: whether the lightning strikes make correct truth claims. (E.g. if Abel asked for a lightning strike if the marble would roll under the bed, a lightning strike occurred, and the marble did not roll under the bed, then that lightning strike was not divine in origin.) The following claims seem to be reasonable:

  1. P(lightning strike is accurate|lightning strike was God-sent) = 1
  2. P(lightning strike is accurate|lightning strike was random) = approximately .5 (barring cases in which one of the outcomes was very unlikely, in which case the brothers wouldn't have needed divine advice anyway).

Now we have a set of probabilities through which we can come to ascertain the approximate proportion of random lightning strikes to divine lightning strikes.
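Concretely, the estimate falls out of a one-line calculation (my own formalization: write d for the fraction of strikes that are God-sent and a for the observed fraction of strikes whose truth claims check out):

\[
a = d\cdot 1 + (1-d)\cdot\tfrac{1}{2} \quad\Longrightarrow\quad d = 2a - 1.
\]

If, say, 80 of the brothers' 100 lightning strikes turn out to have been accurate, they should estimate that roughly 60 were divine and 40 were noise.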

The same can be said of noise in data sets. For cryptography, there is an obvious, immediate parallel to the case I describe: does the resultant string make sense? If not, then it is much more likely to have been noise-altered. By ascribing sufficiently accurate prior probabilities to these cases, we can discern the relative likelihood of noise to that of the genuine obtainment of unlikely outcomes, and then can proceed with the analysis described in “Interpretation.”

-TC

 

 

Why Fermi’s paradox is neither

I will attempt here to demonstrate why the so-called Fermi paradox is not a paradox. The blog post title is a little flippant (the idea isn’t really Fermi’s; Tsiolkovsky was the first known asker, but that’s neither here nor there).

The idea of the Fermi paradox is as follows:

  1. There is a high probability that there are many (perhaps infinitely many) extraterrestrial civilizations.
  2. Given 1, there is a high probability that many extraterrestrial civilizations have achieved space colonization, by or before 10^8 years ago.
  3. A generous time estimate for the process of traversing nearby galaxies is 10^8 years.
  4. If extraterrestrial civilizations had colonized space, we would know it.
  5. So where are they? Why is there no evidence that any extraterrestrial civilizations have done so, in the form of their observable presence?

There is a lot of room for fallacy here. Premise 4 comes out of nowhere and isn't necessarily true, and discarding the conclusion doesn't necessarily require one to disagree with Premise 1.

But we will here focus on the derivation of 2 from 1. To explain this, I’ll posit a thought experiment. Imagine the following:

A function will be held that you have vague reason to believe is very important or interesting–suffice it to say, you're relatively sure it's a good idea for you to attend, although attendance isn't mandatory. You don't know much about what the event will be. Entirely through your own logic, you've reasoned that a famous person is likely to stop by, or that a new technology will be unveiled there, or that you'll get to witness and participate in an important political interaction firsthand, an anecdote that will later be written about in many newspapers. Or maybe you just think it'll be a great deal of fun.

It is not easy for you to go to this event. It’s pricey and located in a different country. Also, you know the day on which the function will occur, but not the time–it’s possible you could arrive very early or very late, although the event is likely to continue late into the night. You’ve put the necessary steps in place to cover the costs of attendance and transportation. The function will occur on a high floor of a skyscraper, a sought-after event venue that will be entirely booked for the occasion. This is problematic for you, because you have gripping claustrophobia, and do not use elevators. Thus, you’ll have to climb up many flights of stairs. You also have weak knees, so this process will take considerable time and effort.

You can’t see the top floor of the skyscraper from the ground, but you have good reason to believe it is well-lit and windowed. When you get there, you will be able to see everyone who is in the room with you, and some of the people who are on the higher floors of other nearby hotels.

After landing in the city, you rent a car and drive toward the event. As you pass through a tollbooth, you tell the operator that you are attending the event, and he argues with you about why you shouldn’t, saying that you’re going far out of your way for a development that will inevitably cause bad consequences for your life. The interaction takes fifteen minutes. Finally, you get to the venue, and while standing in the parking garage, you realize something odd: no other cars or people are in the garage.

You have not communicated with other attendees in advance of this event and have no idea who’s coming and if they’re driving themselves. Maybe others are using taxis, or bicycles, or trains, or subways. The only evidence you have that if there are people, then there should be cars is that you drove yourself.

You start climbing up the many stairs, and you see no one. As you approach this event–at which you have not yet arrived–and see not a single fellow person, is it rational to wonder, "Where is everybody?" Furthermore, is it rational to conclude that because you haven't seen any people or cars, no one else is going to show up?

Of course not. Anyone who didn't drive himself to the event wouldn't have needed to stop by the garage, and certainly few, if any, people are taking the stairs. You know it's possible, even likely, that you're very early or late, which would explain not seeing people as you move from the ground floor to the top floor. As you noted, the occupants of the top floor can only be seen from the top floor itself.

This experiment is an allegory for the human quest to expand indefinitely. We have vague reasons to justify this desire, just as you have some idea of why it will be a good use of your time to attend the event, but our justifications are more innate than rational. It's enormously difficult to accomplish the task: technological progress itself is difficult and costly, and at every turn we put up with our own or others' desire to just let things be. Similarly, you had to deal with paying, arranging your travel, the rhetoric of the annoying tollbooth operator, and your claustrophobia and weak knees. Furthermore, there's an insufficient-knowledge problem: we have no idea what this end looks like if properly achieved. Just so, you can't see the top floor, and you don't know when the event starts. The fact that you can't see the occupants of the top floor suggests that perhaps the most sustainable form of expansion is not observable, e.g. living digitally.

There are of course meaningful differences between the two situations, and I'll handle them in turn. First, you have fair reason to believe that your being early or late or in the stairwell explains not seeing anyone even if there are plenty of other people, which probably makes it too easy for you to write off the emptiness as an observer-selection effect. It's reasonable to expect the other attendees not to have claustrophobia. So let's change the situation so that you don't, and you can take the elevator. Let's say, too, that you have high prior probabilities that the other attendees–if they exist–don't know when the event starts either. You arrive at the venue as before, and you get in the elevator, which is empty. As you pass each floor, you listen for the sound of conversation, or the sound of the button as people just arriving call for the elevator. You hear nothing, nothing, nothing. You're now passing floor 4, and you haven't seen or heard anyone. Is it now rational to think no one else is coming?

No, although it might be more so than it was in the earlier example. The fact that the elevator was empty when you got there is of note, but that fact also is the reason that you see nobody now, because you’re still in an elevator that will remain empty until you get to your destination unless some of the attendees can walk through walls. Not hearing chatter isn’t necessarily a good reason to believe the venue is empty; for all you know, the event might be a silent movie screening. While certain occurrences (the silence, the emptiness) may be evidence that no one else is coming, they might just as well be indicators of the nature of the event itself.

If you get to the top floor and find it empty, then of course nobody was there, and the alternate explanations of not seeing or hearing anyone until then were useful rather than misleading. But let’s say you get to the top floor and find it to be a rather uninteresting silent movie screening. Then the real question that falls out of my allegory, and the question most important to the deconstruction of the Fermi paradox, is this one: given that you didn’t know pretty much anything about this event, why did you want to go in the first place–and why would anyone else?

I’ll modify my thought experiment to address this question. You have survived a global catastrophe, and to the best of your knowledge, you are the only survivor. You’re right by the mountain K2, to the top of which you can climb to better survey the surrounding area. For the sake of the metaphor, let’s say the physical attributes of K2 and its air pressure are such that if you yell from its peak, you can be heard for hundreds of miles. You climb the mountain so that you can be heard, although it takes you considerable time, and along the way you see nobody, and when you get there you see nobody. The whole time, your ears are primed for the noise of someone yelling from the mountain, and you hear nothing. Is everyone, then, necessarily dead?

No! First off, you can’t see from that high up if anyone is scavenging through the remnants of the destroyed buildings, and you can’t see anyone who’s indoors (which, I suspect, would be the premier place of refuge for intelligent apocalypse survivors, although that may not be what the movies would have us believe). So the only evidence you’ve gathered is that nobody who’s loudly careening across the plains, making their presence obvious to the AI and predators, and their skin primed for infection by the zombies and supervirus, has survived. Congratulations! You’ve rederived evolution.

“But wait!” you cry. “If other survivors exist, then they want to see if others are alive, because they want to find food, protection, reproductive partners, and company! So they too should’ve climbed to the top of this mountain to survey the area, and I should hear them yelling, or if they’re still climbing, I should see them on the mountain face now!”

It is this facet of Fermi-paradox reasoning against which I take up arms. From first principles like "food," "protection," "company," "reproduction," and "consolidation and expansion into a greater civilization," we do not get to the obvious outcome of "climb to the top of K2." It's possible the other survivors don't know about this unique acoustic property of K2. Or it's possible they just don't want to waste their existing resources on this wild goose chase. When you plugged your desires into your inner calculator, you decided the rational way to maximize utility was climbing K2, but I think that was dumb. That should be concerning to collectivists, because you and I are both people; we share almost all of our DNA sequences.

Okay, sure, maybe nobody in that situation would actually climb the mountain, so I’ll provide an alternate anecdotal claim: If two friends lose each other in a crowd, they often don’t start looking for each other in the same place, even though their ends (in this case, “find my friend”) are exactly the same. And while greater discourse and broadening the horizons of communication would seem to have restricted the space of obvious choices, we find that the opposite is true: intra-American disparity in political beliefs is higher now than it’s ever been in a single population. It seems to be the case that with a more universal notion of what our ends are comes a less universal notion of how to achieve them. If you and I can’t agree on a first-order action item that best instantiates our values, how can we expect biological entities that look nothing like us to agree with the outcomes we societally derive–if they even have the same values in the first place?

With that said, this is the problem with the Fermi paradox: It is reasonable to expect extraterrestrial biological agents to have an interest in survival, expansion, and self-improvement. It is unreasonable to expect them to manifest those interests in the way we do.

We have good reason to believe that extraterrestrial life would also have developed through evolutionary processes; therefore, extraterrestrial life forms are probably invested in pursuing, avoiding, or maintaining things like heat, pressure, certain molecular compounds, food, shelter, and so forth. From here, we don't right away get to "expand"–this doesn't fall straight out of "reproduce." Yes, small population size makes a species more prone to extinction, but there is an optimal population size, and it's less than infinity. Unbounded growth tends to destroy the very environments a species needs for the survival whose pursuit would motivate it to expand in the first place. Yes, one solution to this is "consume resources further away in the universe," but it isn't the obvious solution, just as climbing K2 wasn't the obvious solution above. At any rate, extraterrestrial intelligent beings may not have arisen out of the kind of carbon-hydrogen-oxygen-and-nitrogen-based cellular "life" with which we are familiar. But even if we assume that some (smaller) number of biological populations in the cosmos have this generalized "expand" principle, that gives us zero grounds to claim that even one other group would seek to instantiate it in the same way we are now considering.

Objection: “But Tessa, if there are infinitely many extraterrestrial civilizations, then there are infinitely many that are exactly like us, and therefore would colonize space! And some of those must have occurred by however many years ago it would’ve taken for them to now be visible!”

Response: Okay, okay. My diatribe above is only true in a finite universe. Framing the paradox in an infinite universe means that there necessarily will be civilizations interested in space colonization, but it also means that they are overwhelmingly likely to be significantly temporally or spatially displaced from us. If they lie outside our observable light cone, we'll never interact with them. Period. And the amount of "stuff" we could ever reach shrinks every moment, as accelerating expansion carries galaxies past our cosmic event horizon.

In general, I fear people have a difficult time rendering distinct the concepts of "very large" and "infinite." That's dangerous, because very large things and infinite things work quite differently, as is apparent to anyone who's ever studied Knuth up-arrow notation or ordinal numbers. If you put an infinite number of monkeys in a room with infinite typewriters for infinite time, then one will eventually type War and Peace. This is true of even one monkey and one typewriter, or even of a typewriter by itself, given slight variations in atmospheric pressure. The point is that the operative factor in the saying is infinite time. The saying isn't true of a billion monkeys with a billion typewriters for a billion years, because the probability of typing War and Peace in any single monkey-year is so fantastically small that even 10^18 monkey-years leaves the expected number of successes at essentially zero. The concept of "everything in the probability space happens given infinite time" is useless in most situations, because the odds of a monkey typing War and Peace, or of a black hole emitting a complete and functioning human brain (cf. Bostrom, who discusses this in Anthropic Bias in a different context), are so low that the probability of their happening within whatever time or space bound we are operating under is functionally zero.
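To make the large-versus-infinite point concrete (my own back-of-the-envelope framing, not a claim about actual monkeys): if each monkey-year produces War and Peace with probability p, then N monkeys typing for T years succeed at least once with probability

\[
1 - (1-p)^{NT} \approx 1 - e^{-pNT},
\]

which goes to 1 as T goes to infinity for any fixed p > 0, but stays indistinguishable from 0 whenever pNT is much less than 1. For a random typist, p is on the order of one in 26 raised to the number of characters in the novel (millions of them), next to which the 10^18 monkey-years of the billion-monkeys-for-a-billion-years setup doesn't begin to move the needle.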

Objection: “But even if we don’t have access to infinitely many alien species, there must still be a large enough number of them accessible to us that at least one would have achieved observable space colonization! The priors can’t be that low!”

Response: Actually, they can. In a finite universe, it is very possible that the prior probability that “some other species X will have the desire to colonize space and the means to instantiate it” is low enough that this won’t happen in our lifetime even if there are many other intelligent beings in our observable light cone.

Objection: “But Tessa, intelligent species have a tendency to become outward-looking!”

Response: Says who? We have one reference point. And even if you're willing to extrapolate wildly from this one anecdotal case, our particular human brand of intelligence can at best maybe be called "outward-looking." Do we think about space colonization? Sure. Do we wish the good of the other for the sake of the other? Eh, perhaps. But we have spent eons trying to characterize the "human condition," trying to catalogue the mechanisms of the human body, roiling at the mercy of nationalist and isolationist ideologies. Anyone who thinks humans are uncontroversially outward-looking should crack a book, because it's all there writ large across the pages of history: large groups of humans are the most myopic structures known to the Earth.

And even if we arbitrarily draw a line through this single and controversial reference point, other civilizations could cognize “space exploration” like “climbing to the top of K2 in a post-apocalyptic wasteland”–technically in accordance with their ends, but not something on which they’d waste breath.

Thus there is a solution to the Fermi paradox that is neither the depressing option that we are alone in the universe, nor the depressing option that space colonization is impossible for some Great-Filter-related reason (e.g. intelligent life self-destructs, periodic natural extinctions). And if I'm all wrong, and some other civilization out there is musing over its version of the Fermi paradox, then let's take to the stars so we can find it! I'd posit that its constituents would be happy to meet us: any civilization with a notion of loneliness likely has a notion of hospitality. That's why I'm excited about the implications of my argument. It's likely that other civilizations don't look enough like us to have "space colonization" as a proposed project, but if they do, perhaps they share other ends with us and aren't just searching for resources. If we ever find ourselves at the mercy of a Type IV civilization that arrives from the skies, I think we have good reason to believe it will be friendly–because it will likely resemble us, in the important ways at least.

-TEVM

(P.S. Regarding my last sentence: yes, given human militaristic history, perhaps “friendly” and “resembles us” are not codependent concepts. If there’s interest, I’ll write about this topic as it relates to space colonization later.)

Bostrom’s Paperclip Problem: Certainty and Artificial Intelligence

"If the AI is a sensible Bayesian agent, it would never assign exactly zero probability to the hypothesis that it has not yet achieved its goal." -Bostrom, Superintelligence: Paths, Dangers, Strategies

Philosopher Nick Bostrom has posited the following caveat in our machine design: a superintelligent AI given the final goal of maximizing paperclip production would seek to turn everything in the observable universe into paperclips. It would create unlimited amounts of computronium to help it determine how to achieve this goal, and cover any available surface with paperclip-producing technology. All humans would incidentally be either converted into paperclips or left unable to survive due to the AI's commandeering of their biological necessities for the purpose of making paperclips.

The paperclip experiment is an enlightening toy problem. Bostrom has framed a non-obvious conjecture about the behavior of nonhuman entities in an intuitive, familiar setting. The harder implicit claim here is that when an agent includes free parameters in its decision calculus, these unbounded variables will be set to extreme values if that makes it easier to maximize the other variables. From the perspective of humans, this is not obvious, because we fail to take into account the way that we subconsciously bound even the parameters we consider free in our calculus. A superintelligent human paperclip-maker with human values (maybe a self-improving, high-speed, faithful whole brain emulation) probably wouldn’t turn the entire universe into paperclips–not only because he knows that then the paperclips would never be used. The AI too is aware of the uselessness of the paperclips once humans are all dead. But this is of no import to the AI, because it is single-minded in pursuance of its final goals.

The better reason for the superintelligent human not to make paperclips out of all materials at his disposal is that he does not consider the amount of resources he is permitted to consume in doing so to be unlimited. Thus, his limit for "percentage of the observable universe that should be converted into paperclips" is set far lower than the default bound. Humans often say "I'm going to do X, no matter what," but they never mean it. There are no free parameters in human value calculus. Part of this is related to the fungibility of human resources: consider an individual who wants to maximize the number of stores from which she buys a carton of milk. The longer her road trip, the more she pays for gas, and that money then can't be used for buying milk. The kinds of goals humans have also weigh into this phenomenon: why on earth would anyone pursue this geographically-disperse-milk end, except for some sort of competition, the victory of which would constitute a finite reward? That framing implies its own parametric settings, e.g. the total costs of gas plus milk plus whatever else cannot together exceed the competition prize, or must fall short of it by some lambda-value. But the more interesting aspects of human value calculus have to do with the preferencing of the self and others as ends, which de facto prevents setting parameters arbitrarily high in ways that would have negative externalities for mankind. In the sense of a priori parameters, this reasoning seems circular, but humans don't have any prior parameters in the true sense. We learn rationality and virtue simultaneously.

So the superintelligent AI makes as many paperclips as it can with no heed paid to the destruction of the universe. Critics of this first iteration of the paperclip problem respond, “Easy! Let’s tell it to make exactly one million paperclips–no more, no less–and then turn off.”

This is where Bostrom’s true genius manifests, in the form of the quote excerpted above. The new-and-improved AI in the second iteration of the paperclip problem will make exactly one million paperclips, and then might direct a ridiculous quantity of computronium to the purpose of doing calculations that will heighten its credence that it really exists, that the paperclips really exist, and that it has therefore achieved its goal. Perhaps it will build factories for the microscopic inspection of each paperclip, or superprocessors to read and evaluate Hume and Berkeley and Avicenna and Descartes. But one thing it likely won’t do is shut down–because, as I mentioned before, it is single-minded in pursuit of its goal. Here, single-mindedness looks like achieving certainty that it has in fact completed the goal, with probability 1.

This kind of reasoning is alien to us. Because all human value calculus works exclusively with bounded variables, certainty is not a human first principle. We do the best we can, given the circumstances, because our resources are not unlimited in any direction, and therefore we cannot keep conducting evidence-gathering whose returns diminish as the probability approaches 1. A superintelligent human paperclip-maker might have some low Bayesian prior that she is dreaming, or hallucinating, or in a simulation, but this fact does not weigh into her paperclip-constructing calculus substantially, because the amount of time and energy she deems necessary to come to a definitive conclusion on whether 1 million paperclips have certainly been produced–which is probably asymptotically infinite–exceeds the amount she is willing to contribute to this end.

Bostrom is acutely aware of the fact that the certainty dilemma might not adversely affect AI alignment at all. Superintelligent machines will probably throw out the Bayesian framework and find a better one. Perhaps there are lines of reasoning far beyond our human capacity that would allow an advanced agent to come to certainty of its own and the paperclips' physical existence in finite time. For aught we know, it's perfectly possible to prove a system works from within the system, even though Gödel and Tarski have shown this is impossible within some of our systems. Maybe there are meta-principles other than the deductive, inductive, or abductive ones–systems whose conclusions stand outside the system. On the other hand, maybe the opposite is true: higher intelligences will only be more torn by this problem than our finest logicians have been. They'll likely conceive of scenarios we couldn't dream up–ways in which the paperclips could be real when the AI isn't, for example–that will then weigh into their value calculus for one side or the other.

Nonetheless, it would be reasonable to expect AI to behave roughly as Bostrom describes, and this behavior could have disastrous consequences, so we should investigate how to avoid intelligent machines' characterization of the universe at large as a bunch of free variables. (See? I, a human thinker, don't use certainty at all–just Bayesian priors and expected-utility theory, both of which have been specifically built to avoid certainty-paralysis.) Here, the difficult thing for human thinkers to parse is the relationship between the ends "completion of goal" and "attainment of certainty that the goal is complete." Let's say some AI was instructed to make a million paperclips and then [immediately] turn off, and it spent 30 minutes making the paperclips and 12 years proving definitively that it had in fact made a million paperclips. In retrospect, it can conclude that it completed its goal 12 years ago–so did it violate the goal of turning off right afterwards? How should we view the goal of "immediately" turning off in light of a) making a million paperclips and b) achieving certainty that it had done so?

If we better understood what our values are, we could attempt to instantiate some metric that would give intelligent machines goals in terms of finite quantities of resources. For instance, consider an AI told to "make exactly 1 million paperclips, but spend no more than 1 thousand utils doing so," for some arbitrary util metric involving space and time and matter and human happiness and flourishing. But if the AI had good reason for high Bayesian priors that it wasn't real, then subjecting it to this metric could lead it to spend the thousand utils researching existential philosophy instead of making paperclips at all. Or perhaps the juxtaposition of this NP-hard certainty question with finite resources would lead the AI to first engage, at the meta-level, in a cost-benefit analysis, the completion of which would use up some of its resources and still might lead it to conclude it should research its existence rather than make paperclips. In these cases, the AI would be useless to us, which is better than disastrous, but defeats the purpose of creating intelligent machines in the first place. Alternatively, we could program our seed AI to uphold as a final goal "don't weigh [Bayesian] certainty in your calculations of whether you've completed your other final goals," or "accept with probability 1 that you and the things you produce are real." Or we could give it our unproven meta-principles: "Take as proven the inductive method on sets larger than [arbitrary] n." But if the superintelligent machine could prove the verity of existence, then we are wasting our time using it for such a meaningless task. We need an approach that diversifies: the AI should perform a menial task when that's what we would want it to do if we were smarter, and research existential philosophy when we'd want it to do that instead. And anyway, the inductive method is not always right–what will happen when the AI inevitably discovers an exception, and realizes it is pursuing its other final goals using unsound logic?

I think the proper approach is to program the AI to preference human-motivating probability metrics over absolute certainty in its goal completion criteria unless it is epsilon-sure–defined in terms of a human metric–that it can do better in polynomial time, for some high epsilon. Of course, we’d have to formulate this such that the AI doesn’t then devote non-polynomial time to deciding whether it’s epsilon-sure, and such that the programming doesn’t preclude the AI from developing better frameworks than Bayesianism if it can do so quickly. This approach will require us to energetically revisit our own uses of rapid cost-benefit analysis for CS problems. Graph-theoretic problems are generally easy to categorize at a glance into P, NP, and NP-hard, so this might be a good place to look.
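To give the flavor of what I mean, here's a toy Python sketch of such a stopping rule (entirely my own illustration; every name and threshold is invented, and a real agent would obviously not reduce to a ten-line function):

```python
def goal_satisfied(credence: float,
                   expected_gain: float,      # credence expected from one more round of checking
                   checking_cost: float,      # resources that round would consume
                   budget_remaining: float,
                   threshold: float = 0.999) -> bool:
    """Declare the goal complete once credence clears a human-style sufficiency
    threshold, unless further verification is both cheap and genuinely informative."""
    worth_checking = expected_gain > 1e-3 and checking_cost <= budget_remaining
    return credence >= threshold and not worth_checking

# The agent stops here rather than spending twelve years chasing probability 1:
print(goal_satisfied(credence=0.9995, expected_gain=1e-4,
                     checking_cost=5.0, budget_remaining=100.0))  # True
```

The point of the sketch is only the shape of the rule: the certainty term is bounded by a human-chosen epsilon, and the verification loop is gated by a finite resource budget rather than by probability 1.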

Earlier, I sidestepped the question of how a superintelligent human would handle certainty, assuming they'd operate similarly to how we do. We can see very clearly why an intelligent machine would pursue certainty as a corollary to its end goal, but faithful WBEs are trickier. Consider a superintelligent WBE instructed to make exactly 1 million paperclips (and who values this as an end goal), but who has no free parameters in its decision calculus and retains inherent human values, whatever those are. In other words, in what way will an increase in intelligence affect human certainty preferences? I'd be inclined to say it won't change much, but, as I'm human, I must admit I'm not sure.

-TEVM