
An Evaluation of the “Women in STEM” Public Interest Program

There are going to be two parts of this post. In the first, I will put on my apologist hat and give a very sympathetic defense of “It is difficult to be a woman in STEM,” relying largely on my own experience (and meant to convince people with no experience being women in male-dominated fields). In the second, I will explain why we need to go about having a public interest in women’s being in STEM very differently than we are currently doing. In that section, I’ll open by nitpicking the definitions. I am going to claim that regarding the lack of women in STEM as a problem is self-undermining: it relies on, or at least implies, a principle that contradicts the thrust of its own message. This requires me at least to suspend judgment on whether the proportion of women in STEM is a problem; I’ll go further and claim that it is not a problem. There is a problem, which is related (but not identical) to “not enough women in STEM.”

Why We Need More Women in STEM and Other Male-Dominated Fields of Academic Research

What I describe in the title–the “women in STEM” public interest program–can be more generally paraphrased as follows: There is a significant public interest in increasing the proportion of women in historically male-dominated research fields. Many of these are STEM, but some aren’t (e.g. philosophy, war studies); I’ll focus on STEM here. There is a generally accepted algorithm for how to increase this insufficient proportion of women:

  1. Support the women already in these programs to prevent high woman attrition.
  2. Encourage women who have interests/abilities in other fields to try male-dominated fields.

My Own Experience

It is obvious to me that 1 is a good initiative. I don’t just support Initiative 1 because it’s nice–logically, I have to support 1, to avoid my own program of study’s relying on a self-undermining argument. To want to be a brown woman taken seriously in Field X commits me to wanting brown women to be respected in Field X (or at least not disrespected); it is requisite that my future plans include an interest in opening my field to the contributions of people like me.

But I also support Initiative 1 because it’s nice. No, the women in male-dominated fields don’t need reinforcement and warmth qua women, but as a brown woman the majority of whose undergraduate courses have been in theology, mathematics, and philosophy, I can confirm that the feeling of striding into a lecture hall where nobody looks like me is unpleasant.

People bristle to hear this. I’ve heard STEM women especially–more than any other demographic–shrug off the claim “it’s no fun to be the only woman in this course.” This noncommittal attitude strikes me as bizarre. I understand that being perceived as laid-back is a boon to one’s personal reputation, but that’s not the same thing as being a doormat. From my experience, being the only woman in a mathematics or materials science course makes others immediately perceive you as either the smartest person in the room or the least smart. Some are almost hostile; you’ll occasionally get parsed as an affirmative action admit to your program. But some reach the almost less sensible conclusion of “Wow, she’s the only woman here? She must be brilliant,” and chase you down to help them with problem sets.

The behavior of male students in STEM departments toward their women peers is a mixed bag. Male students who are low-income or ethnic minorities have often reached out to me for advice and support, and the value of this network cannot be overstated. But pointing out that I have been–and have seen other women–on the receiving end of condescension from men (almost ENTIRELY students, not professors) is not a social justice crusade, not unreasonable, and not “anecdotes rather than data.” This has happened to me a lot. (To be fair, I tend to dish it right back. That is probably not great, but when you play stupid games, you win stupid prizes.)

There is an unsavory thing that I have to talk about even though I’m sure I’m going to get flak for it (if anyone actually reads this blog). People don’t talk about this fact as much as they talk about the condescension thing or the being parsed as an affirmative action admit thing. But it is a big problem: male students often make amorous advances toward and solicit romantic advice from their women peers.

Of course, context is everything here. I’ve given solicited and unsolicited advice of this kind to my math friends (in which case it isn’t weird), but I’ve also been approached in this regard by math people I barely know (in which case it’s very weird, and in such a way that it’s clear the salient catalyst is that I am a woman). Two factors, I think, combine to ensure that STEM women have to put up with way more of this than they should:

  1. The ratio of men to women is very high.
  2. Almost all the women in the field are socially adept. The proportion of men that have strong social skills is much lower.

Claim 2 seems outrageous to some extent. Prima facie, women across disciplines don’t have higher “social skills” on average than men do in the sense I’m referring to here–this isn’t EQ, just normal behavior. But I’ve collected enough anecdotal data that it cannot be a fluke. Even if 2 is unpleasant to admit outright, people seem to be aware of it, which, I think, is why the general response to “Male students in Field X harass their women counterparts” is “Boys will be boys!”

Even assuming that’s an okay response in the abstract (not going to get into that here), it isn’t true in this case. My second major is in a woman-dominated humanities field; I’ve taken half my classes there and never been on the receiving end of garbage like this. What the interlocutor at the end of the above paragraph means is “Boys [in Field X] will be boys [in Field X].”

This claim relies on 2 above. Men in the science disciplines have worse social skills than their women peers. Here’s a possible explanation. Young adult men and women are socialized by their friends. Women in the sciences tend to be friends with other women from across disciplines; men in the sciences tend to be friends almost exclusively, or at least mainly, with men in the sciences. There is a bizarre generational problem in which men in the sciences are being socialized by men who were never properly socialized themselves. That’s two degrees out from normal socialization! They end up thinking they are socialized, whereas what they are actually being is trained: trained to interact in the sphere of “Men in Field X” and not outside it. This still doesn’t explain how the problem originated (why was the First Man in Field X not properly socialized?). But that’s sort of irrelevant, and at least somewhat believable by dint of examining the kind of characteristics a math education selects for: drivenness and intensity, being able to work alone, exceptional intelligence (which often travels hand in hand with arrogance), microfocus, and an excess of free time (which, to be fair, is easier to have when you’re not spending time on social activities).

To tell women that the men in Field X who have made them uncomfortable are exculpated by reason of being typical men in Field X is to tell women, “If you continue in this field, you will have to put up with this forever.” That same principle stands for the other untoward experiences STEM women have: condescension, resentment, unprompted idolization. No wonder the attrition rate is so high!

The rest of the argument is simple. The treatment of women in STEM has largely to do with the fact that women are viewed as a novelty in STEM. The obvious way to make women not a novelty in STEM is to have more of them.

A Position Reversal

There is a claim that may seem to follow from what I’ve written above, and I wish to do away with it.

Proposal 1: “The reason women don’t go into STEM is because women in STEM are mistreated. Therefore, we should get more women into STEM.”

I didn’t defend the first clause in my argument above, and I don’t think it’s necessarily true that that’s the reason women don’t go into the fields. But assuming it is true, putting more women into STEM (to be presumably mistreated) is a rather roundabout way of solving the problem!

Let’s look at things more generally. For what it’s worth, I hear almost daily the battle cry, “More women in math!” I only occasionally hear “More women in philosophy!” and I never hear “More women in theology!” (And I’m not suffering from systematic cherry-picking here. The other thing I study is philosophical theology. If people were saying these things, I’d be hearing them.)

This strikes me as odd prima facie. Proportionally, there are even fewer women in philosophy and theology than in math and physics. (The War Studies ratio is probably the sharpest–especially given that the general discipline of history is woman-strong.) But apparently, that doesn’t automatically merit the proportion’s appearing, to the public consciousness, to be a problem.

Occam’s Razor is helpful here. I have no trouble believing men (probably somewhat by nature and somewhat due to socialization) are categorically more interested in studying war than women are. This–not sexism–strikes me as the reason that War Studies is a male-dominated field.

I also have no trouble believing men are generally more interested in theology, especially because in many denominations, there are fewer vocations for women in theology. Again, therefore, I’m not worried that the climate of the theology research field is disincentivizing women from entering it. The very reasonable explanation, which doesn’t require me to believe in some degree of conspiracy or systematic bias, is compelling enough for me. “Being a problem” and “needing a solution” are different things, and the War Studies and theology gender ratios are neither problems nor in need of solutions.

But get this: I also have no trouble believing men are generally more interested in mathematics than women are. Or physics. Or chess! You ever met a female grandmaster? Me neither. But I don’t care all too much. Being a GM is impressive and all, but I don’t lie awake thinking “We need a woman chess champion!” because, while I enjoy watching a strong player, chess doesn’t strike me as a supremely important human exploit. This “sensibility toward importance” is orthogonal to visibility. I’m glad we have women veterans, even if they get less TV airtime than women news anchors (who are comparatively less important to society, I think). “We need a woman astronaut/CEO/life-saving firefighter” sounds much more reasonable to me than “We need a woman DJ/winning horse jockey/exceptional poker player.”

Yes, this is a subjective metric, but there is something absolute about it. The people who disagree with me on whether we need a woman grandmaster will disagree because they think chess is important to human society, which further proves my point that whether something is important is a vital consideration to whether we should make increasing women’s access to it a central goal. This sensibility–that I don’t care if unimportant things are male-dominated–is, I claim, the reason people care about women in math but not women in theology. Most Americans with whom I interact don’t believe in God and therefore don’t care about theology. Many Americans, however, have come to believe that “the future is STEM.”

Therefore, I will make the following claim. A low proportion of women in Field X is not concerning to me unless:

  1. I have very good reason to believe that proportion diverges significantly from what I know about women (If 5% of the people studying opera singing were women, that would strike me as in further need of an explanation, given what I know about the grand history of women in opera; women have primarily been the people made famous by opera), or
  2. I think Field X is important to humanity. (I think women should do important things. So sue me.)

On at least the second front, I think most people agree with me. People want women (and men) to do important things. Some of these are obvious (exonerating the wrongfully convicted, creating inventions that make life easier, raising morally upright children), but some (apparently, mathematics) aren’t. This brings us to the question of why people think math is important–of why, despite that we might reasonably say the gender ratio is not a problem (because men are more interested in math), it still needs a solution. Let’s look at a couple more proposals.

Proposal 2: “STEM fields are more lucrative; therefore, women should be in them to make more money [and making money is the salient “importance” fact].”

But the drive for “women in math” extends especially to “women professors in math.” Professors in math don’t make more money than professors in history. (Maybe per capita they get more project funding, but that’s just because the math academy is smaller than the history academy.) Not all women who go into STEM will work for Google. People know this, and they still encourage women to go into STEM.

Proposal 3: “STEM fields are just harder than humanities fields! Techies are smarter than fuzzies! Therefore, STEM fields are more important, and we should be pushing the best and brightest women into them.”

This proposal strikes me as explanatory but not true: I think the claim itself is dumb, but I also think a closeted belief in it is the real reason people think we should have more women in math.

“Techies work harder than fuzzies” is a compelling claim because it is easy to believe there’s more preparatory work to be done before generating original research in the sciences than in the humanities (“you can BS a history paper, but not a problem set”). Even that weaker claim isn’t true (for starters, techies forget that fuzzies have to learn a billion languages), and the people who say it are usually the people who blatantly misinterpret the historical context of a primary source and then get upset they didn’t get an A on their history essay. STEM fields are not harder than humanities fields. Your field is as hard as you make it. Show me a trivial history “essay assignment” and I’ll show you ten trivial combinatorics “results.” Just as not every history thesis ever written is transcendent, not every mathematics paper written is a proof of the Ending Lamination Conjecture.

But people do think STEM is harder–I hear too many casual remarks to that effect to believe otherwise. So why, then, do people think that?

There is a very obvious demographic reason people would think it. If you think men are smarter than women, then the fact that most men go into STEM fields, while most women go into humanities fields, suggests that STEM is for smart people.

Yes, you heard me right. I’m claiming that people who indiscriminately want “more women in STEM” are subscribing, at least implicitly, to the notion that men are smarter.

You might think I’m putting the cart before the horse here. Are there other possible explanations we should discount first? If there are, I can’t think of them. It seems naive to think that the fact that the majority of humanities majors are women and the majority of STEM majors are men is unrelated to the public perception of STEM as more difficult. But fine, I’ll provide an argument for why I think the reason STEM is regarded as harder is that more men do it. (Disclaimer: I am not suggesting that men are smarter than women. I’m merely explaining why someone might conclude that.)

I said earlier that a low proportion of women in Field X is not concerning to me unless (1) it’s incongruous with what I know of women or (2) the field is important. A low retention rate of women in Field X is always concerning to me, because the easiest explanation for it is systematic rather than dispositional. If I hear that 10% of the people who go into SkyZone are women, I’ll think, “Huh, women must not like jumping as much as men do.” If I hear instead that 50% of the people who go into SkyZone are women but only 10% of the people still there after an hour are, there are two possible reasons that come to mind:

  1. Women get tired from jumping more quickly than men do.
  2. Women are being mistreated in SkyZone.

This is perfectly analogous to the two possible reasons for a high attrition rate for women in mathematics:

  1. Women are not as good at math as men are.
  2. Women are being systematically forced out of studying mathematics.

Being a woman math major myself, I am at least somewhat sympathetic to 2 (although I’d rephrase it in a less “CONSPIRACY”-sounding way). But if I weren’t a woman in math, I’d think it sounded ridiculous–even whiny. By Occam’s Razor, I’d be more likely to believe 1. I could show all this using Bayes’ theorem–my posterior probability for 1 is much higher than it was before I knew about the low retention rate. A low proportion of women in a field means women won’t do something; a low retention rate suggests that they can’t.
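
To make the Bayes’ theorem remark concrete, here is a minimal Python sketch with entirely made-up numbers (and the simplifying assumption that explanations 1 and 2 are exclusive and exhaustive). The only point is the direction of the update: observing the low retention rate pushes the posterior for explanation 1 up.

```python
# Hypothetical priors over the two explanations, before seeing retention data.
prior_1, prior_2 = 0.3, 0.7

# Made-up likelihoods of observing a low retention rate under each explanation.
p_low_given_1 = 0.9   # very likely if women simply aren't as good at math
p_low_given_2 = 0.6   # likely, but less diagnostic, if women are being forced out

# Bayes' theorem: P(explanation 1 | low retention observed).
p_low = prior_1 * p_low_given_1 + prior_2 * p_low_given_2
posterior_1 = prior_1 * p_low_given_1 / p_low

print(f"P(explanation 1 | low retention) = {posterior_1:.2f}")  # 0.39, up from 0.30
```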

Note, of course, that “Women are not as good at math as men” does not imply “Men are smarter.” But if we know only that women are not as good at math as men, and not that men are smarter, whence the reason to want more women in STEM? The evidence shows women are not as good at basketball as men. What do we do about that fact? We ignore the WNBA–the exact opposite of calling for more women in basketball, which would be parallel reasoning.

The WNBA case is a strong analogy here, because it is similar to the women-in-STEM case in all respects except for the assumed link to intelligence. It is uncontroversial that women are not as good as men at basketball. This is because men are stronger and faster. This difference, however, is not regarded as requiring redress, because basketball isn’t “important.” The reason women aren’t as good at basketball is trivial: we aren’t as strong as men. Being a pro basketball player is associated with the attribute “strong” because we see strong people do it. Being a professional mathematician is associated with the attribute “smart” because we see “smart” people (overwhelmingly men) do it. “Smart” itself is a post hoc label–it isn’t quantifiable in the “How much can you deadlift?” way. I know I’m being heavy-handed here with my use of Occam’s Razor, but the following syllogism strikes me as so simple. We can establish the following complete argument to get to “more women in math”; if I didn’t know that Premise 2 is flawed, I’d believe it myself.

  1. Men are smarter than women.
  2. “Smart” is the unique relevant difference between men’s and women’s mathematical experiences.
  3. Men succeed in math more than women do.
  4. Therefore, math requires intelligence.
  5. Things that require intelligence are important.
  6. Therefore, math is important.
  7. More women should do important things.
  8. Therefore, more women should do math.

This is dangerous reasoning–the very altruism of the claimed feminist goal of women in STEM shows itself through this argument to be baldfaced sexism.

I’m not saying “women in STEM is a bad goal.” I’m rather saying “Lowering attrition rates of women in STEM is a much more coherent goal than wanting to draw more women into STEM.” I hope you can believe this; I think we should tackle the problem from the attrition end–if only because by saying “more women in STEM,” we may merely be saying “more women should do the things men do–because the things men do are better.”

-TEVM

The Knowledge Problem in Soteriology: Risk-Reward Paradigms and the Montefeltro Metric

“A man…called his servants and entrusted his wealth to them. To one he gave five talents of gold, to another two talents, and to another one talent, each according to his ability. Then he went on his journey. The man who had received five talents went at once and put his money to work and gained five more. So also, the one with two talents gained two more. But the man who had received one talent went off, dug a hole in the ground and hid his master’s money. After a long time the master of those servants returned and settled accounts with them. The man who had received five talents brought the other five. “Master,” he said, “you entrusted me with five talents. See, I have gained five more.”

His master replied, “Well done, good and faithful servant! You have been faithful with a few things; I will put you in charge of many things. Come and share your master’s happiness!”

The man with two talents also came. “Master,” he said, “you entrusted me with two talents; see, I have gained two more.”

His master replied, “Well done, good and faithful servant! You have been faithful with a few things; I will put you in charge of many things. Come and share your master’s happiness!”

Then the man who had received one talent came. “Master,” he said, “I knew that you are a hard man, harvesting where you have not sown and gathering where you have not scattered seed. So I was afraid and went out and hid your gold in the ground. See, here is what belongs to you.”

His master replied, “You wicked, lazy servant! So you knew that I harvest where I have not sown and gather where I have not scattered seed? Well then, you should have put my money on deposit with the bankers, so that when I returned I would have received it back with interest. So take the talent from him and give it to the one who has ten. For whoever has will be given more, and they will have an abundance. Whoever does not have, even what they have will be taken from them. And throw that worthless servant outside, into the darkness, where there will be weeping and gnashing of teeth.”

-Matthew 25:14-30

The parable of the talents is universally recognized as one of the most famous of Jesus’ stories, and has generated commentary so exhaustive and profound that I can’t offer anything new on the topic. Echoed in every commentary I’ve read, though, is the condemnation of the final servant. The general consensus is that his attribution of selfishness and property seizure to his master was a last-minute excuse to obfuscate the true reason for his failure: a lack of motivation to serve the master.

Yet I urge my readers to consider, for the time being, compassion for the unfortunate servant. I see one argument according to which the unfortunate, tooth-gnashing man should be spared.

The Argument

There’s something weird about the above parable, which is that the Rule of Three is invoked and then sort of not exploited. That is to say: there are three servants, but only two meaningfully different outcomes. There is an important distinction between Servant 1 and Servant 2 in that 2 has less to invest, but behaviorally, Servants 1 and 2 are isomorphic, and teleologically, the master treats them the same.

So, what if Servant 2 had lost the money? This is a crucial counterfactual that the parable seemingly ignores. Given that the two scrupulous servants invested the talents in economic pursuits that could have gone awry, they might have lost the talents in the process–a situation that is, in every way but its conclusion, an allegory for living the pious life. I see three possible explanations for why Jesus didn’t reveal the counterfactual:

  1. The counterfactual would’ve changed nothing (the master still would’ve received the servants with rejoicing if they’d lost his money) and thus isn’t important.
  2. The counterfactual would’ve changed everything (the master would’ve rebuked the servants if they’d lost his money), which would’ve made a less compelling story.
  3. The counterfactual is an unimportant situation for the allegorical meaning of the story (it’s impossible to “lose” by investing spiritual capital).

I think Option 3 is most likely, but Options 1 and 2 still bear considering. So set aside for a moment the spiritual implications of the allegory. In 30 A.D., a talent was worth roughly twenty years’ wages for a day laborer. Investing even that much money–let alone five times that value!–was clearly risky, especially since the money did not belong to the servants themselves. In the absence of a clear heuristic or algorithm for taking risks with items that do not belong to us, burying the talents is the only “safe” choice. To work up the chutzpah to invest the money–even in a bank–we need some sort of utility theory, and some notion of safeness, and the two need to be connected.

But expected utility theory (EUT) doesn’t take us very far. We can calculate an expected payoff, but expectation alone is risk-neutral: a high-risk gamble and a low-risk gamble with the same average payoff integrate to the same expected result. We can establish an Ellsberg-type theory around salience to tell us how large a difference in risk must be to justify preferring a “risky” option over a less risky one.
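
Here is that risk-neutrality point as a toy Python sketch (the payoffs and probabilities are mine, purely for illustration): two gambles with identical mean payoffs that a bare expectation calculation cannot tell apart.

```python
def expected_value(lottery):
    """Expected payoff of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

safe = [(1.0, 100)]              # a guaranteed 100
risky = [(0.5, 0), (0.5, 200)]   # a coin flip between 0 and 200

print(expected_value(safe), expected_value(risky))  # 100.0 100.0 -- indistinguishable
```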

In general, though, there are attributes of probability and risk that we have good reason to believe are conceptually orthogonal. That probably sounds crazy–you might argue that a machine that has a 50% chance of killing you and a 50% chance of giving you $1m is riskier than a machine that effects those outcomes 1% of the time each and does nothing the other 98%. But that’s begging the question, by already taking the span of outcomes as given. Before we can decide how “risky” an outcome is based on its probability, we need to understand what the symmetric span of outcomes looks like. First: how terrible a world can obtain? Only second can we bring the likelihood of such occurrences into the calculus. So I claim there are two parts of risk: one dealing with the likelihood of bad outcomes, and one, the “symmetric span,” dealing with which bad outcomes are possible at all. The latter part looks unrelated to probability.

And it is–until we account for statistical entropy. Both are related to it: the more possible outcomes there are (and thus the lower the average probability of each), the higher the entropy; and as entropy increases across a symmetric distribution, risk falls, in that new, less extreme outcomes pick up probability mass on both the good and bad sides. Much like von Neumann eigenfunctions, probability and risk can be treated as independent, but they certainly aren’t unrelated.
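
A toy demonstration of the direction of that relationship, with variance standing in (imperfectly) for risk; the distributions are invented. Moving probability mass from the extremes of a symmetric distribution onto a middle outcome raises entropy and lowers variance.

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def variance(dist):
    mean = sum(p * x for p, x in dist)
    return sum(p * (x - mean) ** 2 for p, x in dist)

extreme = [(0.5, -1.0), (0.5, 1.0)]                # all mass on the extremes
spread = [(0.25, -1.0), (0.5, 0.0), (0.25, 1.0)]   # a less extreme middle outcome appears

for name, dist in (("extreme", extreme), ("spread", spread)):
    print(name, entropy([p for p, _ in dist]), variance(dist))
# extreme: entropy 1.0 bit, variance 1.0; spread: entropy 1.5 bits, variance 0.5
```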

In behavioral economics, we tend to see EUT-type preferencing as largely monotone and risk-preferencing as indicating attributes of the consumer, much like a preference for one-shot versus gradual resolution of uncertainty. We have to account for these factors in our utility functions themselves. And even a bounded-rationality, Kahneman/Tversky-type model does not differentiate between risks taken with things belonging to us and things of which we are merely stewards. Thus I conclude that even we, thousands of years on, have little machinery with which to advise the wayward servant.

Salience and Infinity: The Montefeltro Metric

Okay, so maybe the original “wayward servant problem” as I posed it isn’t as hard as I’ve made it out to be. The servant doesn’t need all that much counsel in expected utility theory. The master pretty much gets it right: “You knew that I harvest where I have not sown and gather where I have not scattered seed? Well then, you should have put my money on deposit with the bankers, so that when I returned I would have received it back with interest.” In other words, the servant’s prior that the master will eviscerate him if he doesn’t grow the investment is probably high enough to achieve the requisite activation energy for him to invest the money. But there’s a generalization of the wayward servant problem in which the answer really isn’t so simple. The talents in the parable allegorically represent an infinite investment, so let’s drop the allegory.

As my readers surely know, I’m a Catholic, combinatorialist, effective altruist Dantista with a Borgesian bent. Naturally, my utility functions look bizarre. In my paradigm, infinite risks and rewards aren’t a Dutch-book trick–they’re a constant reality. As such, I lean on salience a lot in decision-making. With all else finite, infinite things are salient. With multiple infinite things on the table, ordinal numbers and well-orderings (e.g. infinite money < infinite DALYs; quantifiable infinities < unquantifiable infinities) help somewhat, but not tremendously.

I think a lot about the Large Hadron Collider. It seems ridiculously obvious to me that it shouldn’t exist. And before you start accusing me of being a Luddite (that’s fine, I’ve been called worse), really? How are our priors so vastly disparate that you think any chance of destroying the known universe is worth taking, for the sake of knowledge alone–knowledge that is inapplicable, and will likely remain so for a long time? Why does it matter how small the odds are, when an uncountable penalty is at play? Bear with me; I’m not conservative with risk-taking in general. I think the high-risk, often life-saving surgeries my father performs every day are incredibly worthy. I think AI researchers should proceed with caution, and that the world’s generally been better since the invention of nukes. But note that in all these situations, there is a positive to balance out the danger. Successful surgery? Many DALYs. Aligned superintelligent AI? Potentially infinite DALYs. No more war with another nuclear power? Potentially infinite DALYs. But is the good that emerges from LHC research seriously unquantifiable?

It’s not that I’m a consequentialist, and it’s obviously not that I hate knowledge. There’s a kind of meta-intentionalism at work here, which sounds complicated but is actually probably the least confusing, most Kantian consistency metric I’ve ever presented on this blog. I’m going to call it the Montefeltro metric, after the unfortunate counselor of fraud: you evaluate what your intent is; if the system you’re using aligns with your intent, proceed; if it doesn’t, don’t. In this way, you can see whether the object of your will is in contradiction with the method of your will. For instance, if the world is sucked into a microscopic black hole, there will be no world to study. The ascertainment of knowledge is empirical; without data, there is no studying. This meta-operator doesn’t shut down the counterexamples I posed, because the nonexistence of the world in particular disallows the discovery of knowledge about it. It doesn’t disallow making it more peaceful, or glorifying it, because existence isn’t a predicate. Maybe this is a sketchy argument. My point is, at least the contradiction of will and outcome isn’t as obvious in the latter cases.
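
Here is the Montefeltro metric reduced to a minimal decision-procedure sketch in Python. Everything in it–the function names, the toy contradiction relation–is my own illustration of the idea, not a formalism from anywhere else.

```python
def montefeltro_permits(intent, possible_outcomes, contradicts):
    """Proceed only if no possible outcome contradicts the stated intent."""
    return not any(contradicts(intent, outcome) for outcome in possible_outcomes)

# Toy contradiction relation: a destroyed world contradicts any intent that
# requires the world to exist (e.g., studying it empirically).
def contradicts(intent, outcome):
    return outcome == "world destroyed" and intent == "empirical knowledge of the world"

print(montefeltro_permits("empirical knowledge of the world",
                          ["new physics data", "world destroyed"],
                          contradicts))  # False: the method of the will defeats its object
```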

The Montefeltro metric has its flaws. It’s weak in the sense that it can only tell you what risks not to take. Also, ideally I’d like an operator that reflects my sensibilities more in its evaluations (e.g. would reject the LHC because it’s unreasonable, not because it’s contradictory). But the nice thing about Montefeltro is that it can tell us not to bet on a symmetrically infinite distribution: even if infinite knowledge can be gleaned, the LHC fails the Montefeltro metric.

Knowledge and Soteriology

I mentioned Dante, and EUT + infinity $\implies$ Pascal’s Wager, so you’re probably waiting for the part where I talk about Heaven and Hell in the value calculus. I posted earlier about certainty and the human condition, but again, infinity throws EUT out of whack. I’ll start with the following question, which I get asked a lot: if Heaven is an unquantifiable good, and Hell is an unquantifiable bad, how on earth do I and other Christians not spend our entire lives single-mindedly devoted to getting as many people into Heaven and out of threat of Hell as possible?

Think about it this way. For any event A, you have a prior probability P(A obtains). Now consider the prior for your prior–P(my assessment of P(A obtains) is accurate). For instance, my prior that I will eat ramen tonight is .9, but my prior for that prior is only .1, because I literally made up the number .9 while typing–so now I’ll stick with it, but I could have picked any other single-digit number.

(Now I have of course thought of a question that no one has asked, and that no one will, but that I’ll answer anyway: “But Tessa, doesn’t the existence of “priors of priors” imply that probability is real, which is false?” No, it doesn’t, as long as you sum over classes of events. Just like, in theory, you could repeat the conditions for event A a bunch of times and come up with a good approximation of P(A), you could introspect a bunch of times about your priors for various similar events, run trials, and plug into Bayes to get posteriors. Oh, and also, the existence of free will means that counterfactuals can obtain in the realm of human mental state.)
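
One way to cash out “priors of priors,” sketched in Python with invented outcomes: treat the accuracy of your own probability assessments as itself unknown, give it a (hyper)prior, and update it on trials of similar events.

```python
# Beta-Binomial calibration sketch: how often do my ".9 confident" calls come true?
alpha, beta = 1, 1   # flat hyperprior over my own calibration

trials = [True, False, True, True, False, True, True, True, False, True]  # invented

for came_true in trials:
    if came_true:
        alpha += 1
    else:
        beta += 1

print(f"posterior mean calibration: {alpha / (alpha + beta):.2f}")  # 0.67 after these trials
```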

As I get up into higher and higher levels of meta-priors, the one for “Heaven and Hell work the way I expect them to” shrinks faster than my priors for any other events–much faster than my priors for non-soteriological elements of Christian doctrine. I think Hell is empty. What’s my sureness that I’m right? Almost zero. (Of course, it can’t get to zero, because then all my subsequent priors would have to be 1.) Soteriology has the optimal mix of divine unknowability, human unpredictability, and a generous sprinkling of Will that makes it utterly impossible to consider in a Bayesian framework. Knowledge doesn’t make any sense when applied to soteriology–which isn’t true for other theological disciplines. 

There’s a common misconception, especially since the Age of Reason, that knowledge spells doom for religion. Detractors point to the oft-repeated aphorism that our faith should be like that of children–wrongly, because the reason we seek to espouse children’s faith is not its blindness but its sincerity. We’re told to “know thyself,” so obviously, unless introspection is an exercise in futility, knowledge can permit us to grow in faith. Knowledge, when applied correctly, generally helps theology, just as it helps all other fields. Why, then, am I claiming this isn’t true for the theory of salvation?

Consider Guido da Montefeltro. “No power can the impenitent absolve,/Nor to repent, and will, at once consist,/By contradiction absolute forbid.” One can’t will and repent simultaneously, of course–doing so precludes one from repenting at all. But then what’s different about doing something you know you can repent for later, and counting on that contingency? You’re simultaneously intending to sin and intending to repent. You intend to sin only because you know it is feasible to repent later. But then you can’t truly repent for the sin–you can’t be sorry you did it. One must rue all the consequences of sin in order to repent. But it’s impossible to rue debauchery if you enjoyed it, when you know that it didn’t cause you any harm because you could later repent! We then have the following system:

If you do something for which you know you can later repent, and for which you intend to count on later repenting, you cannot sincerely regret it. Therefore you cannot repent.

But then you can repent if and only if you can’t repent. Is this a theological Russell’s paradox? Do we need a new ZF (± C, to taste) for Catholic doctrine? Let’s see where we went wrong. Decision theory got us into this mess, so it has to get us out of it. Recall:

You intend to sin only because you know it is feasible to repent later.

I emphasized the wrong words in that sentence. The word that ought to have been emphasized was “know.” Clearly, though, you don’t know you can repent later, because, per the paradox, it turns out you can’t. But the only reason you intend to sin in the first place is that you can later repent. If you know–or at least very much suspect–that you can’t repent later, then you can repent later, because you can absolutely rue the consequences you wrought upon yourself as the architect of your own damnation! But this seems only to have worsened the paradox, because now neither “sins that are redeemable” nor “sins that are irredeemable” is a viable category. Which means sin doesn’t exist. I’m digging myself into a hole to China here.

Consider what the black cherub tells Francis and Guido:

  1. Absolution requires repentance.
  2. One cannot repent and will simultaneously.
  3. Therefore, Boniface’s absolution of Guido was illegitimate.

The fact that one can’t repent and will at once does not preclude further repentance after the fact. Thus this only works because Guido did not further repent for the fraudulent counsel (he thought he had already been absolved). This is an important distinction. Guido didn’t particularly enjoy counseling Boniface to trick the Colonnas. Being a Franciscan, he got pretty much nothing out of it, so he experienced no later gratification that he couldn’t rue because of its consequences for him. If, then, his sin was not “redeemable,” that occurred inasmuch as his reasoning (about whether he had repented) was mistaken. He erroneously assumed he’d repented.

Redeemable isn’t a qualifier that can be applied to sin at all. It’s only one that can be applied to people, post hoc, based on whether they repent. The fact that absolution requires repentance implies that redemption is posterior to repentance, which means that considering sins repentable or not begs the question. The problem, then, isn’t that “sins for which you can repent” and “sins for which you can’t repent” cause contradictions as categories; that was a red herring all along. Of course, there’s no such thing as a sin for which one can’t repent, but not because of this “contradiction.” All sins belong to the category “sins for which one can repent,” but the tricky word “know” is a game-changer.

How can this be true? If all sins are “sins for which you can repent,” and you know that, then aren’t all sins “sins for which you know you can repent”? How can changing the location of the word “know” change a set from enormous to empty?

When will or intent is involved, knowing you can do something changes whether related things are doable (cf. Toxin Puzzle). This isn’t just about uncertainty–even thinking you can do something can have that effect. The Will plus uncertainty plus infinite risk and reward plus a notion of technical predestination that, when taken too far, spells despair, make repentance, absolution, and soteriology the kind of thing I cannot profess to know anything about. My priors that whatever methodologies I’m using to help people attain salvation are the correct ones are constantly changing. Ergo, I devote my time not to yelling at strangers on the street that they can be saved, but to yelling at strangers on the Internet that they can be saved but I don’t know if they’re saved and if I knew that they were saved, that might somehow mean they aren’t saved.

Q: “But shouldn’t you tell people to repent? It can’t possibly hurt!”

I do tell people to repent. What I can’t do is tell people, “If you repent, then you’ll be saved.” Because while from a divine perspective that must make sense, from a human perspective it’s true if and only if it’s false.

Q: “Why don’t you devote every possible free second to trying to get more people to repent, then?”

Obviously, it’s not very effective. I make a good-faith effort to convert people, but I also think it’s dangerous for me to spend too much time trying to convert atheists, because they might convert me.

Q: “Wait, what? Don’t you want to believe what’s true? If an atheist convinced you out of faith, wouldn’t that mean you came around to believe that atheism is true? Why do you want to avoid truth? Are you a Trump voter? You’re a coward! You’ll never understand the enveloping curves of sequences of ratios of 1-periodic functions, you ignorant Catholic! RELIGIOUS PEOPLE LIVE UNDER ROCKS AND AVOID INTELLIGENT DISCUSSION!” *smashes table*

So, some Freudian slips in there, but I’m glad you asked. If I don’t sufficiently explain the answer I gave above, it provokes cries precisely to that effect. First off, I converted from atheism, and I still have some atheist sensibilities, so certain atheist arguments strike an emotional chord in me even though I think they’re dumb (e.g. “How can you believe in virgin birth?”). When I feel that way, I worry that I’m not being sufficiently rational in my evaluation of arguments–that arguing with atheists brings out knee-jerk tendencies with negative consequences for my analytic thinking. I’m not living under a rock in the meantime; I’m working to improve on that intellectual flaw.

Secondly, there is a major, important, and tremendously culturally ignored difference between “something that someone is saying sounds believable” and “something that someone is saying is true.” When I was a freshman entering a debate society, I went back and forth between two upperclassmen for a week arguing about populism. I’d talk to the first one, who’d convince me populism is a good ideology, and then the second one would promptly talk me out of it. “While i < k, i = k + 1. While i > k, i = k – 1. End. Print i.”
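
That last line is pseudocode for the oscillation; rendered runnable (and capped so it terminates), it looks something like this:

```python
i, k = 0, 10
for day in range(6):   # capped at six rounds; in the story it ran all week
    if i < k:
        i = k + 1      # the first upperclassman talks me into populism
    elif i > k:
        i = k - 1      # the second promptly talks me back out of it
    print(day, i)      # 11, 9, 11, 9, ... -- no convergence, just oscillation
```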

I don’t want to get too deep into algorithms here (although I do feel like I’m supposed to, because my imaginary opponent questioned my mathematical abilities), but here’s an analogy: for any input I’ve tried, the Collatz process terminates. This boosts my prior that the Collatz conjecture is true, but the returns diminish as I keep trying inputs, because I’m either exhausting small inputs (which doesn’t tell me anything about the conjecture in general) or trying random big inputs (which doesn’t tell me anything about the conjecture in general). It seems obvious that if I keep multiplying odd numbers by 3 and adding 1, I should eventually hit a power of 2, but that’s intuition, not truth, and truth is not inductive. This isn’t to bash inductive reasoning. Empirical evidence is helpful, but not that helpful: the slope of the curve of prior versus dataset size flattens as the dataset grows. Abstruse reasoning works much the same way, replacing any pretension to universal empiricism with the anecdotal, and often justifying premises ex post facto.
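
For concreteness, here is the empirical check described above as a short Python sketch. That it terminates for every input tried is precisely the kind of evidence that nudges a prior upward without proving anything.

```python
def collatz_steps(n):
    """Number of steps the Collatz process takes to reach 1 from n."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

for n in (6, 27, 97, 871):
    print(n, collatz_steps(n))  # terminates for every input tried -- still not a proof
```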

This is a big problem, given that:

  1. We live in societies that attempt and have always attempted to systematically apprehend truth by arguing;
  2. Everyone’s a missionary, and fringe theorists abound; and
  3. The more you argue with a fringe theorist, the better he will get at winning arguments.

The fact that large swaths of people believe something is not always a good reason to believe it. Or here’s a better distinction: the fact that large swaths of people believe something that does not have a perceptible effect on their daily lives is not a good reason to believe it. (If an overwhelming majority of employees really likes its boss, that’s evidence that the boss is a good boss. If an overwhelming majority of employees believe dinosaurs are our immediate ancestors, that is not good evidence for anything except maybe that that particular company has been indoctrinating its workers with wrong ideas about evolutionary biology.) Even the fact that large swaths of smart people believe something is not a good reason to believe it. Very intelligent people are prone to different, but no less insidious, fallacies than people in general, myopia being the one that comes to mind first, and intelligence signaling second. Many apocryphal texts are very convincing, which illustrates that there’s a crucial disparity between “compelling” and “correct.”

I can’t offer much by way of solution. The same notion–that arguing charismatically tends to convince people–is the notion I’m using to try to convince you right now. But I do think that separation from face-time helps somewhat. While I don’t argue with atheists all the time, I read a lot of Dawkins, listen to Harris, etc.; engaging with the arguments that way, rather than in firsthand discussion, permits me, I think, to focus more exclusively on the logical.

The Upshot

Q: “Okay, Tessa, so you went off on a ridiculously long tangent about algorithms, Bayesian priors, and rational argumentation theory. Are you ready to do that thing you do that you think is so clever where it turns out that your weird tangent is somehow related to the problem at hand?”

You got me.

As I expressed above, every empirical experience we have changes our priors ever-so-slightly toward 1 or 0 or, perchance, some limiting value bounded away from 1 and 0. Perhaps human uncertainty can actually help us here. It’s impossible to repent for something we know we can repent for. But repenting for something we’re only .9999 sure we can repent for neatly avoids the paradox because not repenting never wins the day under any expected utility model. Finite positive times infinite is infinite. Our repentance is sincere.
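
In symbols–a sketch under the post’s own idealization, treating the reward as a formal infinity and the foregone alternative as zero:

$$\mathbb{E}[\text{repent}] \;=\; \underbrace{0.9999}_{P(\text{repentance is possible})} \cdot \infty \;+\; 0.0001 \cdot 0 \;=\; \infty \;>\; 0 \;=\; \mathbb{E}[\text{refrain}].$$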

I hope this is an unexpected upshot, because it was unexpected to me: under the lens of the knowledge-and-soteriology paradox, we can justify two things that seem unsavory in theology.

First, the fundamental uncertainty of God’s existence makes sense in light of the paradox I described. I think this examination actually presents a very elegant notion of why the unknowability of God is a good thing. In the human mind, probability 0 times infinite risk is indeterminate. But any small epsilon times infinite risk is infinite. Uncertainty is good insofar as it avoids the logical paradox of intent and repentance. Unknowability is crucial to Pascal’s wager, which is what makes it so compelling in the first place. With total faith in God, there’s no need for any analysis whatsoever. But the lack of perfect knowledge permits both the presentation of elegant mathematical ideas in the realm of theology and the avoidance of complete sureness of the doctrine that repentance is in any way related to redemption.

Second, the model of “I’m only .999 sure that I’ve repented adequately/that my repentance was sincere/that I can repent at all” provides some understanding of why human self-centeredness isn’t altogether terrible. The fear of eternal damnation isn’t a good reason to repent (having strayed from the Good is a much better reason). But the latter isn’t as easily quantified–or rather, as easily categorizable as unquantifiable–as the former. I’ve spoken before of the binary notion of virtue and sin; there isn’t a readily emergent way to order how much we’ve strayed from the Good. Say what you want about Hell, but you can’t say it’s not the salient risk in pretty much any outcome spread it finds itself a part of. There will be weeping and gnashing of teeth…

-TEVM

Religion and Humility: Rationality, Diagonalization, and the Hardness Criterion

This summer I had a good old-fashioned Crisis of Faith.

It became apparent that I’ve let myself go a little in terms of having a ready retort on hand for spontaneous atheist arguments. I spent some time this summer at a conservative think tank, full of minds like mine (if significantly more libertarian) and blest with a high degree of Catholic literacy. Although I was regaled there daily with requests to mathematically prove God’s existence, thankfully the majority of the religious arguments my classmates took up with me ran along the lines of “Is the seat of Peter empty?” rather than “How can you believe in miracles?”

My return to New Haven engulfed me in the world of mathematics, leaving little time for theological debate. Mathematics departments nationwide run the gamut from very religious to Dawkinsian (it’s hard to be a Humean mathematician, and impossible to be a Humean statistician). Ours enjoys a variety of religious viewpoints, with the majority falling secular agnostic. Thus, when a mentor posed a new and unusual atheist argument to me, I was caught unprepared.

The Problem

I’ve seen all the inane, readily neutralized atheist claims–“Do you really believe in virgin birth?,” “Haven’t terrible things been done in the name of Christ?,” or “Religion was established to keep citizens compliant.” SSC raises a hilarious one about whales not being fish.

Nonetheless, arguments from epistemology are more compelling. Not the “argument from unknowability,” per se. I’ve long considered the existence-of-God problem undecidable. This doesn’t bother me, because I’m not a logical positivist; physical facts are not the only important components of a system. I don’t care that atheists and I don’t disagree on any physical matters that can be finite-time decided, and I don’t think the criterion of falsifiability is useful.

The counterargument with which I was presented was much slicker, and imbued with all that meta-level, logically contradictory, late-Inferno-style contrapasso of which I am so fond: “Throughout history, people have realized how much they don’t know. The more we learn, the more there is to learn. Religion, in presupposing the ultimate answers, is the Platonic form of hubris.” Steelmanned: “Religion is prideful, but prides itself in being humble.”

That got to me. My discipline of choice is a field in which we constantly know less than we did before, in a certain sense, because every answer prompts questions that didn’t previously occur to us. We learn “calculus” in high school and think we know what integration is, then learn vector analysis in college and think this time we really know what integration is, then learn Lebesgue theory and realize we’ll never know what integration is. Humility is both necessary and proper to the discipline of mathematics, as it is to the discipline of theology. But mathematicians don’t claim to have solved the (perhaps undecidable) Collatz conjecture, whereas theologians do claim to have solved the (probably undecidable) God problem.

Religious sensibilities are more insidious than religious confession. My mother, an evolutionary biologist and enthusiastic Dawkinsian atheist, is terrified by The Exorcist and has admitted to me that she’d never attend a LaVeyan meetup because she could not sign her soul over to Satan even though she believes him nonexistent. She’s one of many nonreligious I know with religious sensibilities ranging from the theological to the social to the moral, yet I know no believers who have the faith but lack the sensibilities. I believe these inclinations precede confession; they are a necessary but not sufficient prerequisite to genuine faith. So what happens when religious sensibilities undermine religious conviction? What happens when the truth claim of religion is at least in some sense hubristic, but the sensibilities of religion are humble?

I go to a highly-regarded research university and thus constantly make use of the immediately available option to knock on the office door of one of the smartest people in the world and demand answers. My theology thesis advisor wasn’t in, so I stopped in at the first office I could find: that of a professor specializing in the intersection of religion and political theory. Perfect–exactly the kind of person who’d know all about the theory of the “opiate of the masses.” I walked in, introduced myself, and explained my problem: the priors for religion are heavily dependent on humility, but the truth claims of religion are hubristic. How can I be both Bayesian and Catholic? Help!

A Helpful Digression

The paradox with which I confronted the professor is related to signaling theory and what I’ll describe as the “hardness criterion.”

Definition 1. Hardness Criterion. A map $F$ from the set of tuples of choices to the space of choices itself, where $F(a, b, \ldots) = \operatorname{argmax}_{x \in \{a, b, \ldots\}} \mathrm{difficulty}(x)$.

In other words, the Hardness Criterion is the belief: “When presented with multiple options of action, I should do the one that is most difficult.” Naturally, “difficult” can mean a bunch of different things, some of which may be contradictory. For example, being a doctor is more technically difficult than being a garbage disposal worker, but the latter is more psychologically difficult for an Alpha on the alpha island in Brave New World.
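
Definition 1 as a minimal Python sketch; the difficulty scores are hypothetical, and, as noted above, “difficulty” is person-relative.

```python
def hardness_criterion(choices, difficulty):
    """F(a, b, ...) = the choice of maximal difficulty."""
    return max(choices, key=difficulty)

# Hypothetical, person-relative difficulty scores.
difficulty = {"be a doctor": 8, "dispose of garbage": 3, "pursue hedonism": 1}.get

print(hardness_criterion(["be a doctor", "dispose of garbage", "pursue hedonism"],
                         difficulty))  # "be a doctor" -- for this particular scorer
```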

The Hardness Criterion seems obviously wrong at a first glance, but I urge my readers to consider it more carefully. Steelmanned, it tells us that Man has a duty to pursue the highest spheres of work, self-analysis, and the search for truth, and to reject hedonism, which seems observably true. It doesn’t beget any of the silly fallacies detractors would like–“But everyone can’t do the universal hardest thing; some people have to do something else, or else we have a society of doctors” and “If everyone does the hardest thing, no one will be good at his job”–because what’s difficult differs by person, and how hard something is, in my experience, is orthogonal to how good I am at doing it. I’ve never been able to gauge how good I am at mathematics because it seems roughly equally difficult no matter how good you are at it, like cross-country running but unlike music or politics.

Those who deride religion for providing cushiness and a “Heavenly Daddy” figure are unknowingly, implicitly employing the Hardness Criterion in a way similar to Occam’s Razor. The argument goes like this: Religion permits an emotional solace in the form of the promise of eternal life, whereas atheism does not permit such solace. Therefore atheism is more difficult and I should do it.

Of course this requires the Hardness Criterion, because there are no other grounds for rejecting religion on the basis of its provision of emotional solace. One can only reject this solace if one believes the solace to be bad, which requires the Hardness Criterion, because in theory, whether a belief provides emotional solace is orthogonal to whether it is true. Sure, emotional solace might discredit the epistemic honesty of one’s acceptance of the framework, but it bears no consequences for the truthfulness of the framework itself–unless you’re willing to categorize “things that provide emotional solace” as “things I should not believe,” which utilizes the Hardness Criterion.

To reject the Hardness Criterion properly requires diagonalization. It’s noticeable that “hardness” generalizes to the meta-level, which prompts the question, “Is the algorithm ‘do the action that is hardest’ the hardest algorithm? Doesn’t doing the easiest thing all the time place me in opposition to the Hardness Criterion, which is, if I believe in the Hardness Criterion, an intellectually difficult space in which to operate?” This counterargument works beautifully, because at the meta-level, “choose the most difficult thing all the time” is a very easy algorithm, in that there aren’t any hard choices, given that your options are well-ordered. It seems to me that one could prove the Hardness Criterion is not well-defined in much the same way one can prove the halting problem is undecidable.
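
The diagonalization, sketched with hypothetical difficulty scores: apply the criterion at the meta-level, to the set of decision algorithms themselves, and it declines to select itself.

```python
# Hypothetical meta-level difficulty scores for decision algorithms.
meta_difficulty = {
    "always pick the hardest option": 1,         # one argmax over a well-ordered set: easy
    "agonize over each choice case by case": 9,  # genuinely hard to execute
}

chosen = max(meta_difficulty, key=meta_difficulty.get)
print(chosen)  # "agonize over each choice case by case" -- the criterion rejects itself
```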

This is the reason the Hardness Criterion argument against religion is easily deflated. On the meta-level, “believing the thing that is harder” itself provides a degree of emotional solace–the satisfaction of finding one’s beliefs in accordance with the Hardness Criterion–which makes being religious the “harder” choice in that sense. Similarly, while atheism is “harder” than religion in terms of lacking the component of emotional solace, religion is “harder” than atheism in terms of a meta-level hardness factor: the difficulty the religious face in rationally justifying their beliefs given their first-order apparent rejection of the criterion. This ultimate point–that under the Hardness Criterion, the most contrarian position seems always to win–deals a death blow to its acceptance as a useful algorithm.

The Solution

I ended up talking to the professor for about thirty minutes, and she did not disappoint (how I love this school!). We had a fruitful discussion about the fallacy described in the digression section, and she forwarded me an article she’d written arguing that support of Islamic political parties in Muslim-majority countries is rational insofar as the emotional support provided by religion eases stress. Naturally, I and my Dante-meets-Borges-meets-Bostrom mindset loved this because of its seeming counterintuitiveness: as strange as it is to accept, the emotionally easier option is of course the more rational one, in the sense of utility maximization.

I went home and thought about this for hours. Hours turned into weeks, which turned into months. And finally I figured it out. Would it be possible to create an inverted Hardness Criterion labeled the Ease Criterion, affiliated with a straightforward Kahneman/Tversky-type utility function, yielding a bijective relationship between Ease Criterion rankings and outputs of some rational choice function? Definitely. Pick the option with minimal difficulty.

But does this Ease Criterion collapse as obviously as its negation does? In one sense, the Ease Criterion is easy on the meta-level because the choices it provides are well-defined. There’s no simple, Berry-paradox-type situation in which the Ease Criterion falls apart. For all intents and purposes, the Ease Criterion is at least as good as Occam’s Razor, because I can imagine some situation exists in which the algorithm that uses simplicity to pick a course of action is not the simplest algorithm. Does there exist one in which an algorithm that uses ease isn’t the easiest (if we admit “emotional solace” as a stand-in for “ease”)?

Indeed there does. The Ease Criterion on two variables always picks what the Hardness Criterion doesn’t pick, so the inverse diagonalization produces a contradiction. I can readily imagine somebody emotionally tortured by the notion that he’s always choosing the easiest option! A theorem of this flavor feels like it ought to follow:

Theorem 2. No operator that definitionally outputs a single choice from a choice set by a metric of difficulty or complexity is consistently defined.

I don’t think the generalization to multidimensional operators works, but that isn’t really relevant here, as no one claims two religions. The conclusion: if we allow difficulty, ease, simplicity, or complexity to serve as a stand-in for “rationality,” then we cannot consistently behave rationally. (Aside: I know rationality isn’t everything, but it still benefits us to create a more nuanced notion of what rationality is.)

The contradiction my mentor voiced was, as you may have by now realized, isomorphic to the problem with the univariate criteria described above. I could now see that the problem he had presented was that the Humility Criterion is inconsistent, and his claim was definitely legitimate. The Humility Criterion makes truth claims! Of course, on the meta-level, it isn’t humble.

Central Question: Does Christianity actually make use of the Humility Criterion?

Naturally, the only way to disarm the paradox of the humility of faith versus the pride of faith is to reject the notion that Christianity uses a so-called “Humility Criterion”–i.e., while humility is a virtue, it is not the methodology one uses to arrive at Christian conclusions.

Virtues are not algorithms. Consider the algorithm “Do the thing that is virtuous, or if multiple virtuous options exist, the one that is most good” (so phrased because I don’t like the notion of “most virtuous”). If you’re an effective altruist, it’s clear this algorithm is virtuous, which is not self-contradictory. But performing this algorithm is not a virtue any more than entering a convent is a virtue. They’re both methodologies used to pursue virtue. (This is why I love that Christianity enumerates so specifically what the virtues actually are.)

Not convinced? Consider the following argument why virtue is not meta-level. Take the action of “cultivating an environment in which I can better pursue the virtue of almsgiving.” It’s clear that an almsgiving person who cultivated such an environment and an almsgiving person who didn’t are both almsgiving, and thus are both manifesting the virtue of charity. The person who didn’t cultivate such an environment might even be a better person, by dint of emerging triumphant against more temptation.

Similarly, I recently posed the following thought experiment to a Catholic close friend: Mr. Brown doesn’t want to give alms. Which is worse: for Mr. Brown to falsely tell mendicants he doesn’t carry his wallet, or for Mr. Brown to deliberately leave his wallet at home so he doesn’t have to lie when he tells that to mendicants? We agreed that it was the latter, because it eliminates the possibility of repentance (cf. Guido da Montefeltro in Inferno XXVII).

Humility is not an algorithm; it is consistent for Christians to use algorithms that are not themselves humble in order to maximize their humility. And because humility is not an algorithm, it is not used to discern truth, and thus it cannot be a contradiction that the “lux” part of faith is so glorious. The centrality of the nonexistence of a Humility Criterion is paramount! Without it, “do the things that are humble” does not imply “believe the things that are humble.”

“Humble yourselves before the Lord, and He will lift you up.” -James 4:10

-TEVM