TANSTAAFL!

On the varieties of free lunch worth having

You’re not paying for it, but is it free?

There ain’t no such thing as a free lunch! Or at least that’s what this acronym stands for. The pithy but unpronounceable term is a favorite rhetorical cudgel of libertarians, who are convinced that there is a deep truth in it: a core economic principle that (conveniently) justifies anti-social behavior and uncontrolled greed.

Libertarian SF writer Robert Heinlein popularized it in his novel “The Moon Is a Harsh Mistress”, and libertarian economist Milton Friedman entitled one book “There’s No Such Thing as a Free Lunch”, which is more grammatical but less catchy. The Libertarian Party put it on their flag and made it their logo. And anytime anyone anywhere suggests helping people, libertarians can be counted on to repeat this mantra as if it were self-evident proof that morality is pointless and counterproductive.

The supposedly profound core is that even a meal you get to eat without paying for isn’t “truly” free because there’s a cost that someone, somewhere is paying, individually or even (gasp!) collectively. This is, depending on how you interpret it, either trivially true, significantly false, or completely irrelevant.

It is trivially true that even something that isn’t charged for is not free “all the way down”. Those “free” salty peanuts at the bar are more than paid for by the additional drinks that thirstier patrons buy. Here, the cost is displaced and concealed but remains.

It is significantly false, in that making something freely available can take advantage of economies of scale and create sufficient positive externalities to more than pay for itself. For example, the road system generates much more wealth for society than it costs to maintain, so not only don’t we bother charging people to cross the street, but these streets are better than free because they’re profitable. Charging money would make them less profitable by discouraging their use and creation.

It is completely irrelevant because, to the person eating it, the lunch is simply free. It might not be free in a way that satisfies a libertarian, but who cares? Libertarians are greedy idiots, so what they think doesn’t matter anyhow.

Idiocy is not rare. There’s a long history of deep thinkers mistakenly insisting that something is essential when it doesn’t even exist. They raise the bar impossibly high, then act as if it matters that nothing real can meet their arbitrarily high standard. The classic example of the failure of this essentialist approach is vitalism.

Like the rest, this one begins with people facing a question that, at the time, has no available answer and being unwilling to accept this with intellectual honesty. Not knowing what it is that makes an organism alive, some were not content with admitting they had no clue, but instead hung a lampshade on their ignorance by plastering over it with a meaningless label.

They declared that the source of life was some sort of “vital essence” (or if you’re French, or pretentious, “élan vital”). Like the God of the gaps argument, this spurious placeholder not only offers no additional explanatory value, but leads to artifacts and illusions because it creates a rift between the behavior and its cause. Saddled with this theoretical baggage, its proponents find themselves compelled to deny the legitimacy of what is observed.

With vitalism, the obvious artifact is that it becomes logically possible for an organism to act in all ways as though it’s alive while somehow lacking that “vital essence” that is (they insist) required for “true” life. In this way, vitalism allows for the existence of zombies.

Worse, it makes it impossible, even in principle, to ever distinguish someone from a zombie because no amount of “merely acting” alive is (according to them) sufficient. And if you believe that a vital essence is required for life, then the absence of any evidence for such a thing must mean that nobody is “truly” alive; we’re all zombies.

A similar, but more explicitly religious, error is the dogma that what makes us alive is a supernatural essence; the soul. The result of this move is that anyone denying the existence of souls will be accused of denying the existence of life itself. If not for souls, they insist, we’d just be “bags of chemicals”. It’s zombiism all over again, only with Jesus.

Parallel to these errors is the claim that what makes us conscious is yet another ineffable and utterly undetectable essence: qualia, which are defined as experiences (somehow) severed from behavior and behavioral dispositions. A consequence is that, according to this idea, it is now possible for someone to act in all ways conscious while somehow not being conscious: a mindless philosophical zombie.

It’s almost as if there’s a pattern here, and it’s a frustrating one. It’s hard to know what to say to someone who insists that there’s more to a behavior than everything about the behavior itself, and who defines it so that no evidence about the matter is even logically possible.

Earlier, I casually (yet entirely fairly) bashed libertarianism, but this term has an older meaning from philosophy. In contrast to political libertarians, who are just greedy anarchists and Nazis, metaphysical libertarians are convinced that what makes free will possible is yet another very special essence.

According to them, the only reason we can be held accountable for our actions is that they are (somehow) uncaused, making them ours and ours alone. In other words, they define free will as will that is free from causality itself, and conclude that we obviously have it. They attribute this seemingly magical ability to everything from Cartesian dualism and God to emergence and quantum mechanics. (In other words, their excuses span both the genres of fantasy and science fiction.)

Ironically, the opposite-but-equal hard determinists agree on the requirements but disagree on whether they’ve been met. They say, correctly, that acausality is physically impossible. They say, incorrectly, that this means we don’t have free will. I don’t blame them, though, because they were obviously predetermined to make that mistake.

Collectively, these two groups are called incompatibilists because of their shared belief that determinism is incompatible with free will. If you try to tell a metaphysical libertarian that it’s physically impossible for our actions to be uncaused, they’ll accuse you of denying that you have free will. If you say the same to a hard determinist, they’ll nod and say that this proves you don’t have free will. Tweedledee, meet Tweedledum.

The shared mistake is that they misdefine free will. First they ask the wrong question, then they disagree about the answer. But a broken question can’t be answered, only itself questioned and ultimately unasked. It generates a problem that cannot be solved, only dissolved.

Free will is, first and foremost, a type of will; wanting some things over others. We are capable of wanting because we form beliefs about how the world is and ought to be. These are formed as the consequence of interacting with external reality.

The world at large causes our will, so anything interfering with this, whether it’s supernatural or random, only undermines that will. If you want something for no reason at all, in what sense do you want it?

Causality is a requirement for any sort of will; acausality doesn’t make our will free, it destroys it. So why would we even want our will to be free from cause? Why would we make it a requirement?

Incompatibilists believe that, if our choices are determined, then we can’t be held accountable for them. As in: “Sure, I killed her, but I was predetermined to do so, therefore it’s not my fault, so you have to let me walk.”

This is bad, not only because the conclusion is both morally repugnant and obviously mistaken, but because we actually want to be able to be held accountable and to hold others accountable in turn. Otherwise, how can we cooperate? Without it, we cannot form a social contract.

Consider a more literal contract. In order to make a binding agreement, both parties must accept a responsibility to follow it and be held accountable for choosing not to. But this is only possible if entering into it was your choice in the first place; if you agreed to it of your own free will.

If you sign a contract with a gun pointed at your head, you didn’t really have a choice. Rather, you acted under duress—under the control of another’s will—so you can’t be bound to it. Likewise, if you genuinely agree but then violate it at gunpoint, that too was under duress and therefore not your fault.

We don’t have will that’s free from causality, because that’s physically impossible, but we don’t need it because it doesn’t matter. Will that is free from duress is the only sort of free will required for us to be held accountable and to hold each other accountable. Freedom from duress is not only sufficient for this, but unlike freedom from causality, it’s actually possible. In fact, it’s ubiquitous.

The conclusion that causality and free will are compatible is, predictably, called compatibilism. Much as those who deny the existence of various other essences—élan vital, souls, qualia—are accused of denying the existence of what the essence purportedly explains, compatibilists are accused of denying that free will exists.

This is true and false. Compatibilists do deny that acausal free will exists, but they also deny that it’s the sort of free will we need and have. They believe in free will, but not Free Will. When they make this clear, they’re accused of equivocating, of lying about what free will is, but the issue is deeper than definitions.

While compatibilists and incompatibilists disagree about the meaning of “free will”, this isn’t a purely semantic argument. It’s not just about which sort of freedom is meant, but which matters: which one we ought to mean.

Therefore, compatibilism isn’t distinguished by its definition of free will, but by its endorsement of this sort of free will as ethically sufficient. All three stances agree that freedom from duress exists, but compatibilists are the only ones saying it is worthy of fulfilling the ethical role of free will.

Why do incompatibilists disagree? I suspect it’s because they are in the thrall of an ontological error. Attributes like will, and beliefs, and thoughts, and so on, are mental. They are only defined in terms of minds, hence only visible at the level of the intentional stance. Physical causes therefore cannot undermine the will, but are instead required for it.

It’s something of a subtle point, but minds are not caused by bodies; they supervene. The mind exists in terms of the body, much as software exists in terms of hardware, as a pattern in it, an abstraction. But hardware doesn’t cause software; software is hardware seen from a distance, much as words are formed from letters but not caused by them.

(Naturally, yet another form of essentialism, Platonic idealism, insists that these abstractions are more fundamental than what they abstract from. Essentialism is the gift that keeps on giving: like herpes.)

This confusion about the relationship between the physical and the mental is what drives incompatibilism. It creates a category error, akin to trying to answer “Why did the chicken cross the road?” with “Because its molecules moved in that direction”.

Such an answer misses the point so badly that it’s not even wrong, and it’s a daunting task to even begin to unwrap the layers of false assumptions that underlie it in order to get through to someone who holds them. You have to break down their mistaken beliefs, replace them with better ones, and then show them how it all connects.

So when compatibilists say that we have free will (because it’s free from duress) and incompatibilists insist that it’s not free enough for their standards (which require freedom from causality), it’s much the same as a political libertarian bothering you as you eat your free lunch by insisting that it’s not free enough.

And I counsel the same reaction: just enjoy what’s free and ignore them. Their requirements are solely their own and therefore completely irrelevant. Also, use condiments, even if they tell you not to.

I, too, was (allegedly) a sexual harasser

Al Franken, Kirsten Gillibrand, and the politics of accusations and blowback

Note the shadow beneath the fingers; he’s not touching her.

This one’s personal. So there’ll be no food analogies segueing into the topic. I’ll just get right to it.

In the wake of Gillibrand’s departure, I’ve been arguing on Twitter about what happened to Al Franken, and one of the points I made is that the injustice of it only harms the #MeToo movement. Franken never got his day in court; he was pushed out before the investigation he himself had demanded could clear (or damn) him.

So, in the spirit of full disclosure, I’m going to reveal the one time I was accused of sexual harassment. This way, you can decide if my defense of Franken is just self-serving, personal bias from a creep. Or, at the very least, you can calibrate against whatever bias you detect.

To protect the guilty, I’m going to avoid sharing most of the details, but I’ll otherwise do my best to be accurate. I’m also going to stick to gender-neutral terms, quite intentionally. It’ll be interesting to see what assumptions readers make.

So, at some indefinite but fairly distant point in the past, I joined an unspecified large company and encountered a co-worker whom I found attractive. They didn’t work in the same part of the company or in the same role, but I did run into them from time to time. And when I did, I was distracted.

There was evidence that the attraction was mutual, but I was still recovering from a relationship that had ended badly and wasn’t really in the market. Besides, dating a coworker seemed like a bad idea at the time. In fact, it turned out to be. But my common sense had to contend with more basic urges, and it was a losing battle.

About six months later, you could cut the sexual tension with a waterjet. It was so noticeable that co-workers were openly commenting about it. The gist of the peanut gallery’s good-natured but nosey remarks was that, if we weren’t already a couple, we should be. We should just get it over with, or get a room already. That room turned out to be an elevator.

At the end of a workday, we wound up alone together in an elevator heading for the streets, and it was awkward. In the middle of my desperate attempt at small talk, they interrupted to question me about why we weren’t dating yet. I didn’t have an answer to that, so I suggested that we should have dinner together. I remember that we were both pretty happy about this at the time. We were relieved, after all that will-they/won’t-they tension.

Later that week came the Friday of our first date. I was surprised and wary when my manager sternly called me into his office, and even more so when I noticed that he had a witness in there with him. I could tell that this wasn’t going to be a casual chat.

He didn’t waste any time: he told me flatly that I had been accused of the sexual harassment of a co-worker and that this was a very serious matter. I honestly wasn’t sure what to make of it all. My first thought was that maybe my date had second thoughts or something, but I’d just seen them in the hallway and they seemed enthusiastic about our plans for the evening.

So I asked my boss whom I was accused of harassing. He wouldn’t say. I then asked who had accused me. Same response. More annoyed than flustered, I pointed out that, if I didn’t know what this was about, then I couldn’t say anything, either. That stumped him, so the meeting ended. However, my relationship with my manager took a big hit.

That night, over Chinese food, I told my date this story and they laughed about it, as confused by the whole thing as I was. That dinner went well, and over the course of the next month or so, we went out a few more times before we broke it off, amicably and by mutual agreement. We just didn’t have that much in common, despite that attraction. And after we’d given in to it, we found that there was no basis for anything deeper.

Still, whenever we bumped into each other in the office, it became clear that the sexual tension had not gone away, and had perhaps gotten worse because we knew what we were missing. We tried to keep things professional, but there were still many awkward moments. We even relapsed briefly, kissing in the elevator, before immediately coming to our senses. After that, we did our best never to be alone together, with mixed results.

We hadn’t told our co-workers that we’d been dating, so we didn’t have to tell them once we stopped. As a result, they continued the comments about how we’d make a cute couple and all that. I wanted to just say to them that, no, we weren’t really compatible. But I hadn’t forgotten that bizarre yet scary sexual harassment accusation, so I kept my mouth shut.

To this day, I have to wonder if the relationship would have gone better if we hadn’t had to keep it on the down low. Probably not, but still, I have to wonder. It doesn’t matter. By that point, I had accumulated a variety of good reasons to leave this job, besides my misadventures in dating, so I started looking. It didn’t take long for me to find a place that would let me make a fresh start. When things are uncomfortable, leaving is the natural reaction.

Before I left, I did find out more about that sexual harassment claim, through a backchannel. It turns out that, as the timing suggested, it had actually been about my date. However, the accusation came from a third party; a co-worker who had hit on them and been rebuffed. Presumably, they saw me as a rival and went after me out of some sort of jealousy.

The bottom line is that I was falsely accused of sexual harassment, so I naturally have some sympathy for others who face such allegations. In my case, it didn’t really amount to anything, but I didn’t know that at the time. All I knew was that I faced a faceless claim against me and had no way to defend myself. About half of Franken’s accusers were likewise anonymous and the one who started the whole thing had questionable motives, much as my rival did.

I also knew my job was on the line, and the fact that my manager didn’t have my back was further motivation not to bother sticking around. That’s why I don’t blame Franken for resigning under pressure when his own party threw him under the bus. It’s not a sign of guilt, but of despair; of wanting to get away from a situation that’s unpleasant and uncomfortable, when those you count on to protect you from unfair treatment are not on your side.

Some people might read this heavily-censored autobiographical account and take home the idea that I’m only defending Franken because I was falsely accused myself. Others, I hope, will consider that my experiences have made me more sensitive to how it feels to be on the receiving end of such an accusation, and more sympathetic to someone who gives up when they lose faith in their colleagues.

My sympathy is not one-sided, because I’ve been on the other end of things. I was sexually harassed earlier in my career, by my own manager. It was a quid-pro-quo request in order to keep my job, and I chose to leave, but didn’t bother reporting it.

When I talk about the problem of false accusations, what bothers me most is that, because there is so much stigma and risk around accusing someone of sexual harassment or worse, most claims made publicly and without the shield of anonymity are true. As a result, every visible instance of a false claim is used to undermine the legitimate ones that vastly outnumber them. I don’t want my defense of one particular person to be abused into a defense of the guilty.

This is what I meant when I said that the Franken debacle harms #MeToo. There is a culture of exaggerating the risk of false claims so as to undermine victims, and what feeds this narrative are the rare exceptions: the illegitimate accusations that get disproportionate publicity precisely because of their rareness. “Man bites dog” is newsworthy, “dog bites man” is not, so you’d think from reading the papers that dogs fear men biting them and not the other way around.

The only way to undermine this attempt at intimidation is to starve it of support. Yes, #MeToo taught us to #BelieveWomen, but this has to mean that we take their claims seriously and investigate them neutrally, not that we rush to judgment in either direction. False accusations hurt real people, not just the falsely accused but the victims who aren’t believed because there’ve been a few well-publicized false accusations. So we need to trust but verify, not trust blindly.

Some accusations are malicious, others stem from some level of misunderstanding, but the overwhelming majority are legitimate. These legitimate accusations are the ones we need to protect by blocking the illegitimate and mistaken. Moreover, as Pence shows us, a world where women are seen as an occupational risk is not good for women. Excessive zeal to punish the guilty creates harmful blowback that hurts the innocent.

Bringing home the bacon

On the use of force and the utility of impeachment; intentions vs. consequences

Did you just say “down, boy” to me?

We are currently in the throes of a food fad based on adding bacon to everything, but while we love the sweet, salty flavor, we don’t think much about where the meat came from. And when we do think of pigs, we imagine the tame farm animals; all pink, rotund, and cute. But these were bred from a much scarier creature.

The wild boar is a large, powerful beast, more than capable of goring a human to death, and more than willing to do so when enraged; it is easily enraged. Boars have a thick, protective hide, dense bones, and lots of muscle, and once they get angry, they do not quit.

Traditionally, wild boar was hunted with long spears from horseback, the better to keep those deadly tusks far away from human flesh. One characteristic feature typically found on boar spears is their crossguard, whose job is to keep the impaled boar from pushing itself up the shaft to attack the person holding it. Think about that for a moment.

As you might imagine, the wild boar is not an animal you can control through pain. And yet the strategy of pain compliance is all too often taught for defending against other humans. In particular, it has become a mainstay of “women’s self-defense”.

Most of us have seen carefully-staged videos of tiny women stomping on a large, male assailant’s instep or bonking him on the nose or twisting his wrist, causing such agony that the man gives up. It works in the videos so it must be true, amirite?

In reality, this approach is not necessarily a good idea. When the assailant is timid and unsure, expecting no resistance and perhaps unaware of the line they’ve crossed, a bit of pain might actually dissuade them, like shouting “no” but harder to ignore. But a motivated opponent, especially one who is already riled up and running on adrenaline, one unwilling to take no for an answer, may barely register the pain and is likely to react by escalating further.

Like the wild boar, pain just makes him angry and more violent, pushing him past the point of no return. And given that the victim is fighting off someone bigger and stronger, this could end badly.

Am I suggesting that she just take it? No, not at all. Resistance isn’t futile, but it has to be based on damage, not just pain. Twisting a wrist is one thing, breaking it is another. The defense strategy that works is to take away their ability to harm, not count on psychological discouragement. To put it another way, taunting the boar is suicidal, but shooting it dead works.

Which brings us to impeachment. If we could cause Trump damage, not just pain, with impeachment, we should. So if we could follow up that impeachment in the House with conviction in the Senate, expelling him from office and exposing him to arrest for his various felonies, it would be worth doing. This remains the case even if it means incidentally providing fodder for the right-wing persecution complex.

But we can’t. The corrupt, traitorous Republicans control the Senate, and they wouldn’t convict Trump even if he confessed to the entire nation. We cannot harm Trump with this, only cause pain. And while I don’t have any hesitancy about making Trump’s life less pleasant, this is as counterproductive as smacking a boar’s snout.

If the House attempts to impeach him and either fails outright (currently likely, given the lack of support among even Democratic Representatives) or succeeds only to be blocked by the Senate, how will this damage Trump? It’ll cause him some pain, but the fascists in America are already enraged past the point of being discouraged by pain.

Instead, they will be encouraged by our show of weakness. We’ll have taken our best shot to no effect. They will see that they have nothing to fear from us, so they’ll rush to the polls, feverishly excited to re-up the fascist-in-chief’s tour of duty and hog wild about crushing “libtard snowflakes”. Meanwhile, dejected, fickle liberals will stay home and cry like sore losers, while the populist left makes a feast of Democratic misery in the primaries, further weakening the DNC and aiding Trump.

I’m sorry to say that impeachment was never the answer. Like an election, impeachment is a political process, not a judicial one. It represents the will of the people, but only a minority of citizens support it. Not only is impeachment unpopular, but it’s becoming more unpopular; support dropped 12 points among Democrats between January and July of 2019, even as Trump’s approval rating plummeted.

It’s fine to cause Trump pain through public hearings about his crimes, but the goal has to be to motivate the left, discourage the right, and appeal to the middle. That’s how we won in 2018. It’s how we will win in 2020, and when we win, Trump loses more than his job. He’ll move from the White House to the courthouse to the big house to the graveyard of history, where he belongs. It’ll be “That’s all, folks” for him.

History does not give consolation prizes for good intentions; only consequences matter. We might think we’re doing the right thing, but if the results aren’t right, then we were wrong. The moral high ground is already ours; we don’t need to do anything just to retain it. What we need is to use it to remove the party of white supremacy from power. Nothing short of that—no symbolic victory or good intentions—will do. We need to bring home the bacon, not just rile up the boar.

Foiled again

What separates conspiracy theory from conspiracy fact?

I wear the hat; it does not wear me.

Tin foil is a lie!

We use aluminum for foil these days, not tin, because it is cheaper and stronger, but we persist in calling it tin. So tin foil is inaccurately named, and everybody knows it, but I’m not proposing a swift, orderly switch to the correct name, because this isn’t some sort of conspiracy, just imprecise language.

Whatever we call the foil, it’s pretty useful. I like to line pans and cookie sheets with it so that I don’t have to scrub them, but it’s also good for covering the thin parts of large pieces of meat to prevent burning, and of course, for storage. One thing I don’t do with it is wrap it around my head and wear it as a hat, because I’m no conspiracy theorist.

We laugh at conspiracy theorists, and we are right to do so. Whether it’s the nuts who claim we faked the moon landing or the loons who say the government is controlling our minds with fluoridation or chemtrails or microwaves, they are fools to believe as they do, and doubly so for thinking us fools for disbelieving. Perhaps the worst theories are the ones that are fundamentally political and often blatantly racist: consider such antisemitic favorites as the Protocols of the Elders of Zion, the blood libel, and Holocaust denialism.

At heart, conspiracy theories posit simple-sounding, emotionally-satisfying explanations for why specific things are bad. As a result—instead of having to deal with a cause that is abstract, speculative, and statistical—believers have a villain to hate.

The psychological rewards are obvious: if there’s a bad guy, then they’re the good guy. If there’s a secret plot that is hidden from all eyes, then they’re special for seeing right through it and being in the know. And if there’s something horrible that they really want to do to other people (see above), there’s a justification so overwhelming that it is (ahem) hard to believe.

Conspiracy theorists believe as they do because they want to, not because they have to. The evidence didn’t force them to accept the conclusion; the conclusion was accepted regardless of or even despite the evidence because it was desirable in itself and for what it brings. Sometimes, they posit these theories to explain away inconvenient truths that they cannot accept. And often, those who create and spread these lies do so on a knowing, self-serving basis.

What gives it away is just how unwilling they are to consider that they might be wrong. They believe (or say they do) because they want it to be true, not because it is. They implicitly recognize this, which is why they overreact to criticism by doubling down (“the more you try to dissuade me, the more convinced I become”), circular reasoning (“the fact that you’re denying it is proof that it must be true”), and paranoia used to reject expertise (“trust no one”).

But not every theory about conspiracies is a conspiracy theory in the normal sense, because there are two necessary elements. The first element of a proper conspiracy theory is that it’s about an action, often an ongoing one, that requires the long-term cooperation of many people who are working in concert to achieve their goals.

This part is actually easy; it’s literally the whole point of a political party or a corporation or a glee club (which is why we should never trust any of them unconditionally, especially not glee clubs). People “conspire”, in this limited sense, all the time, often quite successfully. The second element, which turns out to be the tricky part, is that the conspiracy has to effectively remain secret. After all, it’s not much of a conspiracy if everyone knows. Or is it?

What makes conspiracies implausible, even ridiculous, is that the more people they supposedly involve and the broader the supposed actions are and the longer they supposedly go on, the less likely it is for them to keep it all secret. With so many people, it’s only a matter of time before one of them spills the beans, or screws the pooch and is noticed.

Sure, you can try to explain this away by positing secondary conspiracies to silence, discredit, and even kill those who tell the truth about the primary one, but it quickly stretches all credulity. Two can keep a secret, if one is dead. True secrecy therefore requires a murder spree.

Consider one theory about a truly depraved conspiracy. Imagine if a prominent individual, such as a slimy, Jewish Wall Street billionaire who owns a gossip magazine, were to make a habit of hiring girls—and I do mean “girls”, as many were in their early teens—to “massage” him and perform various sex acts, sometimes by forcing them physically.

Further, imagine if he had “lent” these girls out to famous, powerful people to generate blackmail material and ensure that he was owed favors so he was able to continue enjoying his child sex ring unbothered by law enforcement. Imagine if this involved over 75 victims and went on for over 6 years. Imagine if this remained an open secret; known by many but not acknowledged, much less acted upon appropriately.

Preposterous! Except that it happened and you probably know all about it.

OK, fine, it happened, but it’s not a proper conspiracy theory because he was unable to keep it up indefinitely. He was, however, able to keep it under wraps for a long time, and then almost entirely avoid the consequences of his crimes. He got “the deal of a lifetime”, and pretty much walked away scot-free.

This travesty of justice has since received increased scrutiny, and now he’s under arrest again, so perhaps the arm of the law is long enough that even he can’t escape it, but if so, then the wheels of justice have turned exceedingly slowly, perhaps too slowly. He’s 66 years old right now, and still filthy rich, so he just might be able to drag this out until he dies. If not, he’ll die in jail, which would be just.

The lesson here is that the sort of thing that would be easy to dismiss as a conspiracy theory can actually happen in real life, it just can’t be kept secret forever. It may, however, be possible for the guilty to get away with it for quite a while. The dirty deed would not be a secret, but it also would not be broadly accepted as factual, much less result in intervention and punishment.

Now consider another theory about a conspiracy, this time with even bigger stakes. Imagine if a corrupt foreign government were to use hacking, social networks, and sexy spies to compromise powerful political organizations and even a major party so as to ensure that their asset becomes the American president. This is wild shit, straight out of “The Manchurian Candidate”, and yet the claim came from former President Jimmy Carter and is supported by the conclusions of 16 intelligence agencies.

It turns out that it’s entirely possible to do this sort of thing, at least if you’re Vladimir Putin and Donald Trump, and to get away with it for years, though not to keep it secret. It’s been in plain sight since the primaries, but there is a gap between the truth being apparent to anyone paying attention and it being incontrovertible to the point where it cannot be ignored, even by those who would prefer to.

So far, nothing much has happened to Trump, and he may yet get re-elected instead of impeached. He may get away with it, even though his presidency is entirely illegitimate and he is a corrupt, traitorous pawn of Russia. He only has one term left, and he’s 73 and in poor health. The grave may get him before justice ever does.

There is precedent for this. Consider that Nixon was not just guilty of covering up the break-in at the DNC HQ in the Watergate complex, but was variously corrupt and criminal, yet it took years for any of it to catch up with him. Even then, most of his violations were ignored, and he dodged the bullet by resigning before he could be impeached and then accepting a pardon. He never even faced criminal charges.

So, where does this leave us? Well, Carter has pointed out that the American emperor wears no clothes. Mueller did, too, albeit in drier terms and at greater length. The wheels of justice are turning, however slowly, and we can only hope that they grind exceedingly fine. Even if we never stop Trump, perhaps we can purge the Russian taint from the American right wing and block the political aspirations of the next generation of fascists, including Trump’s own children.

In the meantime, anyone who mentions the plain fact that Trump is a traitor and the fake president can expect to be dismissed as a conspiracy theorist. With so much evidence, though, you’d have to wear a tin foil hat and pull it down over your eyes to deny the plain truth about the man in the Oval Office. The real conspiracy theory is the idea that Trump is the legitimate POTUS.

Salad forks on the center left?

Formal dinners aren’t the only time when left, right, and middle matter.

Words matter; they have meaning.

I’ve ranted before about the way terminology can change over time and leave people confused or misled. Now, I want to focus primarily on the left-to-right political axis and how it relates to the current incarnations of the political parties.

Probably the most abused term these days is centrist. While it has a legitimate meaning, it’s almost exclusively used instead as a slur by the far left against anyone who’s not far enough to the left to satisfy their pathological need for ideological purity.

First, the legitimate meaning. A centrist is someone who’s neither left- nor right-wing on the whole. They might have views that are fairly neutral or weakly held, or a spread of positions scattered on both sides of the line, or maybe they’re rocks and don’t have opinions at all. You never know with those inscrutable centrists and their bizarre neutrality.

None of this accurately characterizes modern American liberals, as they are left of center. So when the extreme left calls liberals “centrists”, this is hyperbole. More bluntly, it’s a lie that verges on false equivalence. They’re basically admitting that they’re so far to the left that they lump everyone else together.

Liberals are also correctly described as being center left, which just means that they’re moderate. They’re to the left of center, but not all the way to the left. To put it another way, they’re the part of the left that’s nearer to the center than to the extreme. But while liberals are center-adjacent, they’re not centrists.

So where do we draw the borders? Well, liberalism has room in its big tent for anyone left of center, so it’s pressed up against the center on one side. On the other, it can go as far to the left as it likes, just so long as it doesn’t cross the line into radicalism. What specifically defines radicalism is the refusal to cooperate with those who think differently.

Because a liberal is moderate, they’re willing to team up with almost anyone, at least on issues where they find common cause, and to the extent that they do. So while a liberal might not agree with someone who’s centrist or far left or moderate right on most matters, they’re usually willing to at least try to work with them where they do agree, to find a compromise, when one is possible, so as to get things done.

It is this ability to cross the center that allows democratic government to work. So long as there are moderates in power on both sides of the center, there is bipartisanship, so things keep running smoothly. Without overlap, there is just partisanship, hence either gridlock or winner-takes-all radicalism. Moderates can work together because they respect competence and pragmatism, whereas extremists do not.

So what defines the left border of liberalism is not how far to the left they go, but how unwilling they become to cooperate with anyone else. Liberalism ends where ideological puritanism begins. One consequence is that some liberals are considerably to the left of extreme leftists on key issues.

While there are no issues where liberals are far to the left, they can be more consistently to the left across all issues than the extremists because the extremists’ populism leads them to pick and choose what matters to them. Extremists either care too much or not at all, with nothing in between. And when they do care, they take an all-or-nothing, no-compromise approach.

An example of this would be civil rights, which liberals are deeply committed to but the far left disparages as mere “identity politics”. This is not just a theoretical divide but a practical one. The far left is apathetic about a woman’s right to choose, and doesn’t want to help refugees and other immigrants. It’s also lacking in commitment to gun control or protecting minorities against police violence.

Zooming out, all liberals are leftists, but not all leftists are liberals. Pretty simple, but what really confuses things is the term “progressive”. Skipping over its history, progressive, as an adjective, means left-leaning. We can talk about whether single-payer is more progressive (further to the left) than ObamaCare is. This is very similar to using liberal as an adjective.

As a noun, its meaning is ambiguous and is still shifting. Back in the days of Reagan, it became another label for a liberal but, lately, it has come to refer to just the extreme left, not liberals. So, for example, while Obama’s policies were progressive, Obama is not a progressive; he’s a liberal. Moreover, while the people who identify as “progressives” today are far left, they’re also populists.

Briefly, populism is a style of politics which is based on the conceit that the numerical minority it represents consists of the citizens who truly matter; the real people. Its leaders likewise claim to be independent-minded outsiders who are “authentic” and will lead the good guys (it’s always guys) to victory over the “establishment” elite, which consists of everyone who’s not one of them. Those people are seen as the enemy, and therefore inherently “inauthentic” and “corrupt”.

Populism is often right-wing, such as with Trump or various European fascists, like Le Pen. It can also be left-wing, such as Sanders or the so-called Justice Democrats. Despite being on opposite extremes in one dimension, they share a great deal in common, not only in style but substance.

Among these elements are demagoguery, nationalism, and pandering. Put simply, populists promise the impossible, which is why their positions are so extreme. They also demonize everyone who’s not as extreme, which is what creates the insatiable demand for ideological purity.

What’s fascinating is that populism is a second political dimension, allowing the extremes of left and right to come together to form a horseshoe.

Twisted in the populist dimension

The far left and far right are united by populism against their common enemy, which they call “centrists” but really mean everyone who’s anywhere near the center. In other words, all extremists hate all moderates. They hate them even more than they do the opposite extreme.

When populism moves people away from the center, it also shifts them from conventional to radical views. On the far left, this means Marxism, which includes democratic socialism and outright Communism. On the far right, this means Fascism, which includes Christofascism and neo-Nazism.

Switching away from populism and back to the left/right spectrum, I only have a little bit more to say because there’s just not much left of the conservatives. These people are (were?) center-right, which is to say moderately to the right. They often opposed progress and were casually bigoted, but they weren’t monsters.

Many conservative politicians, like Eisenhower, were competent, honorable, and had positive accomplishments. Moreover, it was possible to work with them productively, and while they did drag us down, they weren’t an anchor to sink us. They even served the useful purpose of keeping the radical left in check and being a buffer against the radical right.

I miss them, but they’ve lost power and faded away. Their last gasp came when the Tea Party movement took over the RNC and left them without political representation. Some of the more moderate ones occasionally vote for Democrats, but while we welcome them, they’re just not ever going to be comfortable with our liberalism.

The left faces the same risk, as a constellation of far left people and organizations, such as the Justice Democrats, Our Revolution, Bernie Sanders, The Young Turks, Jacobin, and The Intercept, are all working to do to the DNC what the Teabaggers did to the RNC. They are the Herbal Tea Party, and the herb is toxic populism.

Hopefully, this rant will help you set the political table in an orderly fashion. Remember, the spectrum starts on the far right with fascists, moves in to the moderate right with conservatives, crosses the center to the moderate left with liberals, and then goes back off the deep end on the far left with Marxists.

Addendum:

A favorite talking point used by both the American far left and some Europeans is that American liberals would be center-right in Europe, and that America therefore has no true left. It’s really hard to take this seriously.

Foreigners have great difficulty mapping political stances across national divides because only issues that are controversial within a given country serve as useful measures of political orientation there. So, for example, the NHS has broad support in the UK, across parties, whereas support for M4A in the US is strongly correlated with party affiliation. In this case, Europe leans more to the left, but there are other issues, such as immigration, where Europe leans more to the right.

What further complicates such cross-cultural comparisons are differences in political systems, where parliamentary governments allow for small factions to be considered full political parties. In such systems, you can vote for a fringe party which then joins a coalition, whereas such a vote in America is wasted as an empty protest. As a result, the major parties represent the compromise that the coalition has settled on, and the most extreme views are intentionally lost in the shuffle.

Of course, when the American far left claims that there is no left wing in America, this is telling on themselves. They’re bragging about their ideological puritanism, admitting that they’re edgelords who refuse to recognize any distinctions among those who fall short of their impossible standards. They’re basically claiming that everyone to their right is right-wing and that only card-carrying Marxists should count as left-wing.

This sort of both-siderism erases the distinction between the moderate left and the far right, which is absurd.

Taste-testing for quality control and identity

Does this taste like identity politics to you?

A classic TV commercial depicts diners at a fine restaurant being informed that the coffee they just had with their expensive meal was really just freeze-dried Folgers from the supermarket. Naturally, they’re surprised that it wasn’t the fresh-brewed, gourmet drink they thought they were getting.

This says a lot about how people allow their expectations to undermine their objectivity, as well as raising the question of whether the identity of the product matters as much as its quality. Does it matter if it’s Folgers if it’s good? Is choosing a premium brand important or are generic and off-brand products acceptable? These questions of identity affect not only food, but also politics.

What qualifies something as “identity politics”, anyhow? Officially, it’s defined as the sort of politics where “groups of people having a particular racial, religious, ethnic, social, or cultural identity tend to promote their own specific interests or concerns without regard to the interests or concerns of any larger political group”. (Emphasis mine.)

In other words, it’s explicitly partisan; identity politics is intended to help one group above others, as opposed to promoting equality. Of course, when a group has been pushed below others, attempts at equality can look partisan, especially when viewed from above. The shrinking of an unfair gap can appear to be bias when it’s your advantage that’s doing the shrinking.

The way to tell whether it’s about equality or partisanship is not to focus on the spin or rhetorical style. Instead, we have to consider whether their proposals show a disregard for others, as opposed to seeking to help everyone.

So, for example, BLM doesn’t suggest that cops should be free to shoot anyone they want, just so long as they’re not black. Instead, their proposals seek to prevent all unjustified shootings, with the focus on black people explained by the disproportionate impact. That’s not partisanship, despite any appearances.

In the other direction, being conspicuously neutral (i.e. color-blind) about forms of bias that don’t happen to affect you, or admitting that the bias is real but claiming it will somehow automagically go away when your own, more general problems are fixed is an indication of partisanship in disguise.

To be clear, by overwhelming numbers, the most common form of actual identity politics in America is white supremacy. Strangely, it’s often not considered identity politics because it’s taken for granted.

Logically, white supremacy is a great example of identity politics. Practically, the term has come to be used selectively as an insult; a slur by the more-than-equal to denigrate anyone who promotes the equality of the less-than-equal. There’s a saying about fish not having a word for water because it’s all around them: that’s how white supremacy is. It’s ubiquitous.

Accusing a minority of identity politics is a dog whistle, like saying that they’re uppity (because they want equality) or well-spoken (for what they are) or don’t know their place (which is the bottom). When you hear that whistle, look for white supremacy and you’re likely to find it. Coincidentally, Bernie Sanders has used the accusation of identity politics for years to smear his opponents, while supporting conspicuous color blindness that is quietly but distinctly white supremacist.

When Sanders harangues that it’s not enough for someone to say “I’m a woman! Vote for me!”, he is implying that the only reason to vote for her is because she’s a woman. Of course, when he aimed this attack at Clinton, it was insulting and laughable—she was the most qualified candidate we’ve seen in decades—but that didn’t stop him. For that matter, it didn’t stop him when he aimed it at Obama and tried to have him primaried.

Lately, Bernie Sanders and his surrogates have suggested that all the excitement about various minority candidates—women, black people, Hispanics, homosexuals—is due to “identity politics”; due to partisanship. This is, again, an insulting lie. Candidates such as Kamala Harris are at least as qualified as Sanders is and there is no shortage of good reasons to pick them instead of him.

When pushed, Sanders supporters like to redefine “identity politics” by shrinking it down to exactly match the specific way Sanders uses it. Instead of referring to partisanship as a whole, they say it’s literally only about voting for someone solely due to a shared identity. This is still dishonest and insulting, but it does raise an interesting question: Is it necessarily partisanship to allow someone’s identity to influence your vote?

I don’t think so. Assuming we’re talking about choosing candidates whose qualifications are comparable, there are legitimate reasons to prefer the minority. I’ll focus on two: signals and representation. And then I’ll discuss some anti-patterns.

When two candidates for a job look about the same on paper but one is a minority, the latter is statistically likely to be better because minorities are systematically undervalued; they have to work twice as hard to get half as much. Minorities are assigned lower grades, lower interview results, and lower performance scores than they deserve, due to implicit bias. As such, minority status among high achievers is an additional signal of quality, not some sort of noise to be filtered out. It shows that they’re even better than they might appear, because they had to overcome a societal handicap.

The other reason is representation. People are fundamentally equal, so when we see unequal outcomes, this has to be explained somehow. And, in the absence of a better reason, the default one is bias. When the demographics of a field don’t match those of the general populace, unless there are other factors demonstrably at play, it means they’re being unfairly selected. Therefore, intentionally choosing an equivalent candidate who differs only in being a minority is a reasonable way to make up for that by making the field more representative.

When doing this, the preference among minorities should not be towards your own, if any, but whatever is statistically justified. This is one of the reasons I somewhat favored Clinton over Obama in 2008: women are a larger “minority”, so large that they’re not even a numerical minority in the population at large. They’re a minority in the sense of being less than equal, which is why they’re a numerical minority in prestige fields such as politics.

Minority candidates can be better just because they’re minorities. They are likely to be more directly aware of and personally motivated by the issues that disproportionately, or even uniquely, face them. For example, when you see a room full of white men explaining why women shouldn’t have bodily autonomy, it’s hard not to think that the absence of women is relevant.

A related notion is that of role models. When people do not see themselves as being represented in our leadership, it has a chilling effect. Somewhat rightfully, they feel that this shows that it’s not their government and they’re not seen as important. This discourages political activism; especially voting, but also other forms of participation, including running for office. Every minority in power is therefore a role model for equality, encouraging and legitimizing buy-in. This is a huge boost for democracy.

Can this go wrong? Sure. I’ve been critical of the notion that you have to be a member of a group in order to care about it or that those who fight for equality should be relegated to the inherently-inferior status of ally if they’re not part of your group. In particular, I argue that, by virtue of not being members, they have a strong, built-in defense against accusations of partisan “identity politics”.

Another way it can go wrong is when the candidate is a traitor to their group, guilty of the same bigotry that the group suffers under. Consider Sarah Palin or Margaret Thatcher or Milo Yiannopoulos. Ironically, it’s not that unusual for the earliest examples of a member of a minority group openly entering a field to be one of those who are hostile towards their own identity; “self-hating”. After all, it is this very hostility that makes them more palatable and acceptable to the majority, which lets them get in.

Consider how a woman entering a field dominated by men might feel a pressure to show that she’s “one of the guys”, emphasizing her masculine traits and deemphasizing her feminine ones in order to be taken seriously. Another example would be a black doctor who keeps his hair closely trimmed and goes golfing. A third is the intentional use of respectability politics as a cover for denigrating others of their group and establishing themselves as “one of the good ones”. In all these cases, they’re overcompensating for their minority status by playing down their identity and throwing the rest of the group under the bus.

Of course, the most obvious way it can go wrong is when the candidate is underqualified or flatly unqualified, yet favored by members of their identity group. The example that comes to mind, both of this and the earlier problem of overcompensating, is Pete Buttigieg. While he doesn’t hide his homosexuality, he was closeted until very recently and is not really a member of the gay community in any social sense. Moreover, he identifies more strongly with being white than gay and wears his Christianity on his sleeve, hence his ongoing outreach to the bigoted “white working class”.

Pete is not the worst possible candidate, but he’s just not that impressive if you look at him objectively. His political experience is limited to being mayor of a small city, and his previous attempts to get traction at even the state level were unsuccessful. As suggested above, his political views lean away from liberalism and do not energize the base. If he were straight and weren’t a white male, he’d be ignored by the press.

Aside from being a white man, why is he getting so much publicity? Much of it is not despite being gay but because of it. I can’t help but notice that his candidacy has received undue attention from gay reporters and activists, such as Maddow and Takei. They’re so excited about finally getting some representation that they’re allowing themselves to be blinded to his faults and weaknesses.

This is unfortunate, because his attempt to appeal to white folks at the cost of throwing the Democratic base under the bus (which is like the back of the bus, only worse) will not work. No matter how hard he tries to blend in with the majority, the bigots will not vote for him. Not only is he gay, but his color-blind bias just can’t fire up the white supremacists the way Trump’s overt bigotry does.

On the other hand, Kamala Harris is clearly competent, and being a black woman (with Jamaican and Tamil ancestry) offers the non-white, non-male Democratic base the motivation and inspiration they need to overcome Republican voter suppression and work to get their votes counted. She is the positive side of so-called identity politics, whereas Sanders and Buttigieg are the negative.

Chickens and eggs

Abortion, viability, and rounding errors

The optional sunny-side-up stage in the life cycle of the chicken.

What came first, the chicken or the egg? Actually, that’s a stupid question: it’s the egg, of course. The egg is an early stage in the life cycle that, if all goes well, ends in a chicken. This fact is embodied in the admonition not to count your chickens until they hatch.

But note how this saying inadvertently promotes an egg to a chicken. You’re counting “chickens” that aren’t even chickens yet, and might never become chickens, which is why you shouldn’t be counting them. Effectively, it “rounds up” the egg to what it might one day become, and therein lies the problem.

This part really isn’t complicated: a thing is not (yet) what we expect it to become. It is potential, not actual. A seed is not a tree, even if it may one day be. A person is not a corpse, even though that’s really only a matter of time. If and when the time comes, fine, its status changes and we treat it differently. But not until then. Why jump the gun?

We don’t bury the living just because they’ll die someday. Yet this sort of confusion about the actual and potential status of things forms the basis of arguments against a woman’s right to choose. You can see this in the self-contradictory term, “unborn child”, which makes as much sense as “living corpse”.

Come back here, you living corpse, I’m here to bury you! Stop insisting on your rights as a person; I’m rounding you up to a cadaver!!!

The ethics of abortion are often framed in terms of personhood. If it’s a person, it has rights, so killing it is murder. But this quickly turns into a game of Pin the Tail on the Donkey with blind attempts at sticking a pin through the magic moment at which personhood is achieved. Spoiler alert: there is no such moment because there’s no such thing as magic. Real life is more complicated.

An ovum and a spermatozoon are individual cells, and I don’t think anyone mistakes either for a person. If things go well, however, they might join together to eventually become a newborn in about 40 weeks. Just as uncontroversially, no one seems to deny that this newborn should be treated as a person. So, somewhere between these two points in time, in this gray area, the potential person transitions into an actual one. That’s where the controversy is to be found.

Those who oppose female bodily autonomy justify it by prematurely promoting a potential person to an actual one. Many of them argue that life (by which they mean personhood; they don’t understand ethics) begins at conception (by which they mean fertilization, not implantation; they’re ignorant about medicine, too). This is muddled and entirely arbitrary, but it yields their desired conclusion, so they stick with it.

A more recent trend is to claim it starts with having a heartbeat, but since that’s about 5 weeks in, it’s usually before the woman even knows she’s pregnant, so it serves the same purpose. (Even then, it’s not an actual heartbeat, as there’s no heart yet, just a measurable electrical signal.) Either way, they want us to treat something which cannot survive on its own as a person.

This is relevant because, so long as the embryo or fetus is wholly dependent upon the pregnant woman, there is no way for us to grant it rights except by taking hers away. And while the personhood of a fetus is questionable, there’s no question about the woman being a person. It’s her body, her rights, her choice. If she chooses to give up some of those rights to transfer them to the fetus, that’s fine so long as it’s her choice and not ours.

A note on terminology. When a woman decides she will carry the pregnancy to term, it’s entirely fair to round her up to a mother and round the fetus (or, really, even embryo or zygote) up to a child or baby. There’s nothing offensive about that and doctors do it routinely. But if she hasn’t, then such rounding up is both dishonest and emotionally manipulative. It’s where you get bullshit phrases like “mothers murdering their babies” in reference to abortion.

It’s not murder because the fetus has not earned any rights on its own and the woman has not chosen to give it rights at her own expense. If she did, then killing it would indeed be murder. So if someone sticks a knife in a pregnant woman’s uterus and kills the fetus, that’s murder, but an abortion isn’t. By the same token, there is no contradiction between allowing abortion and opposing pregnant women doing things that would lead to an unhealthy newborn.

This all goes back to viability. I said before that there’s no magical point, and that’s because it’s gradual. Fetal viability is not a phase change, like ice melting into water. It’s more like tar slowly turning soft until it flows. There’s solid tar, liquid tar, and a whole range in between, where it’s sticky.

Under our current technology, no embryo is viable. At 9 weeks in, the embryo is considered a fetus, but there’s still no chance of surviving outside the womb. It’s not until about 22 weeks that there’s any chance at all, and it remains very low: about 5%. Even then, this is a measure of survival, not health. Pre-term babies suffer from serious issues, and these don’t all go away even if they live: long-term disabilities are common, and many of these are dire.

At around 24 weeks, viability increases dramatically and reaches about 50%. A couple of weeks later, viability is up past 90%, and the last few percentage points slowly come in as the 38th week approaches. This is also around the time that even a premature birth will still likely result in a healthy newborn. Childbirth is usually around 40 weeks in, though viability never does reach 100%.

So while there’s no magic point, there are three stripes which blur into each other. There is a clear black zone (up to 22 weeks), a gray zone (22 to 27), and then a white zone (27 to 38+). With modern medical technology available, we tend to round up from the halfway point, considering a 24-week fetus to be viable enough to deserve intervention, but even so, survival is no better than a coin flip.

When a fetus cannot survive on its own, aborting the pregnancy entails killing it. Once it can, there’s no such connection. Doctors could just induce labor or perform a C-section and hand the baby off to someone who actually wants it.

In practice, this is largely a non-issue because elective abortion of pregnancies past 26 weeks is nearly nonexistent. Women don’t request them and doctors won’t perform them. There are still a handful of abortions even this late, but they’re therapeutic, not elective. In other words, they’re for medical need, for desperate circumstances such as the fetus not being viable or the woman’s life being at risk.

Back to that newborn that we all agree is a person. Let’s be frank: it has not earned personhood through its own merits; even dogs are smarter. Its status is based on its potential, but it’s safe to round up because we don’t have to round anyone else down in the process.

Ultimately, the morality of abortion comes down to distinguishing the potential from the actual so that we don’t count our fetuses as babies unless we can do so without counting women as mere incubators. We put the actual rights of actual people above the potential rights of potential people. The alternative would be immoral.

Washing your hands and other food-safety tips

Memetic Hygiene, Contagious Hate, and Empathy


Not the infection you should be worrying about.

Legally, restaurants must provide three bathrooms: male, female, and employee. (Insert your own joke here about genderless worker drones.) Despite this, employees do use the customer bathrooms, so you’ve probably seen that small sign near the sink which reads: “Employees must wash hands before returning to work.”

There’s a bit of humor in the fact that only employees have to do this, but the topic of sanitation is not all that funny, especially if you’ve ever come down with food poisoning from a restaurant. Ask me how I know.

Still, while we all understand the need to prevent foodborne infection, it’s not the most dangerous kind. The most dangerous kind is mental. Contagious diseases of the mind—often, political diseases—are a far greater threat to our safety. I’ll explain.

Richard Dawkins coined the term meme by analogy to gene, as the unit of the transmission of ideas. The idea of wearing a baseball cap backwards is a meme that spreads mostly by observation and imitation. The idea of Christianity is a meme that spreads vertically by childhood indoctrination, horizontally by proselytization. The idea of a meme is itself a meme that spreads by books and by pedantic rants from online sandwich-makers and political pundits.

Just as a virus is a bundle of genes that spreads itself around, a bundle of memes can act as a mental virus. This cluster of memes—called a meme complex—can spread and become popular, not because it is true or even good for its hosts, but because it has attributes that make it good at spreading for its own sake or for the sake of non-believers who benefit from it.

This has been understood for some time now. Nearly two thousand years ago, Seneca wrote: “Religion is regarded by the common people as true, by the wise as false, and by the rulers as useful.” A belief can be completely false or even nonsensical, yet remain common because it serves the interests of those who don’t even hold it.

That is the chief insight of meme theory: something can be successful in the marketplace of ideas despite having no merit whatsoever. Even if it harms the host, or kills them—think of the Jonestown mass-suicide cult—it can still benefit itself by propagating faster than it dies out. Take that, sociological functionalism!

The virulent meme complex that has been the focus of much of my attention for a few years now is white supremacy, a constellation of self-serving bigotries against (obviously) those who cannot pass as white, but also women, gays, Muslims, Jews, Hispanics, and others who are deemed not entitled to be on the top rung of society. It is, and has long been, the dominant form of bigotry in America.

As with HIV infection, there is no broad, reliable cure for white supremacy, or even a vaccine against it, but there are effective treatments. I’d like to explore this analogy further.

With HIV, antiviral medications are used to prevent HIV-positive people from getting full-blown AIDS and also stop them from being contagious. The same drugs can be used for prophylaxis, which means HIV-negative people taking the medicines in advance so that they don’t become infected if exposed, protecting them much as immunization does. And, of course, there are barrier methods, such as condoms, dental dams, and gloves.

With white supremacy, the best we can do is the moral equivalent of antivirals; we can suppress the harm it causes and hinder its proliferation, so that it will diminish and perhaps eventually die out. Barrier methods play only a minor role here: we can lock up white supremacist terrorists, but we’re not monsters; we follow the Hippocratic oath’s admonition to “first, do no harm”. So we’re not going to take a page from their playbook by separating children from their families, much less running concentration camps.

But it does start with children, because they aren’t born infected, so we can protect them by effectively immunizing them through a comprehensive, honest education. Schools have to inculcate critical thinking skills and the scientific method so that students can resist the indoctrination that we can’t block. Rather than vaccinating against specific diseases, we are strengthening their immune system against all of them.

An important part of this education is an anthropological survey of the cultures of the world, exposing them to the variety of beliefs that exist so as to curb unthinking ethnocentrism and provincialism. Schools also need to be desegregated, have federal-level financing and curricula, and teach the whole truth about the history of colonialism, slavery, and Jim Crow, as opposed to the whitewashing myth of the Lost Cause of the Confederacy.

Even when we fail to prevent white supremacy from taking root, including in the adults who missed their chance, there is still much we can do. Without a cure, we can only treat and suppress: prevent their bigotry from being expressed through discriminatory action, biased social policy, and socially-acceptable hate speech. The goal is a societal version of herd immunity, where the infection is contained because enough people are resistant, even though not everyone is.
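To put a rough number on the borrowed epidemiology (my illustration, not a measurement of anything): in the simplest textbook models, an infection with basic reproduction number $R_0$ stops spreading once the immune fraction of the population exceeds the herd-immunity threshold

$$p_c = 1 - \frac{1}{R_0}$$

So a meme complex that each carrier would otherwise pass along to, say, four other people ($R_0 = 4$) is contained once roughly three quarters of the population is resistant to it. The analogy is loose, and nobody has measured an $R_0$ for bigotry, but it captures why we don’t need to deprogram every last bigot, only enough of the people around them.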

We can’t tell people whom to befriend, but we can and should criminalize discrimination in all but the most private matters. This means laws against bias in housing, jobs, schools, businesses, and so on. We can counter institutionalized discrimination and even counter its lingering historical effects through reparations.

In addition to laws, we can personally hold those who spread bigotry accountable for their hate speech by ensuring that they are shunned and perhaps even lose their jobs. To the extent that we can do so without undermining the necessity of free speech in a liberal democracy, we must work to deprive them of opportunities to proselytize. For example, when a business takes a stand in favor of bigotry, we should very pointedly spend our money elsewhere.

We need to understand that white supremacy isn’t merely an individual moral flaw, it’s a social disease. And like the smallpox blankets intentionally given to Native Americans in an early form of germ warfare to serve the interests of colonizers, the disease of hate is disseminated from above because it serves the interests of the very rich.

Bigotry separates poor whites from their natural allies: minorities who are impoverished by bias and lack of opportunity. It motivates whites to vote against their own interests by opposing progressive taxation and social programs that benefit everyone, because they may well benefit minorities more. Thanks to bigotry, they can be counted on to choose policies that harm themselves so long as they believe they harm minorities more. When they suffer, as they will, it is through their own malicious choice, but their suffering is nothing compared to the suffering they cause to those with less privilege.

There is much we can do, but none of it involves “empathizing” with bigots or otherwise coddling them. We know that the so-called “white working class” is not suffering from “economic anxiety“. Their anxiety is about losing some of their illegitimate lead over minorities. They’re not afraid of the increasing gap between rich and poor or the shrinkage of the middle class, they’re afraid of having to deal with an even playing field where being a mediocre white man might not be enough anymore to guarantee success.

Let’s be real: we’re not going to change minds and win hearts here. The way we stop the white supremacists is to politically crush them. We should therefore write them off entirely and not pander one bit towards them, even by omission. Instead of hoping to make our platform color-blind enough that perhaps some bigots will swing our way, we should focus on ensuring that all of our votes are counted. We cultural minorities hold a numerical majority, so we must turn it into a political majority by voting the bigots out of office.

Is this a purity test? Only if you think that opposition to white supremacy is an optional part of the liberal agenda, and I certainly don’t.

Mystery-meat wingtips, the other white meat, and herbal tea

When it comes to politics, the tips are not made of the same meat as the rest of the wing. The demographics of the supporters show that it’s some sort of white meat, but it’s not clear just what sort, hence the mystery.

In the traditional right/left spectrum in America, the people who lean to one side or the other are called conservatives and liberals, respectively. But some people don’t just lean, they fall over, and this makes them qualitatively different.

Despite being on opposite extremes in one sense, these radicals are united by a shared political style: populism. But they often hide behind misleading terminology that allows them to deflect criticism, generally by sounding like they’re not extremists. This rant is mostly about calling them what they are instead of allowing them to maintain their disguise.

The correct term for the extreme right—whether it’s the hard right, far right, or the trendy alt right—is not conservative. The literally correct term is fascist. Of course, this word has long applied to the lunatic fringe: the neo-Nazis, neo-Confederates, Birchers, and many libertarians.

For a long time, these people were taken for granted by the mainstream Republicans—after all, it’s not like they could vote for the other party—and pointedly excluded from public events because they were embarrassing nuts.

The Republicans would still feed them red meat in the form of dog whistles and tacit support for bigotry. But they maintained plausible deniability by pretending that their actions were in the service of high-minded, bland-sounding, abstract principles such as small government or individual responsibility or states’ rights.

Conveniently, the social programs they attacked so as to harm minorities were the very same ones that the oligarchy hated. In this way, poor and middle-class white people were tricked into supporting policies that helped only the rich. It was a con, and it worked.

That con is no longer necessary. Where it once would have been hyperbole to call the Republicans, as a whole, fascists, things have changed. There are still conservatives in the party, particularly among the voters, but the people in charge are overt fascists.

Trump, Miller, Bannon: not one of these is a conservative in any sense. They are not defined by their caution about radical changes or their adherence to tradition. They’re just goose-stepping fascist scum.

Now, particularly outside of America, but increasingly even inside, fascists are often referred to as nationalists and populists. Trump even bragged about his nationalism. This is accurate, but doesn’t tell the full story. The source of confusion is that both of these terms also fit the other side: the left-wing extremists.

In America, the far left refers to itself as progressive, which is misleading in many ways. The biggest problem is that the term is sometimes used by actual liberals, due to the history of Reagan turning the l-word into a slur. Since the rise of St. Bernard, socialism has also been embraced as a label, but it’s not necessarily socialism in any Marxist sense, except when it is.

It’s also misleading in that it omits their populism, which is what distinguishes them from liberals, even more so than their extremism. Whereas liberals are equally focused on social and economic justice, left-pops give lip service to the former but care only about the latter. They are also nationalists, although more isolationist than expansionist.

Populism is a style of politics that entails both rhetorical and policy commitments, and is overlaid on top of political extremism on both ends. The rhetoric defines supporters as the only legitimate representation of “the people”; the ones who actually matter. Invariably, these special people are primarily white and male and otherwise non-minority.

Populism demands radicalism, activism, and ideological purity, and has no respect for experience, objectivity, or competence. There is no room for progress, only immediate, revolutionary change, and it doesn’t matter that revolutions always kill people. Populism denigrates competent people as “the Establishment” and insists that, due to their willingness to compromise to get things done, they are inherently corrupt. This is ironic, as populism is, in practice, strongly associated with corruption.

While the populist right deserves to be called fascist, there is no equally handy term for the populist left. As I’ve written elsewhere, socialism is inherently ambiguous, and it has since become a boogeyman used by the Republican fascists as a cudgel against all Democrats, even the liberal base. But there is clearly a constellation of left-populist associations, including the Justice Democrats, Our Revolution, the Democratic Socialists of America, the Young Turks, and Bernie Sanders, and it needs a name.

Aside from the generic left-populist, the best term I’ve found is based on their parallel with the Tea Party Movement, which is the right-populist faction that took over the RNC. Taking over the DNC is the stated goal of the left-pops, which is why some of us call them the Herbal Tea Party.

But you don’t have to love or use that term, unless you want to. You do have to distinguish between conservatives and fascists, and between liberals and leftists. That’s because extremism is an entirely different beast, no matter which extreme.

Equality, equity; boxes of peanuts and crackerjacks

Equality vs. Equity. (Craig Froehle is an artiste.)

The base of the Democratic Party, the group that votes consistently and reliably for it, consists largely of women and people of color. One of the ways that some Democratic presidential candidates have been differentiating themselves while playing to that base is by backing reparations for African-American slavery.

This idea not only takes the moral high ground, it is a genuinely liberal goal that outflanks Bernie Sanders from the left while showing how his color-blind approach to (primarily economic) equality does not serve the base. Politically, it’s a bold move because it’s very much one of those broad, sweeping agendas that are ambiguous yet capable of alienating. In its current form, we can expect it to turn off white people broadly, even ones who genuinely oppose racism.

One answer to the ambiguity and skepticism comes from Marcus H. Johnson, a Twitterati celeb with a solid track record of compelling, well-thought-out political analyses. This rant is my sympathetic but partially dissenting response to his most recent one, entitled “Here’s What A Reparations Plan Could Look Like“. Read it. I’ll wait here.

There is much to admire about the approach he takes here, but also one fatal yet fixable flaw. First, he makes the moral case, which is honestly the easiest part. No amount of money can make up for what America did to the people it kidnapped, imported, and enslaved, but money can go a long way toward countering the lasting harm by, as he says, “closing the racial wealth gap”. Otherwise, slavery’s effects continue through the generations unabated.

Second, Johnson does the math about how much it might cost, showing that it’s economically feasible, not only in terms of being a manageable ongoing expense but also due to savings from second-order effects, such as lowering incarceration rates.

If he wanted to, he could probably make an even stronger argument here. For example, any substantial downward distribution of wealth has beneficial collateral consequences because of the increase in demand and subsequent creation of jobs, leading to a positive feedback cycle that strengthens the economy.
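As a rough back-of-the-envelope illustration of that feedback (my numbers, not Johnson’s): the textbook Keynesian spending multiplier is

$$m = \frac{1}{1 - \mathrm{MPC}}$$

where MPC is the marginal propensity to consume, i.e. the fraction of each extra dollar that gets spent rather than saved. Poorer households spend most of what they receive, so assuming an MPC of around 0.8, each dollar distributed downward generates on the order of $1/(1 - 0.8) = 5$ dollars of economic activity as it circulates. The exact multiplier is debatable; the direction of the effect is not.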

Finally, Johnson recognizes that reversing the effects of slavery, Jim Crow, and institutionalized racism (particularly redlining) is a multigenerational project, not something that can be accomplished with a lump sum payment.

He offers a few alternatives, including a persuasive hybrid. And he is entirely cognizant of how such a program would be at risk of being, as he says, “siphoned off by outside actors”, both before and after the money is spent.

The problem that remains is the elephant in the room, which is that “reparations would be race-specific as opposed to a race-neutral plan”. Only Black people—by whatever definition—would be eligible. He defends this by saying that, while race-neutral plans are popular, they “have a poor track record of actually curbing the racial wealth gap”.

That may well be the case, but I don’t believe that race-specificity is necessary, plausible, or good for reparations.

We know it’s not necessary because Jim Crow laws successfully targeted Black people while remaining ostensibly race-neutral, as modern voter-suppression techniques still do. It’s a simple trick, but one that goes both ways.

The fallout from historic white supremacy is plainly visible in a variety of metrics, so economic reparations can be targeted to disproportionately help those who were disproportionately hurt while still remaining race-neutral in form. This would amount to turning the methods of systemic racism back against itself.

It comes down to the difference between equality and equity. The truly color-blind approach leads to equality, which helps everyone but still leaves some people out. That’s because it assists those who don’t really need it while not giving enough to those who truly do. However, helping people on the basis of their identity uses group membership as a proxy for need, with errors in both directions and at the cost of abandoning equality, which generates backlash.

The alternative is to bring equity by focusing on context, not color. Under this doctrine, people get as much as they need of what they need, not just a nominally equal share, with distribution based on metrics, not demographics. For example, in the cartoon above, the second frame shows boxes assigned based on height, not hair color.

Race-specificity is not plausible because predicating benefits on racial identity is politically and intellectually self-defeating. It feeds the narrative that the equal rights movement is nothing more than a constellation of partisan agendas, each seeking to boost its own identity group over others.

This false narrative thwarts intersectionality, undermines support from equalitarians, and provides cover for white supremacists. St. Bernard will deride it as identity politics, and just this once, he’ll have half a point. Moreover, the whole concept depends on an essentialist notion of race as a biological fact, which is literally the core of racism. This will never fly.

Practically, there is no principled, reasoned basis upon which to define Blackness for this purpose. The #ADOS movement insists that reparations should be limited to “American descendants of slaves”. By that measure, neither our first Black president, Barack Obama, nor our (hopefully) next one, Kamala Harris, would qualify. After all, neither Kenya nor Jamaica is part of America, and both of these people are “mixed race”. Many Americans would, in practice, be unable to prove that they qualify due to a lack of birth records dating back to the days of Lincoln.

Johnson’s version is more sane and would apparently include both Black presidents, but it’s not clear where he’d draw the line or on what basis. If history is any guide, any attempt at legislating Blackness at this level would collapse into the same absurdities that led to terms like quadroon, octoroon, and hexadecaroon, all of which are based on the overtly-racist “one drop rule” of hypodescent. This just won’t work.

Race-specificity is not morally good because it leaves out all the other groups that are the targets of systematic oppression, both historical and ongoing. African-American slavery was not just an economic crime, but a cover for rapes, beatings, and murders, yet it exists on a spectrum of badness that is populated by other forms of oppression against other victims.

Native Americans were enslaved, massacred, and forced into reservation ghettos. Where are their reparations? How about the Japanese-Americans who were rounded up into internment camps? Or the Hispanics who were mass-deported in the 1930s or the ones who have more recently been separated from their families and caged? Or the women who could not vote, were excluded from professions, and to this day make dimes on the dollar? Or the gays or the Jews or the Muslims? Why some but not others? It’s not fair.

A wise man, martyred for the cause, once said: “injustice anywhere is a threat to justice everywhere”. Yes, we do need reparations, much along the lines that Johnson outlines, but they have to be aimed at closing the discrimination gap for everyone; not in a blind way, but in one that counters all forms of discrimination based on irrelevant and effectively-immutable traits instead of feeding them.

We want to create an America that not only has a low Gini coefficient, but where Black men can feel safe when the police drive by, and women can feel safe in their own homes. By serving the greater goal of fostering equality of both civil rights and economic standing, we can achieve true social justice.

The two goals are interdependent, so we must fight both heads of the hydra—bigotry and oligarchy—at once in order to achieve them. If we focus on just one aspect of hierarchy at a time, the other will defeat our efforts. Systemic and social discrimination are not just attacks on the obvious targets, but a proven way to undermine unity among the oppressed so as to let the already-rich and already-powerful become even more so. Likewise, equality of opportunity depends on economic equality in order to yield equitable results.

The solution is intersectionality: the recognition that, whether seen in terms of identity or class, there is only one war for equality, no matter how many fronts its battles are fought on. We can only do this by crafting policies that focus on context, not color, and use metrics, not demographics, so as to serve each person in each group according to their own needs, never leaving anyone behind.

There is strength in unity, but only if the unity is fair. That’s why I support all the reparations for all the people in the form of a Newer Deal. This includes reparations for African-American slavery, but is not limited to them.