How not to break an egg.

Gun control is just like cooking, only with lead.

If you cook, it’s only a matter of time before you drop an egg and make a huge mess. The obvious solution is to hold onto it more firmly, but if you grip too hard, you crush the shell and get egg all over you.

This lesson in moderation-in-all-things also applies to public policy. Even when something is often harmful, overregulation or criminalization can make it even worse. We learned this from Prohibition, and then again from its unfortunate sequel, Prohibition II: The War on Drugs. Both of these have treated regular people like criminals while encouraging real crime, filling our prisons (especially with minorities) and corrupting the police. The sane alternative is a harm-reduction approach.

Policymaking is social engineering, driven by cost/benefit analysis and seeking the greater good. So unless something is inherently and necessarily harmful, it should be legal but regulated, with the regulations carefully balanced between the two possible extremes so that they neither punish the innocent nor let the guilty go free.

Nowhere is this more necessary than with firearms regulation—gun control—where we are faced with extremists who reject all regulation or demand blanket criminalization. The insanity of the former should be obvious, while the latter seems overcautious but perhaps justifiable until you consider the foreseeable consequences of enforcing prohibition in a society that fetishizes guns.

I’ve been told that nuance is dead and slogans matter more than policy, and yet this rant is an attempt at a nuanced stance on gun policy that will not please extremists of either stripe. Upon due consideration, I am strangely ok with that.

Nationwide laws

Laws regulating firearms should be consistent and uniform across the country. When they’re not, there are three major harmful effects: impaired compliance, arbitrage, and local regulatory extremes.

Worst. Jigsaw. Puzzle. Ever.

For the first, consider a truck driver who wants to keep a firearm in the tractor-trailer cab but passes through so many jurisdictions that it’s infeasible to obtain a permit in each. What they want is not unreasonable given that they sleep in their trucks, which often carry valuable goods that make them an attractive target, and they’re on the road in places far from police stations.

If a reasonable fear for their safety compels people who want to comply with the law to instead break it, we have failed by gripping the egg until it cracks. Rather than the de facto ban on interstate firearms possession, which only drives it underground, national laws would allow us to regulate the practice.

This is just one example of the hodgepodge of seemingly-arbitrary local laws that make compliance difficult, sometimes putting people in the ugly position of having to decide between potentially becoming the victims of criminals and becoming criminals themselves.

For the second, there is money to be made through straw purchases of firearms where regulations are lax so that they can be illegally resold where they are tighter. This accounts for the overwhelming majority of firearms that are used in crimes in better-regulated areas. Disparity drives smuggling. The only way to stop the flow is at its source, and that means removing the weak links in the chain of regulation.

For the third, there are some areas with “constitutional carry”—a patriotic-sounding euphemism for no gun control at all—and there are others where it’s extremely difficult, if not quite impossible, for a law-abiding citizen who’s not a police officer or security guard to obtain a permit for any sort of firearm, even a rifle to be used only for sport.

The lack of centralization means that the eggs are either dropped or crushed, depending. And this means that someone who could never qualify for a permit could nonetheless legally obtain firearms by shopping around for a more lax jurisdiction. Once they have the weapon, anything we do is just closing the barn doors after the tanks have rolled out.

Nationwide laws will not please everyone, and that’s a good thing. The extremes on both sides are wrong here, and the right answer is somewhere in between. By having a single set of laws, we have a single battleground to fight over, hence some chance of attaining sanity.

Moreover, if only red states allow guns, then only red-staters will have them, and that’s not a healthy situation for America. I don’t mean that rednecks will invade the cities, armed with shotguns and flatulence. Rather, the existence of what are seen as gun-free zones feeds the paranoid mythology about the government confiscating all firearms, and this empowers right-wingers to win votes by pandering to gun nuts with promises of opposing even the sanest regulations.

Centralized registration

Every firearm should be registered to its owner. I don’t know that this needs any further justification, as I can’t think of a single plausible argument against it. If you own a gun and it leaves your hands, we should be able to track it right back to you.

The only objection I could find was the above-mentioned paranoia about the “gubmint takin’ away our guns”, mostly from the sort of people who actually deserve to have theirs taken away. Having said that, to the extent that we can make it clear that our goal is to regulate, not confiscate, we undermine their incoherent objections.

It’s got to be in here somewhere…

We need a computerized national registry, so that it’s efficient and accurate. All transfers, whether by sale or other means, whether in a store or between individuals, must update the registry. If your firearm gets stolen, you have to report it so that we can update the registry. And, yes, we will notice if you buy many guns that get “stolen” and then turn up at crime scenes.

For much the same reason, we should make it as easy as possible to match a weapon used in a crime to its owner, using whatever methods and technologies are feasible.

One practice is to maintain a database of striation patterns, to match the bullet to the rifled barrel. This is made more complicated by the fact that changing barrels can be very easy and barrels are not regulated, much less centrally registered. Perhaps they should be both. It also doesn’t apply to the smoothbore barrels of shotguns. Another way to match bullets is chemically, which mostly serves to narrow down and rule out, not identify.

Since cartridge cases are usually forcibly ejected and left behind, though typically not with revolvers, there are attempts to require microstamping to link them to the firearm, where each gun imprints its serial number upon firing. While, in principle, cases could come with serial numbers, keeping track of these disposable yet reusable pieces of brass seems infeasible.

One thing that sometimes goes unmentioned is that those shiny brass cases preserve fingerprints, though careful use of gloves when loading prevents that. Also, we would need to have the shooter’s fingerprints on file in the first place. This fits in nicely with the next topic.

Permit required

No one should be allowed to possess or own a firearm without a permit. Again, you would think this is a no-brainer, and yet there is controversy. It is easily dispelled.

Conveniently, it’s also a license to wound.

Nobody questions the wisdom of requiring drivers’ licenses, yet a firearm is not only potentially deadly but designed to be. Requiring a person to jump through some hoops to obtain a firearm permit prevents impulsive decisions and gives us an opportunity to perform a background check, screening for reasons they should not have access, such as a history of violence.

It also gives us something to take away should they show that they can no longer be trusted with a firearm. An example of this is a red-flag law that targets those who are a danger to others, as when they have a restraining order against them. Simply put, if you scream, “I’m going to kill you”, you don’t get to keep your guns. Guns don’t kill people, people with guns who make death threats kill people.

Having said that, the goal is not a one-strike removal of a person’s ability to ever legally have a firearm again. As tempting as that seems, it goes too far, and its unfairness creates political blowback.

Right now, it’s pretty easy to get a restraining order applied to someone, as there’s a much lower bar than for criminal conviction. If those orders had this sort of permanent consequence, judges would be less likely to sign them, putting more people at risk. That would go against our goal of harm reduction.

Permits reduce harm another way, by allowing us to require basic safety training, which addresses the number one risk of harm from firearms: negligent discharge. This should include how to maintain a firearm safely, as cleaning accidents are a common type.

Note that the combination of centralized registration and required permits amounts to a mandatory background check on all sales.

Concealed carry

In many places, a permit lets you own a firearm and typically allows you to store it in your home and transport it to a range or to go hunting. It usually does not let you carry a pistol concealed, which is a good thing because that requires additional training. I don’t mean how to draw from a holster gripped firmly between your butt cheeks, but rather when and how not to shoot people who haven’t just kicked in your front door.

No, I’m just happy to see you.

A basic firearm permit should be like a learner’s permit. All you actively have to do to get one, past perhaps having a token amount of hands-on experience, is pass a written exam showing you know the safety rules. It doesn’t mean you’re fully qualified, just that now you can be trusted enough to practice under safe, supervised conditions until you get better and can pass the “road test”.

To upgrade a learner’s firearm permit to a carry permit, you need more than just the eight hours of watching borderline-snuff films while being repeatedly admonished not to point that gun at yourself. You need training in the practicalities and legalities of firearm use in a public setting, including when and if and how to intervene to protect yourself and others, with a focus on conflict avoidance and resolution.

The idea is that a person with a gun should remain unobtrusive and go out of their way to steer clear of trouble, not imagine themselves to be invulnerable heroes or righteous vigilantes. When it’s a matter that can wait for the police to come and handle—and let’s be honest here: most are—you need to learn to reach for your cell phone, not your boom stick.

In fact, I’d argue that anyone carrying a gun should be legally obligated to carry a phone, especially since they’re going to need to use it immediately if they ever fire at anyone.

The required training should include role-playing and multiple-choice questions, as well as qualifying with the specific concealed-carry weapon on the range to prove that you’re not a danger to yourself and those you would protect, much as you need a road test to show that you can parallel park and don’t drive on the sidewalk. This includes rapidly but accurately shooting from the holster at relatively close range.

Right now, the bar is very low. While we require that drivers have 20/40 corrected vision, firearm permits have no eye test and the practical exam can be trivial. As a result, there are blind people with legal guns. Do you really want to die because you sounded like a criminal to someone?

It’s fine if concealed carry is difficult because, for most people, having a gun on them doesn’t make them any safer. They don’t need it, but the ones who do need it, need it all the time, wherever they are. The corollary is that, if you don’t need it at all times, you probably don’t need it at all.

As for open carry, to be quite blunt, it serves no legitimate purpose and should be flatly illegal. The kinds of weapons that can’t realistically be concealed, such as rifles, shotguns, and large pistols (think Desert Eagle), are also the sort that have no legitimate role in self-defense, especially outside the home.

It’s one thing to use a rifle to hunt or have a shotgun locked up in your house, another to walk the streets looking like Rambo. This policy prevents intimidation and allows us to treat revealing your weapon as brandishing. Anyone who waves their metal dick around needs to have it cut off.

Locks for Glocks

Firearms should not be left lying around where they can be used by children or stolen by criminals. If it’s not in your hands or in your holster, it has to be locked up.

This is not a shoebox.

This means you can’t just toss a rifle in the back of the truck or keep a pistol on the bed stand or under the pillow or in a shoebox on the top shelf of the closet where Junior probably can’t reach it. Yet.

Instead, it should be in a safe that is firmly attached to something heavy and stationary, or to the vehicle. There are safes that are designed to be unlocked rapidly and quietly, even in the dark, so there’s no excuse.

Gun locks are insufficient because, not only are they notoriously easy to pick or bypass, but they also don’t prevent the weapon from being taken to another location, where there is privacy and access to whatever tools might be needed to force them open. They’re the worst sort of half-measure, providing undeserved confidence. Nobody takes them seriously, so the law shouldn’t, either.

It would be nice to imagine a world where each firearm is keyed to a fingerprint on the trigger, but this does not seem to be technologically feasible, particularly since we wouldn’t want to lock out the user just because their hand is dirty or bloody. There are other possibilities, such as an RFID reader matched to a watch or ring, and standardization to these options should be explored.

Capacity limits

The ammunition capacity of civilian firearms is typically limited by a cylinder or magazine. Once it’s spent, you have to swap or reload, and that turns out to make a huge difference when it comes to mass shootings. It’s only when the shooter is briefly out of bullets that they can be rushed and the massacre stopped.

Either you’re a really bad shot, or you want to massacre a school.

The magic number in many areas these days is 10, which strikes a good balance, given that a typical defensive use involves 2 or 3 bullets. But if you’re going to go on a spree, you can easily get larger magazines from areas which lack such controls, and this significantly bumps up the body count.

A Glock 19, one of the most common pistols in use, can take a 51-round magazine! Keep one more round chambered and you’ll have a bullet for each playing card in a standard deck. Have a spare mag and now your ammo count can (easily) exceed your IQ. Unless you’re under attack by a mutant zombie horde, this is ridiculous.

Nobody has a legitimate need for this, nobody should have it.

Range-only use

Many rules that make perfect sense everywhere else just don’t apply to a shooting range.

Here, that giant magazine isn’t a tool of mass murder, it’s a convenience that lets you spend your time practicing instead of reloading. Here, there’s no harm in having access to a military weapon that you have no plausible excuse to have at home, such as a machine gun. Here, it would even be ok to own such a weapon, so long as it stays here. Here, suppressors (“silencers”) are a useful safety device to protect your hearing, not something for super-spies and assassins. (Actually, more on these later.)

Just another day at the local firing range…

It’s not that anything goes, but that different, necessarily lower standards apply. While you need safety training to shoot at a range, you don’t need a permit, much less a carry permit. Besides the common-sense aspect, this encourages people not to own guns if all they want to do is occasionally plink away at some targets. Ranges typically offer gun rental at moderate rates.

A low barrier to range access lets people satisfy their curiosity and reduces the allure of the forbidden. There’s even an argument to be made for doing for guns what high-school driver’s education does for cars. The goal, besides training people to be safe, is to satiate curiosity, dispel ignorance, and make firearms less cool.

It’s a public safety win to migrate firearms from closets in homes to lockers at ranges. Guns are a perfectly fine hobby, but just as we don’t play darts on the street corner, we should encourage keeping more dangerous missile weapons where they can be played with safely.

Weird laws

There are strange and unusual gun control laws that exist for historical, political, or sensationalistic reasons and do not serve the public interest. These once again treat regular people as criminals while giving gun extremists more fodder to complain about regulation. An advantage of having one set of laws is that it becomes feasible to stamp out these local peculiarities, injustices, and absurdities.

For example, New Jersey (and only New Jersey) has restrictions against hollow-point bullets, even though these serve the legitimate purpose of limiting over-penetration and reducing collateral damage, making them standard for defensive rounds. They’re also more merciful and effective for hunting.

To make things worse, you can still buy them legally, but if they catch you with them outside your home, you’re guilty. This “gotcha” has no net benefit and has only served to make the state an easy target for criticism, not that NJ needs any help here.

Welrod: silent-ish but deadly.

Another highly-restricted firearms option is the suppressor. It’s not a “silencer” because those don’t exist outside of movies, but they’re restricted because of the Hollywood-inspired idea that criminals will use them to shoot whisper-quiet bullets which will make them uncatchable. In reality, not only does the big tube screwed onto the end of your gun make it harder to conceal, but that’s just not how physics works. Silence is golden but bullets are leaden, and LOUD.

Even the British Welrod, designed by SOE spy-boffins for the express purpose of assassination, and optimized for quiet at the cost of being bad in most other ways, is 123 dB, which is louder than an ambulance siren. That’s because a suppressor muffles the muzzle blast somewhat, but can’t do anything about the noise generated by the firearm’s action, much less the sonic boom of supersonic bullets. And even with small-caliber, subsonic rounds, guns are deafeningly noisy: around 160 dB, louder than a jet engine at takeoff.

A suppressor can lower this by as much as 32 dB, a decrease on par with what is provided by the ear protection used at ranges. It’s still loud, but not quite as painfully and deafeningly so. This makes it possible for someone to use it in, say, a home defense situation while still being able to hear whether the “intruder” is just a neighbor looking to borrow a cup of sugar; or, if it turns out to be a real threat, not immediately deafen themselves.
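Because decibels are logarithmic, it’s easy to misjudge how large these changes really are. A quick sanity check in Python, using the approximate 160 dB and 32 dB figures above (actual levels vary widely by firearm and ammunition):

```python
def db_reduction_to_power_ratio(db: float) -> float:
    """Convert a reduction in decibels to a ratio of sound power.

    Every 10 dB of reduction is a 10x drop in sound power.
    """
    return 10 ** (db / 10)

unsuppressed_db = 160  # approximate report of a typical gunshot (from above)
suppressor_db = 32     # best-case suppressor reduction quoted above

suppressed_db = unsuppressed_db - suppressor_db
power_ratio = db_reduction_to_power_ratio(suppressor_db)

print(f"Suppressed level: {suppressed_db} dB")                     # 128 dB
print(f"Sound power cut by a factor of ~{round(power_ratio):,}")   # ~1,585
```

So even a ~1,600-fold cut in sound power still leaves a report at 128 dB—well above the threshold of pain, and nothing like the movie “pfft.”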

Combined with regular hearing protection, it can make the range a lot safer for your ears. At an outdoor range and with the right rounds, you don’t even need the muffs. It also lowers recoil and reduces lead in the air, but it won’t turn you into James Bond.

In general, weird restrictions lead to bizarre and undesirable consequences. For example, a resident of Los Angeles can’t get a concealed carry permit, but someone who lives in a less restrictive area of the state can carry concealed when in LA. But if they go to New Mexico, their concealed carry permit isn’t honored, yet anyone can buy a gun without a permit and walk around with it in their hand.

This makes no sense. When the law is an ass, we all look like assholes.

Form vs. function

What kills is the bullet, not the bling.

Some firearms, such as “assault weapons”, are designed to look “tactical”. While that appeals to men with small, limp penises, it doesn’t improve their performance, much less their firearm’s. Despite this, evil-looking firearms are restricted on the basis of appearance while we ignore the real dangers.

Either my gunsmith is a lawyer or my lawyer is a gunsmith. Maybe there’s no difference.

If you look at the “assault weapon” law in California, you can’t help but notice that it focuses on largely cosmetic details that don’t affect deadliness, particularly such things as grip varieties. And it’s so complicated that you need a flow chart just to figure out whether you’re legal.

Does all this help? Probably not, as it has spawned a cottage industry dedicated to crafting dangerous, scary-looking guns that are designed to skirt the law. Meanwhile, the elephant in the center of the room is being ignored. Let’s talk about that elephant.

Long guns

Firearms-control laws are largely focused on restricting access to concealable weapons. Aside from the focus on “assault weapons” outlined above, they’re much more lax about rifles and shotguns, yet these are far more dangerous.

The reason is that a boring, traditional-looking hunting rifle with an uncool, non-tactical wooden stock can fire rifle, not handgun, rounds. The difference isn’t really in the bullet, which can be pretty much the same diameter (“caliber”) and mass, but in how big a pile of explosives it sits on and how much time it spends being accelerated in the barrel.

.22 LR handgun round vs. .223 Remington rifle round

I said earlier that it’s the bullet that kills, but lumps of copper-clad lead aren’t dangerous except to the extent that they’re propelled at thousands of feet per second at your body. Rifle rounds have more boom, so they travel two to four times faster and hit correspondingly harder. (Actually, energy goes up with the square of the velocity, but only linearly with mass, so it’s even worse than you’d think.)
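To put rough numbers on that square-of-velocity point, here’s a back-of-the-envelope sketch in Python. The bullet weights and muzzle velocities are ballpark figures assumed for illustration: a 40-grain .22 LR bullet from a handgun at roughly 1,100 ft/s versus a 55-grain .223 Remington bullet from a rifle at roughly 3,200 ft/s.

```python
GRAINS_PER_POUND = 7000
G = 32.174  # ft/s^2, converts pounds to slugs for imperial-unit KE

def muzzle_energy_ftlbf(grains: float, fps: float) -> float:
    """Kinetic energy in ft*lbf: 0.5 * m * v^2, with mass in slugs."""
    mass_slugs = grains / GRAINS_PER_POUND / G
    return 0.5 * mass_slugs * fps ** 2

handgun = muzzle_energy_ftlbf(40, 1100)  # ~107 ft*lbf
rifle = muzzle_energy_ftlbf(55, 3200)    # ~1,250 ft*lbf

print(f"Roughly {rifle / handgun:.0f}x the energy, from a similar-sized bullet")
```

A bullet only about a third heavier, moving about three times as fast, delivers on the order of twelve times the energy—which is the whole point about long guns.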

This makes them deadly, even when they’re small: the bullet in an AR-15 is about the same diameter and mass as one from a Saturday night special, but it’s much more dangerous. While bigger is usually better, when it comes to rifle rounds, smaller can be nastier because the bullets are more likely to tumble and fragment.

Whereas a handgun is only effective to perhaps 50 yards, at that range rifles are just getting started. They’re good to hundreds of yards, and the extreme versions can kill someone two miles away. Being able to kill at a great distance is not particularly helpful for defense, but works wonders for offense, including assassination.

The other long gun is the shotgun, and what it lacks in range, it makes up for in power. When loaded with shot, it creates a cone of death with a 40″ spread of lead pellets at 30 yards. At closer range, the spread is less, but the damage is even more substantial. And if you really want to punch a hole in someone or something, you can load a rifled slug, which dwarfs most conventional bullets in mass and impact.

While shotguns are traditionally touted as being good for home defense, based on the theory that they don’t require aiming and won’t go through walls (neither of which is true), that’s mostly genital size overcompensation. Shotguns are at their best when hunting small animals and birds, where the spread at a distance turns a miss on a tiny moving target into a hit, and dinner.

Should all of these large weapons be illegal? Probably not, but they need more substantial restrictions, to the point where we generally don’t want people to have them in their homes or vehicles.

A hunting rifle should be left at the range except when it’s actually being used to hunt. Likewise, shooting some clay pigeons or real ones with your shotgun doesn’t mean having it in your closet. And those fun-to-shoot military-style rifles, like the AR-15, belong permanently at the range, along with all the other weapons that have no legitimate use outside it.

This would, among other things, end the absurd legal arms race over what qualifies as an “assault weapon”. Instead, you can own the most tacticool penis substitute you like, but it has to stay locked up or else you will be. What distinguishes these range-only rifles isn’t their sometimes-daunting appearance, but how many bullets they can fire per minute.

Hunting rifles don’t need to be semi-automatic; you don’t have to be able to spray a deer with bullets as fast as you can pull the trigger. The noise accompanying a missed first shot means that you don’t get a second try. Manually operated guns, like traditional bolt-action or even break-action rifles, should be somewhat less restricted in that they, unlike your AR-15, are suitable for hunting.

Conclusion

To borrow a slogan, firearms should be safe, legal, and rare.

We can make them safer by keeping them out of the hands of those who can’t be trusted. By keeping them legal, we avoid the perils of prohibition and reduce harm instead of packing prisons. By setting reasonable limits that discourage unnecessary access, we make them relatively rare, once again making the country safer. And by doing this right, we can depoliticize gun control and prevent the right from using it as a bludgeon against liberalism.

The perils of making an omelette.

The price of educating against bigotry by the dominant group.

Where did he go wrong? Was it in being white?

In 1872, a famous eggman was quoted as saying, “When I use a word, it means just what I choose it to mean—neither more nor less.” According to the source material, he then had a great fall from the wall upon which he sat, and said no more, because he became no more than a wet splat.

In 1972, an anti-racist academic named Patricia Bidol wrote a textbook entitled Developing New Perspectives on Race: An Innovative Multi-media Social Studies Curriculum in Racism Awareness for the Secondary Level. Pretty dry stuff.

Bidol’s focus was not on racism in general, but on institutional or systemic racism, which works in terms of laws and policies. While such racism is implemented by individuals, it is impersonal in manner and this facelessness increases its harm. Bidol made that clear when she stipulated that the definition of “racism” she used in the book was “prejudice plus power”; in other words, systemic racism.

This is all well and good. It’s completely legitimate for someone to stipulate a more narrow definition, if that’s what their text is focusing on, so long as they make that intent clear, as Bidol did. Humpty Dumpty did nothing wrong!

Unfortunately, certain activist popularizers decided to try to make this stipulative definition the normative one, taking over by fiat. With this change, it’s not just that systemic racism happens to be the type that Bidol is concerned with in her writing, it’s the only kind of racism that they’re willing to acknowledge the existence of. If it’s not systemic racism, they insist that it is, by (their) definition, not racism at all.

This peculiar, revisionist redefinition has not caught on, except in certain niches, but those niches are highly aggressive. The modern proponents also go further than Bidol intended, claiming that personal—not systemic or institutional—racism is likewise one-sided by definition. Their definition, naturally.

If a white person hurls racist slurs, discriminates on the basis of race, or commits racially-motivated violence, these count as racism of the personal sort. No argument here. But if a Native American did the same thing, even against a Mexican American, it “can’t” be racist, they insist. Why?!

Essentially, they’re committing the No true Scotsman fallacy. Racism is bigotry that’s rooted in the notion of race, but when we consider racism outside the dominant group, these people claim it’s not true racism solely because they’ve chosen to artificially narrow their definition to exclude it. They move the goalposts so that only they can score.

They don’t stop at racism, instead applying this notion to other forms of bigotry in the same way. They claim, for example, that sexism against men is “impossible” because men have the power. All of it, somehow. We’ll come back to this, but first let me argue against myself.

On the one hand, it’s easy to understand why we might want to focus on bigotry by the dominant group; essentially, straight, cis, able-bodied, well-to-do, white, Christian men.

This is the biggest, longest-running, most entrenched, and most harmful form of bigotry in America precisely because the dominant group has the power to not only get away with acts of personal bigotry, but to institutionalize this bigotry systematically. Not only are they above the law; they are the law. They write it and they enforce it, all to their own advantage.

None of this is merely theoretical; it is our history of colonialism and white supremacy. Moreover, it’s obvious that much of the bigotry encountered by the dominant group is, if not well-deserved, at least entirely understandable in context. It’s blowback, when the oppressed have a chance to turn the tables on their oppressors.

White supremacy is the (white) elephant in the middle of the room, so prevalent that we take it for granted. The dominant group, despite being a numerical minority, forms the baseline for our expectations, against which everyone else is contrasted.

This white, male doctor is just a doctor, but that woman is a woman doctor, that Hispanic is a Hispanic doctor, and so on. The dominant group hyphenates the rest into inferiority. They are the peak of the hierarchy, with others being measured in terms of how close they come; white men above white women, white women above Black men, Black men above Black women, and so on.

So when the elephant bellows, we can’t ignore it. It deserves to be our focus, our target, our greatest internal enemy. We should hate it and we should fight it.

However, while this hatred of bigotry is not in itself bigotry, responding in kind is. It’s good to hate Nazis, even to punch them, but it’s wrong to hate the German people as a whole just because some of them were Nazis, even if Nazis ran their country.

The latter goes past blaming the oppressors and becomes guilt by association. It generalizes to groups that people have no choice about being a member of, instead of holding them accountable for what they choose to do. This is the very definition of bigotry.

To bring the example home, hatred of white supremacy is fully justified, but it’s bigotry to hate white people for the existence of white supremacy. Hatred of misogyny is justified, but it’s bigotry to hate men. Hating people for choosing to be bigots is not bigotry; hating people because of the bigotry of others who happen to look like them is.

It gets worse. One corollary of their view is that, when minorities commit acts of bigotry, it doesn’t count because it’s just insult, not injury. We’re weak and powerless, so our mere words are not like the sticks and stones of the dominant group.

According to this, when someone in the dominant group complains of being a victim of bigotry, we should disregard them because their hurt feelings are not important compared to the broken bones of “real” bigotry. Even complaining is a symptom of “fragility” and is worthy of mockery and disapproval, they insist.

This view infantilizes minorities, denying them agency and autonomy. It falls right into the “white savior” trope, where the oppressed are too weak to fight back and it’s up to sympathetic members of the dominant group to cross the line and fight for us, making all the decisions in the process, and taking all the credit.

But not being dominant doesn’t entail being subservient. Power is never as simple as all or nothing. Not having the bulk of the power doesn’t mean being powerless. It means having less power in many places, and sometimes more power in a few. And where we have power, even the power to act personally and directly, our bigotry can cause injury, not just insult.

Minorities can certainly benefit from members of the dominant group who oppose bigotry, but we are not feeble and defenseless on our own. The whole point of our movement is that we’re all fundamentally equal and deserve to be treated as such.

Narrowing the definition of bigotry to make it one-sided is in itself bigoted, not only against the dominant group but against the oppressed. And yet it is a frequent component of performative anti-bigotry, the false wokeness that latched onto—and corrupted—Bidol’s work.

Speaking out against it, no matter how clearly and gently, is a sure way to be branded a bigot, even though all you’re saying is that bigotry is bad no matter who does it. It’s such a simple, self-evident point, which is perhaps why the counter-reaction is so vicious.

The irony is that the intentions behind this were good. This didn’t start off as a cover for minority bigotry; Bidol is a white woman. She wrote this book to teach (presumably white) high school students not to be racists. She meant well, but the idea mutated and became toxic.

By fetishizing white guilt and applying a double standard, it only serves to create more bigotry. It alienates those who would otherwise be more sympathetic, it provides a defense for bigotry against the dominant group, and it reduces the oppressed to mere victims.

And instead of being able to focus on systemic changes, the dominant group is expected to participate in endless performative public self-flagellation; mea culpa, mea culpa, mea maxima culpa! When you’re busy performing, you don’t have time to do the real work.

They know what they’re doing is wrong, which is why they’re so touchy about it, but they figure that the end justifies the means. Sure, this ideology is insulting and unfair, but that’s the price we have to pay as educators to punish students for their unrequested privilege and guilt them into anti-racism.

If these activists want to make an anti-racist omelet, they figure they gotta break a few eggs, or at least bruise the feel-bads of brittle wypipo. But if Humpty taught us anything, it’s that breaking eggs makes a mess that splatters all over the place and there’s no undoing it.

TANSTAAFL!

On the varieties of free lunch worth having

You’re not paying for it, but is it free?

There ain’t no such thing as a free lunch! Or at least that’s what this acronym stands for. The pithy but unpronounceable term is a favorite rhetorical cudgel of libertarians, who are convinced that there is a deep truth in it: a core economic principle that (conveniently) justifies anti-social behavior and uncontrolled greed.

Libertarian SF writer Robert Heinlein popularized it in his novel “The Moon Is a Harsh Mistress”, and libertarian economist Milton Friedman entitled one book “There’s No Such Thing as a Free Lunch”, which is more grammatical but less catchy. The Libertarian Party went so far as to make it their logo. And anytime anyone anywhere suggests helping people, libertarians can be counted on to repeat this mantra as if it were self-evident proof that morality is pointless and counterproductive.

The supposedly profound core is that even a meal you get to eat without paying for isn’t “truly” free because there’s a cost that someone, somewhere is paying, individually or even (gasp!) collectively. This is, depending on how you interpret it, either trivially true, significantly false, or completely irrelevant.

It is trivially true that even something that isn’t charged for is not free “all the way down”. Those “free” salty peanuts at the bar are more than paid for by the additional drinks that thirstier patrons buy. Here, the cost is displaced and concealed but remains.

It is significantly false, in that making something freely available can take advantage of economies of scale and create sufficient positive externalities to more than pay for itself. For example, the road system generates much more wealth for society than it costs to maintain, so not only don’t we bother charging people to cross the street, but these streets are better than free because they’re profitable. Charging money would make them less profitable by discouraging their use and creation.

It is completely irrelevant because, to the person eating it, the lunch is simply free. It might not be free in a way that satisfies a libertarian, but who cares? Libertarians are greedy idiots, so what they think doesn’t matter anyhow.

Idiocy is not rare. There’s a long history of deep thinkers mistakenly insisting that something is essential when it doesn’t even exist. They raise the bar impossibly high, then act as if it matters that nothing real can meet their arbitrarily high standard. The classic example of the failure of this essentialist approach is vitalism.

Like the rest, this one begins with people facing a question that, at the time, has no available answer and being unwilling to accept this with intellectual honesty. Not knowing what it is that makes an organism alive, some were not content with admitting they had no clue, but instead hung a lampshade on their ignorance by plastering over it with a meaningless label.

They declared that the source of life was some sort of “vital essence” (or if you’re French, or pretentious, “élan vital”). Like the God of the gaps argument, this spurious placeholder not only offers no additional explanatory value, but leads to artifacts and illusions because it creates a rift between the behavior and its cause. Due to their theoretical baggage, they find themselves compelled to deny the legitimacy of what is observed.

With vitalism, the obvious artifact is that it becomes logically possible for an organism to act in all ways as though it’s alive while somehow lacking that “vital essence” that is (they insist) required for “true” life. In this way, vitalism allows for the existence of zombies.

Worse, it makes it impossible, even in principle, to ever distinguish someone from a zombie because no amount of “merely acting” alive is (according to them) sufficient. And if you believe that a vital essence is required for life, then the absence of any evidence for such a thing must mean that nobody is “truly” alive; we’re all zombies.

A similar, but more explicitly religious, error is the dogma that what makes us alive is a supernatural essence: the soul. The result of this move is that anyone denying the existence of souls will be accused of denying the existence of life itself. If not for souls, they insist, we’d just be “bags of chemicals”. It’s zombiism all over again, only with Jesus.

Parallel to these errors is the claim that what makes us conscious is yet another ineffable and utterly undetectable essence: qualia, which are defined as experiences (somehow) severed from behavior and behavioral dispositions. A consequence is that, according to this idea, it is now possible for someone to act in all ways conscious while somehow not being conscious: a mindless philosophical zombie.

It’s almost as if there’s a pattern here, and it’s a frustrating one. It’s hard to know what to say to someone who insists that there’s more to a behavior than everything about the behavior itself, and who defines it so that no evidence about the matter is even logically possible.

Earlier, I casually (yet entirely fairly) bashed libertarianism, but this term has an older meaning from philosophy. In contrast to political libertarians, who are just greedy anarchists and Nazis, metaphysical libertarians are convinced that what makes free will possible is yet another very special essence.

According to them, the only reason we can be held accountable for our actions is that they are (somehow) uncaused, making them ours and ours alone. In other words, they define free will as will that is free from causality itself, and conclude that we obviously have it. They attribute this seemingly magical ability to everything from Cartesian dualism and God to emergence and quantum mechanics. (In other words, their excuses span both the genres of fantasy and science fiction.)

Ironically, the opposite-but-equal hard determinists agree on the requirements but disagree on whether they’ve been met. They say, correctly, that acausality is physically impossible. They say, incorrectly, that this means we don’t have free will. I don’t blame them, though, because they were obviously predetermined to make that mistake.

Collectively, these two groups are called incompatibilists because of their shared belief that determinism is incompatible with free will. If you try to tell a metaphysical libertarian that it’s physically impossible for our actions to be uncaused, they’ll accuse you of denying that you have free will. If you say the same to a hard determinist, they’ll nod and say that this proves you don’t have free will. Tweedledee, meet Tweedledum.

The shared mistake is that they misdefine free will. First they ask the wrong question, then they disagree about the answer. But a broken question can’t be answered, only itself questioned and ultimately unasked. It generates a problem that cannot be solved, only dissolved.

Free will is, first and foremost, a type of will: wanting some things over others. We are capable of wanting because we form beliefs about how the world is and ought to be. These are formed as the consequence of interacting with external reality.

The world at large causes our will, so anything interfering with this, whether it’s supernatural or random, only undermines that will. If you want something for no reason at all, in what sense do you want it?

Causality is a requirement for any sort of will; acausality doesn’t make our will free, it destroys it. So why would we even want our will to be free from cause? Why would we make it a requirement?

Incompatibilists believe that, if our choices are determined, then we can’t be held accountable for them. As in: “Sure, I killed her, but I was predetermined to do so, therefore it’s not my fault, so you have to let me walk.”

This is bad, not only because the conclusion is both morally repugnant and obviously mistaken, but because we actually want to be able to be held accountable and to hold others accountable in turn. Otherwise, how can we cooperate? Without it, we cannot form a social contract.

Consider a more literal contract. In order to make a binding agreement, both parties must accept a responsibility to follow it and be held accountable for choosing not to. But this is only possible if entering into it was your choice in the first place; if you agreed to it of your own free will.

If you sign a contract with a gun pointed at your head, you didn’t really have a choice. Rather, you acted under duress—under the control of another’s will—so you can’t be bound to it. Likewise, if you genuinely agree but then violate it at gunpoint, that too was under duress and therefore not your fault.

We don’t have will that’s free from causality, because that’s physically impossible, but we don’t need it because it doesn’t matter. Will that is free from duress is the only sort of free will required for us to be held accountable and to hold each other accountable. Freedom from duress is not only sufficient for this, but unlike freedom from causality, it’s actually possible. In fact, it’s ubiquitous.

The conclusion that causality and free will are compatible is, predictably, called compatibilism. Much as those who deny the existence of various other essences—élan vital, souls, qualia—are accused of denying the existence of what the essence purportedly explains, compatibilists are accused of denying that free will exists.

This is true and false. Compatibilists do deny that acausal free will exists, but they also deny that it’s the sort of free will we need and have. They believe in free will, but not Free Will. When they make this clear, they’re accused of equivocating, of lying about what free will is, but the issue is deeper than definitions.

While compatibilists and incompatibilists disagree about the meaning of “free will”, this isn’t a purely semantic argument. It’s not just about which sort of freedom is meant, but which matters: which one we ought to mean.

Therefore, compatibilism isn’t distinguished by its definition of free will, but by its endorsement of this sort of free will as ethically sufficient. All three stances agree that freedom from duress exists, but compatibilists are the only ones saying it is worthy of fulfilling the ethical role of free will.

Why do incompatibilists disagree? I suspect it’s because they are in the thrall of an ontological error. Attributes like will, and beliefs, and thoughts, and so on, are mental. They are only defined in terms of minds, hence only visible at the level of the intentional stance. Physical causes therefore cannot undermine the will, but are instead required for it.

It’s something of a subtle point, but minds are not caused by bodies; they supervene. The mind exists in terms of the body, much as software exists in terms of hardware, as a pattern in it, an abstraction. But hardware doesn’t cause software; software is hardware seen from a distance, much as words are formed from letters but not caused by them.

(Naturally, yet another form of essentialism, Platonic idealism, insists that these abstractions are more fundamental than what they abstract from. Essentialism is the gift that keeps on giving: like herpes.)

This confusion about the relationship between the physical and the mental is what drives incompatibilism. It creates a category error, akin to trying to answer “Why did the chicken cross the road?” with “Because its molecules moved in that direction”.

Such an answer misses the point so badly that it’s not even wrong, and it’s a daunting task to even begin to unwrap the layers of false assumptions that underlie it in order to get through to them. You have to break down their beliefs, educate them with replacements, and then show them the connection.

So when compatibilists say that we have free will (because it’s free from duress) and incompatibilists insist that it’s not free enough for their standards (which require freedom from causality), it’s much the same as a political libertarian bothering you as you eat your free lunch by insisting that it’s not free enough.

And I counsel the same reaction: just enjoy what’s free and ignore them. Their requirements are solely their own and therefore completely irrelevant. Also, use condiments, even if they tell you not to.

I, too, was (allegedly) a sexual harasser

Al Franken, Kirsten Gillibrand, and the politics of accusations and blowback

Note the shadow beneath the fingers; he’s not touching her.

This one’s personal. So there’ll be no food analogies segueing into the topic. I’ll just get right to it.

In the wake of Gillibrand’s departure, I’ve been arguing on Twitter about what happened to Al Franken, and one of the points I made is that the injustice of it only harms the #MeToo movement. Franken never got his day in court; he was pushed out before the investigation which he demanded had a chance to clear (or damn) him.

So, in the spirit of full disclosure, I’m going to reveal the one time I was accused of sexual harassment. This way, you can decide if my defense of Franken is just self-serving, personal bias from a creep. Or, at the very least, you can calibrate against whatever bias you detect.

To protect the guilty, I’m going to avoid sharing most of the details, but I’ll otherwise do my best to be accurate. I’m also going to stick to gender-neutral terms, quite intentionally. It’ll be interesting to see what assumptions readers make.

So, at some indefinite but fairly distant point in the past, I joined an unspecified large company and encountered a co-worker whom I found attractive. They didn’t work in the same part of the company or in the same role, but I did run into them from time to time. And when I did, I was distracted.

There was evidence that the attraction was mutual, but I was still recovering from a relationship that had ended badly and wasn’t really in the market. Besides, dating a coworker seemed like a bad idea at the time. In fact, it turned out to be. But my common sense had to contend with more basic urges, and it was a losing battle.

About six months later, you could cut the sexual tension with a waterjet. It was so noticeable that co-workers were openly commenting about it. The gist of the peanut gallery’s good-natured but nosey remarks was that, if we weren’t already a couple, we should be. We should just get it over with, or get a room already. That room turned out to be an elevator.

At the end of a workday, we wound up alone together in an elevator heading for the streets, and it was awkward. In the middle of my desperate attempt at small talk, they interrupted to question me about why we weren’t dating yet. I didn’t have an answer to that, so I suggested that we should have dinner together. I remember that we were both pretty happy about this at the time. We were relieved, after all that will-they/won’t-they tension.

Later that week, it was the Friday of our first date. I was surprised and wary when my manager sternly called me into his office, and even more so when I noticed that he had a witness in there with him. I could tell that this wasn’t going to be a casual chat.

He didn’t waste any time: he told me flatly that I had been accused of the sexual harassment of a co-worker and that this was a very serious matter. I honestly wasn’t sure what to make of it all. My first thought was that maybe my date had second thoughts or something, but I’d just seen them in the hallway and they seemed enthusiastic about our plans for the evening.

So I asked my boss whom I was accused of harassing. He wouldn’t say. I then asked who had accused me. Same response. More annoyed than flustered, I pointed out that, if I didn’t know what this was about, then I couldn’t say anything, either. That stumped him, so the meeting ended. However, my relationship with my manager took a big hit.

That night, over Chinese food, I told my date this story and they laughed about it, as confused by the whole thing as I was. That dinner went well, and over the course of the next month or so, we went out a few more times before we broke it off, amicably and by mutual agreement. We just didn’t have that much in common, despite that attraction. And after we’d given in to it, we found that there was no basis for anything deeper.

Still, whenever we bumped into each other in the office, it became clear that the sexual tension had not gone away, and had perhaps gotten worse because we knew what we were missing. We tried to keep things professional, but there were still many awkward moments. We even relapsed briefly, kissing in the elevator, before immediately coming to our senses. After that, we did our best never to be alone together, with mixed results.

We hadn’t told our co-workers that we’d been dating, so we didn’t have to tell them once we stopped. As a result, they continued the comments about how we’d make a cute couple and all that. I wanted to just say to them that, no, we weren’t really compatible. But I hadn’t forgotten that bizarre yet scary sexual harassment accusation, so I kept my mouth shut.

To this day, I have to wonder if the relationship would have gone better if we hadn’t had to keep it on the down low. Probably not, but still, I have to wonder. It doesn’t matter. By that point, I had accumulated a variety of good reasons to leave this job, besides my misadventures in dating, so I started looking. It didn’t take long for me to find a place that would let me make a fresh start. When things are uncomfortable, leaving is the natural reaction.

Before I left, I did find out more about that sexual harassment claim, through a backchannel. It turns out that, as the timing suggested, it had actually been about my date. However, the accusation came from a third party: a co-worker who had hit on them and been rebuffed. Presumably, they saw me as a rival and went after me out of some sort of jealousy.

The bottom line is that I was falsely accused of sexual harassment, so I naturally have some sympathy for others who face such allegations. In my case, it didn’t really amount to anything, but I didn’t know that at the time. All I knew was that I faced a faceless claim against me and had no way to defend myself. About half of Franken’s accusers were likewise anonymous and the one who started the whole thing had questionable motives, much as my rival did.

I also knew my job was on the line, and the fact that my manager didn’t have my back was further motivation not to bother sticking around. That’s why I don’t blame Franken for resigning under pressure when his own party threw him under the bus. It’s not a sign of guilt, but of despair; of wanting to get away from a situation that’s unpleasant and uncomfortable, when those you count on to protect you from unfair treatment are not on your side.

Some people might read this heavily-censored autobiographical account and take home the idea that I’m only defending Franken because I was falsely accused myself. Others, I hope, will consider that my experiences have made me more sensitive to how it feels to be on the receiving end of such an accusation, and more sympathetic to someone who gives up when they lose faith in their colleagues.

My sympathy is not one-sided, because I’ve been on the other end of things. I was sexually harassed earlier in my career, by my own manager. It was a quid-pro-quo request in order to keep my job, and I chose to leave, but didn’t bother reporting it.

When I talk about the problem of false accusations, what bothers me most is that, because there is so much stigma and risk around accusing someone of sexual harassment or worse, most claims made publicly and without the shield of anonymity are true. As a result, every visible instance of a false claim is used to undermine the legitimate ones that vastly outnumber them. I don’t want my defense of one particular person to be abused into a defense of the guilty.

This is what I meant when I said that the Franken debacle harms #MeToo. There is a culture of exaggerating the risk of false claims so as to undermine victims, and what feeds this narrative are the rare exceptions: the illegitimate accusations that get disproportionate publicity precisely because of their rareness. “Man bites dog” is newsworthy, “dog bites man” is not, so you’d think from reading the papers that dogs fear men biting them and not the other way around.

The only way to undermine this attempt at intimidation is to starve it of support. Yes, #MeToo taught us to #BelieveWomen, but this has to mean that we take their claims seriously and investigate them neutrally, not that we rush to judgment in either direction. False accusations hurt real people, not just the falsely accused but the victims who aren’t believed because there’ve been a few well-publicized false accusations. So we need to trust but verify, not trust blindly.

Some accusations are malicious, others stem from some level of misunderstanding, but the overwhelming majority are legitimate. These legitimate accusations are the ones we need to protect by blocking the illegitimate and mistaken. Moreover, as Pence shows us, a world where women are seen as an occupational risk is not good for women. Excessive zeal to punish the guilty creates harmful blowback that hurts the innocent.

Bringing home the bacon

On the use of force and the utility of impeachment; intentions vs. consequences

Did you just say “down, boy” to me?

We are currently in the throes of a food fad based on adding bacon to everything, but while we love the sweet, salty flavor, we don’t think much about where the meat came from. And when we do think of pigs, we imagine the tame farm animals: all pink, rotund, and cute. But these were bred from a much scarier creature.

The wild boar is a large, powerful beast, more than capable of goring a human to death, and more than willing to do so when enraged; it is easily enraged. Boars have a thick, protective hide, dense bones, and lots of muscle, and once they get angry, they do not quit.

Traditionally, wild boar was hunted with long spears from horseback, the better to keep those deadly tusks far away from human flesh. One characteristic feature typically found on boar spears is a crossguard, whose job is to keep the impaled boar from pushing itself up the shaft to attack the person holding it. Think about that for a moment.

As you might imagine, the wild boar is not an animal you can control through pain. And yet the strategy of pain compliance is all too often taught for defending against other humans. In particular, it has become a mainstay of “women’s self-defense”.

Most of us have seen carefully-staged videos of tiny women stomping on a large, male assailant’s instep or bonking him on the nose or twisting his wrist, causing such agony that the man gives up. It works in the videos so it must be true, amirite?

In reality, this approach is not necessarily a good idea. When the assailant is timid and unsure, expecting no resistance and perhaps unaware of the line they’ve crossed, a bit of pain might actually dissuade them, like shouting “no” but harder to ignore. But a motivated opponent, especially one who is already riled up and running on adrenaline and unwilling to take no for an answer, may barely register the pain and is likely to react by escalating further.

Like the wild boar, pain just makes him angry and more violent, pushing him past the point of no return. And given that the victim is fighting off someone bigger and stronger, this could end badly.

Am I suggesting that she just take it? No, not at all. Resistance isn’t futile, but it has to be based on damage, not just pain. Twisting a wrist is one thing, breaking it is another. The defense strategy that works is to take away their ability to harm, not count on psychological discouragement. To put it another way, taunting the boar is suicidal, but shooting it dead works.

Which brings us to impeachment. If we could cause Trump damage, not just pain, with impeachment, we should. So if we could follow up that impeachment in the House with conviction in the Senate, expelling him from office and exposing him to arrest for his various felonies, it would be worth doing. This remains the case even if it means incidentally providing fodder for the right-wing persecution complex.

But we can’t. The corrupt, traitorous Republicans control the Senate, and they wouldn’t convict Trump even if he confessed to the entire nation. We cannot harm Trump with this, only cause pain. And while I don’t have any hesitancy about making Trump’s life less pleasant, this is as counterproductive as smacking a boar’s snout.

If the House attempts to impeach him and either fails outright (currently likely, given the lack of support among even Democratic Representatives) or succeeds only to be blocked by the Senate, how will this damage Trump? It’ll cause him some pain, but the fascists in America are already enraged past the point of being discouraged by pain.

Instead, they will be encouraged by our show of weakness. We’ll have taken our best shot to no effect. They will see that they have nothing to fear from us, so they’ll rush to the polls, feverishly excited to re-up the fascist-in-chief’s tour of duty and hog wild about crushing “libtard snowflakes”. Meanwhile, dejected, fickle liberals will stay home and cry like sore losers, while the populist left makes a feast of Democratic misery in the primaries, further weakening the DNC and aiding Trump.

I’m sorry to say that impeachment was never the answer. Like election, impeachment is a political process, not a judicial one. It represents the will of the people, but only a minority of citizens support it. Not only is impeachment unpopular, but it’s becoming more unpopular: support dropped 12 points among Democrats between January and July of 2019, even as Trump’s approval rating plummeted.

It’s fine to cause Trump pain through public hearings about his crimes, but the goal has to be to motivate the left, discourage the right, and appeal to the middle. That’s how we won in 2018. It’s how we will win in 2020, and when we win, Trump loses more than his job. He’ll move from the White House to the courthouse to the big house to the graveyard of history, where he belongs. It’ll be “That’s all, folks” for him.

History does not give consolation prizes for good intentions; only consequences matter. We might think we’re doing the right thing, but if the results aren’t right, then we were wrong. The moral high ground is already ours; we don’t need to do anything just to retain it. What we need is to use it to remove the party of white supremacy from power. Nothing short of that—no symbolic victory or good intentions—will do. We need to bring home the bacon, not just rile up the boar.

Foiled again

What separates conspiracy theory from conspiracy fact?

I wear the hat; it does not wear me.

Tin foil is a lie!

We use aluminum for foil these days, not tin, because it is cheaper and stronger, but we persist in calling it tin. Tin foil is inaccurately named, and everybody knows it, but I’m not proposing a swift, orderly change because this isn’t some sort of conspiracy, just imprecise language.

Whatever we call the foil, it’s pretty useful. I like to line pans and cookie sheets with it so that I don’t have to scrub them, but it’s also good for covering the thin parts of large pieces of meat to prevent burning, and of course, for storage. One thing I don’t do with it is wrap it around my head and wear it as a hat, because I’m no conspiracy theorist.

We laugh at conspiracy theorists, and we are right to do so. Whether it’s the nuts who claim we faked the moon landing or the loons who say the government is controlling our minds with fluoridation or chemtrails or microwaves, they are fools to believe as they do, and doubly so for thinking us fools for disbelieving. Perhaps the worst theories are the ones that are fundamentally political and often blatantly racist: consider such antisemitic favorites as the Protocols of the Elders of Zion, the blood libel, and Holocaust denialism.

At heart, conspiracy theories posit simple-sounding, emotionally-satisfying explanations for why specific things are bad. As a result—instead of having to deal with a cause that is abstract, speculative, and statistical—believers have a villain to hate.

The psychological rewards are obvious: if there’s a bad guy, then they’re the good guy. If there’s a secret plot that is hidden from all eyes, then they’re special for seeing right through it and being in the know. And if there’s something horrible that they really want to do to other people (see above), there’s a justification so overwhelming that it is (ahem) hard to believe.

Conspiracy theorists believe as they do because they want to, not because they have to. The evidence didn’t force them to accept the conclusion; the conclusion was accepted regardless of or even despite the evidence because it was desirable in itself and for what it brings. Sometimes, they posit these theories to explain away inconvenient truths that they cannot accept. And often, those who create and spread these lies do so on a knowing, self-serving basis.

What gives it away is just how unwilling they are to consider that they might be wrong. They believe (or say they do) because they want it to be true, not because it is. They implicitly recognize this, which is why they overreact to criticism by doubling down (“the more you try to dissuade me, the more convinced I become”), circular reasoning (“the fact that you’re denying it is proof that it must be true”), and paranoia used to reject expertise (“trust no one”).

But not every theory about conspiracies is a conspiracy theory in the normal sense, because there are two necessary elements. The first element of a proper conspiracy theory is that it’s about an action, often an ongoing one, that requires the long-term cooperation of many people who are working in concert to achieve their goals.

This part is actually easy; it’s literally the whole point of a political party or a corporation or a glee club (which is why we should never trust any of them unconditionally, especially not glee clubs). People “conspire”, in this limited sense, all the time, often quite successfully. The second element, which turns out to be the tricky part, is that the conspiracy has to effectively remain secret. After all, it’s not much of a conspiracy if everyone knows. Or is it?

What makes conspiracies implausible, even ridiculous, is that the more people they supposedly involve and the broader the supposed actions are and the longer they supposedly go on, the less likely it is for them to keep it all secret. With so many people, it’s only a matter of time before one of them spills the beans, or screws the pooch and is noticed.

Sure, you can try to explain this away by positing secondary conspiracies to silence, discredit, and even kill those who tell the truth about the primary one, but it quickly stretches all credulity. Two can keep a secret, if one is dead. True secrecy therefore requires a murder spree.

Consider one theory about a truly depraved conspiracy. Imagine if a prominent individual, such as a slimy, Jewish Wall Street billionaire who owns a gossip magazine, were to make a habit of hiring girls—and I do mean “girls”, as many were in their early teens—to “massage” him and perform various sex acts, sometimes by forcing them physically.

Further, imagine if he had “lent” these girls out to famous, powerful people to generate blackmail material and ensure that he was owed favors so he was able to continue enjoying his child sex ring unbothered by law enforcement. Imagine if this involved over 75 victims and went on for over 6 years. Imagine if this remained an open secret; known by many but not acknowledged, much less acted upon appropriately.

Preposterous! Except that it happened and you probably know all about it.

Ok, fine, it happened, but it’s not a proper conspiracy theory because he was unable to keep it up indefinitely. He was, however, able to keep it under wraps for a long time, and then almost entirely avoid the consequences of his crimes. He got “the deal of a lifetime”, and pretty much walked away scot-free.

This travesty of justice has since received increased scrutiny, and now he’s under arrest again, so perhaps the arm of the law is long enough that even he can’t escape it, but if so, then the wheels of justice have turned exceedingly slowly, perhaps too slowly. He’s 66 years old right now, and still filthy rich, so he just might be able to drag this out until he dies. If not, he’ll die in jail, which would be just.

The lesson here is that the sort of thing that would be easy to dismiss as a conspiracy theory can actually happen in real life; it just can’t be kept secret forever. It may, however, be possible for the guilty to get away with it for quite a while. The dirty deed would not be a secret, but it also would not be broadly accepted as factual, much less result in intervention and punishment.

Now consider another theory about a conspiracy, this time with even bigger stakes. Imagine if a corrupt foreign government were to use hacking, social networks, and sexy spies to compromise powerful political organizations and even a major party so as to ensure that their asset becomes the American president. This is wild shit, straight out of The Manchurian Candidate, and yet the claim came from former President Jimmy Carter and is supported by the conclusions of the US intelligence community.

It turns out that it’s entirely possible to do this sort of thing, at least if you’re Vladimir Putin and Donald Trump, and to get away with it for years, though not to keep it secret. It’s been in plain sight since the primaries, but there is a gap between the truth being apparent to anyone paying attention and it being incontrovertible to the point where it cannot be ignored, even by those who would prefer to.

So far, nothing much has happened to Trump, and he may yet get re-elected instead of impeached. He may get away with it, even though his presidency is entirely illegitimate and he is a corrupt, traitorous pawn of Russia. He only has one term left, and he’s 73 and in poor health. The grave may get him before justice ever does.

There is precedent for this. Consider that Nixon was not just guilty of ordering the cover-up of the break-in at the DNC headquarters in the Watergate complex, but was variously corrupt and criminal, yet it took years for him to be brought to justice. Even then, most of his violations were ignored, and he dodged the bullet by resigning ahead of impeachment and then being pardoned. He never even faced criminal charges.

So, where does this leave us? Well, Carter has pointed out that the American emperor wears no clothes. Mueller did, too, albeit in drier terms and at greater length. The wheels of justice are turning, however slowly, and we can only hope that they grind exceedingly fine. Even if we never stop Trump, perhaps we can purge the Russian taint from the American right wing and block the political aspirations of the next generation of fascists, including Trump’s own children.

In the meantime, we should expect that anyone who mentions the plain fact that Trump is a traitor and the fake president can expect to be dismissed as a conspiracy theorist. With so much evidence, though, you’d have to wear a tin foil hat and pull it down over your eyes to deny the plain truth about the man in the Oval Office. The real conspiracy theory is the idea that Trump is the legitimate POTUS.

Salad forks on the center left?

Formal dinners aren’t the only time when left, right, and middle matter.

Words matter; they have meaning.

I’ve ranted before about the way terminology can change over time and leave people confused or misled. Now, I want to focus primarily on the left-to-right political axis and how it relates to the current incarnations of the political parties.

Probably the most abused term these days is centrist. While it has a legitimate meaning, it’s almost exclusively used instead as a slur by the far left against anyone who’s not far enough to the left to satisfy their pathological need for ideological purity.

First, the legitimate meaning. A centrist is someone who’s neither left- nor right-wing on the whole. They might have views that are fairly neutral or weakly held, or a spread of positions scattered on both sides of the line, or maybe they’re rocks and don’t have opinions at all. You never know with those inscrutable centrists and their bizarre neutrality.

None of this accurately characterizes modern American liberals, as they are left of center. So when the extreme left calls liberals “centrists”, this is hyperbole. More bluntly, it’s a lie that verges on false equivalence. They’re basically admitting that they’re so far to the left that they lump everyone else together.

Liberals are also correctly described as being center left, which just means that they’re moderate. They’re to the left of center, but not all the way to the left. To put it another way, they’re the part of the left that’s nearer to the center than to the extreme. But while liberals are center-adjacent, they’re not centrists.

So where do we draw the borders? Well, liberalism has room in its big tent for anyone left of center, so it’s pressed up against the center on one side. On the other, it can go as far to the left as it likes, just so long as it doesn’t cross the line into radicalism. What specifically defines radicalism is the refusal to cooperate with those who think differently.

Because a liberal is moderate, they’re willing to team up with almost anyone, at least on issues where they find common cause, and to the extent that they do. So while a liberal might not agree with someone who’s centrist or far left or moderate right on most matters, they’re usually willing to at least try to work with them where they do agree, to find a compromise, when one is possible, so as to get things done.

It is this ability to cross the center that allows democratic government to work. So long as there are moderates in power on both sides of the center, there is bipartisanship, so things keep running smoothly. Without overlap, there is just partisanship, hence either gridlock or winner-takes-all radicalism. Moderates can work together because they respect competence and pragmatism, whereas extremists do not.

So what defines the left border of liberalism is not how far to the left they go, but how unwilling they become to cooperate with anyone else. Liberalism ends where ideological puritanism begins. One consequence is that some liberals are considerably to the left of extreme leftists on key issues.

While there are no issues where liberals are far to the left, they can be more consistently to the left across all issues than the extremists because the extremists’ populism leads them to pick and choose what matters to them. Extremists either care too much or not at all, with nothing in between. And when they do care, they take an all-or-nothing, no-compromise approach.

An example of this would be civil rights, which liberals are deeply committed to but the far left disparages as mere “identity politics”. This is not just a theoretical divide but a practical one. The far left is apathetic about a woman’s right to choose, and doesn’t want to help refugees and other immigrants. It’s also lacking in commitment to gun control or protecting minorities against police violence.

Zooming out, all liberals are leftists, but not all leftists are liberals. Pretty simple, but what really confuses things is the term “progressive”. Skipping over its history, progressive, as an adjective, means left-leaning. We can talk about whether single-payer is more progressive (further to the left) than ObamaCare is. This is very similar to using liberal as an adjective.

As a noun, its meaning is ambiguous and is still shifting. Back in the days of Reagan, it became another label for a liberal but, lately, it has come to refer to just the extreme left, not liberals. So, for example, while Obama’s policies were progressive, Obama is not a progressive; he’s a liberal. Moreover, while the people who identify as “progressives” today are far left, they’re also populists.

Briefly, populism is a style of politics based on the conceit that the numerical minority it represents consists of the citizens who truly matter: the real people. Its leaders likewise claim to be independent-minded outsiders who are “authentic” and will lead the good guys (it’s always guys) to victory over the “establishment” elite, which consists of everyone who’s not one of them. Those people are seen as the enemy, and therefore inherently “inauthentic” and “corrupt”.

Populism is often right-wing, such as with Trump or various European fascists, like Le Pen. It can also be left-wing, such as Sanders or the so-called Justice Democrats. Despite being on opposite extremes in one dimension, they share a great deal in common, not only in style but substance.

Among these elements are demagoguery, nationalism, and pandering. Put simply, populists promise the impossible, which is why their positions are so extreme. They also demonize everyone who’s not as extreme, which is what creates the insatiable demand for ideological purity.

What’s fascinating is that populism is a second political dimension, allowing the extremes of left and right to come together to form a horseshoe.

Twisted in the populist dimension

The far left and far right are united by populism against their common enemy, which they call “centrists” but really mean everyone who’s anywhere near the center. In other words, all extremists hate all moderates. They hate them even more than they do the opposite extreme.

When populism moves people away from the center, it also shifts them from conventional to radical views. On the far left, this means Marxism, which includes democratic socialism and outright Communism. On the far right, this means Fascism, which includes Christofascism and neo-Nazism.

Switching away from populism and back to the left/right spectrum, I only have a little bit more to say because there’s just not much left of the conservatives. These people are (were?) center-right, which is to say moderately to the right. They often opposed progress and were casually bigoted, but they weren’t monsters.

Many conservative politicians, like Eisenhower, were competent, honorable, and had positive accomplishments. Moreover, it was possible to work with them productively, and while they did drag us down, they weren’t an anchor to sink us. They even served the useful purpose of keeping the radical left in check and being a buffer against the radical right.

I miss them, but they’ve lost power and faded away. Their last gasp came when the Tea Party movement took over the RNC and left them without political representation. Some of the more moderate ones occasionally vote for Democrats, but while we welcome them, they’re just not ever going to be comfortable with our liberalism.

The left faces the same risk, as a constellation of far left people and organizations, such as the Justice Democrats, Our Revolution, Bernie Sanders, The Young Turks, Jacobin, and The Intercept, are all working to do to the DNC what the Teabaggers did to the RNC. They are the Herbal Tea Party, and the herb is toxic populism.

Hopefully, this rant will help you set the political table in an orderly fashion. Remember, the spectrum starts on the far right with fascists, moves towards the moderate center with conservatives, transitions to the moderate left with liberals, and then goes back off the deep end with Marxists.

Addendum:

A favorite talking point of both the American far left and some Europeans is that American liberals would be center-right in Europe, and that America therefore has no true left. It’s really hard to take this seriously.

Mapping political stances across national divides is genuinely difficult, because only issues that are controversial within a given country serve as useful measures of political orientation there. So, for example, the NHS has broad support in the UK, across parties, whereas support for M4A in the US is strongly correlated with party affiliation. On this issue, Europe leans more to the left, but there are others, such as immigration, where Europe leans more to the right.

What further complicates such cross-cultural comparisons are differences in political systems, where parliamentary governments allow for small factions to be considered full political parties. In such systems, you can vote for a fringe party which then joins a coalition, whereas such a vote in America is wasted as an empty protest. As a result, the major parties represent the compromise that the coalition has settled on, and the most extreme views are intentionally lost in the shuffle.

Of course, when the American far left claims that there is no left wing in America, this is telling on themselves. They’re bragging about their ideological puritanism, admitting that they’re edgelords who refuse to recognize any distinctions among those who fall short of their impossible standards. They’re basically claiming that everyone to their right is right-wing and that only card-carrying Marxists should count as left-wing.

This sort of both-siderism erases the distinction between the moderate left and the far right, which is absurd.

Taste-testing for quality control and identity

Does this taste like identity politics to you?

A classic TV commercial depicts diners at a fine restaurant being informed that the coffee they just had with their expensive meal was really just freeze-dried Folgers from the supermarket. Naturally, they’re surprised that it wasn’t the fresh-brewed, gourmet drink they thought they were getting.

This says a lot about how people allow their expectations to undermine their objectivity, as well as raising the question of whether the identity of the product matters as much as its quality. Does it matter if it’s Folgers if it’s good? Is choosing a premium brand important or are generic and off-brand products acceptable? These questions of identity affect not only food, but also politics.

What qualifies something as “identity politics”, anyhow? Officially, it’s defined as the sort of politics where “groups of people having a particular racial, religious, ethnic, social, or cultural identity tend to promote their own specific interests or concerns without regard to the interests or concerns of any larger political group”. (Emphasis mine.)

In other words, it’s explicitly partisan; identity politics is intended to help one group above others, as opposed to promoting equality. Of course, when a group has been pushed below others, attempts at equality can look partisan, especially when viewed from above. The shrinking of an unfair gap can appear to be bias when it’s your advantage that’s doing the shrinking.

The way to tell whether it’s about equality or partisanship is not to focus on the spin or rhetorical style. Instead, we have to consider whether their proposals show a disregard for others, as opposed to seeking to help everyone.

So, for example, BLM doesn’t suggest that cops should be free to shoot anyone they want, just so long as they’re not black. Instead, their proposals seek to prevent all unjustified shootings, with the focus on black people explained by the disproportionate impact. That’s not partisanship, despite any appearances.

In the other direction, being conspicuously neutral (i.e. color-blind) about forms of bias that don’t happen to affect you, or admitting that the bias is real but claiming it will somehow automagically go away when your own, more general problems are fixed is an indication of partisanship in disguise.

To be clear, by overwhelming numbers, the most common form of actual identity politics in America is white supremacy. Strangely, it’s often not considered identity politics because it’s taken for granted.

Logically, white supremacy is a great example of identity politics. Practically, the term has come to be used selectively as an insult; a slur by the more-than-equal to denigrate anyone who promotes the equality of the less-than-equal. There’s a saying about fish not having a word for water because it’s all around them: that’s how white supremacy is. It’s ubiquitous.

Accusing a minority of identity politics is a dog whistle, like saying that they’re uppity (because they want equality) or well-spoken (for what they are) or don’t know their place (which is the bottom). When you hear that whistle, look for white supremacy and you’re likely to find it. Coincidentally, Bernie Sanders has used the accusation of identity politics for years to smear his opponents, while supporting conspicuous color blindness that is quietly but distinctly white supremacist.

When Sanders harangues that it’s not enough for someone to say “I’m a woman! Vote for me!”, he is implying that the only reason to vote for her is because she’s a woman. Of course, when he aimed this attack at Clinton, it was insulting and laughable—she was the most qualified candidate we’ve seen in decades—but that didn’t stop him. For that matter, it didn’t stop him when he aimed it at Obama and tried to have him primaried.

Lately, Bernie Sanders and his surrogates have suggested that all the excitement about various minority candidates—women, black people, Hispanics, homosexuals—is due to “identity politics”; due to partisanship. This is, again, an insulting lie. Candidates such as Kamala Harris are at least as qualified as Sanders is, and there is no shortage of good reasons to pick them instead of him.

When pushed, Sanders supporters like to redefine “identity politics” by shrinking it down to exactly match the specific way Sanders uses it. Instead of referring to partisanship as a whole, they say it’s literally only about voting for someone solely due to a shared identity. This is still dishonest and insulting, but it does raise an interesting question: Is it necessarily partisanship to allow someone’s identity to influence your vote?

I don’t think so. Assuming we’re talking about choosing candidates whose qualifications are comparable, there are legitimate reasons to prefer the minority. I’ll focus on two: signals and representation. And then I’ll discuss some anti-patterns.

When two candidates for a job look about the same on paper but one is a minority, the latter is statistically likely to be better because minorities are systematically undervalued; they have to work twice as hard to get half as much. Minorities are assigned lower grades, lower interview results, and lower performance scores than they deserve, due to implicit bias. As such, minority status among high achievers is an additional signal of quality, not some sort of noise to be filtered out. It shows that they’re even better than they might appear, because they had to overcome a societal handicap.

The other reason is representation. People are fundamentally equal, so when we see unequal outcomes, this has to be explained somehow. And, in the absence of a better reason, the default one is bias. When the demographics of a field don’t match those of the general populace, unless there are other factors demonstrably at play, it means they’re being unfairly selected. Therefore, intentionally choosing an equivalent candidate who differs only in being a minority is a reasonable way to make up for that by making the field more representative.

When doing this, the preference among minorities should not be towards your own, if any, but whatever is statistically justified. This is one of the reasons I somewhat favored Clinton over Obama in 2008: women are a larger “minority”, so large that they’re not even a numerical minority in the population at large. They’re a minority in the sense of being less than equal, which is why they’re a numerical minority in prestige fields such as politics.

Minority candidates can be better just because they’re minorities. They are likely to be more directly aware of and personally motivated by the issues that disproportionately, or even uniquely, face them. For example, when you see a room full of white men explaining why women shouldn’t have bodily autonomy, it’s hard not to think that the absence of women is relevant.

A related notion is that of role models. When people do not see themselves as being represented in our leadership, it has a chilling effect. Somewhat rightfully, they feel that this shows that it’s not their government and they’re not seen as important. This discourages political activism; especially voting, but also other forms of participation, including running for office. Every minority in power is therefore a role model for equality, encouraging and legitimizing buy-in. This is a huge boost for democracy.

Can this go wrong? Sure. I’ve been critical of the notion that you have to be a member of a group in order to care about it or that those who fight for equality should be relegated to the inherently-inferior status of ally if they’re not part of your group. In particular, I argue that, by virtue of not being members, they have a strong, built-in defense against accusations of partisan “identity politics”.

Another way it can go wrong is when the candidate is a traitor to their group, guilty of the same bigotry that the group suffers under. Consider Sarah Palin or Margaret Thatcher or Milo Yiannopoulos. Ironically, it’s not that unusual for the earliest examples of a member of a minority group openly entering a field to be one of those who are hostile towards their own identity; “self-hating”. After all, it is this very hostility that makes them more palatable and acceptable to the majority, which lets them get in.

Consider how a woman entering a field dominated by men might feel a pressure to show that she’s “one of the guys”, emphasizing her masculine traits and deemphasizing her feminine ones in order to be taken seriously. Another example would be a black doctor who keeps his hair closely trimmed and goes golfing. A third is the intentional use of respectability politics as a cover for denigrating others of their group and establishing themselves as “one of the good ones”. In all these cases, they’re overcompensating for their minority status by playing down their identity and throwing the rest of the group under the bus.

Of course, the most obvious way it can go wrong is when the candidate is underqualified or flatly unqualified, yet favored by members of their identity group. The example that comes to mind, both of this and the earlier problem of overcompensating, is Pete Buttigieg. While he doesn’t hide his homosexuality, he was closeted until very recently and is not really a member of the gay community in any social sense. Moreover, he identifies more strongly with being white than gay and wears his Christianity on his sleeve, hence his ongoing outreach to the bigoted “white working class”.

Pete is not the worst possible candidate, but he’s just not that impressive if you look at him objectively. His political experience is limited to being mayor of a small city, and his previous attempts to get traction at even the state level were unsuccessful. As suggested above, his political views lean away from liberalism and do not energize the base. If he were straight and weren’t a white male, he’d be ignored by the press.

Aside from being a white man, why is he getting so much publicity? Much of it is not despite being gay but because of it. I can’t help but notice that his candidacy has received undue attention from gay reporters and activists, such as Maddow and Takei. They’re so excited about finally getting some representation that they’re allowing themselves to be blinded to his faults and weaknesses.

This is unfortunate, because his attempt to appeal to white folks at the cost of throwing the Democratic base under the bus (which is like the back of the bus, only worse) will not work. No matter how hard he tries to blend in with the majority, the bigots will not vote for him. Not only is he gay, but his color-blind bias just can’t fire up the white supremacists the way Trump’s overt bigotry does.

On the other hand, Kamala Harris is clearly competent, and being a black woman (with Jamaican and Tamil ancestry) offers the non-white, non-male Democratic base the motivation and inspiration they need to overcome Republican voter suppression and work to get their votes counted. She is the positive side of so-called identity politics, whereas Sanders and Buttigieg are the negative.

Chickens and eggs

Abortion, viability, and rounding errors

The optional sunny-side-up stage in the life cycle of the chicken.

Which came first, the chicken or the egg? Actually, that’s a stupid question: it’s the egg, of course. The egg is an early stage in the life cycle that, if all goes well, ends in a chicken. This fact is embodied in the admonition not to count your chickens until they hatch.

But note how this saying inadvertently promotes an egg to a chicken. You’re counting “chickens” that aren’t even chickens yet, and might never become chickens, which is why you shouldn’t be counting them. Effectively, it “rounds up” the egg to what it might one day become, and therein lies the problem.

This part really isn’t complicated: a thing is not (yet) what we expect it to become. It is potential, not actual. A seed is not a tree, even if it may one day be. A person is not a corpse, even though that’s really only a matter of time. If and when the time comes, fine, its status changes and we treat it differently. But not until then. Why jump the gun?

We don’t bury the living just because they’ll die someday. Yet this sort of confusion about the actual and potential status of things forms the basis of arguments against a woman’s right to choose. You can see this in the self-contradictory term, “unborn child”, which makes as much sense as “living corpse”.

Come back here, you living corpse, I’m here to bury you! Stop insisting on your rights as a person; I’m rounding you up to a cadaver!!!

The ethics of abortion are often framed in terms of personhood. If it’s a person, it has rights, so killing it is murder. But this quickly turns into a game of Pin the Tail on the Donkey with blind attempts at sticking a pin through the magic moment at which personhood is achieved. Spoiler alert: there is no such moment because there’s no such thing as magic. Real life is more complicated.

An ovum and a spermatozoon are individual cells, and I don’t think anyone mistakes either for a person. If things go well, however, they might join together to eventually become a newborn in about 40 weeks. Just as uncontroversially, no one seems to deny that this newborn should be treated as a person. So, somewhere between these two points in time, in this gray area, the potential person transitions into an actual one. That’s where the controversy is to be found.

Those who oppose female bodily autonomy justify it by prematurely promoting a potential person to an actual one. Many of them argue that life (by which they mean personhood; they don’t understand ethics) begins at conception (by which they mean fertilization, not implantation; they’re ignorant about medicine, too). This is muddled and entirely arbitrary, but it yields their desired conclusion, so they stick with it.

A more recent trend is to claim it starts with having a heartbeat, but since that’s about 5 weeks in, it’s usually before the woman even knows she’s pregnant, so it serves the same purpose. (Even then, it’s not an actual heartbeat, as there’s no heart yet, just a measurable electrical signal.) Either way, they want us to treat something which cannot survive on its own as a person.

This is relevant because, so long as the embryo or fetus is wholly dependent upon the pregnant woman, there is no way for us to grant it rights except by taking hers away. And while the personhood of a fetus is questionable, there’s no question about the woman being a person. It’s her body, her rights, her choice. If she chooses to give up some of those rights to transfer them to the fetus, that’s fine so long as it’s her choice and not ours.

A note on terminology. When a woman decides she will carry the pregnancy to term, it’s entirely fair to round her up to a mother and round the fetus (or, really, even embryo or zygote) up to a child or baby. There’s nothing offensive about that and doctors do it routinely. But if she hasn’t, then such rounding up is both dishonest and emotionally manipulative. It’s where you get bullshit phrases like “mothers murdering their babies” in reference to abortion.

It’s not murder because the fetus has not earned any rights on its own and the woman has not chosen to give it rights at her own expense. If she did, then killing it would indeed be murder. So if someone sticks a knife in a pregnant woman’s uterus and kills the fetus, that’s murder, but an abortion isn’t. By the same token, there is no contradiction between allowing abortion and opposing pregnant women doing things that would lead to a newborn that is unhealthy.

This all goes back to viability. I said before that there’s no magical point, and that’s because it’s gradual. Fetal viability is not a phase change, like ice melting into water. It’s more like tar slowly turning soft until it flows. There’s solid tar, liquid tar, and a whole range in between, where it’s sticky.

Under our current technology, no embryo is viable. At 9 weeks in, the embryo is considered a fetus, but there’s still no chance of surviving outside the womb. It’s not until about 22 weeks that there’s any chance at all, and it remains very low: about 5%. Even then, this is a measure of survival, not health. Pre-term babies suffer from serious issues, and these don’t all go away even if they live: long-term disabilities are common, and many of these are dire.

At around 24 weeks, viability increases dramatically and reaches about 50%. A couple of weeks later, viability is up past 90%, and the last few percentage points slowly come in as the 38th week approaches. This is also around the time that even a premature birth will still likely result in a healthy newborn. Childbirth is usually around 40 weeks in, though viability never does reach 100%.

So while there’s no magic point, there are three stripes which blur into each other. There is a clear black zone (up to 22 weeks), a gray zone (22 to 27), and then a white zone (27 to 38+). With modern medical technology available, we tend to round up from the halfway point, considering a 24-week fetus to be viable enough to deserve intervention, but even so, death is still the most likely outcome.

When a fetus cannot survive on its own, aborting the pregnancy entails killing it. Once it can, there’s no such connection. Doctors could just induce labor or perform a C-section and hand the baby off to someone who actually wants it.

In practice, this is largely a non-issue because elective abortion of pregnancies past 26 weeks is nearly nonexistent. Women don’t request them and doctors won’t perform them. There are still a handful of abortions even this late, but they’re therapeutic, not elective. In other words, they’re for medical need, for desperate circumstances such as the fetus not being viable or the woman’s life being at risk.

Back to that newborn that we all agree is a person. Let’s be frank: it has not earned personhood through its own merits; even dogs are smarter. Its status is based on its potential, but it’s safe to round up because we don’t have to round anyone else down in the process.

Ultimately, the morality of abortion comes down to distinguishing the potential from the actual so that we don’t count our fetuses as babies unless we can do so without counting women as mere incubators. We put the actual rights of actual people above the potential rights of potential people. The alternative would be immoral.

Washing your hands and other food-safety tips

Memetic Hygiene, Contagious Hate, and Empathy


Not the infection you should be worrying about.

Legally, restaurants must provide three bathrooms: male, female, and employee. (Insert your own joke here about genderless worker drones.) Despite this, employees do use the customer bathrooms, so you’ve probably seen that small sign near the sink which reads: “Employees must wash hands before returning to work.”

There’s a bit of humor in the fact that only employees have to do this, but the topic of sanitation is not all that funny, especially if you’ve ever come down with food poisoning from a restaurant. Ask me how I know.

Still, while we all understand the need to prevent foodborne infection, it’s not the most dangerous kind. The most dangerous kind is mental. Contagious diseases of the mind—often, political diseases—are a far greater threat to our safety. I’ll explain.

Richard Dawkins coined the term meme by analogy to gene, as the unit of cultural transmission. The idea of wearing a baseball cap backwards is a meme that spreads mostly by observation and imitation. The idea of Christianity is a meme that spreads vertically by childhood indoctrination and horizontally by proselytization. The idea of a meme is itself a meme that spreads by books and by pedantic rants from online sandwich-makers and political pundits.

Just as a virus is a bundle of genes that spreads itself around, a bundle of memes can act as a mental virus. This cluster of memes—called a meme complex—can spread and become popular, not because it is true or even good for its hosts, but because it has attributes that make it good at spreading for its own sake or for the sake of non-believers who benefit from it.

This has been understood for some time now. Nearly two thousand years ago, a line commonly attributed to Seneca put it this way: “Religion is regarded by the common people as true, by the wise as false, and by the rulers as useful.” A belief can be completely false or even nonsensical, yet remain common because it serves the interests of those who don’t even hold it.

That is the chief insight of meme theory: something can be successful in the marketplace of ideas despite having no merit whatsoever. Even if it harms its hosts, or kills them—think of the Jonestown mass-suicide cult—it can still benefit itself by propagating faster than it dies out. Take that, sociological functionalism!

The virulent meme complex that has been the focus of much of my attention for a few years now is white supremacy, a constellation of self-serving bigotries against (obviously) those who cannot pass as white, but also women, gays, Muslims, Jews, Hispanics, and others who are not entitled to be on the top rung of society. It is, and has long been, the dominant form of bigotry in America.

As with HIV infection, there is no broad, reliable cure for white supremacy, nor even a vaccine against it, but there are effective treatments. I’d like to explore this analogy further.

With HIV, antiviral medications are used to prevent HIV-positive people from getting full-blown AIDS and also stop them from being contagious. The same drugs can be used for prophylaxis, which means HIV-negative people taking the medicines in advance so that they don’t become infected if exposed, protecting them much as immunization does. And, of course, there are barrier methods, such as condoms, dental dams, and gloves.

With white supremacy, the best we can do is the moral equivalent of antivirals; we can suppress the harm it causes and hinder its proliferation, so that it will diminish and perhaps eventually die out. Barrier methods play only a minor role here: we can lock up white supremacist terrorists, but we’re not monsters; we follow the medical admonition to “first, do no harm”. So we’re not going to take a page from their playbook by separating children from adults, much less running concentration camps.

But it does start with children, because they aren’t born infected, so we can protect them by effectively immunizing them through a comprehensive, honest education. Schools have to inculcate critical thinking skills and the scientific method so that students can resist the indoctrination that we can’t block. Rather than vaccinating against specific diseases, we are strengthening their immune system against all of them.

An important part of this education is an anthropological survey of the cultures of the world, exposing them to the variety of beliefs that exist so as to curb unthinking ethnocentrism and provincialism. Schools also need to be desegregated, have federal-level financing and curricula, and teach the whole truth about the history of colonialism, slavery, and Jim Crow, as opposed to the whitewashing myth of the Lost Cause of the Confederacy.

Even when we fail to prevent white supremacy from taking root, including in the adults who missed their chance, there is still much we can do. Without a cure, we can only treat and suppress: prevent their bigotry from being expressed through discriminatory action, biased social policy, and socially-acceptable hate speech. The goal is a societal version of herd immunity, where the infection is contained because enough people are resistant, even though not everyone is.

We can’t tell people whom to befriend, but we can and should criminalize discrimination in all but the most private matters. This means laws against bias in housing, jobs, schools, businesses, and so on. We can counter institutionalized discrimination and even counter its lingering historical effects through reparations.

In addition to laws, we can personally hold those who spread bigotry accountable for their hate speech by ensuring that they are shunned and perhaps even lose their jobs. To the extent that we can do so without undermining the necessity of free speech in a liberal democracy, we must work to deprive them of opportunities to proselytize. For example, when a business takes a stand in favor of bigotry, we should very pointedly spend our money elsewhere.

We need to understand that white supremacy isn’t merely an individual moral flaw, it’s a social disease. And like the smallpox blankets intentionally given to Native Americans in an early form of germ warfare to serve the interests of colonizers, the disease of hate is disseminated from above because it serves the interests of the very rich.

Bigotry separates poor whites from their natural allies: minorities who are impoverished by bias and lack of opportunity. It motivates whites to vote against their own interests by opposing progressive taxation and social programs that benefit everyone, because they may well benefit minorities more. Thanks to bigotry, they can be counted on to choose policies that harm themselves so long as they believe they harm minorities more. When they suffer, as they will, it is through their own malicious choice, but their suffering is nothing compared to the suffering they cause to those with less privilege.

There is much we can do, but none of it involves “empathizing” with bigots or otherwise coddling them. We know that the so-called “white working class” is not suffering from “economic anxiety”. Their anxiety is about losing some of their illegitimate lead over minorities. They’re not afraid of the increasing gap between rich and poor or the shrinkage of the middle class; they’re afraid of having to deal with a level playing field, where being a mediocre white man might not be enough anymore to guarantee success.

Let’s be real: we’re not going to change minds and win hearts here. The way we stop the white supremacists is to politically crush them. We should therefore write them off entirely and not pander one bit to them, even by omission. Instead of hoping to make our platform color-blind enough that perhaps some bigots will swing our way, we should focus on ensuring that all of our votes are counted. We cultural minorities hold a numerical majority, so we must turn it into a political majority by voting the bigots out of office.

Is this a purity test? Only if you think that opposition to white supremacy is an optional part of the liberal agenda, and I certainly don’t.