Moral Tastebuds

Perhaps as is evident from my current fixation on the Enneagram, I’m drawn to personality metrics, and all kinds of explanatory models for why people act so damn different from each other (which is to say, from me) all the time.

One fascinating one is the “moral foundations” quiz (link below), which is based on the work of Jonathan Haidt and his colleagues. Haidt is a psychologist who studies morality itself, and if that seems a little odd to you, then I’d highly recommend his book The Righteous Mind. It is completely fascinating.

When I read it in 2017, it blew my mind–essentially because I had become so unaccustomed to hearing the word “morality” used in a secular context. I think many of us have a strong distaste for that word, preferring a term like “ethics,” which seems a little more civilized and rational, a little less feral, than “morality.” But the book forces you to look right at that word, and to examine what your own morality might be. “Haven’t got one,” I thought, slightly tongue-in-cheek, for the first hundred pages or so. But I was wrong.

Haidt identifies six moral foundations, which he calls “tastebuds.” Each of us cares more or less about each of the six, resulting in our own individual morality, which we use constantly, unconsciously, to evaluate the world. Each foundation is a pair of opposed concepts, as follows:

  1. Care/Harm
  2. Fairness/Cheating
  3. Loyalty/Betrayal
  4. Authority/Subversion
  5. Sanctity/Degradation
  6. Liberty/Oppression

How strongly a person feels each of those concepts determines how they evaluate a situation. For example, most of us can readily understand care vs. harm. Just imagine a situation in which someone or something vulnerable–a child or a puppy–is in need of care, or worse, is being mistreated. The feeling that stirs in you, or the action it might drive you to, shows (so the theory goes) your care/harm tastebud activating. And if the mere mention of the idea that such a thing could happen got you squirming, congratulations: your care/harm tastebud is alive and well.

But what about the others? The one that fascinates me the most is sanctity vs. degradation, also known as “purity.” The most obvious applications of this tastebud are in sexual morality. But it’s also about food, and personal hygiene. Imagine a clean, cool glass of water. Now imagine someone drops a harmless, sterile cockroach in it. Would you drink it? Your rational mind may accept that the cockroach is sterilized, but your sanctity/degradation tastebud will violently protest if you try to drink it. Even though that may not sound like “morality” at first, consider how religions have almost always set standards for cleanliness of body and of food. We must conclude that food and cleanliness can be deeply moral to humans. If Haidt is right, this tastebud demonstrates the influence of purity on your morality, even at an unconscious level, and even when the tastebud reacts to circumstances that don’t appear to have a moral component at all, like the cockroach.

In the last portion of The Righteous Mind, Haidt goes on to apply his findings about the moral tastebuds to the modern* political climate. He found experimentally that American liberals and American conservatives have recognizable patterns that show up in their tastebuds: liberals tend to weigh care/harm and fairness/cheating very highly, and weigh the rest of the tastebuds less strongly. Conservatives weigh all six somewhat evenly, meaning that their care/harm and fairness/cheating tastebuds come out a little lower than liberals’ did, but the others are higher.

You can easily imagine how that would play out in a political argument: person A might be making excellent points about how Policy X will harm someone, while person B is making excellent points about how Policy X diminishes oppression. They may quickly begin talking directly past each other, unable to hear the value in the other’s point, or see the weaknesses in their own.

(You definitely have something in mind for what Policy X is, don’t you? You even know what you think about Policy X, and I haven’t said what it is.)

Anyway, if you’re interested, take the test and find out where you come out.

*A note on the “modern” political climate: The Righteous Mind was published in the salad days of 2012, back when politics were all about the Obama era and the Tea Party moment. It feels like so long ago. And even way back in 2017, I thought Haidt’s descriptions of what makes someone “liberal” or “conservative” were pretty outdated. Things have changed, folks.

More or less without comment…

The Lectionary is the daily set of Bible readings many Christian churches use: each day there’s a passage from the Old Testament, one from a Psalm, and one from the Gospels. On Sundays, there’s a reading from the epistles.

I read it most days, because it fascinates and inspires me. It also so often speaks directly to something I or the collective “we” are going through. Uncannily, sometimes.

Today, the Old Testament reading is from Judges. Speaking the first parable of the Bible, Jotham cries out on the mountaintop against the kingship of Abimelech:

All the citizens of Shechem and all Beth-millo came together and proceeded to make Abimelech king by the terebinth at the memorial pillar in Shechem.

When this was reported to him, Jotham went to the top of Mount Gerizim and, standing there, cried out to them in a loud voice:
“Hear me, citizens of Shechem, that God may then hear you!
Once the trees went to anoint a king over themselves.
So they said to the olive tree, ‘Reign over us.’
But the olive tree answered them, ‘Must I give up my rich oil,
whereby men and gods are honored,
and go to wave over the trees?’
Then the trees said to the fig tree, ‘Come; you reign over us!’
But the fig tree answered them,
‘Must I give up my sweetness and my good fruit,
and go to wave over the trees?’
Then the trees said to the vine, ‘Come you, and reign over us.’
But the vine answered them,
‘Must I give up my wine that cheers gods and men,
and go to wave over the trees?’
Then all the trees said to the buckthorn, ‘Come; you reign over us!’
But the buckthorn replied to the trees,
‘If you wish to anoint me king over you in good faith,
come and take refuge in my shadow.
Otherwise, let fire come from the buckthorn
and devour the cedars of Lebanon.’

For context, Abimelech was a power-hungry would-be king of Shechem, a city in Israel. He killed all of his brothers but one, Jotham, to seize the throne.

Stan Patterson writes:

A dominance orientation is always rooted in an exaggerated opinion of self and a marginalization of others. It opens the door for coercive behavior that engenders fear and force limited only in terms of what the character of the person will allow. In his bid for dominance, Abimelech’s character allowed the most extreme coercion–deception and murder. The reward was his coronation beside the “oak of the pillar which is at Shechem” and the title of king.

Does this remind you of anyone yet?

Jotham’s parable of the buckthorn (or “bramble”), which he shouted from the mountaintop, criticizes the people (the mighty cedars of Lebanon) for choosing a scheming, parasitic bramble of a man to crawl all over them and declare himself king. How can a bramble, which is not even a tree, but which may constrict and suck the life from a tree, be king of the trees, by the trees’ own choosing? Woe to the trees. Woe to any of us who let ourselves be ruled by a bramble man.

And then…this happened:

Never say the lectionary isn’t relevant, kids.

Swamp lessons.

Something else I’m always talking about in meatspace (perhaps “complaining about” is more apt), which is especially relevant in August in DC:

The dew point.

Bear with me.

Most of us are accustomed to talking about the humidity percentage. But as I laze here indoors on this mid-August mid-Atlantic free sample of purgatory with a lingering migraine, it’s 93 degrees out and just 50% humidity. So why does it “feel like” 100 degrees? Why, when one steps outside, does the sweet release of death suddenly seem so appealing, whereas 93 degrees in my arid hometown would be patio weather for the semi-hardy? 50% humidity doesn’t sound terrible.

And how can Olympia, Washington be one of the most humid cities in the country, and no one there sweats profusely while cursing existence?

As this article from the Washington Post’s Capital Weather Gang explains, humidity percentage is an expression of relative humidity—how much water vapor is in the air relative to the maximum that air at its current temperature could hold. You could have 90% humidity at 32 degrees, or 90% humidity at 90 degrees, and there is a very different amount of water vapor in those two air situations. Hot air can hold far more water vapor than cold air, so the same relative humidity in warmer air means far more actual water vapor, which is bad news for people who don’t like their air drinkable.

The relative humidity percentage, then, doesn’t tell you what you want to know, which is how miserable you’ll feel if you dare to venture out. Weather nerds prefer the dew point: the temperature to which the air would have to cool for its water vapor to condense into dew. Think of a cold drink brought outside on a hot day: how long will it take for water droplets to form on the outside of the bottle? A higher dew point means the droplets form quickly, because the air touching the bottle doesn’t have to cool very far to reach saturation. It also means there is more water vapor in the air generally, since the air is that much closer to saturated. And more water in the air means more discomfort.
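If you want to play with the relationship yourself, here is a minimal sketch in Python using the Magnus approximation, a standard textbook formula for converting between temperature, dew point, and relative humidity. (The constants and the little script are my own illustration; none of this is from the Post article.)

```python
import math

# Magnus approximation constants (standard meteorology values; this whole
# sketch is my own illustration, not from the Capital Weather Gang article)
MAGNUS_B = 17.625
MAGNUS_C = 243.04  # degrees Celsius


def f_to_c(temp_f):
    """Convert Fahrenheit to Celsius."""
    return (temp_f - 32) * 5 / 9


def relative_humidity(temp_f, dew_point_f):
    """Approximate relative humidity (%) from air temperature and dew point."""
    t = f_to_c(temp_f)
    td = f_to_c(dew_point_f)
    gamma_t = MAGNUS_B * t / (MAGNUS_C + t)
    gamma_td = MAGNUS_B * td / (MAGNUS_C + td)
    return 100 * math.exp(gamma_td - gamma_t)


# A dew point of 74 at 81 degrees: roughly 79% relative humidity
print(round(relative_humidity(81, 74)))

# The same dew point at 93 degrees: the same water vapor in the air,
# but the relative humidity reading drops to roughly 54%
print(round(relative_humidity(93, 74)))
```

Note what the second call shows: the amount of water in the air is identical, but because the air is hotter, the relative humidity number drops. Same water, same misery, smaller percentage; that is exactly how the percentage misleads.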

For those new to dew points, the number will mean almost nothing at first. But if you memorize a few reference points about the dew point (perhaps from this handy chart) you can quickly get a sense of the system. For example: during my walk earlier today, the dew point was 74, which is abominable. At the time, the temperature was 81 or so, which gives us a relative humidity of about 80%. I was…not thriving. And I knew it, because the dew point was 74, and I looked like this:

Now, the dew point is supposed to tell us how our delicate human bodies will feel when we are exposed to a certain amount of water vapor. The Capital Weather Gang helped us out with this handy graphic:

But, it seems, even though no one ever says they get used to this humidity, we are somewhat accustomed to it. I mean, look at how much more delicate people are in Idaho:

“West is best,” I scream into the swamp as I melt into a puddle of sweat and tears and the whisper of a dream of autumn.

Indecision fatigue.

It’s a time of plenty, friends, by just about any measure (which isn’t to say that this plenty is properly distributed, or that there is no room for improvement, but that’s another matter). And yet I notice quite often that my mind has a habit of seeing scarcity everywhere. See, for example, my almost perilous aversion to wasting food, culminating in the narrowly averted tragedy of the glass-shard soup incident of 2015. My grandparents lived through the Depression, yes, but why is its hold so strong on me still, when I grew up wanting for nothing?

Scarcity looms everywhere, though, probably as a form of the common human negativity bias. It tries to keep me alive by telling me to make hay while the sun shines (seriously, get out there, it’s sunny, what are you doing, the sun will go away and you won’t have enough hay, dummy!). It keeps me alert to potential danger by reminding me that a bird in the hand is worth two in the bush (and, hey, you’ve only got one bird in that hand; what are you going to do when you’re next in need of a bird? You think birds grow on trees?).

Pretty obnoxious, really. And one place it comes up in this kind of meta way is when I am working on an idea: some little voice in the back of my head pipes up: “Hey, what about the next idea? Is this the only one you’ve got? You might never have another. Just thought you should know!”

It is, to say the least, hardly helpful. There’s nothing less creativity-inspiring than worrying about the potential future scarcity of creativity, just as there’s nothing less satisfying to the stomach than fretting about where one’s next meal will come from. Yet even as I’m working on picking what I like and seeing how it grows, the little scarcity gremlin looks over my shoulder and warns me that I’m probably going to run out of ideas soon, and probably half of the ones I’ve got are going to fizzle out before I can finish…just trying to be helpful!

So there I am, clinging white-knuckled to the little list of ideas I’ve got, squeezing the life out of the poor things. But in Big Magic, which I read recently (and you should too, if you want to live a more creative life and could use a little boost), Elizabeth Gilbert has an interesting idea that turns my scarcity gremlin on his head: she pictures inspiration as a sentient force, a personified entity, that goes where it likes, and stays if you entertain it generously. With a straight face, she tells the story of the time that the idea for a novel visited her for a while, then once she’d demonstrated that she didn’t have the space in her life to tend to it fully, entered the body of Ann Patchett the day that they met. Ann Patchett ended up writing this book that so badly wanted to be written. Which is not to say “cling harder and panic when you have an idea, lest it enter the body of a more talented nearby artist.” (That would be our neighborhood scarcity gremlin talking again). I suppose it’s just to say: be grateful when it’s there. Welcome it. Don’t squeeze the life out of it.

Now I’m doing better, taking some deep cleansing breaths and welcoming the friendly inspiration spirit in. I’m living in abundance mode, not scarcity mode. I’m countering that negativity bias. Noticing the plenty, not the lack.

But then–oh no–such plenty. Ideas coming so hot and heavy that I have several different digital to-do lists, like so many crumpled purse-bottom post-it notes, cluttering up the joint. Ideas coming so fast and incomplete that they talk over each other and my mind starts to feel all jerky and fuzzy like it does when I’ve been visiting the portal too much. Each one is a welcomed guest but, “well,” I begin to say to the throng of them in my living room, “I do visit the portal a fair amount, and I have my day job to consider, and my social life, and I’ve got to stay showered and need to shop for groceries now and then, and as it is I’m already halfway through two of you, so I might not get to you” (pointing to one of them in the back, a new arrival staring at me slackjawed) “until halfway through next year, at the earliest, I’m sorry to say.”

How do I pick what I like and see how it grows when I struggle to follow a thread long enough to plant it, and when I like quite a lot of things?

Don’t be fooled. This is scarcity talking. Scarcity of time. Like any other scarcity fear, it holds a grain of truth, but it’s hardly helpful to let it run the show.

So you have to conquer the first scarcity fear, the fear that ideas won’t come at all; perhaps you do this by trusting that the idea has its own will, and will find you if and when it pleases, no matter how tightly you squeeze or how much you suffer in the meantime, so you may as well relax. Then, when you’ve caught one, you have to conquer the second scarcity fear: the fear of scarcity of ability, the fear that you won’t do the idea justice and shouldn’t even try. You do this by learning to entertain the idea gently and follow it where it leads.

But how do you conquer the third scarcity, of time to follow the many possible leads you’ve got? You’re indecisive. You don’t know which one is top priority. (After all, if everything is top priority, nothing is, and there we are in the bowels of the couch playing Flash games until long after our bedtime). Okay, so just do something. Do literally anything. Print out a list and throw a dart at it, see which one it lands nearest. Use a random number generator. Just start. Just for ten minutes. Just mess around with it on your phone while you wait in line at the pharmacy.

But the fear is still there. The gremlin hasn’t left. It says: “okay, you’ve got a lot for now, that’s great. But what will you do in five years? You haven’t really got five years’ worth on that list, do you? Aren’t you worried that you’re going to flame out in a while, even if you do find the time to do any of them?”

That’s when I look that gremlin in the eye and go, “listen, buddy, I know this game.” I know all about trying to pin my future down and buy it an insurance policy. It’s a fool’s errand. I look back to what I was dead certain I wanted my long-term future to look like when I was 15, 18, 20, 22, 25, 28…every single time, if I could have locked it in, of course I would have. But every single time, I either didn’t get it, or I got some very different version of it, and thank God for that, because I can’t imagine much worse than being trapped in a life younger-me chose. She doesn’t know anything about me.

So ideas will come. And I’ll entertain them as best I’m able. And I’ll get new ones, or I won’t, because maybe me in several years won’t even want them, and that’s totally fine. And I’ll find the time, here and there, or maybe future me will. She can figure it out.

Reith Lectures. A history of histories.

We live in a time of many, many podcasts. But while we’re here, one benefit is the ready availability of recordings that weren’t even made to be podcasts. Recently, I’ve been meandering through the archives of the Reith Lectures, an annual series of five lectures by a single speaker, exploring some topic of interest. The BBC has been putting on the series since 1948 before a small, invited live audience, but the whole catalog is available online for free, for posterity.

Some of my favorites have been Bertrand Russell speaking on “Authority and the Individual” and the need for a world order to prevent the recurrence of world war (1948), Dr. Jonathan Sacks on “The Persistence of Faith” in a secular world (1990), John Keegan on “War in Our World” (1998), Niall Ferguson on the enemies of the rule of law (2012), Kwame Anthony Appiah on culture (2016), and Hilary Mantel on historical fiction (2017). Other lecturers include Stephen Hawking and Aung San Suu Kyi.

The archive of these lectures is how the internet shows the best version of itself: it acts as a leveler to share humanity’s knowledge somewhat evenly. Without mass communication, these lectures would have been heard only by those elite few who were invited to the talk. The BBC was able to tear down that barrier by broadcasting the talks on the radio. But the internet sends the talks even further, into the future and into great distances of space, lasting however long the digital archives of the BBC do.

Beyond these recommendations, I have some observations.

First: are we now living through a new age of audience participation (and can we make it stop)? In the Reith Lectures, at some point around the late 20th century, audience questions and answers show up. I assume the audience was allowed to ask questions before then (although it would be fascinating to find out that they weren’t!), but it was only relatively recently that the audience participation was included as part of the lecture recording. As someone who has sat through several too many “more a comment than a question” sessions, I’m never stoked about the opportunity to be held hostage to the often egomaniacal whims of whoever runs fastest to the public mic. But sometimes, miraculously, the commenter does have something of interest to say, despite my impatience. Anyway, I wonder if the inclusion of these questions in the recording signals a recent phenomenon–a symptom of some deep societal change opening us up to the thoughts of the rando, or the decreasing cost of data storage, or who knows what. In any event, this has made the actual substance of the lectures shorter, while the recordings stay the same length to make time for the almighty Q&A, and I suppose I have to live with it.

Second: in the last 60 years of lectures, the history of the period echoes a bit in the choices of lecture topics. Some topics are of continual interest (scientific advancement is a perennial favorite), whereas others dwindle: there have been very few lectures on religion in the last 30 years, for example. This reminds me of a story about the making of Downton Abbey: allegedly, the audience never sees the family begin eating a meal, because historical accuracy would also require showing them praying, and the modern audience would find that too spooky indeed. So too, perhaps, with the lecturers: we have lately been in an age in which religion is not a topic for lectures of general interest.

Another fun aspect of a dive into the archive is that you get to hear the accents mutate and diversify, from the absolute lunacy of the mid-20th-century pinched upper-class aristocrat voice to a spectrum that includes all manner of Brits, international voices, and even the occasional Canadian.

A final observation: there is a strange middle ground here between enjoying something old as a curiosity, and enjoying something recent by learning from it at face value. A work of scholarship that is very old is a historical document. We’d study it to find out more about the period in which it was written, but we are not as likely to use it for information about what it actually says. In other words, it ages until it becomes a primary source document–but only a primary source document. On the other hand, very recent scholarship is just scholarship. But where is the line? How old does something have to be before it becomes a curiosity, a primary source for a historian studying the time in which it was written?

The first lecture, from 1948, is from Bertrand Russell, talking about his views on the need for a world authority to check the proliferation of nuclear weapons and prevent another world war. His views sound essentially quaint now. They would be quite useful in assembling a history of the early anti-nuclear movement, but no one would cite them today in a paper about nuclear policy or international relations.

More recent lectures felt, to me, to be in a funny kind of limbo in that regard. Lectures from the ’80s and ’90s are not yet far enough away in time to be quaint or revelatory (“my goodness, I didn’t realize people thought like that way back then!”). Instead, they’re just outdated. I guess they’re waiting in the vault, aging like a fine wine, but somewhat awkwardly. (Perhaps my bias is showing: I’ve never been very interested in history of the recent past. My interest in history has always been next door to my interest in fiction, trying to find other worlds to visit.)

As an example of this phenomenon, let’s briefly compare Bertrand Russell in 1948 to military historian John Keegan fifty years later in 1998. Both speakers saw a strong UN or other world body as the key to eliminating war. Russell thought it should do so through disarmament, eliminating nuclear weapons. In his day, scarcely out of World War II, it must have seemed quite possible that nuclear war would soon be frequent. But Keegan, speaking after the end of the Cold War, is adamantly against disarmament: he argues that the dangerous knowledge of how to make nuclear weapons can never leave humanity; therefore, responsible actors (the United States!) must retain them. For Keegan, nuclear weapons still looked like the future of war.

Twenty years on, as a lay observer of war, I’m not so sure. Yes, we have scares about nuclear Irans and North Koreas. But we also see something akin to “war” happening purely online through hacking and disinformation campaigns, and through the quiet misappropriation of information. These are “war” in the sense that the security state turns its full attention to them. But these appearances of “war,” in which nuclear weapons sit basically forgotten (we pray), where actual combat occurs only with unsophisticated non-state actors and technology otherwise is the most feared threat, would be unrecognizable to an ancient person. Or even to Russell.

Book Two.

I recently finished a draft of a novel. It was a meandering process, guided by occasional spurts of planning, and then re-planning around the total chaos that ensued when I actually sat down to write.

What’s more, it was based (semi-closely, in places) on true events, so I didn’t have to make everything up on my own. I also couldn’t make everything up on my own, to the extent that I felt bound to be loyal to the true events. Sometimes this took away from my ability to follow the ideals of story crafting. (Or, isn’t it pretty to think so, a nice excuse.)

Now that I’m done with that draft, I’m going to try to do another one. Different genre, not based on true events this time, no longer historical fiction. In other words:

The grand plan is to spend the next six months or so planning Book Two, and then hammering out my SFD (“shitty first draft,” for anyone who is rusty on their Anne Lamott.)

I come prepared so far with a very rough outline: as I see it now, it’s three mini-novels, unified by a common story that underlies each part. I have a loose sense of where I want each part to go, plot-wise, and the feel I want each part to have. I know, for now, where each part is set and a lot of what might happen. But what I lack right now is something tremendously fundamental to a readable novel:

Characters.

Sure, I have the vaguest sense of a few of them. One of them looks like Bebe Neuwirth in her Frasier years. I know their Enneagram types, because naturally I would. And that actually is something: it gives a framework to start building on, both with what the behaviors might be and with the directions these people might go when distressed, or when secure (about which more later.) But I don’t know much else about them, and that won’t do.

A tool that often helps in building characters is an exercise that I have been internally calling, grotesquely, a “character dump,” but Google is giving me no reassurance at all that this is a phrase anyone else uses. Could it be that I, on my own motion, started calling it that? Horrors. In any event, the exercise is one in which you sit down and start writing down a bunch of details about a character. When were they born? What do they like to eat? What do they wear? What was the saddest thing that happened in their childhood? What makes them laugh? Who is their best friend? What dreams do they have? Do they think a hot dog is a sandwich? And on and on.

Now, you may say, “you just said a few paragraphs up that you don’t know much about your new characters yet. How could you possibly know all these details?” Gentle reader, I don’t, and I probably won’t even after I finish this exercise. I’ll make some stuff up, like a kid half-assing an English Lit final, and maybe some of it will resonate, but most of it won’t be helpful. That’s okay. This process isn’t what you might call efficient.

Then, later, I’ll do other exercises relating to plot. They might be similarly inefficient, but then I’ll have more words down, about both these half-formed people and the half-seen things that will start happening to them. Then I’ll sit down and start writing the thing. Every 200 words or so, I’ll realize I’ve got a plot hole, or my character isn’t wanting to behave the way I expected. Fine. I’ll make new stuff up. I’ll do some tailoring at the edges of the plan. I’ll start a new plan. I’ll write more. I’ll find some massive hole. I’ll give up for a day or two, binge some romantic comedies on STARZ. I’ll come back to it and realize I have a way out that is just brilliant. I’ll write in a delighted frenzy for a week. Then I’ll realize I created an even bigger hole than before, and I’ll fix it. Someday I’ll have a draft, and I’ll start over and write draft 2.

It’ll be fun.

Brain molds.

When I was young, my parents occasionally threw what they called an “Ugly Food Party.” The concept (patently my father’s brainchild) was a potluck, to which each guest should bring a dish that was both delicious and ugly. The more delicious, and the uglier, the better. At the end of the evening, a first, second, and third-place winner would be announced. Memorable dishes ranged from the cartoonish (bean dip served in a diaper) to the merely unusual-in-my-neighborhood (a whole sheep’s head). One was an opaque greyish jello served in a brain mold.

It was the jello brain mold that got me here, because this is how my actual brain works: I came here to write about neuroplasticity, which is to say, brain molding, hence brain mold, hence ugly food party. My brain did this little hopscotch in about four seconds. This is, perhaps, why I am almost constantly distracted. 


Speaking of habits of the mind, neuroplasticity has been on my mind lately because I just finished reading The Brain That Changes Itself, which I would highly recommend. Dr. Norman Doidge writes accessibly about how the brain works, and about the history of our growing understanding that the brain is constantly changing and adapting. Our thoughts and behaviors quite literally, physically change the brain, because the brain adapts to the tasks we put it to frequently. Thoughts we often think together become physically linked, until we make a conscious effort to disentangle them.

The book is full of fascinating examples of people who have been through tremendous physical or emotional trauma, and whose brains have adapted incredibly to overcome it, often with the help of therapy. The final chapter is about a woman who was born with literally only half of her brain (the other half never developed, after some unknown trauma in utero), and yet she was able to read, speak, and socialize, and was excellent at some kinds of math. Her half-brain had adapted to work as a whole one, finding space for all of her needs. 

To summarize the conclusion of the book: you get better at what you practice. This is not what you might call groundbreaking. But to understand that it is true on a physical, neurological level is pretty amazing. Like a muscle, the brain becomes competent at, and then efficient at, doing what is repeated–whether these habits are harmful or helpful.

The book came out over a decade ago, and it straddles the market between popular science and self-help. It offers very little in the latter category, but it is popular in those circles no doubt because it provides such a clear illustration of what needs to be done to improve oneself. Self-improvement on the order of increasing positivity, decreasing anxiety, or improving at an artistic task begins to look like a physical reality, like straightening one’s posture or practicing piano scales. The prescription may remain the same as before (do more of the thing you want to do; do less of its opposite), but the explanation for why this helps makes it feel a little more tangible.

Even as I’m writing this now, I’m imagining my brain getting just that much better at associating the motor and verbal skills needed to type while I think. And on a larger scale, I think back to a few weeks ago when I first decided to start this blog. At first, I had one or two ideas. Then, as soon as I articulated the thought to myself, “I will have a blog,” before I knew it I had quite literally dozens of ideas listed in a document. The neuroplastic explanation for that might be that even in imagining writing in this format, my brain strengthened the pathways between thoughts. I’m drawing connections between things, and between those things and the ideas and actions of writing them down, more quickly and efficiently. It’s just practice. It’s picking what you like, and seeing how it grows, on a neurological level. It’s pretty neat.

Yesterday, I listened to a recent episode of the “Typology” podcast, which is about the Enneagram. The guest was Dr. Richard Lindroth, a 5 who was willing to come on the show. (Type 5 is the “observer” or the “investigator.” They are typically introverted and analytical, both of which traits might explain why it would be the rare 5 who would want to come talk about him- or herself on a podcast about something rather woo-woo like the Enneagram.) Dr. Lindroth quoted the famous statistician George Box: “All models are wrong, but some are useful,” and that is precisely how I think about the Enneagram. I decided I quite liked him.

He also struck me by mentioning neuroplasticity in connection with the Enneagram (about ten minutes into the interview). As a 5, one of his struggles is in allowing himself to feel his emotions. His inclination is to tamp them down and think them, rather than to feel them. Since learning about the Enneagram, he says:

“One of my practices for the last couple of years has been to reduce that distance [between myself and my emotions]…When I am experiencing an emotion [I] take the time and really experience it in the moment, hold it in my head and really experience it for 20 seconds. That’s my neuroplasticity rule…If I can engage with this thought or engage with this emotion for 20 seconds, it’s going to help rewire my brain to accommodate emotions probably in a more healthy way.”

The imperfect model of the Enneagram can hold up a mirror, showing us aspects of ourselves that we had no idea were unusual. Learning that we do these things (and that not everyone does, and we needn’t keep doing them) can point out some areas where we can constructively change and grow. Thinking about this in the context of neuroplasticity, one might say that the brain of someone deep in their Enneagram type has grown and adapted and developed efficient patterns of that type: Dr. Lindroth has learned to distance himself from his emotions, and I have learned to associate conflict with danger, and to shut down at the first sign of it.

But by noticing that our brains have that habit, and by imagining doing things differently, and by engaging for 20 seconds with an emotion or by staying mentally present when interpersonal things get rough, we can start to weaken the associations our brain has made, and to strengthen new ones. 

I wrote a novel.

(Full disclosure: I moved this text over from a Google Doc I began when I first thought of starting a blog. Writing as though on a blog, but in a Google Doc, felt very Creed Thoughts, so it’s quite a relief to know this is going out on the actual internet.)

A few weeks ago I finished the second draft of a novel I wrote. Right now it’s out with a handful of generous people who were willing to read it. In the long run, I’d love to publish it–but for now I’m here, and here is a welcome pause in the writing process while I wait for feedback.


It was a long process to write the thing, taking anywhere from six months to three years, depending on how you count. Mostly it happened in little half-hour chunks before or after work, or on weekends. There were times I thought I’d never finish. But somehow I wrote a whole book in 2017, about 115,000 words, then set it aside, and wrote it all again almost from scratch, about 130,000 words, in the first half of 2019.

In the last few months while I worked toward the finish line, I fantasized about what it would be like to be Future-Kate, who was done with the draft. She was going to read so many books, make the most of living through the Golden Age of Television (literally, what a time to be alive!), go out so much more, be much better at working out and socializing. She was going to sleep more and cook like a champ and deep-clean everything. But writing seems to breed writing. I’m Future-Kate, and I started this project. Go figure.

I plan to write more later on the ideation, writing, and editing processes. I also hope Further-Future-Kate will know more than I do: will the draft go into the ether like Creed’s thoughts, or become a “real” book? But after spending that many hours on the drafts, there’s one thing I’ve learned for sure: either option has to be fine, or else it isn’t worth doing. Writing has to be its own reward, because there are much better ways of spending time if it isn’t.

The upside of conflict

I hate conflict.

This is one of those things that comes with being a 9, and in my case it goes so deep that I wasn’t even conscious of it until shockingly recently. All I knew was that I basically wanted to wither and vanish when anyone was angry or oppositional. I couldn’t imagine why anyone else felt differently. Facing someone down, saying or even declaring what you felt, was utterly incomprehensible to me, just as it would be incomprehensible to see someone burn all the cash in their wallet or something. Why on earth would you do such a thing? Why would you want to ruin everyone’s day like that?

(You see, if I were to do the same, it would ruin everyone’s day. No question.)

But part of learning about myself has been learning that not everyone is wired the same way. Most people are far more comfortable with conflict than I am. What’s more, there is so much to be gained from conflict deployed well. You can ask for what you want. You can confront an important issue. You can highlight what’s not working, and have a chance to change it. Amazing. (Am I there yet? No. But closer than before.) And best of all, it’s generally not a day-ruining kind of situation.


Thinking a lot about conflict in my own life has gotten me to think more about conflict in the political news. Specifically, how it’s basically a win-win for those who engage in it. Realizing this has been blowing my mind, after I’d thought for so long that conflict was basically a glitch in the Matrix.

Let’s first consider the case of Nancy and the Squad. It’s from a few weeks ago, which is basically decades in 2019 time, so I’ll recap: Nancy Pelosi, often a subject of critique from the left wing of her party, made some dismissive comments about four progressive Congresswomen, suggesting that they should be more pragmatic. Those Congresswomen’s supporters got fired up against the moderate part of the party, represented by Pelosi. (I assume the moderates also got fired up at some point, but that wasn’t evident from my Twitter feed at the time. [I disclaim, now and forever, any claims of responsible research on this site.])

Then, Trump started tweeting racistly about the Congresswomen, saying they should go back to their countries (even though they’re all American citizens, and three of the four were born here). Pelosi and all the Democrats came to their defense. Pelosi and Rep. Omar are good now.

On a large scale, what happened? Everyone benefited. “The Squad’s” stars rose. Pelosi maintained her credibility with the older and more moderate Democrats. Trump made racists happy, which is his thing. Pelosi came in to defend the congresswomen against the racist attacks. If you were conspiracy-minded, you might even say that that was her plan all along.

But the point here is: through this conflict, everyone made happy those they sought to make happy, and isn’t that sort of what politics is, as distinguished from actual policy- and law-making? You’d have to call it a success, start to finish.

This is what blows my mind. But I’m seeing it all over the place now. Adam Scott vs. Mitch-McConnell-being-Darth-Vader (which, I surmise, is what Republicans like about him: at least he’s their Darth Vader?). Democratic candidates for President strategically seeking conflict to stay in the race longer. Trump “feuding” with Rep. Elijah Cummings (again: making racists happy is his jam. They find it hilarious when he’s monstrous. Meanwhile, everyone else can rally around an embattled Democrat.) These are win-win situations for the parties. There’s no evident downside to participating, even if you didn’t “start it” in Kindergarten terms.

This may all seem obvious to people with less broken relationships to conflict in their own lives, but it’s all new to me.

And I’m left wondering if this conflict is a win-win all around, including for those of us on the outside. It does help clarify and crystallize our positions. It can show people’s true colors, even if we’ve already seen them many times too many. It can bring issues to the fore that would otherwise fester unspoken.

But in the long run, I’m worried that we’re stuck. The incentives all point toward increasing conflict in an increasingly politically polarized (or ideological) environment. The more polarized the public is, the more it benefits the parties to a conflict to engage in it, and to even prolong or heighten it. The further apart one is from one’s enemy, the more upside and less downside there is to opposing the enemy publicly. After all, everyone loves seeing the other guy get owned, even if no one is ever quite as owned as the owner expects. So there is no end in sight, and I’m not sure that the outcomes of clarified positions and increased transparency are worth it.

Basically, the incentives for public figures are to engage in conflict to satisfy the people who agree with them already. Everyone enjoys the spectacle. No one’s mind is changed. Don’t get me wrong: this is not a “let’s understand Rust Belt America” essay. This is not centrist sighing. I’m not even much of a centrist. I’m trying to get more comfortable with healthy conflict, but all of this just makes me sleepy.