Liber Augmen

(The Book of Growth)

John David Pressman

Updated 2021-09-02 23:03


You are about to read a very strange book.

The seeds of what would become Liber Augmen were planted in early 2018 when I took a college class on comparative religion. As a staunch atheist I'd expected to go in having to nod along with people's accounts of ghosts, demons, divine revelations, and other basic failures of the map-territory distinction.

Instead I found myself listening to other people describe their feelings about the sacred. As I listened I was shocked to discover I'd had these feelings too. But they weren't associated with god or meditation, rather they were attached to the cosmic scheme I'd learned from thinkers like Hawking and Sagan, and the transhumanist ideas of authors such as Eliezer Yudkowsky and Scott Alexander. It seemed everyone else was able to articulate the relationship between their feelings and the ideas in their cosmology, but I just had feelings and no language to discuss them with. This stirred up a subtle malaise that crystallized when the instructor asked me in an assignment to discuss 'my faith tradition' and I realized I had no idea what to say to her. I stammered my way through that section, writing that 'it' was 'recent' and had started roughly in the 80's with Max More's Extropians. For some reason I also included a brief description of AI Risk which noted that heaven and hell are collective outcomes, and the soul is considered a kind of information.

Afterwards she passed back my writing with the comment that she was a fan of "making your own god" or something similar. I hadn't known what to say and what I did say wasn't understood: it was humiliating, I felt like an intellectual pauper. Here I was being asked about the most important things there were to know and the best I could manage was a tongue-tied stammer. As Eliezer Yudkowsky might say: Oops and Duh. It was also around this time that I began to be seriously concerned that the people calling themselves 'postrationalists' knew something I didn't, so I found some books on the occult and read them. Fortunately I chose Hall's Secret Teachings Of All Ages and Principe's Secrets Of Alchemy, which are both written about real western occultism for a mind that is looking to understand. I'd expected to find old magick rituals and obscurantism, instead I was staring in the face of my lost philosophical ancestors. To my astonishment I could trace contemporary transhumanist and extropian ideas back to alchemy, and through alchemy back to antiquity.

I had also reached a place where existential risk weighed heavier and heavier on my mind. Climate change and environmental headlines took on an apocalyptic tone, and it was becoming increasingly clear that nobody intended to do half of what was necessary to avert horrific outcomes. As I sat and considered what it would have to look like for us to grapple with greenhouse gases and nuclear bombs and AI risk, I realized the basic problem was that humanity had never dealt with problems like these before. If we tried to solve them the way we'd solved every other problem in history, by seeing and then acting, we would die for certain. The entire reason why these problems disable our ability to act is that they have to be dealt with long before they reach their crisis point. You have to act on a time scale of decades or centuries to fight an invisible opponent long before it arrives, the whole time using vast resources that could be spent on tangible problems here and now. The only comparable enterprises in human history are cathedrals, monuments, and various acts of sacrifice to invisible daemons. That is to say the only machinery in the human animal that can act on these problems is religious, period. The outside view told me that absent a genuinely religious sentiment towards solving our looming crises, there was no reason to expect us to survive.

There are only a handful of thinkers who I feel have really 'gotten' existential risk and done anything substantial about the overall category. Perhaps most astonishing is a semi-obscure Polish nobleman named Alfred Korzybski, who outlined the basic idea of X-Risk before there were even any nukes to speak of. He anticipates the concept in his 1921 Manhood of Humanity, a book whose essential thesis is that man is a time binder, differentiated from the rest of nature by the ability to retain experiences and transmit them across generations. In Korzybski's view, technological and social progress is an exponential function dependent on already accumulated knowledge. To him WWI was prima facie evidence that the growth rate of technological capabilities had surpassed that of socializing abilities. This would inevitably lead to an increasingly powerful humanity less and less restrained by the social sciences. Eventually our powers would grow to world threatening heights, with an infantile understanding of the best way to use them. Given his thesis, it seemed obvious that the only hope of saving the world would be to find out what 'time binding' is made of, and then use that understanding to improve our ability to bind time in the social sciences.

Korzybski viewed the problem as an inability to learn from history. In marked contrast, his recent spiritual successor Eliezer Yudkowsky sees it as an inability to look into the future. He writes about 'future shock levels' and the importance of orienting yourself to the full possibilities implied by physics. Physics implies that a radically different, much more enjoyable future is possible for earthly life. To Yudkowsky, if your unbiased consideration of human potential would not suggest high future shock, this is a sign that your natural philosophy is too weak. He wrote a very long book explaining how to think like he does. Readers of his book organized under the banner of 'rationality', and proceeded to be eaten by centralizing in the Bay Area. Yudkowsky had hoped that he'd be able to find someone that could play his role better than he could. He described his vision in an optimistic April 2011 essay as:

“Stay on track toward what?” you ask, and my best shot at describing the vision is as follows:

“Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us.”

This did not happen, and it was also in 2018 that I fully internalized this abject failure. It seemed clear to me that it wasn't possible to fix the 'rationalist community', having selected its membership on an 'elite reject' model that attracted very intelligent screwups like a MENSA chapter. Their mutual brokenness had reached fixation, and it was being a high functioning person that was ultimately stigmatized and excluded. If I couldn't fix it, then the only option was to pack up what was 'special' about Yudkowsky's rationalists and take it somewhere where people weren't so dysfunctional. That seemed like the highest priority, so I began researching 'rationality' books to try and get some idea of what the essence of the thing was.

All these threads of inquiry ended up merging into one research project. On the religion front I began asking what it would look like to have a religion which only included literally true things in it. I asked what the difference was between a 'mundane' truth like the earth being round and a 'radical' truth like the possibility of Friendly Artificial Intelligence or environmental disaster. What I eventually narrowed it down to was priorities, writing:

If religion is to be based on truth, it must be radical truth. Our notion of the transcendent will not settle for that which is merely common sense. We find no beyond in passive facts like the earth’s spherical nature. Radical truth is a revelation, it’s an aggressive force in the world that implies a total restructuring of priorities. Our onrushing ecological apocalypse is radical truth, the infinite possibility of the cosmos and potential to extract resources from the stars is radical truth, the symbiosis of machines that speak like men and men that think like machines is radical truth. It’s the visions of mystics and prophets and wizards gone mad by their own revelations which can touch that outer rim of our vestigial connection to the dreamtime. Perhaps to truly understand we must go mad with them. How does one explain the evolution of ants, and men that spring from monkeys? These incredible facts go unnoticed in part because they have not been presented with the mania necessary to justify them.

I'd already admitted to myself that I was more or less experiencing what the people in that comparative religion class were experiencing. Further, I was trying to take meaningful action for things decades into the future; having this thought at all meant I could analyze myself for clues as to whether I was 'religious' or not. In the end I concluded that 'religious mission' did in fact more or less characterize the difference between me and your average reader of Yudkowsky. Between these two things I had an existence proof: Somewhere in my head, I'd squared the circle and discovered a religion which can take the world as it is. Attaining this state seemed to be a basic prerequisite to doing anything meaningful about existential risk, and likely the most productive way to frame Eliezer Yudkowsky's philosophy.

These are the basic premises which I spent the next two and a half years from January 2018 researching. This was done in the limited free time I had during college and later programming work. Things came to a head during a trip I took to Paris in 2019, which involved a near death experience. I caught the flu in an airport and experienced wicked fever dreams about being tortured by demons. Did I actually almost die? It doesn't matter, because it sure felt like I might. Standing dehydrated outside a Franprix grocery store with no idea where I was, I had a realization: If I died right now, nobody would know any of the stuff I'd discovered during my research. My biggest regret would be not writing more publicly, and not telling more people about my ideas.

This book is my attempt to fix that. I'd originally tried writing a series of essays, but found the requirement that I weave so many different ideas together into one narrative was too much. I had also experimented with microblogging, on the theory that if nothing else the shorthand version of my thoughts would be better than nothing if I kicked the bucket. Eventually I abandoned the essay format because it wasn't suitable for the kind of writing I wanted to do. An essay is good if you have 1-3 ideas you're trying to get across in detail. But this was more like trying to transmit a gestalt of 100-150 mental models, which needed to all be understood to get a full picture of what I am trying to explain. As a result Liber Augmen is presented as a series of 'minimodels', or microblog posts in a particular format. Each post is meant to be a named model of some phenomenon, described in 1024 to 2048 characters, along with appropriate citations and references for where the reader can go to get more information.

In short, Liber Augmen is a description of a religion which I term "Eliezer's Extropy", along with a series of mental models and tools to think (epistemology) about 'belief', 'religion' and 'agency', etc. The intent is that after reading it you will be in a better position to understand the strategy employed by a figure like Elon Musk or Dominic Cummings. The best possible outcome would be that it sets the stage for an outreach strategy to be developed that summons 1-10,000x the current number of "Eliezer Yudkowsky style" agents in the world. As it stands there are too few for me to imagine humanity veering away from its collision course with certain death.

I wish you the best of luck in your studies.

Cause Area

Effective Altruism breaks down into various cause areas. These are named opportunities to do good in the world. e.g. Peter Singer advocates the use of 1st world wealth to save 3rd world lives using bed nets. ‘Bed Nets’ then is a shorthand phrase for the strategy of doing good by preventing malaria with insecticide-treated bed nets. Because good is subjective, it’s not really possible to separate the value of a cause area from the moral framework used to evaluate it. For example, promoting contraception in the 3rd world is probably a decent cause area if you’re a secular humanist but extremely net negative to a Catholic.

The choice of cause area determines how much good can come from your efforts. 100% dedicating yourself to picking up trash on the beach probably doesn’t do as much good as donating 10% of your income to research that will help eliminate plastic altogether. It’s worth taking the time to figure out where you can put your effort to get the most leverage against a problem.

Impact, Neglectedness, Tractability

Typically when evaluating cause areas a simple framework of impact/neglectedness/tractability is used.

Impact: How much do we benefit from interventions in this area?

Neglectedness: Are other people already handling it? What is the opportunity cost of an intervention in this area compared to the alternatives?

Tractability: Can we do anything about it?

For example, many people think it would be good if they got a medical degree and studied to cure cancer. But a lot of other people are doing that (not neglected), and cancer seems to be pretty hard (intractable?), even though the benefits would be great (high impact). By contrast, wild animal suffering is something almost nobody thinks about (neglected), there are between 100 billion and 100 trillion wild land vertebrates alone (high impact), and since nobody has an immediate incentive to think about or work on this even a relatively mediocre person might have a huge impact just by trying.
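As a toy illustration only (the cause areas, 1-10 scores, and the multiplicative scoring rule below are my assumptions, not an official EA method), the framework can be sketched in a few lines of Python:

```python
# Toy sketch of the impact/neglectedness/tractability framework.
# The cause areas, 1-10 scores, and multiplicative combination rule
# are all illustrative assumptions, not EA doctrine.

def itn_score(impact, neglectedness, tractability):
    """Multiplying the factors means a near-zero in any one of them
    sinks the whole cause area."""
    return impact * neglectedness * tractability

causes = {
    "cure cancer":           itn_score(impact=9, neglectedness=2, tractability=3),
    "wild animal suffering": itn_score(impact=8, neglectedness=9, tractability=4),
    "beach cleanup":         itn_score(impact=2, neglectedness=3, tractability=9),
}

# Rank cause areas by combined score, best first.
for name, score in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

With these made-up numbers, a high-impact but crowded and difficult area like curing cancer scores lower than a neglected one, which is the mental motion the framework is trying to capture.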

Upstream & Downstream

One consistent strategy for getting leverage is to solve intractable cause areas by finding more tractable problems that solve the 'intractable' problems as a side effect. e.g. Something like Neuralink could solve war, usually considered deeply intractable, by allowing humanity to merge into a composite agent. We can get at this precise mental motion by saying a cause area is upstream of another if it determines how the other one plays out. The basic idea behind a lot of cypherpunk and 'distributed defense' 3D-printed gun style activism is that physics is a higher court of law than those of states.

Upstream causes are typically less neglected than downstream causes, but not always. Until there is an adequate market in making the world better, going upstream when it makes sense is key to maximizing impact. Much of the point of high future shock is to get above the petty infighting of our society and solve the root issues it's all downstream of.

Talebian Ruin

People don't act "perfectly rational" because there is a zero state they can't continue from (i.e. death). Therefore to most people minimizing the risk of nonsurvival is more important than maximizing expected utility. You can only maximize when you feel secure enough that the worst case scenario won't leave you stranded or dead. Nassim Taleb calls this concept ruin in the context of rationality, and claims that the possibility of ruin disproves most ideas about what's "rational".

Fear of ruin is the basic thing holding most people back from acting like a maximizing agent. Internalizing that ruin is the default state in the 21st century frees you up psychologically. You can worry less about certain categories of failure, and notice it makes sense to go for broke trying to fix things. Someone who groks the situation we're in is rarely risk averse, usually extremely risk hungry compared to most people. In that sense rationality is a philosophy of desperation:

But if your precious daughter is one of the 500, and you don’t know which one, then, perhaps, you may feel more impelled to shut up and multiply—to notice that you have an 80% chance of saving her in the first case, and a 90% chance of saving her in the second.

And yes, everyone in that crowd is someone’s son or daughter. Which, in turn, suggests that we should pick the second option as altruists, as well as concerned parents.

My point is not to suggest that one person’s life is more valuable than 499 people. What I am trying to say is that more than your own life has to be at stake, before a person becomes desperate enough to resort to math.

    — Eliezer Yudkowsky
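The arithmetic behind the quote can be checked in a few lines. A minimal sketch, assuming the setup from Yudkowsky's "Circular Altruism" that the quote paraphrases (500 people at risk; option one saves 400 for certain, option two saves all 500 with 90% probability):

```python
# Toy expected-value check for the quoted thought experiment.
# Assumed setup (paraphrased from Yudkowsky's "Circular Altruism"):
# 500 people at risk; option one saves 400 for certain, option two
# saves all 500 with probability 0.9.

n_at_risk = 500

# Expected number of lives saved under each option.
ev_option_one = 400          # certain
ev_option_two = 0.9 * 500    # = 450.0

# Chance any given person (e.g. your daughter) survives.
p_daughter_one = 400 / n_at_risk   # 0.8, as in the quote
p_daughter_two = 0.9

print(ev_option_one, ev_option_two)    # 400 450.0
print(p_daughter_one, p_daughter_two)  # 0.8 0.9
```

The gamble wins on both counts: more expected lives saved overall, and a better chance for any particular person you care about.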

High Variance Strategy

Another consistent strategy for getting leverage is to have more risk tolerance than normal.

High variance strategies are ones that usually don't work (or end in disaster), but when they do it pays off big (e.g. becoming an actor). I tend toward the view that we are royally screwed and the current trajectory points toward global annihilation. In that light the safe road to changing things isn't all that safe. You can go get your Ph.D and 'influence policy', the world will still burn.

A more realistic view is offered by thinkers like Dominic Cummings, who took on the British ruling class in a longshot and won. Safe careers are individually 'rational' but collectively ruinous. If you really care about saving the world, it probably means you need to think outside the box to even put a pothole in the road to ruin.


To deal with our problems we have to take the world as it is. The materialistic worldview, based on natural philosophy developed from the 18th century onwards, is our best explanation of how reality works; and its conclusions are often unpleasant.

During the 2000's much was made of 'the god delusion' and we all argued about whether god exists. In retrospect I think "does god exist?" was the wrong question, what we should have been asking is "does god do anything?". 'Atheism' isn't quite about not believing in god, a Deist 'believes' in god but is already an atheist in their expectations. What's important to accept is that we exist in a (near) deterministic universe, which, sufficiently internalized, strips away any expectations based on divine intervention.

Things like 'Extropy', 'Transhumanism', and 'Singularitarianism' are really of the same species, they're radical materialism. The kind of worldview you have when you stop thinking Pagan gods will smite you for hubris, or that everything will turn out alright because the world isn't allowed to end; or that divine intervention will step in to preserve your mind once your body falls apart. The difference between the beliefs of your average atheist and someone with high future shock is mostly one of quantity, not quality.

An SL4 person that thinks we're in a simulation literally believes in god, but they're still an atheist. Someone who thinks god isn't real but ghosts are is not an atheist in the sense that matters, even though they literally don't believe in god.

High Future Shock

Idea based on Eliezer Yudkowsky's map of the hard science fiction fandom circa 2000 or so. Future Shock is the degree to which someone has oriented themselves to the full possibility of human potential as implied by known math and physics. High Future Shock is when someone has oriented themselves to the possibility of a radically different future based on technology that would alter the human condition.

This kind of idea has become less popular with the rise of green activism and increasing distrust for the big institutions which people imagined would bring these technologies about in the past. The math and physics have not changed however, and the potential for these things to exist is still there.

As a key point: High future shock is supposed to be based on an understanding of real math and physics. It's not just generic Star Trek scifi crap. Most science fiction is just fantasy magic with a modernist aesthetic.

Future Shock Level

Shock levels are defined in the order they’d come up in a well educated hard scifi wargaming group. In a discussion of SL-X ideas, someone will eventually generate a SL-X+1 thought which makes most discussions in SL-X irrelevant. SL-X+1 will tend to dominate discussion (at least for a while) afterwards. e.g. Most space opera scifi settings (SL-2) are upended by the intelligence gains from genetic engineering (SL-3). People that care about extrapolating the real future will recenter conversation around genetic engineering and nanotech once someone begins taking them seriously.

Different shock levels dominate discussions of The Future™, which can make futurology seem schizophrenic. ‘Soft science fiction’ that’s more fiction than science has made it easy to treat ideas like cryonics as mere stories. People will read nonsense but aren’t interested in how life insurance, liquid nitrogen, and antifreeze can give any middle class person who wants it a realistic shot at eternal life.

Shock Level 1

“Nuclear power is upstream of scarcity politics”

In the first half of the 20th century futurologists like H.G. Wells primed the public for the development of flying cars, energy too cheap to meter, widespread peer to peer telecommunications, satellites, and more. While the latter items were achieved, it’s taken for granted that the Internet could not have been predicted (even though it was) and that ideas like flying cars and cheap power were always whimsical flights of fancy. However, at the time it was predicted, the idea of a flying car was entirely reasonable. Already existing vehicles like the autogyro and helicopter portended a growth curve like the motorcar, which had started as a fragile toy for the rich but gradually became a convenience available to more people in the 1st world.

The only speculative element was a sane assumption that energy would continue to become available to society at the rate it had been for the past 150 years, 7% average annual growth.

The first industrial revolution in the 18th century set off a social chain reaction that continued unabated through the 1960’s. Any intelligent person could see that, just as coal displaced manual labor and oil displaced coal, nuclear power was going to displace petrol as the dominant energy source.

As the century progressed and the science necessary to do this was invented something entirely unreasonable happened: It didn’t. The industrial singularity that had been building since the Newcomen engine in 1712 stalled.

Had it gone on as expected many more Americans would be able to comfortably afford an autogyro or helicopter, and average wages would look something like 150k a year. A (completely possible) 10x improvement on current reactors would desalinate enough water for Africans to use it like Americans do, 100x improvement could feed the whole continent in climate controlled greenhouses.
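To make the 7% figure concrete, here is a minimal compounding sketch (the rate is the one quoted above; the 10- and 50-year horizons are my illustrative choices):

```python
# Compound growth at the ~7% average annual rate cited above.
# The 10- and 50-year horizons are illustrative choices, not
# figures from the text.

rate = 0.07

growth_10yr = (1 + rate) ** 10   # roughly a doubling per decade
growth_50yr = (1 + rate) ** 50   # roughly 29x over half a century

print(round(growth_10yr, 2))   # -> 1.97
print(round(growth_50yr, 1))   # -> 29.5
```

Sustained 7% growth is a doubling roughly every decade, which is why a stall in the curve compounds into such a large gap between the predicted future and the one we got.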

Shock Level 2

“Space travel is upstream of nuclear power.”

For all its benefits the atomic age would still be resource constrained. There is only so much gold, silver, helium, uranium, lithium, and other rare elements and materials on our planet. Space travel would allow us to mine massive quantities of these materials and bring them back to earth.

Most contemporary visions of The Future™ are based on the glut of science fiction pulp published in the second half of the 20th century. This material is largely fantasy and ignores most of the practical benefits of mastering space travel. We could harness large amounts of energy from the sun using satellites. Polluting industrial processes we currently do on earth could be conducted on space stations, leaving the planet's surface pristine. All this however pales in comparison to the possibility of colonizing and expanding to new worlds, mitigating the extinction risk from planet wide catastrophe while providing the possibility for new social systems to develop.

Shock Level 3

“Nanotech is upstream of space travel.”

Space opera depicts unmodified humans dominating events for hundreds or thousands of years. But the kind of society that can do sustained space travel will greatly modify its physical being long before it establishes a galactic empire. This is trivially the case when genome sequencing and synthesis is on an exponential curve like Moore’s Law. That implies both biotechnology and nanotechnology: the two go hand in hand. Per Drexler a sufficiently advanced biotechnology (i.e. cell editing) would let us attempt to bootstrap atomically precise manufacturing. Even if we assume this is impossible, we have already mapped enough of the genetics of human intelligence to know that sufficient progress in genome synthesis will let us make rare outcomes like Einstein or von Neumann the default, if not abilities far beyond them. There is no plausible future that looks like the 20th century extrapolated into space utopia.

Shock Level 4

“AGI is upstream of nanotech.”

In Engines of Creation Drexler argues that the development of narrow AI will lead to nanotech, and then computers powerful enough to implement neuromorphic general intelligence. But as Moore’s Law continued into the late 20th century and beyond, consensus began to shift away from a nanotech-before-AGI timeline. In particular Eliezer Yudkowsky’s model of a Moore’s Law driven AGI genesis became the main worry of the sort of person that had previously concerned themselves with problems like the gray goo apocalypse where rogue nanites eat the planet.

Much like the gray goo, most of the problem of AGI is a rapid loss of control as an exponential process that cares little for our values or wellbeing maximizes its objective function. Encoding human values into an objective function isn’t something anyone knows how to do, and an AI will lack the careful process of evolution that gave us a tendency to care about others. The default is a sociopathic superintellect that bootstraps nanotech and eats us.

Eliezer's Extropy

My name for Eliezer Yudkowsky's overall philosophy, as elaborated in his Rationality: AI to Zombies. Yudkowsky's basic philosophy is a synthesis of a hard-science-fiction cosmology (i.e. Extropian-Humanism), Bayesian Epistemology, New Atheist sociology, and Behavioral Economics under the name 'rationality'.

Of these elements, the Extropian-Humanism is probably the rarest. Yudkowsky forked from More's Extropians after deciding they were too blasé about the dangers of nanotech and AI. In More's Extropy rationality is an emphasis, in Eliezer's Extropy it's the emphasis. EE is structured like Buddhism, where a cosmological and moral interpretation of the world is supported by a (dis)organized mental practice.

Unfortunately there are no monasteries you can go on a retreat to and learn rationality (CFAR tries, but they aren't very good). Given that my own beliefs are Eliezer's Extropy rather than More's, when you read 'Extropy' without qualifiers I'm talking about this version.

Four Extropian Virtues

The four basic things you need to develop to be good at Extropy.

  1. Agency. Well versed in the methods of piloting yourself to do things. Building habits, not giving up at the first setback, strength, maximizing, etc.

  2. Sanity. A clear view of the world, very well in tune with yourself, a strong well constructed (i.e., not full of ad-hoc garbage) identity, good epistemology, etc.

  3. A love for the world and its inhabitants. The belief that death is Bad, a fully developed secular moral system. Not limiting your scope of concern to immediate tribe-mates or friends & kin. Caring about people even when you have to fight them. Relentless determination and altruism for people you don't know. Taking consequentialism seriously if not literally.

  4. High future shock. Necessary to realize that there are solutions to the problems we have, and things worth fighting for. It's not all hopeless, there are glorious things within our reach, etc.

Existential Risk

A problem that will either destroy humanity or the potential for advanced human civilization if left unchecked. Nuclear war is the best known X-Risk, along with catastrophic climate change. While these two things exist in the public mind independently, I don't think most people have the general category of an X-Risk. Two clear examples aren't enough to feel the need to form a category around them. During the 21st century however we will be adding several items, including 'artificial superintelligence', 'bioengineered pandemic', and 'Bronze Age Collapse 2.0 but we can't bootstrap civilization again because we ate all the fossil fuels'.

The 21st century is a suicide ritual, and at the end humanity kills itself. It’s so easy to get caught up in the role, in the character you’re playing in this story, that you forget there’s a real world full of real people who will really die. Playing a role makes the situation acceptable, it’s a way of coming to a mutual suicide pact with others.

The Singularity

We live in interesting times. The culmination of thousands of years of civilization has led us to a knife edge decision between utopia and extinction.

On the one hand, the world is in intractable, fractal crisis. Most of that crisis is of our own doing: Global Warming, Nuclear Warfare, AI Risk, portents of ecological and social collapse.

On the other hand, we are now very close to a technological singularity: the point where we will be able to think and act so quickly that we invent virtually everything there is to invent in one sprint. Ironically enough this possibility arises from the same technological powers that endanger us.

It is a race now, between whether we will first unlock that boundless spacefaring future or meet ruin and die on earth. Ultimately, this might be the only thing that matters in the 21st century, perhaps the only thing that matters in any century up to this point and forever after.

The outcome might be decided by a cosmic millimeter, mere hours, days or months.

New Atheism

Defunct movement that lasted roughly from mid-2000's to mid-2010's. The New Atheists were evangelistic atheists organized around the writings of the "Four Horsemen" (Richard Dawkins, Sam Harris, Christopher Hitchens and Daniel Dennett). Their loud, aggressive approach to non-belief was a welcome counterpoint to the presidency of Bush Jr. It's difficult to convey the ubiquity of "Is god real?" debates on Internet forums during his second term.

New Atheism is based on skepticism, which made Yudkowsky's Extropian-Bayesian atheism a significant departure from James Randi style 'rationalism'. A skeptic has loss aversion, they don't want to believe anything false. Rationality in the Yudkowsky sense considers opportunity cost. Skeptics praise the FDA for weeding out fake treatments, rationalists look on in horror at the people who died due to delays in drug availability.

New Atheism failed because it turns out loss aversion is a poor philosophy and there was no positive vision on offer.

Nonlocal Phenomena

John von Neumann pointed out in 1955 that with nukes humanity had reached the point where it couldn't rely on a big world to protect it from system failure. The size of our powers has reached an 'absolute limitation' in what can be safely instantiated on earth, yet continues to grow. Worse still the problems that arise from this global scope must be dealt with at a farther and farther distance from their crisis point. Climate change has to be tackled before it undeniably manifests in the environment. This chronologically nonlocal consideration has to be taken seriously in the present for any of us to survive in the future.

With problems like AI Risk, things must be tackled farther and farther away in time from their crisis point. This suggests that existential risk requires an essentially religious ontology to deal with. You need to think you can predict the future in 20, 50, 100 years and act on that knowledge now in expensive ways to deal with problems you can't even see.

(Priority Changing) Radical Truth

The kind of religious ontology which can be held by a rationalist who has bound themselves to reality.

Once you've filtered out all fables and major falsehoods, the things you have leftover are candidates for radical truth. Radical truth is something which totally shifts priorities. Internalizing the Singularity changes how you approach life. High future shock tends to converge towards a radical truth orientation, making it almost intrinsically religious.

The key ingredient for a religion is agency directed towards nonlocal phenomena. Phenomena can be spatially or chronologically nonlocal. Spatially nonlocal would be e.g. heaven, which is 'up' even though we know there is no heaven 'up', only space. Chronologically nonlocal would be e.g. a connection to the ancestors, or considering future tribes and societies.

Not restricted to high future shock. Veganism and Peter Singer's charity ideas both count.

Basic Trust

Three beliefs (implicit or explicit) that develop by default during a nontraumatic childhood and adolescence:

  1. The world is benevolent.
  2. The world is meaningful.
  3. The self is worthy.

Damage to these beliefs tends to destroy agency. However, in the context of a secular worldview and existential risk, all three are absurdly false. This presents a barrier to internalizing Eliezer's Extropy: internalizing high future shock and sanity seems to require destroying agency, creating an apparent contradiction between agency and an Extropian worldview. The basic solution is to replace each of these three beliefs with something more realistic. The world isn't benevolent, but it is consistent. Not meaningful, but full of fragile value. You're not worthy, but you are who is there to deal with the problem.

Until you've replaced 'basic trust' with something that isn't totally delusional, your models of the problems our world is facing will be based on fake rules.

Keeping Your Identity Small

For most people the biggest barrier to clear thinking is their identity. The tribal bickering of political and religious discussion is identity driven behavior. Notice that we're perfectly capable of discussing old politics in a levelheaded way, we call that 'history'. Politics become (proper) history when most people in the discussion no longer have their identity wrapped up in discourse. There is a common map-territory error where we assume that intractable discussions point toward intractable problems, but often the facts are quite clear and some parties just aren't willing to accept them. Once we allow something into our identity we stop being able to think clearly about it, because any attack on the idea is taken as an attack on ourselves.

KYIS is the deliberate practice of minimizing how much you let into your identity. By identifying with something we fuse with it, and each thing we fuse with makes us stupid. That stupidity takes the form of taboo tradeoffs and poor epistemic posture. Minimizing what goes into your identity frees you up to be smart where other people act dumb, it is one of the basic steps to maximize your practical intelligence.

Cuckoo Belief

A false belief that has become part of someone's identity. Cuckoo beliefs are the typical barrier to intellectual progress because they're slow to update and bottleneck everything else. Like their namesake, a cuckoo belief encourages its holder to push away real knowledge to protect it. If developed to malignancy it can become a full subagent, which actively defends itself against being updated.

The most entrenched cuckoo beliefs are load bearing, which means it's not enough for them to be shown false. If something is doing practical work in your cognition, and you don't have a coherent alternative to replace it with, it's usually preferable to just keep acting as if the known-false belief is true until you generate a true alternative. This can take a long time, especially since we're doing our best not to falsify the belief. From the inside updating on a cuckoo belief feels like noticing lots of little things, and then encountering a final puzzle piece that connects the observations together.

HMC Event

Looking back, what Eliezer2001 needed to do at this point was declare an HMC event—Halt, Melt, and Catch Fire. One of the foundational assumptions on which everything else has been built has been revealed as flawed. This calls for a mental brake to a full stop: take your weight off all beliefs built on the wrong assumption, do your best to rethink everything from scratch. This is an art I need to write more about—it’s akin to the convulsive effort required to seriously clean house, after an adult religionist notices for the first time that God doesn’t exist.

    — Eliezer Yudkowsky

Most people are fractally wrong when it comes to seeing the world clearly. Their wrong ideas are downstream of several mutually interlocking cuckoo beliefs that would be transformative if dislodged:

SIMPLICIO: But Mr. Confessor, if you're so dedicated to the truth why can't you convince me vaccines are necessary?

CONFESSOR: Because I can't change one mistaken belief in isolation, your beliefs about vaccines are tied up with your beliefs about government and institutional credibility and what's low or high status in your tribe and what good standards of evidence look like. I can no more change your mind on this issue than I can make you a woman. To change your mind you would need to be a totally different person.

In his Denial Of Death, Becker writes that seeing the world clearly requires a kind of identity death and rebirth. Moreover it requires a personal relationship with death, as seeing clearly forces a confrontation with mortality. This confrontation 'kills' you, but the mind and body are left behind to act in pseudodeath. The true philosopher is undead, until death at last takes them literally.


Phoenix

Symbolism passed down from the Hermetic lineage to the present day. In its original context the phoenix referred to a young philosopher's stone. This drew on the larger Christian culture where Christ was often likened to a phoenix that miraculously renewed itself through death. The stone then would be an imitation of Christ (who Catholic tradition holds descended into hell) through deeds and study rather than worship.

It is also convergent with the identity death and rebirth which Becker claims is often necessary to perceive the world clearly. An Extropian must give up all attachments to things that are not there to be experienced outside the mind, often killing hopes and dreams. A phoenix has also mastered the secrets of life and death, which it demonstrates by recalling itself from destruction.

In Eliezer's Extropy the phoenix represents the enduring spirit that defies death, taking responsibility for the world.


Jhāna

In Buddhism a Jhāna is a state you access through prolonged meditation. Here I use it as a metaphor to describe certain experiences you will have as you internalize Eliezer's Extropy. My hypothesis is that these states are convergent, i.e. shared between practitioners. The Jhāna in Buddhism are convergent; you reliably access them as you meet further attainment. Jhāna is the closest word I know for "convergent mental state reached as part of insight into a particular way of approaching reality." I don't think there's an English word for that, or I'd use it. If these states turn out not to be convergent, the hypothesis is disproven and the metaphor no longer makes sense.

Without more research it's hard to establish how common the following experiences are. An informal survey of people who seem to really 'get it' finds that most serious extropians have experienced the 'Phoenix', usually (but not always) while reading the relevant scene in Harry Potter and The Methods Of Rationality.

Phoenix (Jhāna)

"It's a contemplative moment where your sense of perspective expands outward. As it does you become aware of all the suffering in the world. The relentless cruelty of the prison we're all trapped in, all the people who are worse off than you: poorer, less insightful, filthy, unvirtuous.

Their world is pushing them into that. They may have never had a choice to be anything better. Every day is pain that they endure. There's millions of them, billions. Out farther there's beings that can't even speak, they come in all shapes and sizes, they can feel pain, maybe they can even suffer.

There's billions of them, trillions. An entire planet suffused with suffering, and you're viscerally aware of yourself as a tiny dot in that ocean. Most of those beings are less capable of dealing with it than you, more innocent. You want to take their pain from them, even if you have to deal with it yourself. You'd swallow the ocean and take on all the pain in the world if you could, in that moment."

Gnon (Jhāna)

"One day I noticed Gwern Branwen's anime girl generator had jumped up in quality. When I asked him about this, he said he'd started using a bigger net, and the anime girls got more detailed to compensate. They'd developed ornamentation, looked healthier; more unique, less emaciated in detail. They looked more like natural creatures. Diverse populations are healthy populations, they can develop ornamentation, etc. The application of these abstract principles to anime girls really threw into focus what I was looking at.

These weren't 'anime girls', they were a digital life form, pixels being hunted by a predator trying to sniff them out as fakes. Their anime girl form was incidental, they could look like anything. They were anime girls implemented as digital life forms in Gwern's big TPU computer, obeying basic population dynamics.

Internalizing this principle was a profound lesson in natural theology. I understood in that moment the mind of Gnon, and that Yahweh could not be nature's god."

Singularity (Jhāna)

"I was taking a comparative religion class, and got to the chapter about Australian aboriginals. There it discusses the concept of the dream time. In the dream time, all events which determined the shape of our universe took place. Dream time ancestors are immortal, and it's the reverberations of their actions that created our world.

Through rituals, chanting and dance, the aboriginals are supposed to enter into the presence of their immortal ancestors. Robin Hanson compares our era to this 'dream time'. It's a period that will determine the fate of all life that might exist after it. And in this story the immortal ancestors that shape the world which are remembered in song and myth, they're you.

Reading and realizing this in the hallway between classes, I began to feel the pieces fitting together. Every narrative merging into one narrative, one human story about the future of life in the universe. I'd entered into the realm of the immortals, and took in the fantastic feelings that washed over me until mundane reality reasserted itself and the instructor told us how to write a for loop."

(Phenomenological) Necessity

Phenomenological necessity can be stated as: 2 and 2 equals 4, conclusions follow from their premises. It asks "Is this a reasonable expectation?" A reasonable expectation follows from our best models of the world. Much of becoming 'rational' is learning to see that an unreasonable expectation was always so, and that the person who wronged you was yourself for thinking things might be otherwise. It never made sense to think I could have computers work for the welfare of humanity isolated from a society that works for the welfare of humanity. You need to be able to feel gradations of necessity, to tell whether something is immortal truth or open to interpretation.

A reasonable person will have trouble arguing with something thoroughly justified. You can feel the necessity of an idea by trying to disprove it: What does it look like for 2 and 2 not to equal 4? What would be contradicted if the conjecture is false, or its inverse true?
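You can make this necessity tangible in a proof assistant. The following Lean 4 sketch (illustrative, not from the original text) shows what it means for a claim to admit no alternative: the equation holds by mere computation, and the attempt to disprove it has no proof term at all.

```lean
-- `rfl` succeeds because both sides reduce to the same numeral;
-- 2 + 2 = 4 is true by the definition of addition alone.
example : 2 + 2 = 4 := by rfl

-- A false variant like `2 + 2 = 5` cannot be proven; `decide`
-- evaluates the proposition and only closes goals that are true.
example : 2 + 2 ≠ 5 := by decide
```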


Warrant

The extent to which consideration of an idea has been justified over the billions of other ideas you could be thinking about.

Analogous to the warrant police need in the US to search someone's house or person. You need a certain amount of evidence before it makes sense to investigate someone: If police picked someone in town at random to investigate for murder this would be obviously unjust. You need a certain amount of evidence before it makes sense to bring a case to trial: If you picked one of your suspects at random and brought them to trial for murder that would also be obviously unjust.

Yet we often allow others to 'bring a case to us' without showing that it makes sense to prosecute in the first place. Letting other people ask you questions without justification takes you most of the way to believing whatever they want you to believe.

"Does X cause cancer?" needs some kind of indication it does before you even ask.

Lie Contagion

Lies are constrained in power by their nature as contradictions of reality. Every lie is swimming upstream against the convergent forces of physics pointing at the truth. Lie Contagion is the number of observable phenomena contradicted by a lie. The higher the lie contagion, the more difficult a lie is to pull off. To a keen observer, most "small lies" are actually gaping holes in the fabric of reasonable expectation:

I find it very easy to imagine showing a geologist a pebble, and saying, “This pebble came from a beach at Half Moon Bay,” and the geologist immediately says, “I’m confused” or even “You liar.” Maybe it’s the wrong kind of rock, or the pebble isn’t worn enough to be from a beach.

    — Eliezer Yudkowsky

Forensics works because a forensics expert knows more about the evidence left behind by criminal acts than the vast majority of criminals. One informal definition of epistemic rationality is that it's the component of lie-detection that is dependent on general reasoning ability rather than domain knowledge.


Confabulation

A statement generated from stored linguistic models and a vague emotional direction, rather than by consulting a map and describing what it says about the territory.

From the inside it feels like 'blurting out', often with moderate anxiety at the idea that the statement might be socially challenged. This is a specific mental motion that generally results in an excuse or lie. It seems to be a default behavior in people, which makes it one of the basic barriers to honest communication and thinking.

Most directly observable in split-brain patients' left hemispheres, which confabulate as a default response to having their actions or beliefs questioned when those arise from information processed by the right hemisphere.


Scrub

Someone who plays by fake rules. In a fighting game, this is the person that insists anything which consistently beats them is 'cheap'. The anguish caused by the invention of guns is a real life episode of this kind of impotent flailing. A scrub is fundamentally someone that screens off the real rules of the game they're in to play a different game whose rules exist in their head.

This doesn't mean all bans on anything physically permissible are a 'fake rule', sports like boxing are defined by limitations placed on the fighters. Rather a scrub is someone who unilaterally declares what the rules of games are, and gets angry when you don't bend to their entitlement. "If you're bad at the game, change the rules" is a pride-saving heuristic that screens off real growth.

One working definition of a rationalist is someone who is not a scrub, that consistently notices their assumptions and works not to live their life by fake rules.

Efficient Market Hypothesis

The observation that you should ask "If it's such a great idea, why hasn't anyone done it yet?" before executing on stuff that seems like low hanging fruit.

Formally, the observation that you should expect prices in a stock market to converge to what the stock is worth as an investment, no more and no less. People tend to treat this as a kind of static assumption, but I think it's more useful as a model. Rational actors with access to good information and deep pockets should reliably buy an asset until its price converges to its value.

We can invert these assumptions to find places where value is there to be had. Irrational markets, poor or uneven information distribution, or money cleanly separated from the people who know how to use it are where opportunity is. In this sense, a lot of market regulation actually cripples innovation and adventure.

Society has to tolerate a certain amount of fraud, scams, and stupidity in order for the good stuff to happen.

Obvious Mistake

In contrast to the usual definition of "a mistake that is somehow 'obvious'". I think of an obvious mistake as one that has no coherent argument in favor of making it. It's not making the incorrect choice in a delicate balancing game of several optimization objectives, so much as it is a failure to optimize.

Repeating a word word in a sentence is an obvious mistake, there's no argument for doing it. Many awkward and unnecessary phrases are also obvious writing mistakes.

Here obvious means "obvious to someone who has internalized all the important dimensions to optimize on". To an experienced practitioner these mistakes stick out like a sore thumb, and you're a proper journeyman once you've learned enough to stop making them. People who are really good at what they do basically never make obvious mistakes in this sense.

Opportunity Cost

The observation that time and resources spent on X aren't available to pursue not-X. Opportunity Cost is the "cost of following an opportunity" imposed by not being able to use resources for something else. e.g. It might 'cost' a lawyer hundreds or thousands of dollars to volunteer at an animal rescue instead of billing more hours. The shelter would much rather have their money.

Ignoring opportunity cost can literally kill people. You can impose safety measures that save a life at the cost of two lives lost to inaction. Nobody complains about this because corpses left by inaction are normalized as the status quo.

It's sickening to imagine the future where we properly developed nuclear power instead of continuing to burn dead dinosaurs for fuel. We might spend generations cleaning up the mess because 'nuclear' sounded scary to people in the 70's.


Eclecticism

The mix-and-match philosophy that currently dominates Western religious thought. People take the "good bits" of competing philosophies and use them as raw material for an ad-hoc identity. Usually these do not even rise to the level of a thinking 'system', but rather a gestalt of inconsistent pieces which maximize a primal sense of 'rightness'. The eclectic is by default a wirehead, not a systematic thinker. It is the useful products of systematic thinkers that they tend to cannibalize for their ideas. As the occultist Manly P. Hall put it: "Eclecticism appears to have had its inception at the moment when men first doubted the possibility of discovering ultimate truth."

Eclectic ideas can be powerful, but they're weak at inference. It's not clear what belongs in their philosophy, which makes coordination difficult. There is no clear method of extension or development, because so many dimensions of value have been incorporated that 'progress' in the spatial sense becomes meaningless.


The default state of 'religion' is a neutered set of ideas that were once taken literally. Even literalist Christian revivals are more or less fake. You can't earnestly engage with the universe holding those beliefs and retain them. Historically religions have relied on a "fish in water" effect that kept people from noticing them. The self-reflection implied by 'religion' as a category was the beginning of the end for old religions. New ones will be reflexively stable. You will be able to see them and still take them literally. This is a key feature of radical truth.

For example we can already imagine a concrete instance of a Saivite: a Buddhist physicist who wants to destroy the universe. We don't see many of those yet, but we can expect them to exist in the years and decades to come. Once Buddhists are exposed to Western memetics it's only a matter of time. Their possibility is a potent reminder that religion is not dead, only sleeping. When it recovers from its century long slumber by finally metabolizing the loss of its fables, one consequence will be the repair and restoration of previously neutralized ideas.


Grift

Profiting off a real problem by peddling a non-solution. The central premises of most religions are grift. It is entirely doubtful that Christian piety takes a believer to the afterlife. There is no particular reason to privilege the Hindu pantheon over any other pagan pantheon. One thing that distinguishes radical truth is that it has the opportunity not to be grift.

The opportunity, but no guarantee. Perhaps the second most popular secular religion worthy of that name is the soft pagan "we are going to merge with animals and reestablish our ancestral connection to gaia" eco-activist thing. This seems to manifest at the societal level as things like straw bans, which are pure grift sapping resources from realistic perspectives and solutions. 'Woke' SJ (i.e. baizuo) is also grift, prioritizing statues and the use of 'Master' in git repositories over addressing the material and social conditions of lower class life.


Idolatry

One of the ways you can tell that a philosophy or ideology is rotten is when it replaces the 'immortal' traditions with transient objects of affection. Hitler wanted his followers to take Germany to be god and Lenin wanted the philosophy of communism to be worshipped in Russia, painting himself as Messiah. Idolatry is the worship of something less than god as god.

It is not the replacement of god but replacement with something less than god that heralds their intrinsic rottenness; something that does not even rise to the level of immortality. Committing a type error and trying to slot profane and temporary things into places that only the lasting and enduring should fit.

This would be one of the principles then that distinguishes you as cosmologically worthy: Is your philosophy an enduring ambition which transcends its era, or is it a product of and fetish of its era? Asking your followers to worship Germany as god is not an insult to god so much as it is an insult to human dignity.

St. Paul's Demon

One of the key features of an abusive religion. A concept with Mysterious Parts that someone can tell you the properties of but you can't verify or predict in advance, which you are encouraged to place towards the center of your identity.

This demon acts like a separate agent inside the victim, a little helper for their controlling religious gurus. For example the concepts of the 'holy spirit' and a 'guilty conscience' in Christianity are very much of this flavor. You're told as a kid that everyone knows deep down when they do bad things, and that they try to get caught on some level so they can atone for their wrongs. Such a notion is both untrue and exists mostly to make you more easily controlled.

You should ask where truth is coming from in your worldview. Someone who is capable of getting you to believe a concept they have functional control of, and of putting it at the center of your identity, owns you almost literally.


Embodiment

The extent to which you are aware of yourself existing in a body, and comfortable with that existence. Embodiment is not about 'doing things with your body', it is the fundamental awareness that you are part of the natural world. That you are going to decay and die, not an avatar of yourself.

Coming to terms with embodiment means coming to terms with your fundamental monstrosity. Your life is only possible because other things suffer, animals and people alike. This doesn't mean you have to endorse that, but it does mean you can't be allergic to the idea and see yourself clearly.

It is common for religions like Christianity and Buddhism to promote active bodily disassociation, the fantasy that you can separate yourself from the natural world through piety or dignity. This is an impediment to clear headed thinking, as Alfred Korzybski famously discussed at length in his Science and Sanity.

Absurd Conjecture

A conjecture (often with a narrow technical proof to provide warrant) which is widely considered but generally believed to be false even if no conclusive counterargument exists yet. For example the quantum suicide thought experiment is (or at least was) an example.

These tend to rely on advanced technical knowledge to evaluate personally. A more intuitive example which is not widely considered but of the same general form would be SMBC's Bayesian Immortality parody.

The solution to what might be considered an apparent paradox is simple (rot13): Gur znc vf abg gur greevgbel, cebcbfvat zber ulcbgurfvf qbrf abg punatr gur haqreylvat qvfgevohgvba.
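If you want to decode the rot13 above yourself, Python's standard library handles it directly (a minimal sketch; `unrot` is my own helper name, not a standard function). Because rot13 shifts each letter by 13 of 26, it is its own inverse:

```python
import codecs

def unrot(text):
    # rot13 is self-inverse, so "encoding" also decodes.
    return codecs.encode(text, "rot13")

print(unrot("uryyb"))          # -> hello
print(unrot(unrot("secret")))  # applying it twice round-trips -> secret
```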

"I admit your proof is a proof but find it unconvincing, expect to find concepts to disprove it later." is a valid mental movement against scrupulosity.

Semantic Response

The "happening-meaning" discussed in J. Samuel Bois's The Art Of Awareness. A semantic response is the unique meaning-experience (because the 'meaning' comes from many input sources, it doesn't exist separate from experience) that a person has to a scenario, concept, phrase, etc. It is the 'gut feeling' you have when you hear certain words. Learning the internal semantic universe of others is key to persuading them. People will not react well if you remind them of a traumatic childhood experience, or a hated rival tribe.

Persuading people about anything interesting pretty much requires learning their internal semantic universe and then constructing your message in terms of logic which they will actually parse and emotionally attach to. There is no 'objective' universally compelling chain of logic that gets people to accept your ideas. You can be right and it won't matter unless you can communicate that you're right in a way the other person will emotionally accept.

Epistemic Posture

How the mind/body holds itself, as this relates to epistemology. Noted by Alfred Korzybski as a barrier to uncommon sense (though not under this name). He would train 'semantic relaxation' at seminars to help get people into a place to evaluate and listen well.

Every good ingroup member knows to tense up and not listen when an outgroup member makes an argument that might change their loyalties. People who are anxious have a tendency to 'catastrophize' and jump to negative conclusions without fully considering the logic to get there. Once someone has noticed you are trying to change their mind about something they don't want to update on, they tune out. Code switching is one of the central tactics to help avoid this response in others.

I once saw someone dissect a traumatic experience by thinking about the thing, stopping the predictable fear/anxiety response with meditation, then thinking about it more until they had a breakthrough.

Meme Magick

One of many names for the tactic where an attractive aspect of a meme is used as a carrier signal for an underlying idea that would not have traction on its own. In recent years this tactic has been used to great success by the far right, but all good dissidents take advantage of it.

In its 'Meme Magick' formulation the central carrier signal is humor. You would be surprised by how complex a model can be transmitted by something fundamentally humorous (see first attachment for a particularly impressive example). Manly P. Hall also discusses the use of majesty (pyramids, monuments, carvings, etc) as a carrier signal, along with artifacts humans are inclined to replicate like playing card decks. According to esoteric legend Tarot was invented to encode the Egyptian Hierophants' mystic wisdom for future generations.

Map and Territory

A metaphor invented by Alfred Korzybski to explain the distinction between our perception/understanding of reality and reality itself.

A map (our understanding) represents a territory (underlying 'object level' reality). The map must be structured like the territory to represent it accurately. That is, places like Seattle and San Francisco must be represented with the right spatial relation to each other for us to consider the map accurate. A map lets us predict what we will encounter if we drive north from San Francisco.

If a map of the Western United States showed San Francisco north of Seattle, we'd immediately recognize the futility of insisting the territory conform to the map. Our maps of reality can also be wrong, and we should update those maps to reflect the territory they represent.

Korzybski posited three principles of maps.

  1. Non-Identity - Maps are not the territory they represent. A photograph is not the object, etc.

  2. Non-Allness - Maps are partial representations of the territory, and necessarily have to be. This is trivially proved by considering that maps exist in the territory. A map that tries to contain itself ends up in an infinite regress.

  3. Self-Reflexiveness - It's possible for maps to be representations of other maps. We can have thoughts about our thoughts, etc.

Perceptual Control Theory

Model of behavior that says organisms control their perception of variables, not the variables themselves. The best explanation I know of for the phenomenon where children hide under their blanket to make the monsters beneath their bed go away.

This kind of fear motivated map-territory error is actually encouraged by American pop culture as 'self care' and 'mental health' under the guise of 'positive thinking'. Positive thinking does not make the nukes go away, nor does it make your disaster of a society stop falling apart.

PCT is the basic reason why the concept of a 'map and territory' can be a revelation at all, since naively we shouldn't expect it to be. It's easy to tell when someone is acting like a PCT agent by observing basic denials of reality. PCT agents tend to get stuck in local optima where they only keep out the perception of problems in an ad-hoc way rather than address their root causes.
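The difference between addressing a problem and merely controlling the perception of it can be sketched as a toy control loop (an illustrative Python sketch; the numbers and names like `blanket` are mine, not standard PCT notation). Both agents drive their perceived threat to the reference value of zero, but only one changes the territory:

```python
def perceived(world, blanket):
    # Perception = the actual state of the world, attenuated by
    # however much the agent is blocking it out.
    return world * (1.0 - blanket)

def step(world, blanket, fix_world):
    """One control step toward a perceived threat of 0.
    A controller acting on the territory reduces the threat itself;
    a controller stuck in a local optimum thickens the blanket."""
    if fix_world:
        world = max(0.0, world - 0.5)      # address the root cause
    else:
        blanket = min(1.0, blanket + 0.5)  # hide the perception instead
    return world, blanket

world_a = world_b = 1.0      # monsters under the bed
blanket_a = blanket_b = 0.0
for _ in range(4):
    world_a, blanket_a = step(world_a, blanket_a, fix_world=True)
    world_b, blanket_b = step(world_b, blanket_b, fix_world=False)

assert perceived(world_a, blanket_a) == 0.0  # both agents feel safe...
assert perceived(world_b, blanket_b) == 0.0
assert world_a == 0.0 and world_b == 1.0    # ...but one's monsters remain
```

From the outside the two agents are easy to tell apart, which is the point of the entry: a PCT agent stuck in a local optimum looks exactly like the second controller, keeping the perception of the problem out without touching its cause.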

Doxa, Episteme, Gnosis

Ancient Greek distinction between three levels of personal experience backing knowledge. It's important to mentally track where/how you learned something if you want to be able to answer "What do you believe and why do you believe it?"

I know the earth is round because...

Doxa: "I read it in a book". Knowledge from someone else's experience.

Episteme: "I measured the shadow cast by an eclipse and did the math". Knowledge from inference on experience.

Gnosis: "I sailed around the world until I wound back up where I started". Knowledge from direct experience.

The vast majority of knowledge a modern educated person has is Doxa, they've never seen the Great Pyramids but they're pretty sure they exist. By default Doxa is a disconnected series of facts, not a systematic understanding. People often find they don't grok a subject they "learned" until they try to explain it to someone else, and realize how uncompressed and ad-hoc their understanding is.

Extensional Device

Modification to language/norm designed to make speech and writing more closely fit the territory. 5 modifications are given in Bruce and Susan Kodish's Drive Yourself Sane:

  1. Indexing words to disambiguate words with multiple meanings. e.g. belief_1 ("a thing you have a representation for in your head labeled 'true'") is not belief_2 ("an implicit model defined by your expectations"). Indexes and order are arbitrary, the purpose is to distinguish between words.

  2. Dating words and phrases so their impermanence is emphasized. e.g. Instead of saying "Neuroscience" say "2020 Neuroscience" or use a subscript.

  3. Use of 'etc' to denote that not every possibility is covered by a list or description.

  4. Scare quotes to mark when a word or phrase can mess with thinking. e.g. We speak of 'minds' separate from 'bodies' when there is no such separation.

  5. Hyphens to connect words, suggesting relations. e.g. A felt-obligation is an experience, a felt obligation is a social concept.

The Noncentral Fallacy

An argument of the form "X is (technically) a Y, therefore X is as bad as a Y".

e.g. "The local priest runs a religious gathering, therefore they're a cult leader!"

e.g. "Martin Luther King was a criminal, therefore he shouldn't have statues in his honor."

The trick to a rhetorically effective use of the noncentral fallacy is that it's technically true: priests are 'cult leaders' and MLK did in fact break the law protesting. However, neither of these people is much like the little picture we have in our head of a 'cult leader' or 'criminal'. When I think of a criminal, I imagine clip art of a cat burglar in a ski mask with a burlap sack strung over his shoulder.

We can call that little picture the category's center, with criminals becoming progressively less central as they move away from it. By the time we're at MLK, there is little family resemblance between him and the cat burglar I typically imagine. It would be inappropriate to dedicate a statue to the honor of a typical cat burglar, far less inappropriate to dedicate one to MLK.

North Star Fallacy

The incorrect claim that my ideas (e.g. Extropianism) are functionally a Pascal's Wager. When everything in your society is set up to mutually reinforce bad behavior what you have isn't a system of individual institutions you can "reform", but an equilibrium of evil. Under that system even the mundane utopia where you can know the cost of a medical procedure before it happens is out of reach. Any good thing you might want is ultimately downstream of that gestalt.

I didn't get to my ideas by following the dim hope of abstract 'utopia' like a north star, always off in the distance. Rather I read a book when I was 14 that convinced me computers fell far short of their potential to uplift and bolster the welfare of humanity. I spent years pursuing that vision into books and code before I realized that doing computers correctly forced you to take on the ruling class, market forces, media tycoons, social/cultural forces, etc.

There is a certain sense in which even if I just want one lousy thing, let alone utopia, I have to take on the full forces of badness that exist in society. They are so good at ruining, so good at perverting, that to take them on requires absurd amounts of memetic, social, capital, etc. power. If I need that much power to defy them in a consistent, stable way anyway, why limit my ambition to making computers work right? I may as well completely disrupt that equilibrium and replace it with something better.

Greek Tragedy

If every self story has a genre, then rationalists live in a Greek tragedy. Tragedy is the genre that grapples directly with necessity. There is a misconception that tragedy is defined by a protagonist who is punished for their arrogance; this is not so. The tragic protagonist is defined by their refusal of necessity, which might be caused by arrogance but could also be religious belief or relentless determination.

A rationalist is a particular kind of tragic hero, and perhaps an odd one. In accepting the rules of nature they're defined by the internalization of necessity. Without it, rigorous scientific beliefs are impossible; conclusions don't have to follow from their premises so you can believe whatever you want. Yet to pursue goals like immortality or galactic conquest, you must refuse necessity. The rationalist says "I may not know exactly how these goals are to be accomplished, but I know that physics allows them so I will continue to act as though they are possible."

It is no coincidence that what is often considered the first work of Science Fiction, Mary Shelley's Frankenstein, is a book whose protagonist has his life ruined by a tragic attempt to defeat death.

Infinite Game

One answer to the observed problem that people become depressed when they realize they can capture their own pattern. Infinite games are quite literally games that are not designed to end. Everything that happens is instrumental, there are no terminal goals. Carse describes the concept in his 1986 Finite and Infinite Games.

Interestingly enough Max More describes more or less the same thing in his 1990 Transhumanism: Towards a Futurist Philosophy (I wonder if he read Carse or just converged on the idea independently).

As an answer to self-pattern-capture I'm skeptical, induction continues to exist even if you try to pretend it doesn't. And I don't think we're nearly as complex as we want to think we are. As I've said in other contexts, to a superintelligence humans are the predictable robots that can't escape their programming. Everyone ends up a loop immortal eventually.

Expectation

The felt-sense of having a belief. An expectation is quite literally something you expect to happen. You push the ball, it rolls. You flip a light switch, the lights turn on. A belief is a belief in so far as it implies expectations about your sensory experience.

People often say they 'believe' one thing but expect another thing to happen. Obviously, the thing they actually expect to happen is what they believe. Expectations are implicit predictions, and if your beliefs are true you should be able to predict the future (to a certain extent). When I was 12 I 'believed' lots of conspiracy theories; I spent a lot of time thinking about any silly thing said by any silly person. It wasn't until I had many competing notions in my head causing traumatic levels of stress that I started insisting my beliefs predict the future.

I began writing down what I 'believed' would happen in a journal, with a date for each prediction to happen by. I got three predictions in before I stopped believing in conspiracy theories.

Betting

Bets are the natural tax on bullshit. Following on the idea that expectations are predictions, someone with well founded expectations shouldn't be afraid to 'put their money where their mouth is'. Therefore it is common in rationalist spaces for someone making an outlandish prediction to be asked if they'd like to bet on it.

A great deal of the point of betting is to have a clear statement of expectations, and often one of these will suffice in lieu of any money changing hands. In the United States gambling is usually considered legally and morally suspect; a common workaround is to donate to the winner's charity of choice.

Bets generally involve uneven odds, so you'd best know how those work. Odds are generally established by the person making the outlandish prediction.

e.g. If I believe Donald Trump has a 10% chance of winning the 2020 election (PredictIt currently gives him 40%), you can rightly ask me to bet 5 of my dollars to 1 of yours that he won't win. From my perspective, you think he has twice the chance that he does and it's free money. From your enlightened perspective of being-able-to-look-at-an-odds-table, I'm giving him half the chance he has and it's free money. On average the person with correct beliefs should expect to come out ahead.
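
The arithmetic behind that example can be sketched in a few lines (a toy calculation using the 10% and twice-my-chance figures from the example above, not betting advice):

```python
def expected_profit(p_win, payout, stake):
    """Expected profit of a bet you win with probability p_win:
    gain `payout` if you win, lose `stake` if you don't."""
    return p_win * payout - (1 - p_win) * stake

# I think Trump has a 10% chance, so I stake $5 against your $1
# that he won't win. From my perspective (90% he loses):
mine = expected_profit(0.9, 1, 5)    # 0.9*1 - 0.1*5 = +$0.40
# You think he has twice my chance (20%), so from your perspective
# (win $5 if he wins, lose $1 if he doesn't):
yours = expected_profit(0.2, 5, 1)   # 0.2*5 - 0.8*1 = +$0.20
# Both sides see free money; on average the better-calibrated one profits.
```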

Constraint Modeling

Old theories of epistemology (the kind that go by the name 'symbolic logic') think of beliefs as sentences with a truth value. In classical logic things are either true or false, the moon is made of cheese or it isn't. This notion of 'beliefs as sentences' is so common that you may have never questioned it before, but it only takes a few minutes of serious poking for it to fall apart. Sentences can be nonbeliefs ("Help her."), ambiguous ("Have you seen my case?"), exaggerated ("Time heals all wounds."), and much more.

A more reasonable approach is that beliefs are expectations about what will happen, which are grouped together into models that try to capture the behavior of things we care about. These models mimic reality, and their usefulness comes from constraining our expectations down to things that will actually happen. If I say anything can happen, that is an admission I know nothing. You can feel yourself 'narrow down' a model as you build expectations around something, constraining it to fewer and fewer dimensions of uncertainty.

One function of language is to transmit these models, letting the audience build up a set of constraints in their head that roughly correspond to the thing you're trying to describe.
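
One way to make 'narrowing down' concrete is to treat a model as a set of constraints filtering the space of what could happen (a toy sketch; the rooms and predicates are made up for illustration):

```python
def constrain(possibilities, *expectations):
    """A model as constraints: start from 'anything can happen'
    and narrow the possibility space with each expectation."""
    for expect in expectations:
        possibilities = [p for p in possibilities if expect(p)]
    return possibilities

# Hypothetical: narrowing down where the cat could be.
rooms = ["kitchen", "bedroom", "garden", "attic"]
narrowed = constrain(rooms,
                     lambda r: r != "garden",   # the door was shut all day
                     lambda r: r != "attic")    # no way up there
# narrowed == ['kitchen', 'bedroom']: fewer possibilities, more knowledge.
```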

Underspecified Model

Concept that sounds like it's a model, but you don't have solid expectations about what it means. It's analogous to a solid piece of plastic painted to look like a machine. It mimics the form of something with moving parts, but there aren't any actual pieces you can have predictions about.

Uses phrases like "we need to build new systems around authentic relating" and you have no idea what that means beyond a vague feeling of what it might mean. Relies on you projecting your own meaning onto statements.

There's a certain level of warrant when you say something. Statements come with the implicit promise that they are pointing to something specific in concept-space. How you say things gives some hint as to how specific a concept you're pointing to. Precision of language and precision of concept being referred to should go together. You should be able to feel when you have specified enough bits with your words for a reader to uniquely locate the concept you're referring to.

Bronze Mindset

A set of thinking habits (or lack thereof) that keep you bad at strategy games (and life). A thinking style characterized by the confabulation of strategy and tactics.

A bronze player is incapable of having expectations about what they're doing. When they lose they don't ask "why did I lose?", to them things Happen more or less by chance. Without expectations there is no chance to notice prediction error, and no chance for improvement.

You get good at fighting in part by getting punched in the face. A great deal of how I got (hopefully) good at thinking is by getting thrashed in arguments. I'd ask "Oh, wait, I thought I knew what I was talking about, how'd I get beat up so badly there?" and think about what standards to apply to my thinking so it wouldn't happen again. This kind of intervention into mindless habit, updating not just on the immediate subject but on the strategy that generated it, is the opposite of Bronze thinking.

Plausibility and Probability

Confusion detection technique where you keep track of which thoughts are based on rational inference and which thoughts are based on empirical observation, paying close attention when they disagree.

'plausibility' is the degree to which something is supported by rational inference, 'probability' the degree to which it's supported by empirical observation. When the two disagree you've likely hit a gold vein. Go digging and you'll find insight through synthesis.

e.g. A friend criticized my donation to Andrew Yang on the basis that any money I sent would be a small portion of the overall pot. I would be improving his chance to win by a fraction of a percent. Not exactly effective altruism.

This parses to me as plausible, but the logical conclusion is absurd. The same argument could be used against almost any form of collective action. "You only represent one vote in a sea of votes so voting is worthless". Yet people vote for candidates and they do win.

Most of the value is noticing there's a question here at all: Collective action turns many insignificant things into a significant thing. How do you value participation then?

Prediction and Compression

Prediction is a form of stochastic lossy compression. This can be seen in e.g. speech coding. This correspondence implies that prediction of arbitrary phenomena encodes an epistemology even without extra pieces (we observe this empirically in models like GPT-2). This means you have a unified mechanism for intelligence in the knowledge sense and in the expectation sense.
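
The correspondence can be seen in miniature with ideal code lengths: a model that predicts the data well encodes it in fewer bits (a toy sketch using Shannon code lengths, not a real codec):

```python
import math

def code_length_bits(text, model):
    """Bits an idealized arithmetic coder needs to encode `text`
    under `model` (symbol -> predicted probability). Better
    prediction of what actually occurs means fewer bits."""
    return sum(-math.log2(model[ch]) for ch in text)

text = "aaaaaaab"
uniform = {"a": 0.5, "b": 0.5}   # predicts nothing about the data
tuned = {"a": 7/8, "b": 1/8}     # matches the text's statistics
# code_length_bits(text, uniform) == 8.0 bits
# code_length_bits(text, tuned) is about 4.35 bits
```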

Insight is compression. The Babylonians did engineering by building a structure and seeing if it fell down. They'd write down the result of the experiment in a book. They had giant libraries of these books. And all new structures were based on empirical knowledge of old structures. You can take that entire Babylon library, and compress it down into a handful of math and material science textbooks.

Higher levels of insight 'contain' lower levels by being able to generate/predict them. In strategy this cashes out to leading the opponent's OODA loop.

Pattern Capture

Specifying a pattern in a model or predictor that captures all the novel behavior of that pattern. A pattern has been captured when it can't violate expectations anymore.

Much of the problem with strategy and tactics is you have to optimize for an outcome, which entails structure and predictability. But you also have to avoid pattern capture, which requires being unpredictable, more complex than your opponent can hold in their head.

If you're too predictable, other agents can exploit you by simulating you in their head until they find a gametree moveset(s) where you lose. This is one of the basic tools used by the rent seeking class to exploit the lower classes: Use your advantage in time and resources to totally capture the patterns of the lower class mind and take them for everything not needed for malthusian survival.

One fundamental agent algorithm then is "Think in ways that avoid pattern capture". How would you have to think to use structure but avoid repeating yourself?
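
A minimal illustration in rock-paper-scissors (a toy frequency exploiter; real pattern capture models far more than move counts):

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def exploit(opponent_history):
    """Capture a predictable opponent: predict their most common
    move so far and play whatever beats it."""
    if not opponent_history:
        return random.choice(list(BEATS))
    predicted, _ = Counter(opponent_history).most_common(1)[0]
    return BEATS[predicted]

# An opponent who favors rock gets captured immediately:
exploit(["rock", "rock", "scissors"])  # 'paper'
# Mixing uniformly at random is the classic way to avoid capture.
```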

Alignment

The extent to which agents share values. Alignment is a key problem in artificial intelligence, but also in movement building. The default as you scale is to lose the original values that made your thing good in the first place. e.g. 3D printing starting with a RepRap that tries to make its own parts and ending with proprietary machines that can't make a RepRap.

Degree of alignment can be thought of as how far agent(s) can achieve their goals before they stop having convergent interests. Corporations often have convergent interests with customers, because the best way to make profit is to sell a quality product. Eventually, however, corporations stop having interests in common with the people that buy from them. An omnipotent corporation wouldn't look after human welfare; it never cared about people in the first place.

Because convergent interests only fall away during extreme outcomes, it can be difficult to test for alignment. This goes for people as much as it does any other agent.

Goodhart's Law

The observation that once something becomes a metric it tends to get gamed into uselessness. One time while running a minetest server I decided to incentivize economic development by putting up a vending machine that let people buy and trade useful goods. I woke up in the morning to find that someone had dismantled the neighbors house to pawn it for money. After that the machine was set to only allow the sale of raw goods (as an exercise, try to imagine how players might game that in unintuitive ways).

This presents a problem for science and modernity because the act of measuring something often has the effect of corrupting it. It's not possible to make good decisions at scale without good numbers, but trying to get good numbers defeats itself.

Cold Reading

The art of gathering information about someone from channels they don't realize they're feeding information into. Traditionally body language, clothing, etc. A side channel attack on unconscious indicators of personal history and mindstate.

Advanced technique applies lie contagion to agent strategies independent of whatever social reality exists around them. "What is this animal doing, why does it do that?" is a very useful frame to analyze human behavior from. If your life is centered around optimizing for X and Y, it's very hard to hide the consequences of that optimization.

A central cold reading tactic is to pretend to know more than you do so that the target reveals information under the impression of existing mutual knowledge. Intellectual charity is often epistemic learned helplessness when reading people would make more sense.

Lime Metric

A secret metric that you use to measure performance. These are useful to avoid Goodhart's Law where looking at a metric causes the metric to stop being useful.

You can find these easily by having a strong model of what a good faith, properly functioning system looks like, and then noticing the subtle ways an impersonator messes up the details. The classic example is detecting wage theft in a bar by looking at the number of limes consumed during a barkeep's shift.

If you want to use a lime metric as a social weapon it's of limited value to reveal your metric; it will just be goodharted eventually. Much more powerful is to reveal a lime metric generator that captures your opponent's pattern. If you do this right the only way the opponent can break out is to do something novel which is not inside the pattern capture. This is one way you can force others to update.

Eschaton Clusters

From my unpublished TTRPG concept Eschaton: A Greek Tragedy For Six Players. Meant to represent distinct points of stable moral alignment/cooperation along two basic axes: Scope of moral concern and Utilitarianism vs. Contractualism.

As far as I know Utilitarianism and Contractualism are the only halfway sane interpretations of ethics, which is interesting because they both suggest very different worldviews. When I originally formulated the clusters they were empirical, and the Utility/Contract split is something of a lossy fit over that. So to distinguish when I mean a cluster is utilitarian I'll say that and when I mean utilitarian as a metaphor I'll use scare quotes (e.g. 'utilitarian'). The original idea I was going for is more like "Thinks of morality as existing in the map vs. existing in the territory, realism vs. relativism", but that doesn't quite fully capture the distinction either.

Still, utilitarianism tends towards realism and contractualism tends towards relativism.

Humanist (Eschaton Cluster)

The second broadest (stable, convergent) moral scope, Humanists are concerned with the welfare of sapient creatures. Their typical intuition is that animals do not have reflexive experience, so their pain "isn't experienced" in the same way that a patient under anesthesia doesn't remember their ordeal. A humanist is the sort of person that wouldn't support genocide even if Nazi Science was true.

Representative Philosopher: Eliezer Yudkowsky

Humanist's Utility/Contract split gives us:

Kalii ('Utility')

In recent years "Social Justice Warrior" has become an insult, but a Kalii is the sort of person who'd wear the label with pride. They always choose to expose bottomless pits of suffering whenever possible. At their best these social reformers are staunch advocates for the little man, at their worst they form Maoist cults which destroy value.

'Extropian' ('Contract')

While neither Max More's nor Eliezer's Extropy precludes orientations towards Mahayana or Kalii, the central Extropian is probably a Contract-Humanist. Extropians are often criticized (e.g. see how people talk about Silicon Valley) for their indifference to the problems of ordinary people. In the Extropian mind however these immediate concerns are mostly distractions. Human welfare is dependent on growth and the accumulation of wealth, period. "A rising tide lifts all boats" thinks the Extropian, and the most important mission is stopping society from descending into a zero-sum malthusian hell.

Jain (Eschaton Cluster)

The broadest possible scope: Jains feel concern toward all living creatures, including the ones that humans generally think of as insignificant. A Jain is the sort of person that asks "Can bugs suffer?"

Representative Philosopher: Brian Tomasik

The Jain Utility/Contract split breaks down into two major factions:

Saivite (Utility)

Life is probably not justifiable in 'rational' utilitarian terms. The Saivite takes this to its logical conclusion and seeks the destruction of earthly life. This is known formally in philosophy as Negative Utilitarianism. It's a fairly standard view in Eastern philosophy which I suspect will become more important as time goes on.

Mahayana (Contract)

Mahayana Buddhist monks take a vow not to reach Nirvana until all sentient beings are saved from suffering. Implicit in this is the idea that salvation is possible. While some variants of Buddhism believe Nirvana to be the cessation of all experience (as atheists believe happens after death), it is also common to believe Nirvana is the cessation of unrequited desire. Someone who is both vegan and wants to stop climate change is implicitly Mahayana, as a Saivite would be cheering humanity on to destroy the world.

Asymmetric Tactic

From Scott Alexander's post on 'Asymmetric Weapons' in discourse. The idea that you should try to use tactics that only work because they're tied to your good qualities (e.g. being more true).

Anti-Example: Lovebombing. Even Islamic Militants like ISIS can use lovebombing. If this is the kind of thing you do to get recruits you don't fundamentally have any advantage over Islamic Militants in recruiting. It's a symmetric tactic that works just as well for ISIS as it does for you. Somewhere there is (hopefully) something that separates you from Islamists in power, an asymmetric tactic or appeal you can use and they can't. If there isn't you should probably ask yourself why you believe you're better than that in the first place.

People, organizations, and movements which rely solely on the same tactics as everyone else don't have a ton of incentive to preserve their 'core values'. The values are window dressing, they could be totally jettisoned to no effect. If your effectiveness is reliant on the thing you're trying to promote it's much more likely you'll be good at it.

Brier Score

A score function that can hold people accountable for their predictions over time. The Brier score gives us an idea of how good something is at prediction, even when we don't know the 'true' underlying probability. e.g. Philip Tetlock asked recruits many questions about the near future, like: "Will NK detonate a nuclear device before the end of this year?"

We don't know the 'true' chances of that; politics aren't a clean math subject like flipping coins. This is sometimes taken to mean that the truth of a prediction is unknowable, but that's not true. There is a reasonable expectation of how likely NK is to detonate a nuke, even if we don't know what it is. The Brier Score uses the law of large numbers to let us avoid having to know. If you say something is unlikely and it happens, you're penalized to the degree you said it was unlikely; vice versa for saying something is likely that doesn't happen. On average, if someone makes many predictions the Brier Score can tell who has insight and who pretends.
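
The computation itself is tiny (a sketch; the example forecasts are made up):

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.
    `forecasts` is a list of (p, outcome): p is the probability you
    gave the event, outcome is 1 if it happened and 0 if it didn't.
    Lower is better; always saying 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

sharp = brier_score([(0.9, 1), (0.1, 0), (0.8, 1)])   # 0.02
hedger = brier_score([(0.5, 1), (0.5, 0), (0.5, 1)])  # 0.25
# Over many questions the scores separate insight from pretense.
```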

Raising The Sanity Waterline

Asymmetric strategy based on distributing models, heuristics, and philosophical razors that undermine the credibility of competing memetic grifters. Ernest Codman overturned centuries of medical tradition by showing that you could measure hospital performance. I suspect that Christ did something like this with his monotheistic philosophy, which is still persuasive against the Pagan ideas it's designed to defeat. Militant Jews like the Zealots insisted Jesus wasn't the Messiah because he didn't overthrow the Roman Empire: the punchline is that he did. By developing an asymmetric evangelism strategy that eroded the underlying ethos of Roman society, Christ posthumously defeated his Roman oppressors. Eliezer Yudkowsky follows a similar strategy to show readers why they should prioritize AI Risk.

Liber Augmen is meant to undermine the appeal of eclectic religious philosophies, squaring the circle between secularism and 'spirituality' by reconstructing religion as a category.

Replication Crisis

Science is founded on repeatable experiment. People forget that without the ability to repeat an experiment science would just be journaling and memoir, a personal experience. The replication crisis is an ongoing problem in science where we realize that things we thought were facts do not happen again if you try to repeat them, they don't replicate. In other words they're not science. This realization started in psychology, but has branched out to areas of study we thought were 'bulletproof' like neurology.

This isn't just an issue for academics, I think of the replication crisis as a fundamental shift in the way we think about standards of evidence, analogous to the one that characterized the Enlightenment. For centuries it was thought that experts and scripture were enough to establish something as fact. When we learned they weren't it didn't just undermine ideas about the natural world, but the epistemology underlying dominant religious ideas too.

Outsider Science

I suspect the most interesting effect of SciHub and LibGen will be an academic counterculture more or less separate from traditional academia. This parallels the appearance of a literate middle class in 12th century Europe. Suddenly the established Catholic Church found itself having to regulate grassroots middle class interest in Christianity, which the church was not set up to incorporate. We can already see the seeds of this in independent academic authors, who investigate subjects with the devotion of an anchorite while enjoying minimal institutional support.

This counterculture is likely to play an important role in the 21st century. Between the replication crisis, "publish or perish", and the increasing 'secularization' of universities as they meddle further and further into national politics, their monopoly on credibility is set to fade. A wider intellectual overton window will be good for updating sclerotic, inbred academic practices and ideas, but at the cost of making consensus more difficult.

RepRap Epistemology

A reasoning system that is capable of inventing (and thus improving) itself. By analogy to the RepRap, a 3D printer designed with the goal of being able to make its own parts. One of the flaws with Eliezer Yudkowsky's Rationality: From AI to Zombies is that it doesn't get across well how the author came by their knowledge. In Liber Augmen by contrast I try to provide as many of the tools I use to develop the ideas in the book as possible. Someone who fully understands the ideas in Liber Augmen should be able to write Liber Augmen if it didn't already exist. Better yet, they should be able to write an improved version if they're aligned and have the life experience necessary to internalize things and learn more than I know.

This is one reason why the book is creative commons licensed: I fully encourage remixes, adaptations, and improvements. But remember that the default when you work on a carefully structured work is to ruin it, unless you understand that structure well.

Evangelism

The logical conclusion of the (Christian) humanist idea that clergy should not get between the pious and god. A form of religious organization where each member is expected to participate in finding new followers and converts.

Necessarily implies members have the tools to teach, and to teach others to teach, the central ideas of their religion. D. James Kennedy says if you're doing it right you should expect to have 'spiritual grandchildren and great grandchildren'. It's important that you don't fall into the trap of teaching others to teach others who in turn can't teach. Doing that means you only get one generation of spread.

This approach has the twin benefits of rapid growth and relatively democratic norms. If everyone is expected to be qualified to teach, you have a stronger, flatter, more distributed organization than one where key parts of the secret sauce are held by secluded clergy.

Has a bad reputation as a strategy due to poor implementations. In general, good evangelists leverage their existing friend network and connections to find new members. The fire and brimstone preacher on the street corner is probably not a very good evangelist (though they do make the ideas more mentally available, so street preachers play a role).

Mission Command

Organizational principle that went a long way towards the outsized effectiveness of German troops during the world wars. The idea of Auftragstaktik ('Mission Command') is simple: Commanders should avoid telling subordinates how to do things. Instead they should say what to do (the goal) and why they're doing it. This ensures that each layer of the command hierarchy is an autonomous entity capable of reacting to local conditions.

In other words, your organization stops being an agent using humans as parts and becomes an agent made of subagents. Taking full advantage of the distributed cognitive and observational resources of your fighting force makes planning more fault resistant and unit cohesion less vulnerable to the loss of officers.

This is related to but not the same as the flat organization used in Evangelism.

Loot System

When people go raiding in an MMORPG they often have many people work together to get an indivisible reward. Loot Systems exist to address the question: Who gets that reward?

You might imagine the solution is to sell the loot and split the money, but sometimes there are illiquid markets in loot and it really does just have to go to somebody.

An analogous problem happens with group research, where many contributors might have complementary skillsets but they can't agree on a research topic. Here the indivisible reward is what the group puts its focus into researching. You can't split the focus, otherwise you aren't doing group research.

You can bridge the gap between partially aligned researchers by employing a loot system to determine who gets to have topic focus for a given project. If the group can stay together long enough for people to trust that they'll get their turn, you get the benefit of alignment without needing everyone to have exactly the same priorities.
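
One real, dead-simple loot system is 'Suicide Kings': loot goes to whoever is highest in a queue, and taking it sends you to the bottom (a sketch; the names are placeholders):

```python
def take_loot(queue, taker):
    """'Suicide Kings' loot system: the taker drops to the bottom
    of the queue, everyone else keeps their relative position.
    Waiting your turn is rewarded, so partial alignment suffices."""
    return [p for p in queue if p != taker] + [taker]

roster = ["alice", "bob", "carol"]
roster = take_loot(roster, "alice")
# roster == ['bob', 'carol', 'alice']; the next contested drop favors bob.
```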

Fractally Wrong

Being wrong such that no single fact(s) can be pointed at to make progress towards convergence to the territory.

Young Earth Creationism is the most blatant example of how this structure feels to interact with. Each layer of evidence that might be argued or discussed is only a thin layer in a mighty onion of falsehood. There is no single fact about dinosaurs or carbon dating or natural theology that can be pointed at to begin unravelling the structure. It's a well designed epistemic prison of wrongness.

People who have uncommon sense tend to feel this structure when they interact with people who don't. It is a disturbingly common kind of mental state to be in. The effort threshold to get someone out of it is pretty extraordinary, so in practice people who get stuck in something like this tend to stay there until outside circumstances force an update.

Topic Steering and Chaining

Topic Steering

Basic conversation technique based on the same principle that lets you play Six Degrees of Kevin Bacon. You start with one topic and then subtly shift the conversation to another by progressively stepping closer and closer in the direction of the thing you want to talk about.

Quick, what's a relationship between cell phones and pens?

Cell Phone -> Note taking apps -> Note taking -> Paper notes -> Pens

It's not that you know the whole path in advance. Rather there are many roads to your destination and you can bring the conversation closer every step by pushing topic focus in a particular direction.
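
This is literally a graph search: topics are nodes, plausible segues are edges, and any shortest path is a steering route (a sketch over a made-up topic graph):

```python
from collections import deque

def steer(graph, start, goal):
    """Breadth-first search for a chain of conversational segues
    from `start` to `goal`. Returns one shortest topic path."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # no route found: pick an intermediate waypoint and retry

# Hypothetical topic graph for the cell-phones-to-pens example:
topics = {
    "cell phone": ["note taking apps", "cameras"],
    "note taking apps": ["note taking"],
    "note taking": ["paper notes"],
    "paper notes": ["pens"],
}
steer(topics, "cell phone", "pens")
# ['cell phone', 'note taking apps', 'note taking', 'paper notes', 'pens']
```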

Topic Chaining

Conversation technique where you discuss a topic until it's exhausted and then steer to a new, more interesting topic. This technique reliably produces those magic conversations where you meet someone and discuss "life, the universe, and everything" all in one go. If done right it should be like having a dozen small conversations that seamlessly transition into each other.

Conversation Chaining

Core piece of my workflow that I use to develop ideas. After coming up with the seeds of an insight or hypothesis, discuss it with someone. Once the conversation winds down you've probably made progress. Immediately start a new conversation with someone else using the improved version of the idea as a starting point. You can often go from hunch to sterling insight in a day doing this. It's also positive sum because you share your accumulated discoveries with others in the course of conversation.


Hacker

A technical expert that uses awareness of social reality as a significant input to their craft.

In the modern era centrally associated with computer security, but this is the exoteric interpretation for clueless newbs. In the esoteric doctrine of computing circles a hacker is someone who has grokked the interaction between social epistemology and the territory at a deep level. They can use this knowledge to do things widely considered to be 'impossible', but which are not actually impossible in a phenomenological, physical sense.

This applies to the meta level as well, where hackers can use their knowledge of social reality to predict which lines of inquiry have not been explored, leading to their centrality in Paul Graham's models of computing startups. Typically associated with autism et al., because these cognitive differences make it easier to see where social reality and Reality interact.

Leaking (Julian Assange)

Asymmetric tactic designed to attack networks of bad actors by exposing their secrets. The core idea behind Assange's strategy is that leaks have a disparate impact on the unjust and tyrannical. Conspiracies have more to hide from the public than open systems, and open systems tend to be more just than systems that hide things. Leaks place bad actors in a Catch-22: they can either adopt better opsec which calcifies roles and increases friction at every stage of their OODA loop, or take the hits and occupy a dangerous position relative to more just competitors.

The key here is that leaks attack the network, rather than individuals in it. Taking down individual conspirators is high effort, like pulling weeds. Leaks let you attack the whole thing at once, even if you don't know who all the members are. Once information is out there, unjust conspirators have a way of taking themselves down with infighting and paranoia (along with a healthy dose of outside pressure).

Open Source Intelligence

Intelligence collection and inference using publicly available information. Even in the era of libraries and newspapers this was a powerful technique in the right hands, because as it turns out secrecy at scale is hard. The Internet has made this absurdly easier than it was. You can routinely break people's models of what you're 'supposed' to know with Google and some creative thinking.

In a consistent universe full of sensors there are no secrets, only lazy researchers. Anon can find you with scant information to go on.

Be sure to brush up on your google-fu before trying this.

33 Bits

The amount of Shannon Information that it takes to uniquely identify a human being. "Leaking bits" is the basic way that people who are anonymous stop being anonymous.

Gwern goes into extended detail on how this works in his analysis of Death Note. The basic lesson is that anything which is dependent on your physical circumstances as a person (scheduling, language you use, what information sources you're exposed to, etc) can be used to narrow down possibility space and identify you.

Unless you take great pains to hide your identity, you should model yourself as uniquely identifiable in person-space with sufficient effort. Pseudonymity raises the cost to doxx and harass you, but doesn't eliminate it.
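The number itself falls out of world population: log2 of roughly eight billion is just under 33, so about 33 independent yes/no facts suffice to single anyone out. A rough sketch (the population figure and attribute fractions are approximations chosen for illustration):

```python
import math

WORLD_POPULATION = 7.8e9  # approximate, circa 2021

# Bits of Shannon information needed to single out one person
bits_needed = math.log2(WORLD_POPULATION)
print(f"{bits_needed:.1f} bits")  # → ~32.9, hence "33 bits"

# Each leaked attribute shrinks the anonymity set by its
# information content: an attribute shared by a fraction p of
# the population is worth log2(1/p) bits.
def bits(fraction):
    return math.log2(1 / fraction)

# Illustrative, made-up fractions:
leaked = bits(0.5)       # gender: ~1 bit
leaked += bits(1 / 365)  # birthday: ~8.5 bits
leaked += bits(0.01)     # city size: ~6.6 bits
print(f"leaked so far: {leaked:.1f} of ~33 bits")
```

This is why innocuous details compound: a handful of half-bit and eight-bit leaks add up to deanonymization faster than intuition suggests.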

Dragonfly Eyes/Perspective

The ability to analyze a scenario from many different perspectives at once. A key rationalist skill that Phil Tetlock identified as crucial to accurate forecasting.

Training yourself to see more than one perspective is time consuming, but worth it. The easiest way is to pretend to be someone else, someone you don't agree with. Make an account on the political forum of some hated outgroup and try to blend in. Attempt an ideological Turing test. Learn how they see the world in as much detail as possible and try to predict things from their worldview. This isn't free charity for your enemies: understanding your opponent in detail is one of the most devastating moves you can make against them.

The flip side is that considering a hypothesis in detail takes you the vast majority of the way to accepting it; there is a symmetric element. Your increased potency against the enemy is offset by their impact on you.


Agent

At its simplest an agent is an intelligence which has sensors and some ability to act on what it senses. A thermostat is a primitive agent, an insect would be a more complex agent. In practical terms most agents are usefully modeled with Boyd's OODA loop - they have the ability to observe, orient, decide, and act.

By far the most complicated part of this process is orientation, making sense of what is being observed. One common 'rationalist' failure mode is to confuse challenges in orientation for challenges in decision making. This is a recurring mistake made by students of e.g. Eliezer Yudkowsky.

Simple reflex based agents are easily captured (i.e. put into an infinite loop). The ability to think past what can be immediately observed (i.e. having an epistemology, being able to react to nonlocal phenomena) is a key element of agency.
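The thermostat example can be sketched in a few lines. The class name and thresholds below are invented, but the observe/orient/decide/act split follows Boyd's terminology; note that even in this trivial agent, orientation (interpreting the reading relative to the goal) is the step the others hinge on:

```python
# A thermostat sketched as a minimal OODA agent.
class Thermostat:
    def __init__(self, target=20.0, tolerance=1.0):
        self.target = target        # desired temperature (Celsius)
        self.tolerance = tolerance  # acceptable deviation
        self.reading = None

    def observe(self, temperature):
        self.reading = temperature

    def orient(self):
        # Make sense of the observation relative to the goal.
        return self.reading - self.target

    def decide(self, error):
        if error < -self.tolerance:
            return "heat"
        if error > self.tolerance:
            return "cool"
        return "idle"

    def act(self, temperature):
        # One pass through the full loop.
        self.observe(temperature)
        return self.decide(self.orient())

t = Thermostat()
print(t.act(17.5))  # → heat
print(t.act(20.4))  # → idle
```

A purely reflexive agent like this is trivially captured: feed it oscillating readings and it will heat and cool forever, which is exactly the failure the paragraph above describes.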

Hunger Cultivation

Core Extropian practice based around creating expectations and dissatisfaction with things as they are. A good Extropian is hungry, the world is not enough for them. Happiness should not be your highest value, happiness is contentment is satisfaction is anti-agentic. Unsatisfiable hunger is the defining feature of a maximizing agent. Not being satisfied is the basic prerequisite to continued growth and improvement.

Many people say they 'want' to read a book. Then I ask them about reading the book a week later and they tell me they haven't done it. I don't let books sit for a week when I have time and want to read them. To figure out the difference I paid close attention to when I saw a book I wanted to read. I'd pick it up and think about what I might learn, daydream a bit. When I started asking people they admitted they didn't expect anything from reading a book; not good or bad, nothing.

Saying you 'want' things by labeling them as wanted is the same category of mistake as thinking you 'believe' things you label true. You want things when you have expectations about them. Imagine playing a slot machine with no expectations: things just happen. You don't anticipate winning or losing, just "oh, I won some money", "oh, I didn't get anything". It'd be boring, right? It's the fact that we have expectations, that we imagine ourselves winning, that makes a slot machine interesting.

After I picked up on this I began playing a game with people. I'd have them tell me the title of a book they 'wanted' to read, then ask about their expectations for reading it. At first most would tell me they had no expectations. "If you have no expectations, why read it?", "I was told to". I'd have them tell me what they hoped to see, then what they were scared to see, and suddenly a lot of them started reading those books.

Dead Man Walking

"A communist is a dead man walking, find me six such men and I will take over the world."

    — Attributed to Lenin

Someone who has stepped completely outside of the expectation that they will live a full and ordinary life. A dead man walking has accepted on some level that their quest is a suicide mission. The kind of resolve demonstrated by Christ's apostles, who preached his gospel in the face of torture and execution, which they met as martyrs. One commentator summed up Machiavelli's philosophy as the lament that a Prince must go to hell if he's to save his country. Everyone is eager to be rewarded for their virtues, but are you still eager to be punished for them?

Suicidal people have to be kept in asylums because no one can control them. The suicidal man is usually not aware of it, but he has the purest potential for rebellion and reform. We are quite fortunate that suicidality and evil tend to go with stupidity. Great woe would befall us if the Nazi punks that shoot up churches realized they could dedicate their life to damaging society at large through a slow burn political career.

Gottfried Leibniz

Well known mathematician who is less well known for his role as the first person to research Friendly Artificial Intelligence.

While Leibniz is best known for his independent invention of calculus, his life's goal was to create a logical calculus by which it would be possible to discover and prove the truth of any philosophical argument by computation. In the service of this goal he invented for himself a primitive computer science, and worked on the creation of practical devices to advance the art of mechanical calculation.

He proposed to label each philosophical concept according to a scheme reminiscent of the prime numbers used by Gödel in his famous incompleteness proof. Once constructed, the device could be used to derive the true human morality according to the Bible and unite Christendom. It is the first time in recorded history that someone attempted to solve the problem of programming a computer to understand human morality.

Study The Author

Korzybski said when you read a book, you should not just read the book but 'study the author'. I've found this advice helpful to contextualize things. A dimension is added to Nietzsche's famous "What doesn't kill me makes me stronger" when you realize he probably thought of the line while doubled over in bowel pain. For this reason I tend to read many biographies (history of a person), not just histories of places, nations, or philosophies. Philosophy summarizes life, it's a compressed record of life experience. If philosophy is statistics, history is the data. Philosophy lies to you the same way statistics do, with personal bias and slanted interpretation. History is one of the only ways to interpret the big picture for yourself.

Part of studying an author is knowing their personal history (biography). But it's also important to study the author through their work. What features are significant to them, who do they reference, why are they writing? What kind of existence is this work a summary of?


MiniModel

A self-contained hyper-short post (I limit myself to 1024 characters, 2048 if I absolutely need it) which is intended to transmit a complete but not necessarily comprehensive model of some phenomenon, skill, etc.

The MiniModel format fell out of three things:

  1. My dissatisfaction with essays and blog posts.
  2. My experimentation with microblogging as a way of getting my ideas out faster and more incrementally.
  3. Maia Pasek's published notes page.

MiniModels can be contrasted with a note or zettel, which is generally a derivative summary of some other document or concept. MiniModels are centrally original, synthesized, or enhanced concepts even if they take inspiration from other sources. It is the model as it exists in the author's head, not someone else's.

How To Name A MiniModel

Naming concepts can be problematic. Part of the problem is that most authors aren't very good at naming. Keep these two things in mind:

Compression - Model names should fit as many of the specific features of the model as possible into themselves. Ideal names are a microcosm of the model. Most people will learn about your idea through context when others use the name, so it had better be pointing at the right idea.

Convergence - Two independent thinkers trying to name the same concept would ideally choose similar names. Try to choose names you could imagine multiple people coming up with. Use established names if they're decent. If your idea is new, consider using a metaphor or important historical reference.

Narratory Citations

The basic idea behind the citation format in Liber Augmen is to express the history of a model as it exists in my head. That means providing the sources I actually used to build the model ("inspiration"). It also means providing related ideas from other authors ("see also", "related"), or the same idea that I discovered after inventing it myself ("convergence"). I also annotate the citations {like this} so that it's clear why something is included. This helps when an inspiration is indirect, or a small piece of a larger work.

Traditional citations aren't adequate for representing where ideas come from. Citations only encode the is-a relationship, "X is an instance of Y". If you take your inspiration from 3 ideas, none of which are straightforwardly in the final product, you can only cite them by bringing them up as "related work". If an inspiration is tangential to the larger work this feels strange and 'improper'. Worse still, a bibliography makes no distinction between its member works, besides order of appearance. Instead of explaining the origin of ideas, you're looking up crap post-hoc on Google Scholar and representing it as the 'source' of the concept. This is intellectually dishonest to a stunning degree, but nobody cares because it's normalized.
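One hypothetical way to encode a narratory citation as data, with field names mirroring the annotations described above. This is an illustration of the structure, not the format Liber Augmen actually uses internally, and all the example values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    title: str
    relation: str         # "inspiration", "see also", or "convergence"
    annotation: str = ""  # the {like this} note explaining inclusion

@dataclass
class MiniModel:
    name: str
    citations: list = field(default_factory=list)

m = MiniModel("Topic Steering")
m.citations.append(Citation(
    title="Six Degrees of Kevin Bacon",
    relation="inspiration",
    annotation="source of the pathfinding metaphor",
))
```

The point of the `relation` field is exactly what a flat bibliography lacks: it distinguishes an idea's genuine ancestors from its neighbors and from independent rediscoveries.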

Tabsplosion Design

Web design based around densely interlinked content that leads the reader to open page after page recursively. Visiting one of these sites reliably ends in 20 browser tabs and a foggy memory of what you originally came for. This is one of the key ingredients in Eliezer Yudkowsky's Sequences, which Liber Augmen emulates in a less aggressive way. Most readers have no idea that Eliezer's blog posts come out to 1800 pages because they're generally consumed piecemeal over many sessions, prioritizing the parts the reader is most interested in.

This design is engaging, but has the downside that it tends to attract an ADD personality type. Society is very good at capturing the time of capable people, so anything which sucks huge chunks of time from the reader is going to filter for the time society didn't want. Liber Augmen tries to strike a middle road by keeping content interlinked but short.