Preface
You are about to read a very strange book.
The seeds of what would become Liber Augmen were planted in early 2018 when I took a college class on comparative religion. As a staunch atheist I'd expected to go in having to nod along with people's accounts of ghosts, demons, divine revelations, and other basic failures of the map-territory distinction.
Instead I found myself listening to other people describe their feelings about the sacred. As I listened I was shocked to discover I'd had these feelings too. But they weren't associated with god or meditation; rather, they were attached to the cosmic scheme I'd learned from thinkers like Hawking and Sagan, and the transhumanist ideas of authors such as Eliezer Yudkowsky and Scott Alexander. It seemed everyone else was able to articulate the relationship between their feelings and the ideas in their cosmology, but I just had feelings and no language to discuss them with. This stirred up a subtle malaise that crystallized when the instructor asked me in an assignment to discuss 'my faith tradition' and I realized I had no idea what to say to her. I stammered my way through that section, writing that 'it' was 'recent' and had started roughly in the '80s with Max More's Extropians. For some reason I also included a brief description of AI Risk which noted that heaven and hell are collective outcomes, and the soul is considered a kind of information.
Afterwards she passed back my writing with the comment that she was a fan of "making your own god" or something similar. I hadn't known what to say, and what I did say wasn't understood: it was humiliating; I felt like an intellectual pauper. Here I was being asked about the most important things there were to know and the best I could manage was a tongue-tied stammer. As Eliezer Yudkowsky might say: Oops and Duh. It was also around this time that I began to be seriously concerned that the people calling themselves 'postrationalists' knew something I didn't, so I found some books on the occult and read them. Fortunately I chose Hall's Secret Teachings Of All Ages and Principe's Secrets Of Alchemy, both written about real western occultism for a mind that is looking to understand. I'd expected to find old magick rituals and obscurantism; instead I was staring into the face of my lost philosophical ancestors. To my astonishment I could trace contemporary transhumanist and extropian ideas back to alchemy, and through alchemy back to antiquity.
I had also reached a place where existential risk weighed heavier and heavier on my mind. Climate change and environmental headlines took on an apocalyptic tone, and it was becoming increasingly clear that nobody intended to do half of what was necessary to avert horrific outcomes. As I sat and considered what it would have to look like for us to grapple with greenhouse gases and nuclear bombs and AI risk, I realized the basic problem was that humanity had never dealt with problems like these before. If we tried to solve them the way we'd solved every other problem in history, by seeing and then acting, we would die for certain. The entire reason these problems disable our ability to act is that they have to be dealt with long before they reach their crisis point. You have to act on a time scale of decades or centuries to fight an invisible opponent long before it arrives, the whole time using vast resources that could be spent on tangible problems here and now. The only comparable enterprises in human history are cathedrals, monuments, and various acts of sacrifice to invisible daemons. That is to say, the only machinery in the human animal that can act on these problems is religious, period. The outside view told me that absent a genuinely religious sentiment towards solving our looming crises, there was no reason to expect us to survive.
There are only a handful of thinkers who I feel have really 'gotten' existential risk and done anything substantial about the overall category. Perhaps most astonishing is a semi-obscure Polish nobleman named Alfred Korzybski, who outlined the basic idea of X-Risk before there were even any nukes to speak of. He anticipated the concept in his 1921 Manhood of Humanity, a book whose essential thesis is that man is a time binder, differentiated from the rest of nature by the ability to retain experiences and transmit them across generations. In Korzybski's view, technological and social progress is an exponential function dependent on already accumulated knowledge. To him WWI was prima facie evidence that the growth rate of technological capabilities had surpassed that of socializing abilities. This would inevitably lead to an increasingly powerful humanity less and less restrained by the social sciences. Eventually our powers would grow to world-threatening heights, paired with an infantile understanding of the best way to use them. Given his thesis, it seemed obvious that the only hope of saving the world would be to find out what 'time binding' is made of, and then use that understanding to improve our ability to bind time in the social sciences.
Korzybski viewed the problem as an inability to learn from history. In marked contrast, his recent spiritual successor Eliezer Yudkowsky sees it as an inability to look into the future. He writes about 'future shock levels' and the importance of orienting yourself to the full possibilities implied by physics. Physics implies that a radically different, much more enjoyable future is possible for earthly life. To Yudkowsky, if your unbiased consideration of human potential would not suggest high future shock, this is a sign that your natural philosophy is too weak. He wrote a very long book explaining how to think like he does. Readers of his book organized under the banner of 'rationality', and proceeded to be eaten by centralizing in the Bay Area. Yudkowsky had hoped that he'd be able to find someone that could play his role better than he could. He described his vision in an optimistic April 2011 essay as:
“Stay on track toward what?” you ask, and my best shot at describing the vision is as follows:
“Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us.”
This did not happen, and it was also in 2018 that I fully internalized this abject failure. It seemed clear to me that it wasn't possible to fix the 'rationalist community', which had selected its membership on an 'elite reject' model that attracted very intelligent screwups like a MENSA chapter. Their mutual brokenness had reached fixation, and it was being a high-functioning person that was ultimately stigmatized and excluded. If I couldn't fix it, then the only option was to pack up what was 'special' about Yudkowsky's rationalists and take it somewhere where people weren't so dysfunctional. That seemed like the highest priority, so I began researching 'rationality' books to try and get some idea of what the essence of the thing was.
All these threads of inquiry ended up merging into one research project. On the religion front I began asking what it would look like to have a religion which only included literally true things in it. I asked what the difference was between a 'mundane' truth like the earth being round and a 'radical' truth like the possibility of Friendly Artificial Intelligence or environmental disaster. What I eventually narrowed it down to was priorities, writing:
If religion is to be based on truth, it must be radical truth. Our notion of the transcendent will not settle for that which is merely common sense. We find no beyond in passive facts like the earth’s spherical nature. Radical truth is a revelation, it’s an aggressive force in the world that implies a total restructuring of priorities. Our onrushing ecological apocalypse is radical truth, the infinite possibility of the cosmos and potential to extract resources from the stars is radical truth, the symbiosis of machines that speak like men and men that think like machines is radical truth. It’s the visions of mystics and prophets and wizards gone mad by their own revelations which can touch that outer rim of our vestigial connection to the dreamtime. Perhaps to truly understand we must go mad with them. How does one explain the evolution of ants, and men that spring from monkeys? These incredible facts go unnoticed in part because they have not been presented with the mania necessary to justify them.
I'd already admitted to myself that I was more or less experiencing what the people in that comparative religion class were experiencing. Further, I was trying to take meaningful action on things decades into the future; having this thought at all meant I could analyze myself for clues as to whether I was 'religious' or not. In the end I concluded that 'religious mission' did in fact more or less characterize the difference between me and your average reader of Yudkowsky. Between these two things I had an existence proof: somewhere in my head, I'd squared the circle and discovered a religion which can take the world as it is. Attaining this state seemed to be a basic prerequisite to doing anything meaningful about existential risk, and likely the most productive way to frame Eliezer Yudkowsky's philosophy.
These are the basic premises I spent the next two and a half years researching, starting in January 2018, in the limited free time I had during college and later programming work. Things came to a head during a trip I took to Paris in 2019, which involved a near-death experience. I caught the flu in an airport and experienced wicked fever dreams about being tortured by demons. Did I actually almost die? It doesn't matter, because it sure felt like I might. Standing dehydrated outside a Franprix grocery store with no idea where I was, I had a realization: if I died right now, nobody would know any of the stuff I'd discovered during my research. My biggest regret would be not writing more publicly, and not telling more people about my ideas.
This book is my attempt to fix that. I'd originally tried writing a series of essays, but found the requirement that I weave so many different ideas together into one narrative was too much. I had also experimented with microblogging, on the theory that if nothing else the shorthand version of my thoughts would be better than nothing if I kicked the bucket. Eventually I abandoned the essay format because it wasn't suitable for the kind of writing I wanted to do. An essay is good if you have 1-3 ideas you're trying to get across in detail. But this was more like trying to transmit a gestalt of 100-150 mental models, all of which needed to be understood to get a full picture of what I am trying to explain. As a result Liber Augmen is presented as a series of 'minimodels', or microblog posts in a particular format. Each post is meant to be a named model of some phenomenon, described in 1024 to 2048 characters, along with appropriate citations and references for where the reader can go to get more information.
In short, Liber Augmen is a description of a religion which I term "Eliezer's Extropy", along with a series of mental models and tools for thinking (an epistemology) about 'belief', 'religion', 'agency', etc. The intent is that after reading it you will be in a better position to understand the strategy employed by a figure like Elon Musk or Dominic Cummings. The best possible outcome would be that it sets the stage for an outreach strategy that summons 1-10,000x the current number of "Eliezer Yudkowsky style" agents into the world. As it stands there are too few for me to imagine humanity veering away from its collision course with certain death.
I wish you the best of luck in your studies.