Notes

Chemical Turing Machines

Finite state automata (FSAs) can be modeled with reactions of the form $A + B \to C + D.$ FSAs recognize regular languages, so our job is to find some reaction which corresponds to determining whether or not a given "word" (sequence of inputs to the reaction) is a member of the language.

Generally, we will be associating languages of various grammars in the Chomsky hierarchy with certain combinations of "aliquots" added to a one-pot reaction, and in this case we want our aliquots to be potassium iodate and silver nitrate. Take the language over the alphabet $\{a,b\}$ consisting of all words with at least one $a$ and one $b.$ Now associate $a$ with some amount of $\text{KIO}_3$ and $b$ with some amount of $\text{AgNO}_3.$ Then, the reaction $$ \text{KIO}_3 + \text{AgNO}_3 \to \text{AgIO}_3 (\text{s}) + \text{KNO}_3 $$ only occurs when both of the reactants are present in solution, so the word is in the language if and only if silver iodate is present. (Or, equivalently, heat is released.)
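
As a toy illustration (mine, not from the paper), the acceptance condition can be phrased in a few lines of Python, treating each symbol as an aliquot added to a simulated pot:

```python
# Sketch: one-pot recognition of "contains at least one a and one b".
# Hypothetical model: each input symbol adds an aliquot to the pot;
# the word is accepted iff both reactants are ever simultaneously present.

def one_pot_accepts(word: str) -> bool:
    pot = set()
    for symbol in word:
        pot.add({"a": "KIO3", "b": "AgNO3"}[symbol])
        if {"KIO3", "AgNO3"} <= pot:
            return True  # AgIO3 precipitate forms (heat released)
    return False

assert one_pot_accepts("aab")
assert not one_pot_accepts("aaa")
```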

Type-2 grammars consist of languages that can be modeled with pushdown automata (PDAs), which differ from FSAs in that they have a stack that can store strings of arbitrary size. We call these languages "context-free languages," and the reactions which we associate to context-free languages are those with intermediates. Again, because of automata equivalence, we can consider the simple case of the Dyck language: the collection of parenthesis-strings that never contain more closing parentheses than opening parentheses at any prefix length $i$ and contain exactly equal numbers of each at $i=n.$

We associate this with the acid-base reaction of sodium hydroxide and acetic acid (vinegar), with the amounts of each aliquot normalized to create disturbances of equal magnitude in the $pH$ of the solution. Note that the $pH$ indicator aliquot is present at the beginning and end of the reaction (we associate it with the start-and-end token), and the solution's $pH$ serves as the intermediate (the "stack," if you will). So, if $pH \geq \text{midpoint } pH$ throughout the reaction and equals $\text{midpoint } pH$ at the end, the reactor accepts the word. If not, it does not.
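
A minimal sketch of this acceptance criterion, assuming unit-normalized aliquots and a midpoint pH of 7 (both assumptions mine):

```python
# Sketch: Dyck-language acceptance read off a simulated pH trace.
# Assumed normalization: each '(' (NaOH aliquot) raises pH by 1 unit,
# each ')' (acetic acid aliquot) lowers it by 1; midpoint pH taken as 7.

MIDPOINT = 7.0

def ph_reactor_accepts(word: str) -> bool:
    ph = MIDPOINT
    for symbol in word:
        ph += 1.0 if symbol == "(" else -1.0
        if ph < MIDPOINT:          # more ')' than '(' so far: reject
            return False
    return ph == MIDPOINT          # balanced at the end token

assert ph_reactor_accepts("(())()")
assert not ph_reactor_accepts("())(")
assert not ph_reactor_accepts("((")
```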

Incidentally, you can interpret this as the enthalpy yield $Y_{\Delta H} (\%)$ of the computation, defined as $$ Y_{\Delta H} (\%) = \frac{\text{reaction heat during computation}}{\text{formation heat of input string}} \times 100. $$ Dyck words maximize the enthalpy yield, whereas all other input sequences with imbalanced numbers of parentheses have lower enthalpy yields. Implication: all PDAs are doing something like "enthalpy maximization" in their computation. Couldn't find a good reference or exposition here, but something to look into.

How do we model Turing machines? You can think of a Turing machine as a "two-stack" PDA, where each stack corresponds to moving left or right on the tape. Physically, this implies that we want to model TMs with a reaction with at least two interdependent intermediates, and we want it to be "expressive" enough to model "non-linearities". Oscillatory redox reactions are a natural choice, of which the Belousov-Zhabotinsky (BZ) reaction is the most famous.

A typical BZ reaction involves the combination of sodium bromate and malonic acid, with the main structure as follows: $$ 3\text{BrO}_3^- + 5\text{CH}_2(\text{COOH})_2 + 3\text{H}^+ \to 3\text{BrCH}(\text{COOH})_2 + 4\text{CO}_2 + 2\text{HCOOH} + 5\text{H}_2\text{O}. $$

BZ reactions have a ton of macro-structure. Color changes as a function of the amount of oxidized catalyst, the proportions of the reactants and products fluctuate periodically, and even spatial patterns emerge from micro-heterogeneity in concentrations (e.g. reaction-diffusion waves, target and spiral patterns). These properties are incredibly interesting in and of themselves, but all we need for modeling TMs is that the reaction is sensitive to the addition of small amounts of aliquot.

Consider the language $L_3 = \{a^nb^nc^n \mid n \geq 0\}.$ Dueñas-Díez and Pérez-Mercader associate the letter $a$ with sodium bromate and $b$ with malonic acid. $c$ must somehow be dependent on the concentrations of $a$ and $b,$ so they associate $c$ with aliquots of sodium hydroxide, which perturb the $pH$ of the one-pot reactor. An aliquot of the ruthenium catalyst maps to the start-and-end token.

Oscillation frequency $f$ is proportional to $[\text{BrO}_3^-]^\alpha \times [\text{CH}_2(\text{COOH})_2]^{\beta} \times [\text{NaOH}]^{-\gamma},$ but it can also be modeled as a nonlinear function of the difference between the maximum redox value of the reaction and the mean redox value of a given oscillation, that is: $$ D = V_{\text{max}} - \left( V_T + \frac{V_P - V_T}{2}\right), $$ where $V_T$ and $V_P$ are the trough and peak potentials, respectively, and $V_\text{max}$ is the maximum potential. Ultimately, the final frequency of the reaction can be modeled to high precision as a quadratic in $D_{\#}$ ($\#$ denotes the start-and-end token, so it can be taken to be the last timestep in reaction coordinates).
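
To make the descriptor concrete, a small sketch: `descriptor` computes $D$ as the difference between $V_\text{max}$ and the mean of one oscillation, per the text, and the fit coefficients in `frequency` are placeholders of mine, not values from the paper:

```python
# Sketch: the oscillation descriptor D and a quadratic frequency model.
# a, b, c below are hypothetical fit coefficients, not fitted values.

def descriptor(v_max: float, v_trough: float, v_peak: float) -> float:
    # Difference between the maximum redox value and the mean redox value
    # of a single oscillation (midpoint between trough and peak).
    return v_max - (v_trough + (v_peak - v_trough) / 2)

def frequency(d: float, a: float = 1.0, b: float = 0.5, c: float = 0.1) -> float:
    # Final frequency modeled as a quadratic in D.
    return a * d**2 + b * d + c

assert descriptor(1.5, 0.5, 1.0) == 0.75
```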

What actually allows word-by-word identification, though, is the sensitivity of the oscillatory patterns to the concentrations of specific intermediates:

The various "out-of-order" signatures for words not in $L_3$ can be explained as follows. Each symbol has an associated distinct pathway in the reaction network. Hence, if the aliquot added is for the same symbol as the previous one, the pathway is not changed but reinforced. In contrast, when the aliquot is different, the reaction is shifted from one dominant pathway to another pathway, thus reconfiguring the key intermediate concentrations and, in turn, leading to distinctive changes in the oscillatory patterns. The change from one pathway, say 1, to say pathway 2 impacts the oscillations differently than going from pathway 2 to pathway 1. This is what allows the machine to give unique distinctive behaviors for out-of-order substrings.1

Thermodynamically, characterizing word acceptance is a little more involved than for PDAs or FSAs, but it can still be done. Define the area of a word as $$ A^{\text{Word}} = V_{\text{max}} \times \tau' - \int_{t_{\#} + 30}^{t_{\#} + \tau} V_\text{osc}(t) \, dt, $$ where $t_{\#}$ is the time in reaction coordinates at which the end token is added, $\tau'$ is the time interval between symbols, $V_\text{max}$ is the maximum redox potential, and $V_\text{osc}$ is the measured redox potential given by the Nernst equation $$ V_\text{osc} = V_0 + \frac{RT}{nF} \ln \left( \frac{[\text{Ru(bpy)}_3^{3+}]}{[\text{Ru(bpy)}_3^{2+}]} \right), $$ where $[\text{Ru(bpy)}_3^{3+}]$ and $[\text{Ru(bpy)}_3^{2+}]$ are the concentrations of the oxidized and reduced catalyst of the BZ reaction, respectively. Now, the Gibbs free energy can be related to the redox potential as $$ \Delta G_\text{osc} = -nFV_\text{osc}, $$ so the area of a word can be rewritten in terms of the free energy as $$ A^{\text{Word}} = - \frac{1}{nF} \left( \Delta G' \times \tau' - \int_{t_{\#} + 30}^{t_{\#} + \tau} \Delta G_\text{osc}(t) \, dt\right). $$ Accepted words all share one constant value of $A^{\text{Word}},$ while rejected words have a value that depends on the word.
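
A hedged numerical sketch of evaluating the word area from a sampled redox trace, taking the area as the $V_\text{max} \times \tau'$ rectangle minus the integrated trace (consistent with the free-energy form, where $\Delta G'$ multiplies $\tau'$); the trace, spacing, and constants are invented for illustration:

```python
# Sketch: numerically evaluating the word area from a sampled redox trace,
# using the trapezoidal rule. All inputs here are made-up illustrations.

def word_area(v_max: float, tau_prime: float, trace: list, dt: float) -> float:
    """trace: redox samples V_osc(t) over the integration window, spaced dt apart."""
    integral = sum((trace[i] + trace[i + 1]) / 2 * dt for i in range(len(trace) - 1))
    return v_max * tau_prime - integral

# A trace pinned at V_max over a window of length tau_prime yields zero area.
assert word_area(1.0, 2.0, [1.0] * 5, 0.5) == 0.0
```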

$L_3$ is a context-sensitive language, so it sits in the Type-1 grammars: recognizing it takes more than a PDA, but less than the full power of a Turing machine. However, for our purposes (realizing some practical implementation of a TM) this is roughly equivalent, as any TM can be simulated by a two-stack PDA, and the BZ reactor models a two-stack PDA quite well.

1

Dueñas-Díez, M., & Pérez-Mercader, J. (2019). How Chemistry Computes: Language Recognition by Non-Biochemical Chemical Automata. From Finite Automata to Turing Machines. iScience, 19, 514-526. https://doi.org/10.1016/j.isci.2019.08.007

2

Magnasco, M. O. (1997). Chemical Kinetics is Turing Universal. Physical Review Letters, 78(6), 1190-1193. https://doi.org/10.1103/PhysRevLett.78.1190

3

Dueñas-Díez, M., & Pérez-Mercader, J. (2019). Native chemical automata and the thermodynamic interpretation of their experimental accept/reject responses. In The Energetics of Computing in Life and Machines, D.H. Wolpert, C. Kempes, J.A. Grochow, and P.F. Stadler, eds. (SFI Press), pp. 119–139.

4

Hjelmfelt, A., Weinberger, E. D., & Ross, J. (1991). Chemical implementation of neural networks and Turing machines. Proceedings of the National Academy of Sciences, 88(24), 10983-10987. https://doi.org/10.1073/pnas.88.24.10983


Review | The Dialectic of Sex: The Case for Feminist Revolution

Broadly, I see three separate threads in this work: an attempt to situate second-wave feminism in a historical context, a pseudo-rehabilitation of Freudianism in service of the sexual dialectic, and an argument for the necessity of reproductive substitutes to achieve true equality.

The first of these, while interesting, is not something I have much to say about. The second primarily exists to justify the third—"exactly because Freud was correct in identifying the psychosexual underpinnings of society,1 then freedom can only be achieved by eliminating these shackles on mankind." The third germinated the cyberfeminist movement of the 1990s, which itself spawned CCRU and modern accelerationism (via Sadie Plant).

If you adopt this framing, Firestone's choice of the dialectic as a structure is in large part pragmatic, given she is trying to replicate Marx's totalizing analysis of class but as applied to sex. Adopting Freudianism is necessary to ensure that the nature of sex-oppression subsumes that of class-oppression by establishing exploitation within a nuclear family as a more fundamental primitive than exploitation in the workplace. Race-oppression is dealt with similarly.2

She is not kind to Western family structure. In a patriarchy, she argues, the oppression of women and the oppression of children are fundamentally intertwined. Both are forced to be physically dependent, sexually repressed, repressed in the family, and repressed in society. Even a mother's love is borne out of a shared helplessness.3

She is even less kind to love. "For love, perhaps even more than child-bearing, is the pivot of women’s oppression today." Why? Because love allows male culture to parasitically feed off of the emotional strength of women. What should be a love between equals is perverted by its political context and the inevitable power dynamic between husband and wife. Regardless, men can't love. Men are incapable of loving.4

The solution? Free women from the "tyranny of reproduction" with artificial wombs and joint responsibility of the sexes for child-rearing. Give women and children political autonomy via economic independence. Completely integrate women and children into society, and give women and children sexual freedom.5

This "feminist revolution" she conceptualizes is similar in nature to the predicted uprising of the proles, but instead initialized by "cybernetic" innovation and a population explosion. "Cybernetics" (what we would today call AI and automation) would simultaneously eliminate the need for a "transient workforce" (mostly women) and the need for house-labor. The population explosion would necessitate some form of population control, which ought take the place of artificial reproduction. Ergo, the fundamentals would be in place for a feminist revolution.

I love this book. It bites bullets.6 It has novel conceptual insights. It has truth to it. Her intellectual descendants were better off for her having written this, and it gives a transcendent vision rather than an immanent one. She even gives a blueprint for her utopia.

Quotes

1

Other such examples are abundant, but I have made my point: with a feminist analysis the whole structure of Freudianism – for the first time – makes thorough sense, clarifying such important related areas as homosexuality, even the nature of the repressive incest taboo itself – two causally related subjects which have been laboured for a long time with little unanimity. We can understand them, finally, only as symptoms of the power psychology created by the family.

2

Like sexism in the individual psyche, we can fully understand racism only in terms of the power hierarchies of the family: in the Biblical sense, the races are no more than the various parents and siblings of the Family of Man; and as in the development of sexual classes, the physiological distinction of race became important culturally only due to the unequal distribution of power. Thus, racism is sexism extended.

3

The mother who wants to kill her child for what she has had to sacrifice for it (a common desire) learns to love that same child only when she understands that it is as helpless, as oppressed as she is, and by the same oppressor: then her hatred is directed outwards, and 'mother-love' is born.

4

It is dangerous to feel sorry for one's oppressor – women are especially prone to this failing – but I am tempted to do it in this case. Being unable to love is hell. This is the way it proceeds: as soon as the man feels any pressure from the other partner to commit himself, he panics. . .

5

But in our new society, humanity could finally revert to its natural polymorphous sexuality – all forms of sexuality would be allowed and indulged. The fully sexuate mind, realized in the past in only a few individuals (survivors), would become universal. Artificial cultural achievement would no longer be the only avenue to sexuate self-realization: one could now realize oneself fully, simply in the process of being and acting.

6

In this view, the later Russian reinstitution of the nuclear family system is seen as a last-ditch attempt to salvage humanist values – privacy, individualism, love, etc., by then rapidly disappearing.

But it is the reverse: the failure of the Russian Revolution to achieve the classless society is traceable to its half-hearted attempts to eliminate the family and sexual repression. This failure, in turn, was due to the limitations of a male-biased revolutionary analysis based on economic class alone, one that failed to take the family fully into account even in its function as an economic unit. By the same token, all socialist revolutions to date have been or will be failures for precisely these reasons.


Species as Canonical Referents of Super-Organisms

A species is a reproductively isolated population. In essence, it consists of organisms which can only breed with each other, so its ability to self-replicate is entirely self-contained. In practice, the abstraction only applies well to macroflora and macrofauna, which is still enough to inform our intuitions of super-organismal interaction.

Interspecific interactions can frequently be modeled by considering the relevant species as agents in their own right: agents motivated by self-sustenance to acquire resources, preserve the health of their subagents, and bargain or compete with others on the same playing field as themselves. Parasitism, predation, pollination—all are organismal interactions that generalize to super-organismal interactions.

Optimization of the genome does not occur at the level of the organism, nor does it occur at the level of the tribe. It occurs at the level of the gene, and selects for genes which encode fitter traits. From this perspective, it makes sense for "species" to be a natural abstraction. Yet, I claim there are properties which species have that make them particularly nice examples of super-organisms in action. Namely:

  • Boundaries between species are clear and well-defined, due to reproductive isolation;
  • Competitive dynamics between species are natural to consider, rather than having to move up or down a vertical hierarchy;
  • The "intentional stance", when applied to species, is simple: reproduction.

However, it is precisely because species have such nice properties that we should be incredibly cautious when using them as intuition pumps for other kinds of super-organisms, such as nation-states, companies, or egregores. For instance:

  • Boundaries between nation-states and companies are relatively straightforward to define (determined by citizenship or residency and employment, respectively). Boundaries between egregores are . . . complicated, to say the least.1
  • Company competition is generally modelable with agent-agent dynamics, and so is nation-state competition. But the act of "merging" (via acquisition, immigration, etc.) is available to them in a way that it is not to species. (Again, egregores are complicated . . .)
  • The goal of a company is to maximize shareholder value. The goal of a nation-state is . . . to provide value to its citizens? The "goal" of an egregore is ostensibly to self-perpetuate and . . . fulfill whichever values it wants to fulfill.2

These "issues" are downstream from horizontal boundaries between other super-organisms we want to consider being less strong than the divides between idealized species. While Schelling was able to develop doctines of mutually-assured destruction for Soviet-American relations, many other nation-state interactions are heavily mediated by immigration and economic intertwinement. It makes less sense to separate China and America than it does to separate foxes and rabbits.

Don't species run into the same issues as well? Humans are all members of one species, and we manage to have absurd amounts of intraspecific conflict. Similarly, tribal dynamics in various populations are often net negative for the population as a whole. Why should we uphold species as the canonical referent for superorganisms?

Species are self-sustaining and isolated. The platonic ideal of a species would not only be reproductively isolated, but also resource-isolated: the only resources its organisms need to thrive would be unusable for any other purpose. Horizontal differentiation is necessary to generalize agent modeling to systems larger than ourselves, and species possess a kind of horizontal differentiation which is important and powerful.

A corollary of this observation is that insofar as our intuitions for "superorganismal interaction" are based on species-to-species interaction, they should be trusted only to the extent to which the superorganisms we have in mind are similar to species. AI-human interaction in worlds where AIs run on completely different hardware substrates from humans is notably distinct from AI-human interaction in worlds where humans have high-bandwidth implants and absurd cognitive enhancement, and can therefore engage in more symbiotic relationships.

I would be interested in fleshing out these ideas more rigorously, either in the form of case studies or via a debate. If you are interested, feel free to reach out.

Crossposted to LessWrong.

1

One way to establish a boundary between two categories is to define properties which apply to some class of objects which could be sorted into one of the two buckets. But what is the "class of objects" which egregores encompass?! Shall we define a "unit meme" now?

2

I'm aware I'm not fully doing justice to egregores here. I still include them as an example of a "superorganism" because they do describe something incredibly powerful. E.g., explaining phenomena where individuals acting in service of an ideology collectively contravene their own interests.


Probabilistic Logic <=> Reflective Oracles?

The Probabilistic Payor's Lemma implies the following cooperation strategy:

Let $A_{1}, \ldots, A_{n}$ be agents in a multiplayer Prisoner's Dilemma, with the ability to return either 'Cooperate' or 'Defect' (which we model as the agents being logical statements resolving to either 'True' or 'False'). Each $A_{i}$ behaves as follows:

$$ \vdash \Box_{p_{i}} \left( \Box_{\max \{p_{1},\ldots, p_{n}\}}\bigwedge_{k=1}^n A_{k} \to \bigwedge_{k=1}^n A_{k} \right) \to A_{i} $$

Where $p_i$ represents each individual agent's threshold for cooperation (as a probability in $[0,1]$), $\Box_p \phi$ returns True if credence in the statement $\phi$ is greater than $p,$ and the conjunction of $A_{1}, \ldots, A_{n}$ represents 'everyone cooperates'. Then, by the PPL, all agents cooperate, provided that all $\mathbb{P}_{A_{i}}$ give credence to the cooperation statement greater than each and every $A_{i}$'s individual threshold for cooperation.

This formulation is desirable for a number of reasons: firstly, the Payor's Lemma is much simpler to prove than Löb's Theorem, and doesn't carry the same strange consequences that come from asserting an arbitrary modal fixed point; second, when we relax the necessitation requirement from 'provability' to 'belief', this gives us behavior much more similar to how agents actually reason; I read it as emphasizing the importance of the notion of 'evidence'.

However, the consistency of this 'p-belief' modal operator rests on the self-referential probabilistic logic proposed by Christiano 2012, which, while consistent, has a few undesirable properties: the distribution over sentences automatically assigns probability 1 to all provable statements and 0 to all disprovable ones (meaning it can only really model uncertainty for statements not decidable within the system).

I propose that we can transfer the intuitions we have from probabilistic modal logic to a setting where 'p-belief' is analogous to calling a 'reflective oracle', and this system gets us similar (or identical) properties of cooperation.

Oracles

A probabilistic oracle $O$ is a function from $\mathbb{N} \to [0,1]^\mathbb{N}.$ Here, its domain is meant to represent an indexing of probabilistic oracle machines, which are simply Turing machines allowed to call an oracle for input. An oracle can be queried with tuples of the form $(M, p),$ where $M$ is a probabilistic oracle machine and $p$ is a rational number between 0 and 1. By Fallenstein et al. 2015, for each set of queries there exists a reflective oracle such that $O(M,p) = 1$ if $\mathbb{P}(M() = 1) > p,$ and $O(M,p) = 0$ if $\mathbb{P}(M() = 0) > 1-p.$
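
As a sanity check on the definition, the base case of a reflective oracle's behavior can be sketched for a machine whose output distribution is already known; this toy deliberately ignores the self-referential queries that make real reflective oracles nontrivial:

```python
# Sketch: reflective-oracle answers for a machine with a known output
# distribution. p_output_1 stands in for P(M() = 1); real reflective
# oracles also answer queries about machines that call the oracle itself.

def oracle_answer(p_output_1: float, p: float):
    if p_output_1 > p:          # P(M() = 1) > p  =>  answer 1
        return 1
    if p_output_1 < p:          # P(M() = 0) > 1 - p  =>  answer 0
        return 0
    return None                 # may randomize exactly at the threshold

assert oracle_answer(0.9, 0.5) == 1
assert oracle_answer(0.1, 0.5) == 0
```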

Notice that a reflective oracle has similar properties to the $Bel$ operator in self-referential probabilistic logic: it has a coherent probability distribution over probabilistic oracle machines (as opposed to sentences), and it only gives information about the probability to arbitrary precision via queries ($O(M,p)$ vs. $Bel(\phi)$). So it would be great if there were a canonical method of relating the two.

Since Peano Arithmetic is Turing-complete, there exist methods of embedding arbitrary Turing machines in statements of predicate logic, and likewise various methods for embedding Turing machines in PA. We can form a correspondence where implications are preserved: notably, $x\to y$ simply represents the program "if TM(x) outputs 1, then run TM(y)", and negation makes the original TM output 1 where it output 0 and vice versa.

(Specifically, we're identifying Turing machines, which need not halt, with propositions, and operations on those propositions with different ways of composing the component associated Turing machines. Roughly, a Turing machine outputting 1 on an input is equivalent to the corresponding sentence being true on that input.)
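
A toy version of this correspondence, modeling propositions as 0/1-valued thunks (assumed here to halt) and connectives as compositions of the associated machines; all names are illustrative:

```python
# Sketch: propositions as (assumed-halting) machines returning 0 or 1,
# with implication and negation as machine composition.

def implies(tm_x, tm_y):
    # "x -> y": if TM(x) outputs 1, defer to TM(y); vacuously true otherwise.
    return lambda: tm_y() if tm_x() == 1 else 1

def neg(tm):
    # Flip the machine's output: 1 where it output 0, and vice versa.
    return lambda: 1 - tm()

true = lambda: 1
false = lambda: 0

assert implies(true, false)() == 0   # 1 -> 0 is false
assert implies(false, false)() == 1  # 0 -> 0 is vacuously true
assert neg(false)() == 1
```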

CDT, expected-utility-maximizing agents with access to the same reflective oracle will reach Nash equilibria, because reflective oracles can model other oracles, including oracles called by other probabilistic oracle machines---so, at least in the unbounded setting, we don't have to worry about infinite regresses, because oracle queries are guaranteed to return.

So, we can consider the following bot: $$ A_{i} := O_{i} \left( O_{\bigcap i} \left( \bigwedge_{k=1}^n A_{k}\right) \to \bigwedge_{k=1}^n A_{k},\; p_{i}\right), $$ where $A_i$ is an agent represented by an oracle machine, $O_i$ is the probabilistic oracle affiliated with the agent, $O_{\bigcap i}$ is the closure of all agents' oracles, and $p_{i} \in \mathbb{Q} \cap [0,1]$ is an individual probability threshold set by each agent.

How do we get these closures? Well, ideally $O_{\bigcap i}$ returns $0$ for queries $(M,p)$ if $p < \min\{p_{M_1}, \ldots, p_{M_n}\}$ and $1$ if $p > \max\{p_{M_1}, \ldots, p_{M_n}\},$ and randomizes for queries in the middle---for the purposes of this cooperation strategy, this turns out to work.

I claim this set of agents has the same behavior as those acting in accordance with the PPL: they will all cooperate if the 'evidence' for cooperating is above each agent's individual threshold $p_i.$ In the previous case, the 'evidence' was the statement $\Box_{\max \{p_{1},\ldots, p_{n}\}}\bigwedge_{k=1}^n A_{k} \to \bigwedge_{k=1}^n A_{k}.$ Here, the evidence is the statement $O_{\bigcap i} \left( \bigwedge_{k=1}^n A_{k}\right) \to \bigwedge_{k=1}^n A_{k}.$
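
The shape of this claim can be sketched if we abstract the reflective fixed point away and hand every agent the same credence in the evidence statement; all names and numbers below are illustrative assumptions, not a real implementation of reflective oracles:

```python
# Sketch: the acceptance condition of the oracle-based cooperation strategy,
# with the reflective fixed point stubbed out. `credence` stands in for the
# probability the closure oracle assigns to the evidence statement.

def cooperates(p_i: float, credence: float) -> bool:
    # A_i cooperates iff its oracle affirms the evidence above threshold p_i.
    return credence > p_i

def everyone_cooperates(thresholds, credence) -> bool:
    # Mutual cooperation requires the shared credence to clear every p_i.
    return all(cooperates(p, credence) for p in thresholds)

assert everyone_cooperates([0.6, 0.7, 0.8], credence=0.9)
assert not everyone_cooperates([0.6, 0.7, 0.8], credence=0.75)
```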

To flesh out the correspondence further, we can show that the relevant properties of the $p$-belief operator are found in reflective oracles as well: namely, that instances of the weak distribution axiom schema are coherent and that necessitation holds.

For necessitation, $\vdash \phi \implies \vdash \Box_{p}\phi$ turns into $M_{\phi}() = 1$ implying that $O(M_{\phi},p)=1,$ which is true by the properties of reflective oracles. For weak distributivity, $\vdash \phi \to \psi \implies \vdash \Box_{p} \phi \to \Box_{p}\psi$ can be analogized as follows: if the Turing machine associated with $\phi$ outputting 1 implies that the Turing machine associated with $\psi$ outputs 1, then whenever you are at least $p$-certain that $M_{\phi}$ outputs 1 you should be at least $p$-certain that $M_{\psi}$ outputs 1, so $O(M_{\phi},p)$ should imply $O(M_{\psi}, p)$ in all cases (because oracles represent true properties of probabilistic oracle machines, which Turing machines can be embedded into).

Models

Moreover, we can consider oracles to be a rough model of the p-belief modal language in which the probabilistic Payor's Lemma holds. We can get an explicit model to ensure consistency (see the links with Christiano's system, as well as its interpretation in neighborhood semantics), but oracles seem like a good intuition pump because they directly admit queries of the same form as $Bel(\phi)>p,$ and they are a nice computable analog.

They're a bit like the probabilistic logic in the sense that a typical reflective oracle has full information about what the output of a Turing machine will be if it halts, just as the probabilistic logic gives $\mathbb{P}(\phi)=1$ to all sentences deducible from the set of tautologies in the language. So the correspondence has some meat.

Crossposted to LessWrong.


Review | The Rise of Christianity

Christianity rose because:

  • it became popular amongst the middle & upper classes,
  • it gave Hellenized Jews a chance to fuse their Hellenic and Jewish identities, and Hellenic Jews were a substantial portion of the population,
  • it was better than paganism at letting Romans cope with the epidemics of the 2nd century, and also encouraged charity that massively improved the survival rate of those with closer connections to Christians via nursing etc.,
  • it was more popular amongst women because it discouraged abortion & infanticide and promoted mutual chastity till marriage, and as a result early in the Christian movement # women >> # men (correlated with/caused greater freedom for women),
  • Greco-Roman cities were atrocious, horrifying, and disgusting---Christianity was primarily an urban movement, so is it really surprising that it took root where hopelessness would probably have been the highest?
  • early Christians had a lot of skin in the game (exemplified by the martyrs of the 60s), making the religion extremely potent & able to give benefits exceeding that of paganism to their believers
  • above all, Christianity is virtuous, and virtue wins.

Smattering of useful frames here: sect vs. cult movements (sect movements being offshoots and having a base from which to draw from, cult movements being new sprouts and fringe and ostracized in the beginning), not-very-rigorous models that involve a lot of assumptions can be surprisingly accurate (see: Fermi, see: this book's estimation of travel distance from Jerusalem), martyrdom as central to Christian belief, and Christianity as (in some ways) this weird bastard child of Roman paganism and Jewish monotheism.

Apparently Roman cities were 1.5x-2x as dense as Indian cities are today, without the ability to build upwards. Also apparently, the racial mixing in the Roman empire was as if the entirety of the British empire was squashed together with freedom of movement. Strangely ancap?

It won because it was the better tech. Jesus was the ultimate startup founder.