08/19: GHz momentum computing simulation #1
nb: attempting a daily posting cadence as weekly clearly doesn't work. adjust quality priors accordingly
Momentum computing is, as far as I can tell, a reversible computing paradigm which circumvents Landauer's limit by embedding the memory-state transitions of some computing device in a physical system which equilibrates more slowly than the time it takes to do an individual bit-swap. The memory-state transitions can then store information in their "instantaneous momenta" and perform bit-swaps with near-zero net work.1
In particular, one can construct toy theoretical energy potentials which implement a bit-swap. Store the bit in a double-well potential, with the left well encoding 0 and the right well encoding 1. Now imagine that the barrier between the wells is momentarily dropped, leaving a single harmonic well: a particle resting in either well oscillates across to the mirror-image position in exactly half a period, at which point the barrier is raised again and the swap is complete.

This theoretical "bit-swap" comes at no work cost because there is no change in potential energy from the start of the half-oscillation to its end: the potential is symmetric, so the final configuration mirrors the initial one at the same energy.
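To make the half-period picture concrete, here's a quick numerical sanity check (my own sketch, not code from the paper): place a particle at rest at x = -1, evolve it in a harmonic well for exactly half an oscillation period, and it arrives at rest at x = +1 with its energy unchanged.

```python
import math

def harmonic_bitswap(x0, v0, omega=1.0, steps=10_000):
    """Leapfrog-integrate a particle in V(x) = 0.5*m*omega^2*x^2
    for exactly half an oscillation period (pi/omega)."""
    dt = (math.pi / omega) / steps
    x, v = x0, v0
    a = -omega**2 * x          # acceleration; the mass cancels out
    for _ in range(steps):
        v += 0.5 * dt * a
        x += dt * v
        a = -omega**2 * x
        v += 0.5 * dt * a
    return x, v

# Bit "0" stored at x = -1 with zero velocity: after half a period the
# particle sits at x = +1 (bit "1"), again at rest -- same potential
# energy, so no net work was done on it.
x_final, v_final = harmonic_bitswap(-1.0, 0.0)
```

Leapfrog is used here because it conserves the oscillator's energy to high accuracy, which is exactly the property the no-work argument relies on.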
gradiometric flux logic cells
The theoretical guarantees above require complete & efficient decoupling of the system from its thermal bath. In practice, you can get similar results by ensuring that the relevant computational timescale is much shorter than the timescale on which energy flows between the system and its bath, so that "from the perspective of the computation" there is effectively no coupling.
[RC22] chooses to implement such a system with gradiometric flux logic cells, a kind of superconducting circuit built from Josephson junctions and designed particularly to withstand global magnetic-flux noise [I do not really understand GFLCs very well; that will be a topic for another day's post].

With suitable assumptions & parametrizations [such will be the subject of yet another day's post], the GFLCs follow "significantly underdamped Langevin dynamics", which can be described with the following equation:
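Reconstructing from memory (so the parametrization below is my own, not necessarily the paper's exact one), the standard underdamped form is

$$ dx = v\,dt, \qquad m\,dv = -\bigl(\lambda v + \partial_x V(x, t)\bigr)\,dt + \sqrt{2 \lambda k_B T}\,dW_t $$

where $\lambda$ is the damping coefficient, $V$ the (time-dependent) computational potential, and $dW_t$ a Wiener increment; "significantly underdamped" means the damping term stays small relative to the inertial one over the computation.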
The details of this are very interesting, still confusing to me, and this is by no means an exhaustive parametrization of the underlying models. However, Fig. 2 below showcases the effect of varying these parameters.
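As a starting point for a replication library, here's a minimal Euler-Maruyama integrator for underdamped Langevin dynamics (my own sketch; the parameter names and defaults are generic, not the paper's):

```python
import math
import random

def langevin_step(x, v, dt, m=1.0, lam=0.01, kT=1.0,
                  grad_V=lambda x: x, rng=random):
    """One Euler-Maruyama step of underdamped Langevin dynamics:
        dx   = v dt
        m dv = -(lam * v + V'(x)) dt + sqrt(2 * lam * kT) dW
    lam is the damping coefficient, kT the bath temperature in energy
    units; grad_V defaults to a harmonic potential V(x) = x^2 / 2."""
    noise = math.sqrt(2.0 * lam * kT * dt) * rng.gauss(0.0, 1.0)
    v_new = v + (-(lam * v + grad_V(x)) * dt + noise) / m
    x_new = x + v * dt
    return x_new, v_new

# Sanity check: in equilibrium, equipartition gives <m v^2> = kT.
# (lam = 0.5 here only so the test equilibrates quickly; momentum
# computing wants lam much smaller than this.)
rng = random.Random(0)
x, v, v2 = 0.0, 0.0, 0.0
n = 200_000
for _ in range(n):
    x, v = langevin_step(x, v, dt=0.01, lam=0.5, rng=rng)
    v2 += v * v
# v2 / n should come out close to kT = 1
```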

further considerations
- I really want to understand the interface between the theoretical dynamics and the physical implementation better. Why is this theory so substrate independent? Why does it matter that our memory state transitions are modeled by CTHMCs instead of CTMCs? Why are we using superconductors?
- How do we actually get efficient circuit modeling of the kind described here? I couldn't readily find a GitHub repository associated with the paper, so I want to write my own library and replicate their results. They find that the efficiency of their circuits is largely dependent on "circuit hyperparameters", and it would be interesting to investigate their structure.
- Benchmarking algorithms that can be implemented both with momentum computing and with typical CMOS/transistor logic, and developing simulations that can accurately predict the efficiency differences. Still not sure how to think about this! Will absolutely be the topic of a later post.
- Fermi estimates of all the physical quantities at play here. What is "one Landauer" at STP? How much does it cost to make a gradiometric flux logic cell? Etc. Etc.
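The first of those Fermi estimates is easy, since "one Landauer" is just k_B T ln 2:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in SI since 2019)

def landauer_limit(T_kelvin):
    """Minimum work to erase one bit at temperature T: k_B * T * ln 2."""
    return K_B * T_kelvin * math.log(2)

# "One Landauer" at STP (273.15 K) and at room temperature (300 K):
e_stp  = landauer_limit(273.15)  # ~2.61e-21 J
e_room = landauer_limit(300.0)   # ~2.87e-21 J, i.e. about 18 meV
```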
All credit goes to the coauthors of the two papers cited in this post.
This is probably wrong and definitely imprecise, but it reflects my current level of understanding.
Setting taken from [RWBC21].
Here we describe the one-dimensional case for intuition, but the paper details the Fredkin gate implementation with this method, which requires three-dimensional potentials to encode the gate's three bits.