Convolutional Differentiable Logic Gate Networks
26 points | 7 days ago | 4 comments | arxiv.org
westurner | 4 days ago
The Solovay-Kitaev algorithm for quantum logic circuit construction, in the context of "Artificial Intelligence for Quantum Computing" (2024): https://news.ycombinator.com/item?id=42155909#42157508

From https://news.ycombinator.com/item?id=37379123 :

Quantum logic gate > Universal quantum gates: https://en.wikipedia.org/wiki/Quantum_logic_gate#Universal_q... :

> Some universal quantum gate sets include:

> - The rotation operators Rx(θ), Ry(θ), Rz(θ), the phase shift gate P(φ) and CNOT are commonly used to form a universal quantum gate set.

> - The Clifford set {CNOT, H, S} + T gate. The Clifford set alone is not a universal quantum gate set, as it can be efficiently simulated classically according to the Gottesman–Knill theorem.

> - The Toffoli gate + Hadamard gate; i.e. {CCNOT (a.k.a. CCX or TOFF), H}

> - [The Deutsch Gate]
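
To make the Clifford+T entry in the list above concrete, here is a small sketch (my own, not from the linked page; it assumes Qiskit is installed) that checks the textbook Clifford+T decomposition of the Toffoli gate, i.e. that H, CNOT, T, and T-dagger alone reproduce CCX:

    # Sketch: verify that the standard Clifford+T circuit for the Toffoli
    # gate (H, CNOT, T, Tdg only) equals CCX, illustrating why adding the
    # T gate to the Clifford set yields a universal gate set.
    from qiskit import QuantumCircuit
    from qiskit.quantum_info import Operator

    decomp = QuantumCircuit(3)
    decomp.h(2)
    decomp.cx(1, 2); decomp.tdg(2)
    decomp.cx(0, 2); decomp.t(2)
    decomp.cx(1, 2); decomp.tdg(2)
    decomp.cx(0, 2); decomp.t(1); decomp.t(2)
    decomp.h(2)
    decomp.cx(0, 1); decomp.t(0); decomp.tdg(1)
    decomp.cx(0, 1)

    toffoli = QuantumCircuit(3)
    toffoli.ccx(0, 1, 2)

    # True: the two unitaries agree (up to global phase).
    print(Operator(decomp).equiv(Operator(toffoli)))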

westurner | 4 days ago
"Convolutional Differentiable Logic Gate Networks" (2024) https://arxiv.org/abs/2411.04732 :

> With the increasing inference cost of machine learning models, there is a growing interest in models with fast and efficient inference. Recently, an approach for learning logic gate networks directly via a differentiable relaxation was proposed. Logic gate networks are faster than conventional neural network approaches because their inference only requires logic gate operators such as NAND, OR, and XOR, which are the underlying building blocks of current hardware and can be efficiently executed. We build on this idea, extending it by deep logic gate tree convolutions, logical OR pooling, and residual initializations. This allows scaling logic gate networks up by over one order of magnitude and utilizing the paradigm of convolution. On CIFAR-10, we achieve an accuracy of 86.29% using only 61 million logic gates, which improves over the SOTA while being 29x smaller.
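
For intuition about the "differentiable relaxation" mentioned in the abstract, here is a minimal sketch (my own; the gate subset, shapes, and toy objective are illustrative assumptions, not the authors' code): each neuron holds a learnable softmax over candidate two-input gates, and each gate is replaced by its real-valued form on inputs in [0, 1], so gradients can flow through the choice of gate.

    # Minimal sketch of a differentiable logic gate "neuron".
    import torch
    import torch.nn as nn

    # Real-valued relaxations of a few of the 16 possible two-input gates.
    GATES = [
        lambda a, b: a * b,              # AND
        lambda a, b: a + b - a * b,      # OR
        lambda a, b: a + b - 2 * a * b,  # XOR
        lambda a, b: 1 - a * b,          # NAND
    ]

    class DiffLogicGate(nn.Module):
        def __init__(self):
            super().__init__()
            # One logit per candidate gate; softmax gives a gate distribution.
            self.logits = nn.Parameter(torch.zeros(len(GATES)))

        def forward(self, a, b):
            probs = torch.softmax(self.logits, dim=0)
            outs = torch.stack([g(a, b) for g in GATES], dim=0)
            # Expected gate output under the learned distribution.
            return (probs.unsqueeze(-1) * outs).sum(dim=0)

    gate = DiffLogicGate()
    a, b = torch.rand(8), torch.rand(8)          # relaxed inputs in [0, 1]
    loss = ((gate(a, b) - a * b) ** 2).mean()    # toy target: behave like AND
    loss.backward()                              # gradients reach the gate-choice logits

After training, each neuron is discretized to its most probable gate, and inference needs only that single hard gate per neuron.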

mikewarot | 6 days ago
The approach of building an algebraic function over the real numbers that can then be differentiated for training seems brilliant to me. The end result of this work is a tree of logic gates, which could be pushed into an FPGA for really fast and efficient execution.

I look forward to digging into their results and attempting to parse them into something that works with a bit-level systolic array.
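
To illustrate why the hardened networks map so well onto bit-level hardware, here is a toy sketch (my own assumptions, not taken from the paper): once each learned neuron collapses to a single discrete gate, inference is pure bitwise logic, and packing one sample per bit evaluates 64 inputs per machine word.

    # Toy bit-parallel evaluation of a small, already-hardened gate tree.
    import random

    MASK = (1 << 64) - 1  # keep Python ints to 64 bits

    def nand(x, y): return ~(x & y) & MASK
    def xor(x, y):  return x ^ y
    def or_(x, y):  return x | y

    def tiny_gate_tree(i0, i1, i2, i3):
        # Two levels of a fixed (hypothetical, "trained") gate tree.
        a = nand(i0, i1)
        b = xor(i2, i3)
        return or_(a, b)

    # Each integer carries 64 independent samples, one per bit.
    inputs = [random.getrandbits(64) for _ in range(4)]
    out = tiny_gate_tree(*inputs)  # 64 predictions from a handful of word ops

The same structure translates directly to LUTs or wired gates on an FPGA, which is where the paper's inference-cost advantage comes from.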

meltyness | 6 days ago
Differentiable relaxation makes it sound like Rowhammer backpropagation, but I'm fairly certain that's not what they're talking about.