I recently read an interesting article about a neural network that uses logic gates in place of artificial neurons.
https://google-research.github.io/self-organising-systems/difflogic-ca/?hn
The paper described its neural network as “sparse”, and this made me want to ask ChatGPT “Why does this paper describe logic-gate networks as sparse?”
I hadn’t realized this really important fact:
In standard “Deep Neural Networks”, each layer is typically *dense*: a matrix of nodes (“neurons”) where every neuron is FULLY CONNECTED to every single output of the previous layer. (“Deep” actually refers to stacking many of these layers, not to the connectivity.) A logic-gate network is sparse by contrast: each gate reads only two of the previous layer’s outputs instead of all of them.
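To make the contrast concrete, here’s a minimal toy sketch (my own illustration, not the paper’s implementation; the names `dense_layer` and `logic_gate_layer`, the fixed random wiring, and the hand-picked gates are all assumptions I made up for the example) counting connections in a dense layer versus a two-input logic-gate layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, W):
    """Dense ("fully connected") layer: every output neuron reads
    every input, so W holds in_dim * out_dim connection weights."""
    return np.tanh(W @ x)

def logic_gate_layer(x, wiring, gates):
    """Sparse logic-gate layer: each gate reads exactly TWO fixed
    inputs (given by its wiring row) and applies one binary op."""
    a, b = x[wiring[:, 0]], x[wiring[:, 1]]
    return np.array([g(ai, bi) for g, ai, bi in zip(gates, a, b)])

in_dim, out_dim = 8, 4
x = rng.integers(0, 2, size=in_dim).astype(float)

# Dense layer: 8 inputs * 4 neurons = 32 connections.
W = rng.standard_normal((out_dim, in_dim))
print("dense connections:", W.size)        # 32

# Logic-gate layer: 4 gates * 2 inputs each = 8 connections.
wiring = rng.integers(0, in_dim, size=(out_dim, 2))
gates = [np.logical_and, np.logical_or,
         np.logical_xor, np.logical_and]
out = logic_gate_layer(x, wiring, gates)
print("gate connections:", wiring.size)    # 8
print("gate outputs:", out.astype(int))
```

Even in this tiny example the gate layer has a quarter of the dense layer’s connections, and the gap only widens as layers grow, since dense connections scale with in_dim × out_dim while gate connections scale with 2 × out_dim.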