"If the entire universe is a neural network, then something like natural selection might be happening on all scales from cosmological (> 10⁺¹⁵ m) and biological (10⁺² − 10−⁶ m) all the way to subatomic (< 10−¹⁵ m) scales. The main idea is that some local structures (or architectures) of neural networks are more stable against external perturbations (i.e. interactions with the rest of the network) than other local structures. As a result the more stable structures are more likely to survive and the less stable structures are more likely to be exterminated. There is no reason to expect that this process might stop at a fixed time or might be confined to a fixed scale and so the evolution must continue indefinitely and on all scales. We have already seen that on the smallest scales the learning evolution is likely to produce structures of a very low complexity (i.e. second law of learning) such as one dimensional chains of neurons, but this might just be the beginning. As the learning progresses these chains can chop off loops, form junctions and according to natural selection the more stable structures would survive. If correct, then what we now call atoms and particles might actually be the outcomes of a long evolution starting from some very low complexity structures and what we now call macroscopic observers and biological cells might be the outcome of an even longer evolution. Of course, at present the claim that natural selection may be relevant on all scales is very speculative, but it seems that neural networks do offer an interesting new perspective on the problem of observers."
- Vitaly Vanchurin
The World as a Neural Network
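
The selection mechanism the quote describes, where structures that better withstand perturbations persist while fragile ones are eliminated and replaced, can be illustrated with a toy sketch. The code below is not Vanchurin's model; the stability scores, perturbation rule, and mutation size are all assumptions chosen purely to show the qualitative effect.

```python
# Illustrative toy sketch only (not from Vanchurin's paper): a population of
# hypothetical "structures", each summarized by a made-up stability score in
# (0, 1), is repeatedly perturbed; less stable structures are more likely to
# be eliminated and replaced by slightly mutated copies of survivors.
import random

random.seed(0)

# Higher value = more stable against perturbations (assumed scalar summary).
population = [random.random() for _ in range(100)]

for step in range(2000):
    i = random.randrange(len(population))
    # A perturbation eliminates structure i with probability (1 - stability).
    if random.random() > population[i]:
        # Replace it with a copy of a random survivor, plus a small "mutation".
        parent = random.choice(population)
        population[i] = min(1.0, max(0.0, parent + random.gauss(0, 0.05)))

print(f"mean stability after selection: {sum(population) / len(population):.2f}")
```

Running the sketch, the mean stability drifts upward over time, which is the only point being made: selection against perturbation-fragile structures concentrates the population on the more stable ones, without any scale or stopping time built into the rule.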