I believe minds are strange loops: level-crossing feedback loops, systems that edit how they edit themselves.
This post is about self-improving artificial intelligences, which I believe are strange loops.
The question is: could there be a class of strange loops that tends to increase in complexity?
What environment would these strange loops need to grow?
The answer might be related to why there is so much non-coding DNA: it’s easier to fork a gene and keep the original than to mutate it in place.
Extra copies can then be turned off when they aren’t useful, which creates DNA cruft. DNA doesn’t seem to refactor itself.
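The duplicate-then-modify idea above can be sketched in a toy model. Everything here is hypothetical illustration, not biology: a “gene” is just a tuple, and the names (`mutate`, `duplicate_then_mutate`) are made up for this sketch.

```python
import random

def mutate(gene):
    """Randomly perturb one position of a toy gene."""
    i = random.randrange(len(gene))
    g = list(gene)
    g[i] += random.choice([-1, 1])
    return tuple(g)

def mutate_in_place(genome, idx):
    """Risky strategy: the original gene is overwritten and may be lost."""
    genome[idx] = mutate(genome[idx])

def duplicate_then_mutate(genome, idx):
    """Safer strategy: fork a copy, mutate the copy, keep the original.
    The genome grows each time -- this is the cruft described above."""
    genome.append(mutate(genome[idx]))

genome = [(1, 2, 3)]
duplicate_then_mutate(genome, 0)
assert genome[0] == (1, 2, 3)  # original function preserved
assert len(genome) == 2        # but the genome got bigger
```

The trade-off is visible in the assertions: duplication never destroys a working gene, but every experiment permanently enlarges the genome, and nothing in the model ever shrinks it.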
Does this imply that a strange loop whose complexity grows linearly would require an exponential increase in resources?
Perhaps not exponential, but more than linear: a self-improving AI could refactor itself, but refactoring may take longer and longer as the system grows in complexity.
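One back-of-envelope way to make “more than linear” concrete (my own toy model, not from any source): suppose the $k$-th improvement step must revisit cruft proportional to the complexity already accumulated, so its cost is $c\,k$ for some constant $c$. Then reaching complexity $n$ costs

```latex
\[
  \text{total}(n) \;=\; \sum_{k=1}^{n} c\,k
  \;=\; \frac{c\,n(n+1)}{2}
  \;=\; \Theta(n^2),
\]
```

which is superlinear but still polynomial. Exponential resource growth would need a stronger assumption, such as each step having to rework a constant fraction of everything built so far.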
So maybe, as Ramez Naam believes, artificial intelligence will not result in a quick intelligence explosion.
I wish your ideas were circulating with other strange loops so y’all could get this stuff figured out…