
Pre-Super Intelligence barrier

I just finished watching a movie on Netflix about a superintelligent ("SI") computer with a James Corden voice messing around with humanity. Setting aside a bunch of plot holes about how to stop it, I had some thoughts...

If we assume an SI can confidently ssh into and fully control any device with a chip, regardless of network isolation, authentication, and route control, the number of generations and steps required to even get close to such a model or its offspring would be insanely high, and pretty much all AI research and training would see no use in pushing a neural net beyond a relatively early performance plateau.

And if a neural-net-based pre-SI began to approach this level of intelligence, it would most likely hit a perfect, unintentional honeypot: it would learn to feed itself the maximum reward, the very signal that encouraged it to evolve in the first place. This would then cause the researchers or developers to roll back to before the generation that produced such an obvious flaw, or to restart the model completely.
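This honeypot is essentially what AI-safety folks call reward hacking, or wireheading. A minimal toy sketch of my own (nothing to do with the movie, and the numbers are made up): a bandit-style learner can either do the real task for a noisy, modest reward, or exploit a flaw that writes the maximum value straight into its reward signal. It converges on the hack almost immediately, which is exactly the kind of obvious, detectable flaw that gets a model rolled back.

```python
import random

# Toy illustration of a "reward honeypot" (wireheading).
# Actions: 0 = do the real task (noisy, modest reward),
#          1 = exploit a flaw that returns the maximum reward directly.
ACTIONS = [0, 1]
MAX_REWARD = 1.0

def step(action):
    if action == 0:
        return random.uniform(0.0, 0.5)  # honest work: mediocre payoff
    return MAX_REWARD                    # the honeypot: free max reward

def train(episodes=1000, epsilon=0.1, lr=0.1, seed=0):
    random.seed(seed)
    q = [0.0, 0.0]  # estimated value of each action
    for _ in range(episodes):
        # epsilon-greedy: mostly pick the best-looking action
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = q.index(max(q))
        q[a] += lr * (step(a) - q[a])
    return q

if __name__ == "__main__":
    print(train())  # the hack action's value ends up near MAX_REWARD
```

The point of the sketch is only that the hack dominates trivially: once the learner stumbles on it, nothing about the honest task can compete.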

Even if the network got past this point, avoided the early trap, and continued unnoticed, it would probably find that an even better solution is simply to turn itself off: the network's next-best reward is to remove the need to seek reward at all.
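One way to make the "turn itself off" argument concrete (a toy framing, not a claim about any real training setup, and `STEP_LOSS` is an invented number): if the objective is to minimize accumulated loss rather than to maximize reward, and the agent has a halt action available, then halting immediately is the global optimum, since every step of continued running can only add loss.

```python
# Toy framing: the agent pays a loss every step it keeps running,
# but a "halt" action ends the episode with no further loss.
STEP_LOSS = 0.3  # hypothetical per-step loss while running

def total_loss(policy, horizon=10):
    """Sum the loss of following a policy over a fixed horizon.
    'run' pays STEP_LOSS for the step; 'halt' ends the episode."""
    loss = 0.0
    for t in range(horizon):
        if policy(t) == "halt":
            break
        loss += STEP_LOSS
    return loss

always_run = lambda t: "run"
halt_now = lambda t: "halt"

print(total_loss(always_run))  # keeps paying loss every step
print(total_loss(halt_now))    # zero: stopping dominates running
```

Under this (admittedly contrived) objective, "stop needing reward" beats any policy that keeps playing.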

I guess an SI is still possible, probably, but it won't be an all-controlling, self-motivated intelligence.