The complexity of human text is finite - just as the complexity of a chess board is finite. We shouldn't really be surprised that a large enough system can fully capture that complexity and successfully mimic human-generated text.
Complexity is, by definition, NOT finite or predictable.
A complex system has a scope beyond human capacity to comprehend.
A group of, say, five ants has a certain behaviour - but one million ants together exhibit totally different, and unpredictable, behaviour.
The Teleological Principle at work.
(I doubt it has been tested, but perhaps a community of 10 billion ants changes mode yet again.)
So a change of scale produces totally different behaviour.
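
As a toy illustration of that scale effect (not a model of real ants or of GPT - the copy-the-majority rule, the noise level, and the group sizes are all invented for this sketch), here is a small Python simulation in which identical individual rules produce qualitatively different group behaviour at different scales:

    import random

    def consensus_flips(n_agents, noise=0.2, steps=5000, seed=42):
        # Each step, one randomly chosen agent adopts the current
        # majority opinion; with probability `noise` it instead picks
        # an opinion at random. We count how often the group-level
        # majority changes - the same micro-rule at every scale.
        rng = random.Random(seed)
        opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
        ones = sum(opinions)
        majority = 1 if 2 * ones > n_agents else 0
        flips = 0
        for _ in range(steps):
            i = rng.randrange(n_agents)
            old = opinions[i]
            if rng.random() < noise:
                new = rng.choice([0, 1])  # noisy individual choice
            else:
                new = majority            # conform to the group
            opinions[i] = new
            ones += new - old
            new_majority = 1 if 2 * ones > n_agents else 0
            if new_majority != majority:
                flips += 1
                majority = new_majority
        return flips

    for n in (5, 50, 1000):
        print(f"{n:4d} agents -> {consensus_flips(n)} consensus flips")

Running this, the smallest group flips its collective 'opinion' over and over, while the larger groups lock into a stable consensus and rarely or never flip - nothing about the individual rule changed, only the scale.
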
There are hints that GPT models are changing behaviour as they get bigger - I have seen a few videos where people have noticed this.
I suspect that GPT models will exhibit 'interesting' behaviour as they grow 10 or 100 times bigger.
It may be comforting to say that
'it will be 20 years before AI makes coders obsolete' or that
'GPT is only a toy' or that
'GPT models will never exhibit sentient, or quasi-sentient, behaviour'.
However, real life may not oblige in the long - or even short - term.
Also, the Precautionary Principle requires us to assume that autonomous AI is very near.
If we persuade ourselves that it can't happen, then we may be victims of a hostile AI before we even accept that such a thing is possible.
Elon Musk (Dec 3, 2022):
'ChatGPT is scary good. We are not far from dangerously strong AI.'