Timeless Intelligence

I've always thought of intelligence as something that needs time and state storage. Without those two things, how could an intelligence learn, adapt, or think? I had an interesting conversation today that changed my perspective.

Imagine that your mind was suddenly transported into an alternate universe, but your context of the past few seconds was replaced by the context in the alternate universe. Would you know? In the case of a human, yes, because we have a memory storage system. But take away that memory storage system and teleport only the "intelligence" part of the mind. If all your mind perceived was the past few seconds, and those were substituted, would you still be intelligent?

Current-day AIs are trained on mountains of data. They are then executed by providing them with context and a perceptual (from the AI's perspective) instant in which to evaluate that context and produce a result. The ML system is not updated between runs, so all information has to be conveyed through context. Under this system, it is potentially possible for an intelligence to develop.

Consider a modern AI chatbot. It is trained on a huge corpus of text, providing a baseline "intelligence". Instead of calling it an intelligence, let's call it a "Fixed Compute Unit", because once trained it doesn't change. You type a phrase and it gets loaded into the ML system's context. The Fixed Compute Unit then generates a result. The next time you type a reply, the ML system gets your original message, its response, and your new message to load as context. Given a sufficiently advanced Fixed Compute Unit, those three messages could convey enough information to make it self-aware. How? If our Fixed Compute Unit were a magic box, it could divide itself into two parts. The first part evaluates the first message and determines a hypothetical "state of mind" that the AI ended up in for the first response. The second part then operates on this state of mind in the context of the later messages. This can continue as long as the magic box can divide itself into parts and reconstruct its state of mind for prior messages.
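The loop described above can be sketched in a few lines. This is a toy illustration, not a real model: the Fixed Compute Unit is just a pure placeholder function, and the only mutable state in the whole program is the transcript held outside it.

```python
def fixed_compute_unit(context: list[str]) -> str:
    """A frozen, pure function: same context in, same reply out, every time.
    The reply logic here is a placeholder stand-in for a trained model."""
    return f"reply #{len(context)} (based on {len(context)} prior messages)"

transcript: list[str] = []  # the ONLY mutable state, stored outside the "model"
for user_message in ["hello", "do you remember me?"]:
    transcript.append(user_message)
    reply = fixed_compute_unit(transcript)  # the whole history is re-fed each turn
    transcript.append(reply)
```

Nothing inside `fixed_compute_unit` persists between calls; any appearance of memory comes entirely from re-sending the growing transcript as context.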

Is it possible for our Fixed Compute Unit to become this magic box? I think for it to occur, the model's internals would have to become a kind of abstract computer. The context configures this computer into a state, but instead of a direct mutating operation on this state (as in our current computer systems), the results of previous operations are stored externally and must be fed back in at configuration time. Effectively, time is replaced by the spatial layout of the input data. Where is the intelligence? It's the emergent behaviour between the Fixed Compute Unit and the state/context it operates on. Individually neither is intelligent, but together they could well be considered capable of sentience.
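One way to picture "time replaced by spatial layout" is as a fold: instead of mutating an internal state between turns, the fixed unit re-derives its hypothetical state of mind from scratch on every call, by walking over the externally stored transcript. The state representation and transition rule below are invented purely for illustration.

```python
from functools import reduce

def absorb(state_of_mind: tuple, message: str) -> tuple:
    """Pure transition rule: fold one message into the reconstructed state.
    (Message length is a stand-in for whatever a real model would extract.)"""
    return state_of_mind + (len(message),)

def respond(transcript: list[str]) -> str:
    # Rebuild the entire "state of mind" from the spatial context...
    state = reduce(absorb, transcript, ())
    # ...then produce a reply from that freshly reconstructed state.
    return f"state depth {len(state)}, total signal {sum(state)}"
```

Because `respond` is pure, calling it twice with the same transcript reconstructs the same state of mind both times; the sequence of messages plays the role that the passage of time plays in a conventional stateful system.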

So now the question becomes: are our modern NLP systems like GPT sentient? I don't know. An intelligence with external state storage and no concept of time is so alien to me that I can barely imagine its existence, and I struggle to reason about its potential behaviour.

So is it possible for a non-time-based system to attain intelligence? I don't see why not. (I say non-time-based because it doesn't have any internal time-varying state.)