Symbols and meaning
I just came across a recent publication by Nate Soares of the Machine Intelligence Research Institute (MIRI), https://intelligence.org/files/RealisticWorldModels.pdf, concerning a theoretical framework for artificial cognitive agents that build models of the world. The author refers to the Solomonoff induction problem and to Hutter's interaction-machine attempt to solve it, and raises two fair concerns ("4.1: the agent is not separable from the environment; 4.2: goals cannot be defined in terms of observation").
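For context, the idealized setup being criticized is roughly the following agent/environment loop. This is only my own sketch with invented names (`agent.act`, `environment.step`, `environment.reset`), not code from the paper:

```python
# Illustrative sketch (not from the paper): the idealized perception-action
# loop assumed in Hutter-style frameworks. Agent and environment are treated
# as separate objects exchanging discrete "output" (action) and "input"
# (observation, reward) signals at every step.

def interaction_loop(agent, environment, horizon):
    """Run the canonical agent/environment loop for `horizon` steps."""
    history = []
    observation, reward = environment.reset(), 0.0
    for _ in range(horizon):
        action = agent.act(observation, reward, history)   # the agent's "output"
        observation, reward = environment.step(action)     # the agent's "input"
        history.append((action, observation, reward))
    return history
```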
From my perspective, the problem with both of these concerns is that they assume "inputs" and "outputs" are signals entering and leaving an agent positioned in an environment. That is in itself a highly idealized construction. In fact, there are no inputs or outputs independent of the internal states of the agent. The world is saturated with "signals". By selecting which ones to observe, the agent is already introducing a bias and a circularity into its reasoning. In other words, we cannot separate the "object" from the "observer"; they are intrinsically intertwined. Without cognitive bias it is impossible to understand the "outside world".
Secondly, the question of goals is also very interesting, but there has been a lot of confusion around it. AI has come up with very sophisticated agents capable of learning almost any computable objective function, explicit or implicit, through supervised, unsupervised or reinforcement learning.
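To make that familiar case concrete, here is a toy sketch of an agent learning an objective that is handed to it from the outside: a tabular Q-learning loop on a trivial two-state problem. The environment and its reward function are invented purely for illustration:

```python
import random

# Toy illustration (my own, not from the paper): the standard setting where
# the objective is given from *outside*. A two-state chain; taking action 1
# while in state 1 yields reward 1, everything else yields 0.

N_STATES, N_ACTIONS = 2, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def step(state, action):
    """Hand-coded environment: the reward function is fixed externally."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    next_state = action % N_STATES   # the action chooses the next state
    return next_state, reward

q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
state = 0
for _ in range(5000):
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update toward the externally supplied reward signal.
    q[state][action] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][action])
    state = next_state

print(q)  # the agent reliably learns the objective it was handed
```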
The problem is: how does the agent set its goals among an infinite number of possibilities? The difficulty, again, arises from conceptualizing the agent as separate from the environment. The agent creates a subjective representation of the environment, and its goals are simply set by maximizing the "meaning" between its internal representations and what it sporadically observes.
In other words, objectives are self-referential and driven from the "inside", not from outside observations.
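One way to make this concrete, and it is only my own gloss borrowed from the intrinsic-motivation literature rather than anything in Soares' paper, is to read "meaning" as the agreement between the agent's internal predictive model and the signals it happens to sample. The reward is then generated by the agent itself instead of being supplied by the environment:

```python
import random

# Sketch (my own interpretation, not Soares'): a goal generated from the
# *inside*. The "reward" is not supplied by the environment; it is the
# agreement (how well the internal model anticipated the signal) between the
# agent's representation and what it happens to observe.

class InnerDrivenAgent:
    def __init__(self, n_signals):
        # Internal model: estimated probability of seeing each signal.
        self.model = [1.0 / n_signals] * n_signals
        self.lr = 0.05

    def intrinsic_reward(self, observed_signal):
        """Self-generated reward: how strongly the model expected this signal."""
        return self.model[observed_signal]

    def update(self, observed_signal):
        """Move the internal representation toward the observation."""
        for s in range(len(self.model)):
            target = 1.0 if s == observed_signal else 0.0
            self.model[s] += self.lr * (target - self.model[s])

# A world "saturated with signals": the agent only ever samples a few of them.
world = [0, 0, 1, 0, 2, 0, 0, 1]
agent = InnerDrivenAgent(n_signals=3)
total_meaning = 0.0
for _ in range(1000):
    signal = random.choice(world)                      # sporadic, partial observation
    total_meaning += agent.intrinsic_reward(signal)    # reward comes from the agent itself
    agent.update(signal)

print(round(total_meaning, 1))                 # self-generated "meaning" accumulated over time
print([round(p, 2) for p in agent.model])      # internal model mirrors what it attends to
```

In a fuller agent the self-generated reward would also drive which signals get attended to next, which is exactly the circularity between observer and observed described above.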