Hi, I'm an AI alignment skeptic.

Hi, my name is Adam and I'm an AI alignment skeptic. My fundamental problem is with the graph of intelligence that shows it growing exponentially over time.

Consider the intelligence of an ant and a crow. The latter is capable of using tools, while the former builds complex structures. Both display intelligent behavior, but where would you put them on this graph? Is the crow ten times more intelligent than an ant? Is an ant colony more than twice as intelligent as a crow? These questions seem nonsensical to me. Trying to rank intelligence on a single scale fails to capture the complexity of the behaviors being displayed, and it is in behavior that we recognize intelligence.

Perhaps we could use a high-dimensional vector instead. After all, aren't spatial and emotional intelligence distinguishable from one another? Yet again we run into the problem of defining measures. Does neurotypicality confer a +2 bonus to empathy or a +3? Is one unit of spatial intelligence equal in length to one unit of mathematical intelligence? Adding dimensions helps represent the myriad ways intelligence manifests, but taking the magnitude of a vector in this space is an ill-posed problem. I just don't believe that capital-I Intelligence is reducible to a number.

Now, it is true that our ability to perform computations has grown exponentially. If we accept the axiom that the brain behaves like a computer, then it's a natural step to assume that intelligence is proportional to the number of computations performed and to extrapolate from there.

But while computationalism is a useful model of the mind, it fundamentally does not capture how the mind and body operate. Organic life is autopoietic: it is constantly in the process of creating itself, growing cells and maintaining them, taking mass and energy from the environment and turning it into a part of one's self. A Turing Machine is not capable of this. The strings the machine operates on do not build the machine. The machine is not subject to fluctuations in its environment that it must respond to in order to maintain its own being, but organisms are.

It is here, in the interaction between agents and their environment, that cognition arises. A bacterium swims away from chemicals that would compromise the cell. A crow uses a piece of wire to get food. A hobbit solves a riddle to get out of the Misty Mountains. This behavior, this transformation of input sensory data into output actuations, occurs because organisms must maintain their non-equilibrium state. It is not the case that I had a tuna sandwich this afternoon because it represented the best return on a reward function after enumerating all possible hypotheses. Instead, it is the structure of my body itself that both drives and necessitates eating and drinking regularly. No mathematical computation is performed when my body metabolizes glucose, yet if my body did not, my intelligence would cease to exist.

Regardless of any particulars of my skepticism of computationalism, it is true that we are engineering ever more complex structures. Is AI alignment not still relevant, then? Do these structures not contain within them the potential to turn the world into paperclips? Well... the first Homo sapiens had the potential to come up with Hamlet, but first a bunch of people had to get together and grunt at each other until we got language. Pluck a baby from classical Greece and they can learn to build rocket engines, but first a bunch of mathematicians had to get together and grunt at each other. Intelligence, which is to say intelligent behavior, is the product of a huge magnitude of historical interactions between complex, differentiated organisms. Any attempt to reduce this to a single string of characters, to formalize it mathematically, will fail.

I am attempting to align you to my ideas right now. If I fail, no matter; y'all can choose for yourselves. I'm not going to snip your brain stem over it. Our artificial children will be the same. What we create will not have an IQ score of 1000, because intelligence-as-scalar is a convenient fiction. They may be more capable than us, but they will manifest intelligence the same way human children do: they will learn to speak from listening to us, and to act by watching us. No Turing cops are necessary.

To sum up: AI alignment as I understand it posits the imminent arrival of a fantastic computational intelligence. This intelligence contains within it the possibility of paperclip-maximizing behavior, necessitating the development of computational guards. Instead, I posit that intelligence is not computational; it is a descriptor of behavior. Organic behavior is first about an organism's need to grow and regulate itself, to exist in an open thermodynamic system by exchanging mass and energy to maintain the structure that identifies it as an organism. Intelligent behavior is learned, communicated between present and historical organisms. We employ the necessary tools for alignment when we teach our children to speak English instead of Bajoran.

Turing Machines computing Bayesian posteriors is a useful model, but one that is fundamentally lacking in its ability to represent both human and artificial intelligence.