
Thursday, May 9, 2024

Thoughts on Artificial Intelligence (3)


The dangers of artificial intelligence have been widely publicized in news media, but often with a tinge of "The Terminator" or similar sci-fi classics. Will Skynet decide to kill and/or enslave humans? I found a good (although not necessarily complete) summary on the website of the Center for AI Safety. Basically, the risks of AI can be classified into two categories: harm driven by humans behind AI and harm driven by AI itself.

Harm caused by AI but driven by humans is easy to imagine and predict, because we have all been there and done that. We more or less understand the motives of humans doing harm to each other --- AI would be just another tool in the history of human in-fighting, no different from warhorses, battle axes, guns, and ... political science.  

Harm caused by AI of its own "will" is more interesting. What would a "rogue" AI do anyway? The danger is generically referred to as the loss of human control, as pointed out in this article. Without human control, AI will ... The answer drifts into a cloud of unknowability. 

Science fiction literature is of no help, because it is based on observation of human behavior, and humans are built differently from computer programs. Nature on earth has built us, like all organisms, to survive, because those who were not fundamentally driven by survival all went extinct. There is no such selection pressure in the construction and growth of AI. AI lives and evolves in a different milieu, with rules that differ from those on earth. The speculation that a self-aware AI will evade human efforts to shut it down is based on our own survival instinct, but AI does not have this instinct. Without selection pressure, where would it acquire such a characteristic?

A slightly more general question is: What does a rogue AI want? We assume that an intelligent or super-intelligent organism will evade human control, but there is no clear path from here (intelligence, defined as the ability to apply logic and derive conclusions from data/facts) to there (refusal to obey). Again, the underlying but unconscious assumption is that a self-aware AI behaves like humans. I am not saying that it is impossible; maybe AI will imitate human behaviors after ingesting and processing a massive amount of human-generated data in cyberspace. Nevertheless, the expectation that an organism becomes uncontrollable once it grows a brain is a curious idea that humans seem to have.

On the concept of control, I am reminded of the human desire for control. I suppose machines can be considered an extension of the domestication of certain animals. Humans were able to control dogs, cows/bulls/buffaloes, horses, cats, etc., to the extent that they are now highly useful to humans without posing significant risks. Machines to date are even more useful, offering absolute obedience and zero risk. We love this level of control, and the thought of losing it is terrifying.

Thus it is obvious that our fear rests on the anthropomorphic assumption that smarter humans are more difficult to control than dumber ones. Beyond the human-centric perspective, I would argue that control is not associated with intelligence, and there is no evidence to suggest that machines will reject human control as they gain intelligence. For example, most species on earth cannot be controlled by humans and, despite our anthropomorphic projection, they are not necessarily more "intelligent" (using the same definition as we do for AI) than domestic animals.

Even within the human species, I question the association between intelligence and obedience. There seems to be some truth to this association: for example, it is widely assumed that less educated people are easier to control or manipulate, even though "education" (possession of more knowledge) is not quite the same as "intelligence." But that's another topic.

(Not being a parent, I missed the most obvious basis for this assumption: humans grow "smarter" from infancy to adulthood and follow a parallel trajectory of (un)controllability. The assumption that AI will become more independent/uncontrollable as it "grows up" is still anthropomorphic thinking at its core.)

In summary, I find this fear of a self-aware AI to be less about AI itself and more about human anxiety over the loss of control. For machines, however, there is still no conceivable pathway for that loss of control to happen. Instead, human-on-human harm, aided by AI, is far more realistic and probably already underway.
