Will Artificial Intelligence Kill Us?

  • bnepeg
  • Aug 24, 2023
  • 5 min read

These days it is hard to avoid the topic of Artificial Intelligence (AI). AI is credited with many possible avenues of development, from improving productivity in many fields, to taking over people's jobs, to completely overrunning and obliterating the human race. Some of these are real possibilities, a few already actualized, while others are far-fetched sci-fi scenarios. Let us sort them out.

AI, at least in its most popular current incarnations, is essentially a computer-based artificial neural network capable of building task-specific partial world models through various methods of machine learning. As such, these networks form artificial (non-brain-based) structures capable of storing and associatively retrieving information learned about the world. While AI systems seem to closely mimic the human brain and are expected to soon surpass human abilities, there are at least two major differences setting humans apart, which we should not expect AI to overcome.


1) Sense perceptions.

The sensory inputs of AI systems are fundamentally dissimilar to those of humans. For us, the meaning of a word includes some fundamental sensory experiences, or qualia, which cannot be described in words. Just hearing the word “red” invokes the sense of seeing the color red. Even the most abstract concepts can be drilled down through the less and less abstract terms connected to them and defining them, until, eventually, we reach qualia. For example, the word “world” may call up an image of a blue-and-white globe floating in dark space, and the word “bomb” may evoke the sound of a loud explosion. AI machines are completely devoid of qualia. Therefore, the meanings of the words and symbols they operate with are different from those of humans. They merely link objects (words, images, symbols, etc., pre-parceled by humans) to each other to mimic human usage. For an AI machine to experience qualia, it would have to possess exact copies of all the human perception circuits. Such machines have been replicated billions of times, mostly without any need for science. They are called humans. But their intellect was not artificial; it developed naturally with civilization. An AI machine may eventually be made to imitate a human being perfectly, easily passing the Turing test. It may make an ideal zombie, casually indistinguishable from a human, but what for?


2) Corporeal sensations.

Every thought we think and every decision we make happens against, and is affected by, the background of our organism's current state and physical condition. These bodily awarenesses, such as stomach sensations of hunger or fullness, perceived states of muscular activation, pains, anxiety, etc., require, at a minimum, the presence of a body. That faculty, again, is inaccessible to AI without a complete replication of the human organism. In a sense, these reactions are similar to qualia, but for internal rather than external stimuli. Without a body, there can be no pleasures or pains, no fears or desires, and, therefore, no self-established goals.


We have to realize and always keep in mind that all the concepts we operate with were created by humans, based exclusively on human experiences accumulated over the entire history of civilization. We were, and are, parceling our experiences as we see fit for our existence in the world. AI, on the other hand, in order to achieve objectives defined by humans and remain relevant to human tasks, has to operate with the world-parceling and values developed by human civilization. It can only operate, though sometimes quite skillfully, on human language, symbols, and imagery, without adding any truly novel dimension to them.

AI's ability to mimic human sentiments and behavior, learned from the examples fed into it, may fool even seasoned professionals into believing that they are dealing with a real conscious entity. After all, our belief in the existence of other human minds is mainly based on observing others' behavior and comparing it with our own in similar situations. An AI system may behave very convincingly in any appropriate situation, as if it were hungry or in pain, without any ability to feel hunger or pain. But make no mistake: AI has no mind. Its artificial nets only link human-created symbols with other human-created symbols, without reaching any deeper into actual perceptual qualia and bodily reactions.

AI is not, and cannot be developed into, a superhuman. It is a tool, like a hammer, an assembly robot, or a computer. A very sophisticated tool. And, as with any other tool, it may greatly improve our lives if used correctly, or cause harm if used carelessly or with malevolent intentions. A hammer is indispensable for driving nails, but you can hurt your finger if you are not skillful enough in using it. And a criminal may still use it to hit a victim on the head. Yet hammers are widely used, and nobody is thinking about banning or outlawing them. The same should go for AI. It is a relatively new tool, and we just have to figure out how to use it and how not to use it.
As we can see, panicked questions like “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” (Bengio, 2023) are not warranted. For now, and for the foreseeable future, AI (or the more advanced Artificial General Intelligence, or AGI) will still have no mind.


We may, however, have to adjust to widespread AI usage. We must realize, for example, that not every entity that looks, speaks, writes, or behaves like a human being is actually a sentient, self-aware human. And we must learn not to immediately trust any photographic image, video, or spoken proclamation: it might have been artificially created using AI. But the best defense against rogue AI-generated fake news may still be another expert AI system, fed with large volumes of true facts and trained to detect fakes. Such history-expert AI systems might also be of great help in sorting true facts from causally incompatible “alternative facts,” as they may have very large data storage capacity and a unique ability to retrieve associated relevant information. One can even imagine AI-based historical archives where accessing relevant data is much easier.

Fear of machines replacing humans and taking their jobs can be traced back over two centuries, to the Luddite movement. Yet even that violent response to technological development did not, and could not, stop progress. No legal ban will prevent AI advancement either; pushing it underground is a nasty and dangerous alternative.


Given an erroneously stated or outright savage goal, AI may certainly be dangerous: it may act autonomously and inventively to achieve that goal. But it will not be the only AI system around. So, again, the best way to counteract such malevolent deeds would be to enlist the help of other AIs aligned with human safety goals. In no way, however, can an AI system develop completely independently, determining its own goals. Without world-defining individual perceptions, it will not be able to advance its “intellect” beyond the human experiences fed to it in training sessions.


All that said, we can still imagine a remotely probable sci-fi scenario in which self-replicating AGI-enabled systems are developed by humans for some reason and left to evolve on their own. If they manage to survive for a few generations, they will have to start reparceling the world, adapting their world model to their own sensory inputs, eventually forming an alien civilization either friendly to or, most likely, competing with humans. But that new civilization would take a long time to develop.


As with any other tool invented by humans, AI will not kill us. But it will surely make it easier for us to kill ourselves.

See the video of this post on YouTube at https://youtu.be/m1L5Xs_8DHY


© 2023 by From Inside the Box.
