Wednesday, April 19, 2023

A.I. The Sum of All Our Fears?

Artificial Intelligence (A.I.) has been in the news lately and seems to have a well-established shelf life that could last for quite a while, if not indefinitely, as reliance on computer and information technology has become deeply intertwined, fused even, into our lives. Furthermore, it has become a modern-day boogeyman, except that this time it is the adults who seem to worry. People in the high-tech sector, business leaders, even military officials, and politicians alike have expressed concern. Elon Musk thinks A.I. could conceivably end human civilization.



In 1970, I saw what I then thought was a B-movie science fiction film, "Colossus: The Forbin Project". I was surprised to learn recently that it has garnered an 88% Tomatometer rating on Rotten Tomatoes; quite remarkable for a film made 53 years ago.

Movie plot: "Colossus" was a super-advanced computer/military defense system created in the U.S. and located deep inside a granite mountain somewhere in the Midwest, a project spearheaded by Dr. Forbin. It was powered by its own self-contained nuclear reactor and sealed off from outside interference once its heavy doors were shut. It had the ability to repair itself and, more importantly, it was designed to deter any nuclear attack by other nations and respond autonomously. That is to say, it was capable of launching U.S. warheads on its own. Humans, including Dr. Forbin, could communicate with Colossus only via terminals linked by cable. The U.S. President declared that Colossus was "the perfect defense system".

Not long after it became operational, "Colossus" detected that there was another computer like itself, named "Guardian", located in the Soviet Union. I will stop short of crossing the spoiler-alert line except to say that the two supercomputers started communicating with each other.

Elon Musk actually first made his warning about A.I. way back in 2016. He must know a lot more today, so his concerns, along with those of others, shouldn't be trivialized. Worth noting is that as early as 1940, Isaac Asimov, the prolific and influential science fiction writer of that era whose works remain very popular today, must have had some concerns as well about the potential threats posed by "intelligent machines". Asimov had a proposition. But before we get to that: it was two decades before 1940 that the word robot entered the English lexicon.

In 1920 a Czech playwright, Karel Capek, wrote a play that was later translated into English as "Rossum's Universal Robots". The word robot comes from the Czech "robotnik", meaning forced worker, from "robota": forced labor, compulsory service, drudgery. By 1923, a robot was defined as a mechanical person, or a person whose work or activities were entirely mechanical.

Asimov, perhaps already concerned about these so-called mechanical persons, proposed the "Three Laws of Robotics":

1st:  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2nd: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3rd: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov's idea, though nobly intentioned to safeguard humanity, was naïvely conceived, given man's proclivity to misuse technology. You see, from the moment our ancestors emerged from the Stone Age into the Bronze and Iron Ages, tools developed to improve hunting and gathering soon evolved into killing implements - first among neighboring tribes, then by hordes bent on conquest and domination. Alfred Nobel saw dynamite go well beyond mining and road building into the mass killing of people and the destruction of territories. We have, in other words, always managed to misuse everything, Nobel Prizes or not.

Well, what do we have now? As in the movie "Colossus", and many other stories written to serve as warnings, we are again faced with a situation that seems well past the contemplation of a dilemma. It is Pandora's box laid bare. It seems the old suggestion that we can "always pull the plug" is no longer humorous. But is it really that dire?

A.I., which we treat as if it were a single entity, is much more than one object that we can segregate, separate and excise. The call by Musk and other high-tech luminaries to pause its further development for some definite period until we can create more safeguards seems like a fool's errand against the backdrop of so many variables - chief among them, other parties (nations, companies, individual developers, etc.) not heeding the moratorium, for reasons that are likely to be mostly self-serving.

As has always been the case with anything that accompanied humanity on its journey through time, the road we've taken is littered with good ideas and good intentions that splintered into the inevitable "good, bad and ugly" faces we as a society have always resigned ourselves to accept.

A.I., in my opinion, is not the problem. We are the problem. And there are countless examples; it would take a book just to list them. But let's take just one recent case that is stoking our fears about this one splinter that is eerily scary.

I marvel at speech-translation technology that used to be only a figment of the imagination of the creators of Star Trek. Today, I can speak in English and my Pixel phone will translate it into spoken Filipino instantaneously, in the voice of a Filipina perfectly enunciating the words with the proper modulation, as if spoken by someone in Manila, with perfect grammar. The Pixel will do it in reverse, translating spoken Filipino into English with nary a time lag, again with perfect grammar.

That is one awesome piece of A.I. at work. Good - but here comes the ugly part. In a kidnapping-for-ransom scam, a mother received a phone call demanding money for her daughter, supposedly held captive. When the kidnappers purportedly put her daughter on the phone, what the mother heard was her daughter's voice crying and pleading for help. The scammers had cloned and mimicked her daughter's voice with an A.I. program.

A.I., obviously, was not the problem. Go back to the beginning of the civilization we are told A.I. could destroy - whenever that was. Pick an era. Pick a timeline. The algorithm, the lifeblood of all A.I., has been around for centuries. Whether it is a simple algorithm on paper or a sophisticated computer program, A.I. uses algorithms, and it uses them well once we allow the process to manipulate data toward something intelligibly useful - something that, among other things, can run robots on an assembly line flawlessly, without coffee or bathroom breaks.
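To make the point concrete that algorithms long predate computers, consider Euclid's procedure for finding the greatest common divisor of two numbers, a recipe over two thousand years old that works just as well on paper as in a program. A minimal sketch in Python (the choice of example is mine, not from any A.I. system discussed here):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm (c. 300 B.C.): repeatedly replace the
    pair (a, b) with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

The same steps can be carried out by hand with pencil and paper; the computer merely executes them faster - which is the essay's point about algorithms versus the machines that run them.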

So, if A.I. is the constant, the variable is us. It is no different from employing a laser to make precise measurements, or cut cleanly through metal, and then we turn around and use it to guide a smart bomb to kill and destroy. Need I say more?

A.I. is not a moral entity. We are. A.I. can be programmed to lie, to manipulate information and spread it with the speed of summer lightning, but it does not care why. We do. But then, some of us don't.

A.I. does not need a pause. We need to pause for just a moment and look at ourselves before we start blaming A.I. Allegorically, it was Dr. Frankenstein who created the monster, yet our fear was directed at the latter.

So, Mr. Musk, "Ought we not to be looking in a mirror to see whether it is we who might conceivably put an end to civilization?"

Or perhaps, instead of lamenting over artificial intelligence, we should redirect our focus toward a far more Superior Intelligence that has been around since the beginning of the universe. Perhaps S.I., instead of A.I., is what we need to assuage us of the sum of all our fears.
