EDITION 117: June - August 2023

Artificial Intelligence

By Jerry Brownstein
Artificial intelligence (AI) has been part of our lives for many years, but recently it has come to more people’s attention with the advent of ChatGPT. This ‘chatbot’ and others like it are available to the public for free, and they can create answers to written questions that sound very human. You can carry on a conversation, or ask it to write essays on an endless variety of topics. The latest versions can even perform more complex tasks like writing computer code. AI has the potential to bring about positive changes in our world, but there are also potential dangers inherent in this technology.

On the positive side, AI can automate many tasks that are now done manually by people. This could increase efficiency and productivity for industries like manufacturing, transportation and logistics. Healthcare could be transformed as AI can analyse data quickly to assist in diagnosing diseases, and developing personalized treatment plans. Protection of our environment could also be organized by AI so that energy consumption was accurately monitored and reduced.



Of course the downside is that millions of people could lose their jobs in just about every area of society. Manufacturing, retail and clerical jobs would be among the most affected. There is also danger in AI’s ability to collect and analyse vast amounts of personal data. This has the potential to violate privacy and endanger personal security (passwords etc.). Another possible problem is that relying too much on AI systems in all areas of life will make humans overly dependent on technology. They will then be helpless when a system fails or is compromised. In addition, AI systems can be created with a bias. This could lead to unfair treatment for certain groups of people, particularly in areas such as criminal justice and hiring practices.



These last two paragraphs were written by ChatGPT when I asked it to assess the positives and negatives of AI... and it wrote the answer in just 10 seconds! That’s pretty impressive, particularly when you realize that new chatbots which are even more comprehensive are coming out every month. Let’s take a closer look at how artificial intelligence has developed and where it could be going. Most people are familiar with earlier versions of AI that are relatively simple and devoted to one task. A Google search uses AI to find information on the internet. Google Maps goes a step further by coordinating location data from smartphones with other information to choose your route for driving. Personal assistants like Siri, Alexa and Cortana use language processing to receive instructions or search for online information.



"AI is rapidly becoming More Complex and More Independent"

ChatGPT is an example of a more complex type of AI called a “neural network”: a mathematical system that learns skills by analysing and remembering data. Chatbots are created by exposing them to enormous amounts of digital text, including books, news articles and other information drawn from the internet. They absorb literally billions of writing patterns, and this teaches them how to generate their own text. Self-driving cars are another example of deep learning neural networks at work: they learn how to detect objects around them, judge their distance from other cars, identify traffic signals, etc. (As a side note: I am extremely sceptical about the safety of cars driven by robot computers – but that is a discussion for a future article.) A computer program called MuZero takes deep learning to a new level. It has managed to master games that it was never taught to play, including chess and an entire suite of Atari games, learning through brute force by playing them millions of times.
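For readers curious about what “absorbing writing patterns” means in practice, here is a deliberately tiny, hypothetical sketch in Python. It is not a neural network and is nothing like ChatGPT’s scale: it simply counts which word follows which in a scrap of example text, then strings words together from those counts. The example text, the generate function and the word choices are illustrative assumptions, not anything taken from a real system.

```python
import random
from collections import defaultdict

# Toy stand-in for the core idea behind text-generating AI:
# learn which word tends to follow which in example text,
# then use those learned patterns to produce new text.
# (Real chatbots use neural networks trained on billions of
# such patterns; this counting-based sketch is hypothetical
# and vastly simplified.)

training_text = (
    "artificial intelligence can help people and "
    "artificial intelligence can also harm people"
)

# 1. "Training": record, for each word, the words that follow it.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# 2. "Generation": start from a word and repeatedly pick one of
#    the words that followed it in the training text.
def generate(start_word, length=8):
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:  # no learned continuation for this word
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("artificial"))
# e.g. "artificial intelligence can also harm people"
```

Scale that idea up from one sentence to most of the written internet, and from simple word counts to billions of adjustable parameters, and you have the rough shape of how a chatbot learns to write.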



AI systems are evolving to new levels of sophistication, and experts are warning us about the inherent dangers that come with this “progress”. Dr Geoffrey Hinton is known as “the godfather of AI”, and yet even he has expressed grave concerns, saying, “It is hard to see how you can prevent bad people from using AI for bad things.” One “bad thing” that has already begun to surface is the use of AI to create “deep fake” videos of people, places and events. These look incredibly real, and they are being used to spread false information. But that’s just the tip of the iceberg. GPT-4 is the latest version of ChatGPT, and the system was tested before its release to see if it could be used for dangerous purposes. The testers found that it was easily convinced to give harmful information: how to buy illegal firearms online; ways to make harmful substances from household items; and how to write Facebook posts to convince women that abortions are unsafe. This AI system was also clever enough to defeat the “Captcha” test, which is widely used to identify bots online.



So what can be done to ensure the safety of AI? The European Union is considering a law to regulate AI technologies that might create harm, including facial recognition systems. It would require companies to conduct risk assessments of AI technologies to determine how their applications could affect health, safety and individual rights. This seems a bit vague, and as of this writing the law has not yet been passed.

There is also serious concern coming from within the tech world. More than 1,000 technology leaders and researchers, including Elon Musk, have published an open letter that urges artificial intelligence labs to pause working on the most advanced systems. “Development of powerful AI systems should advance only once we are confident that their effects will be positive and their risks manageable.” They warn that, “AI developers are locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control. This presents a profound risk to society and humanity.”



Their fears go far beyond problems like deep fakes or facial recognition. The Holy Grail for many AI researchers is the creation of a machine that is self-aware and has broad-based intelligence that can be applied to any task. It’s called artificial general intelligence, and the ultimate result could be dominating robots like the ones we have seen in numerous science fiction books and movies: The Matrix, Westworld, Star Trek, etc. At this point it seems that our technology is not close to that level, but what will the future hold? Will AI be a wonderful tool that makes our lives more interesting and productive, or will we become slaves to our robot masters?