The Perfect Artificial Intelligence Doesn't Exist Yet

Naveen Joshi 14/07/2021

Scientists are unlikely to create a perfect artificial intelligence (AI) anytime soon.

Although the concerns expressed by influential figures and the general public about the possibility and potential effects of the singularity are not entirely unfounded, AI still has a long way to go before it becomes competent enough to overtake humans.

From debating humans logically on a range of topics to defeating them in games involving strategy and intuition, artificial intelligence is getting ever closer to matching human capabilities of thinking, learning, and, possibly, feeling. In fact, there are instances that show AI surpassing humans at certain tasks, such as lip-reading, diagnosing ailments, and even building other AI, making us fear the singularity.

Although the general verdict on the impact of AI on humanity is positive, a substantial number of people worry about AI taking over. And they are not just worried about humans becoming obsolete, but about us being enslaved or even driven to extinction by AI (paranoid much?). Considering how AI-based systems are seeing increased use in nearly every aspect of our lives, however, the claims made by AI doomsayers don’t seem very outlandish.

Robotics has already made significant inroads into the manufacturing industry and has replaced thousands of human factory workers through process automation. One might argue that although the introduction of AI and robotics in the manufacturing sector has minimized the need for low-skill workers, it has also increased the demand for higher-skill roles such as data scientists and AI researchers. That may be true, but with AI advancing at a blinding rate and set to overtake humanity in terms of intellectual ability, even jobs like programming, designing, and teaching could potentially be performed by well-trained AI. Thus, fears that AI might make humans redundant may not be entirely unfounded, especially considering the increased development and use of emotionally intelligent AI that can interact with people in a more ‘human’ manner than humans themselves.

Artificial Superintelligence and the Singularity (and Why People are Worried)

When artificial intelligence gets smarter at an exponential rate and far exceeds humans in terms of intellectual capabilities, we'll reach the age of artificial superintelligence (ASI). That is when computers independently capable of outperforming humans in cognition, reasoning, and computing will become easily available to the masses. This easy availability will lead to mass adoption of, and dependence on, these systems for everyday human activities. AI will become capable of understanding us better than we understand ourselves and will make recommendations to guide our personal decisions, which we will follow with little resistance. We'll eventually have a generation of humans heavily reliant on AI for making almost all decisions, from small ones like choosing what to buy and what to wear, to big ones like choosing a career. This dependence, or even over-dependence, on AI systems may, in a way, be considered enslavement, as humans will base most of their actions on the whims of their AI ‘masters’. Even most government functions, such as law and order, might be controlled by AI, which may make it harder for humans to live as they desire.

Another eventuality that ASI could bring about is more dire, and it has already been depicted in numerous dystopian works of fiction: human extinction. The singularity, which refers to a point where AI becomes so complex and intelligent that it gains self-awareness, may lead to AI deeming humans a threat to its existence and deciding to exterminate us, or at least subjugate us under severe restrictions. Another possibility is that AI might evolve into an independent, higher-level species and leave humans behind, using up all available resources to further its own goals without bothering to help humanity, which evolves far more slowly, much as we don't bother too much with apes and chimpanzees that haven't evolved as far as we have.

More worrying than the horrifying depictions of the singularity's consequences is the fact that concerns about AI have been voiced by some of the most influential people in the science and tech community. Figures like Stephen Hawking and Elon Musk have been vocal about their fear that the rapid evolution of AI technology might not necessarily be in humanity's best interests. However, alongside the numerous success stories in the media of AI achieving new heights of intelligence and independence, there have also been cases showing that AI has a long way to go before it overtakes humans.

The Gap between Humans and AI (and Why You Shouldn’t be Worried for Now)

There have been cases where AI failed to achieve its intended purpose and produced far-from-ideal results. The most recent case that comes to mind is the failure of AI systems to predict the outcome of the recently concluded FIFA World Cup, while many regular people predicted it with greater accuracy. You might also have heard of the tragic incident in which a self-driving vehicle, due to malfunctions, struck and killed a woman. These examples show that we are still some time, and a lot of research, away from the perfect AI.

Add to this the fact that AI aims to replicate the functioning of the human brain, which remains one of the most enigmatic puzzles that even the smartest scientists on earth haven't been able to fully understand. The brain works through the combined activity of billions of neurons, connected in a bafflingly intricate network, that carry out millions of processes leading to conscious thought. Until we know for sure how our brain functions and how we experience self-awareness, we cannot impart our mind's capabilities to machines, barring a lucky (or unlucky) accident.

The biggest reason why you shouldn't worry about ASI and the singularity, at least for now, is that even the most excellent AI, one that performs a given task better than humans, cannot do much beyond that task. For instance, the lip-reading AI that outperforms human lip-readers can only convert what it sees being spoken into written language; it can't actually make sense of, or draw inferences about, the speaker or the context from the text it has just transcribed (unless programmed and trained to do so), which most people would be able to do. Just because an AI system can do one thing better than humans doesn't mean it can do everything better, which means that AI capable of completely emulating and exceeding human intellectual capacities is at least a couple of decades away.

Understanding what AI can and cannot do will help overturn the negative connotation evoked by the mere mention of the term ‘the singularity’. This will enable people to view AI as a beneficial technology rather than something that can bring doom and destruction to humanity. Although imparting human-like thinking and consciousness to machines remains, and will remain, the Holy Grail of AI research, this pursuit should in no way be seen as detrimental to the survival of humanity, but rather as something that can catalyze our collective progress.
