AI is not an existential threat to humanity - Cupid Chan

SEP 07, 2021

We have always loved robots in films, even the bad ones. But now, as the machines around us get smarter and smarter, it's hard not to worry that artificial intelligence (AI) with sinister intent - a kind of Terminator - will be unleashed on the real world.

Of course, we are not alone in these gloomy thoughts. Stephen Hawking warned that "the development of full artificial intelligence could mean the end of mankind". In the face of the increasingly convincing performance of artificial intelligence, prominent figures in science and business have repeatedly pointed out the possible dangers inherent in creating truly independent artificial intelligences.

Artificial intelligence is a powerful tool, and its applications now span many fields: from marketing automation and autonomous vehicles to robotics and weapons systems, and even sectors such as medicine are increasingly implementing AI.

It is often said that with great power comes great responsibility, and the main doubts about the use of AI chiefly concern its ethical aspects.

Elon Musk, the outspoken South African-born entrepreneur behind, among other ventures, Tesla Motors and SpaceX, is notoriously critical of the unregulated development of artificial intelligence. Musk has called AI the “greatest threat to the existence of humanity” and joined a group of technology luminaries and researchers, including Stephen Hawking, in urging the United Nations to ban killer robots and the application of AI to weapons and automated defense systems. The billionaire has also proposed a federal oversight program to monitor the growth of the technology.

To address this problem, the Tesla and SpaceX chief co-founded a non-profit organization, OpenAI, which studies the relationship between ethics and artificial intelligence to prevent AI from turning into an existential threat to humans. OpenAI is carrying out studies, projects, and analyses aimed at identifying the concrete risks that artificial intelligence could pose in the near future.

We are currently experiencing a real AI boom. New, sophisticated algorithms are constantly emerging from research, and in practice AI keeps improving, from speech recognition to self-driving vehicles to dating apps. So it seems high time to take the existential risk of AI seriously: do we have to worry that AI will soon develop awareness and become a threat to humanity?

Cupid Chan, Managing Partner at 4C Decision and a well-established industry leader, doesn't think AI can be a threat to humanity. He has spoken about AI at several conferences and events, and according to him, “In order for AI to drive humans extinct, there must be a very high level of intelligence and access to a wide range of resources. At this point, all AI implementations are fragmented and focus only on certain areas”.

Cupid Chan, who is also the co-founder and CEO of Pistevo Health, further said, “Even though there are scientists researching Artificial General Intelligence (AGI) with the goal of training a computer to think like a human, that work is still at a very early stage. Even if we could attain that level, that AGI would still need a mission to kill humans. As long as we have more ‘good’ AGI than ‘bad’ AGI, which is programmed by humans at the end of the day, we should all be safe. But again, this assumes AI can really achieve true, 100% AGI, whose difficulty is like flying to Mars in 10 minutes: we cannot do it now, and while there is no solid proof that we never can, it would be very hard, if not impossible, based on our current knowledge of technology”.