TAKING OVER Could artificial intelligence REALLY wipe out humanity?


SOURCE: THE-SUN.COM
FEB 19, 2022

MANY fear that artificial intelligence will be the end of humankind – here's the truth according to experts.

By now, most people around the world use some form of AI-powered device as part of their daily lives.

Most experts believe that AI will not take over humanity

They use Siri to check the weather or ask Alexa to turn off their smart lights – these are all forms of AI that many people don't even realize they are using.

However, despite the widespread (and relatively harmless) use of this technology in nearly every facet of our lives, some people still seem to believe that machines could one day wipe out humanity.

This apocalyptic idea has been perpetuated through various books and movies over the years.

Even prominent figures in science such as Stephen Hawking and Elon Musk have been vocal about technology's threat to humanity.

In 2020, Musk told the New York Times that AI would grow vastly smarter than humans and would overtake the human race by 2025, adding that things would get "unstable or weird."

Despite Musk's prediction, most experts in the field say humanity has nothing to worry about when it comes to AI – at least, not yet.

Most AI is "narrow"

The fear of AI taking over has developed from the idea that machines will somehow gain consciousness and turn on their creators.

In order for AI to achieve this, it would not only need to possess human-like intelligence, but it would also need to be able to predict the future or plan ahead.

As it stands, AI is not capable of doing either.

When asked "Is AI an existential threat to humanity?", Matthew O'Brien, a robotics engineer at the Georgia Institute of Technology, wrote on Metafact: "The long-sought goal of a 'general AI' is not on the horizon. We simply do not know how to make a general adaptable intelligence, and it's unclear how much more progress is needed to get to that point."

The fact of the matter is that machines generally operate how they're programmed to, and we are a long way from developing the ASI (artificial superintelligence) needed for such a "takeover" to even be feasible.

At present, most of the AI technology utilized by machines is considered "narrow" or "weak," meaning it can only apply its knowledge towards one or a few tasks.

"Machine learning and AI systems are a long way from cracking the hard problem of consciousness and being able to generate their own goals contrary to their programming," George Montanez, a data scientist at Microsoft, wrote under the same Metafact thread.

AI could help us to better understand ourselves

Some experts even go as far as to say that not only is AI not a threat to mankind, but that it could help us to better understand ourselves.

"Thanks to AI and robotics today we are in the position to 'simulate' in robots and colonies of robots the theories related with consciousness, emotions, intelligence, ethics and compare them on a scientific base," said Antonio Chella, a professor in Robotics at the University of Palermo.

"So, we can use AI and robotics to understand ourselves better. In summary, I think AI is not a threat but an opportunity to become better humans by better knowing ourselves," he added.

AI does have risks

That said, it is clear that AI (and any technology) could pose a risk to humans.

Some of these risks include overoptimization, weaponization, and ecological collapse, according to Ben Nye, the Director of Learning Sciences at the University of Southern California's Institute for Creative Technologies (USC-ICT).

"If the AI is explicitly designed to kill or destabilize nations...accidental or test releases of a weaponized, viral AI could easily be one of the next significant Manhattan Project scenarios," he stated on Metafact.

"We are already seeing smarter virus-based attacks by state-sponsored actors, which is most assuredly how this starts," Nye added.

At present, AI can only function the way it is programmed to
