Notable AI Advancements In The Last Decade


SOURCE: MEDIUM.DATADRIVENINVESTOR.COM
FEB 08, 2022

The growth of Artificial Intelligence scares a lot of people, who tend to think that in the future robots will replace us in our jobs. What they miss is that AI might be the most important technology ever developed, capable of helping in many different fields and improving our lives significantly.

We’re moving in the direction of teaching it to mimic the human brain, so that it becomes more like a partner, solving problems and making our lives easier.

For this article, I selected some AI-related events of the past decade and I invite you to time travel with me. Hope you enjoy the ride!

2012 — Cat videos

Some scientists consider it a revolutionary year for deep learning. Although some features, like speech recognition, were far from perfect, it was the first time that Google Brain’s network was able to recognize cats in videos. What a cool start, eh?

It may not seem like a great step in AI history to those cold-hearted humans who can’t admit that cats are awesome, but the Google X lab researchers used up to 16,000 computer processors to build a neural network with one billion connections.

The computer scientists involved never actually told the algorithm what a cat was. The network picked up cat-like features on its own until, over time, it was able to recognize cat videos on YouTube with great accuracy.

This development was an important first step for image and facial recognition, tasks that today’s neural nets complete routinely.

2013 — Never-Ending Language Learning

It wasn’t the best year in Artificial Intelligence’s history, but we still got to watch the growth of NELL (Never-Ending Language Learning), a semantic machine learning system created by a team at Carnegie Mellon University.

We’re talking about a system, running 24/7, that learns from experience just like humans do, using previous knowledge to improve and to avoid plateaus in performance.

NELL can read the web and identify fundamental semantic relationships between a few hundred predefined categories of data, such as cities, companies, emotions and sports teams. It continually adds new beliefs and removes outdated ones in order to improve.
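To make that idea concrete, here is a toy sketch in Python (not NELL’s actual code; the category names, evidence values and threshold are invented for illustration) of a knowledge base that stores beliefs as (entity, relation, value) triples, gains confidence as evidence for a fact repeats, and prunes beliefs that lose support:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    entity: str        # e.g. "Pittsburgh" (hypothetical example)
    relation: str      # e.g. "cityLocatedInState"
    value: str         # e.g. "Pennsylvania"
    confidence: float

class KnowledgeBase:
    def __init__(self, threshold=0.5):
        self.beliefs = {}          # (entity, relation, value) -> Belief
        self.threshold = threshold

    def observe(self, entity, relation, value, evidence=0.1):
        """Reading the same fact again raises confidence in it."""
        key = (entity, relation, value)
        b = self.beliefs.setdefault(key, Belief(entity, relation, value, 0.0))
        b.confidence = min(1.0, b.confidence + evidence)

    def decay_and_prune(self, decay=0.05):
        """Unsupported beliefs fade and are eventually removed."""
        for key, b in list(self.beliefs.items()):
            b.confidence -= decay
            if b.confidence < self.threshold:
                del self.beliefs[key]

kb = KnowledgeBase()
for _ in range(8):   # a fact seen repeatedly across the web
    kb.observe("Pittsburgh", "cityLocatedInState", "Pennsylvania")
kb.observe("Pittsburgh", "cityLocatedInState", "Ohio")  # a one-off error
kb.decay_and_prune()
print([(b.entity, b.relation, b.value) for b in kb.beliefs.values()])
```

The repeated fact survives pruning while the one-off error is dropped, which is the spirit of how a never-ending learner keeps improving its belief set.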

2013 also saw the launch of NEIL (Never Ending Image Learner): it’s similar to NELL, but instead of reading text it learns from images, figuring out common-sense relationships that are so basic we take them for granted, yet that are still a huge step for a computer. It can be as simple as the system recognizing that a door is part of a house.

In less than 6 months, NEIL was able to identify more than three thousand relationships.

2014 — Alexa…

It was the year Alexa was born.
This well-recognized AI voice, invented by Rohit Prasad, lives in a little device that wakes when you call her name. Once it catches the name “Alexa”, it starts recording and storing everything you say in the cloud, so it can learn and respond.

It can handle tasks like playing music, keeping to-do lists or setting alarms. It’s like living with a friend you can ask for little favors to make your life more enjoyable. But if you ask “Alexa, where’s my million dollars?” you won’t get the answer you want.

2015 — Teaching machines how hands work

Artificial Intelligence isn’t all about computer screens after all.

In 2015, a German primatology team recorded a primate’s hand motions together with the associated neural activity, so that, based on brain activity alone, they were able to predict the fine motions that were going on. They then taught those fine motor skills to robotic hands, as a way of improving neural-enhanced prostheses.

A few months later, a UC Berkeley team found an easier way to teach robots the same motor skills, by applying deep reinforcement learning-based guided policy search. Those robots were able to perform everyday actions, like screwing caps onto bottles or using the claw of a hammer to remove a nail from wood.

Those tasks are simple for people but highly specific for a machine; imagine if you depended on one to use your hands. These robots also learn tasks by practicing, refining their technique after a few tries.

Great news for prosthesis users.

2016 — AI in Medicine

The star of the year was IBM Watson, a data analysis system developed by a team under David Ferrucci, which became a precious help in the doctor’s office, since it’s capable of spotting health issues that escaped human eyes. In Japan, for example, it detected a form of leukemia in a woman that had previously been missed.

Also, in Texas, one AI program reviewed millions of mammograms 30 times faster than humans could, interpreting the diagnostic information with around 99% accuracy.

Since early diagnosis is key to effective treatment, we can safely say AI is starting to save some lives.

2017 — AlphaZero

This was the year DeepMind launched AlphaZero to master the games of chess, shogi and Go. In only 24 hours of training, it reached a superhuman level of play in all three games, defeating world-champion programs such as the chess engine Stockfish.

The most interesting part is that AlphaZero never studied a single human game. Given only the rules, it learned how to play entirely by playing against itself.

In the beginning, its moves were random but, after a few games, it had already understood what led to a win and what didn’t.
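As a rough illustration of that self-play idea, here is a toy sketch, not DeepMind’s actual algorithm (which combines deep neural networks with Monte Carlo tree search): a small tabular agent that learns the pile game Nim, where players alternately remove 1–3 stones and whoever takes the last stone wins, purely by playing against itself from random moves.

```python
import random
from collections import defaultdict

STONES = 15             # starting pile size
Q = defaultdict(float)  # Q[(stones_left, move)] -> estimated value

def choose(stones, eps):
    """Epsilon-greedy selection over the legal moves (take 1-3 stones)."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

def self_play_episode(eps=0.2, lr=0.1):
    """One game in which the SAME policy plays both sides and learns."""
    history = []
    stones = STONES
    while stones > 0:
        move = choose(stones, eps)
        history.append((stones, move))
        stones -= move
    # Whoever took the last stone wins; rewards alternate +1/-1
    # backwards through the alternating moves of the two "players".
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += lr * (reward - Q[(state, move)])
        reward = -reward

for _ in range(50_000):
    self_play_episode()

for s in range(1, STONES + 1):
    best = max((m for m in (1, 2, 3) if m <= s), key=lambda m: Q[(s, m)])
    print(f"{s:2d} stones -> take {best}")
```

After enough games the agent discovers the known optimal strategy for this game (leave your opponent a multiple of four stones whenever possible), even though nobody ever told it how to play, only the rules.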

Although winning chess games may be nice, the ambition for this system is to apply similar methods to real-world problems, such as designing room-temperature superconductors or predicting how proteins fold, a key step in designing potent drug molecules.

2018 — BERT

Although it wasn’t a year of huge developments, Jacob Devlin and his colleagues at Google created BERT, a new language representation model whose name stands for Bidirectional Encoder Representations from Transformers.

BERT was designed to pre-train deep bidirectional representations from unlabeled text by reading the entire sequence of words at once, instead of only left to right. During pre-training it learns the contextual relations between words by filling in masked words and by predicting whether one sentence plausibly follows another.
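You can see that bidirectional context at work with a pre-trained BERT model. A minimal sketch, assuming the Hugging Face transformers library and PyTorch are installed (pip install transformers torch); the example sentence is invented:

```python
from transformers import pipeline

# The fill-mask pipeline downloads a pre-trained BERT and predicts
# the word hidden behind [MASK], using context on BOTH sides of it.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for pred in unmasker("The doctor asked the [MASK] to take a deep breath."):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")
```

The model ranks candidate words for the blank from the words before and after it, which is exactly the bidirectionality the name promises.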

Even though it looks simple, BERT was a breakthrough in the use of Machine Learning for Natural Language Processing, mostly because it’s approachable and allows fast fine-tuning, which opens up a wide range of practical applications.

2019 — OpenAI

Have you ever managed to solve a Rubik’s Cube?

In 2019, OpenAI’s robot hand, called Dactyl, was trained in a simulated environment and was then able to transfer that knowledge to a new, real-world situation and solve a Rubik’s Cube.

Again, the main goal isn’t to have a robot that can solve a cube many of us find impossible, but to enhance robotic dexterity and to test its ability to deliver results in an environment it isn’t used to.

2020 — LinearFold algorithm

A foggy year, wasn’t it?

Well, not for AI. While we were in lockdown, Baidu open-sourced LinearFold, an algorithm for predicting the secondary structure of an RNA sequence far faster than previous methods, so our dear robots were already working hard on trying to win back our freedom.

Artificial Intelligence played a big part in speeding up the development of vaccines, which would usually take a decade or more, by helping researchers analyze vast amounts of data about the coronavirus.

2021 — SEER

Last year, Facebook unveiled SEER (SElf-supERvised), a computer vision model. It learns from any random group of images on the internet without needing them all to be labeled, unlike conventionally supervised computer vision training.

It tackles the struggle of systems that often can’t recognize common everyday objects because those objects are underrepresented in the data used to train AI systems.
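The core trick of self-supervised vision is that the training signal comes from the images themselves. SEER itself uses a clustering-based objective (SwAV); the sketch below is a simpler contrastive stand-in for the same label-free principle, with a made-up linear “encoder” and random vectors standing in for real images:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Hypothetical augmentation: jitter the 'image' with noise."""
    return image + rng.normal(scale=0.1, size=image.shape)

def embed(image, W):
    """Stand-in encoder: a linear map followed by L2 normalisation."""
    z = W @ image
    return z / np.linalg.norm(z)

def info_nce(anchor, positive, negatives, t=0.1):
    """Two views of one image should be more similar to each other
    than to embeddings of any other image; no labels needed."""
    pos = np.exp(anchor @ positive / t)
    neg = sum(np.exp(anchor @ n / t) for n in negatives)
    return -np.log(pos / (pos + neg))

W = rng.normal(size=(8, 32))                   # random "encoder" weights
img = rng.normal(size=32)                      # one unlabeled "image"
others = [rng.normal(size=32) for _ in range(4)]

z1, z2 = embed(augment(img), W), embed(augment(img), W)
loss = info_nce(z1, z2, [embed(o, W) for o in others])
print(f"contrastive loss: {loss:.3f}")
```

Minimising a loss like this over billions of uncurated images is what lets a model such as SEER learn useful visual features without anyone labeling a single picture.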

Thank you for reading till the end. If you liked this piece, please consider sharing it or giving some claps so others can find this article too.