Can a computer win literature Nobel? GPT-4 could take us closer to epic potential of AI


SOURCE: NEWS18.COM
SEP 19, 2021

Scientists say that we are still some time away from artificial intelligence that could come close to how humans think

An AI model released last year by OpenAI created ripples by showing what machine learning could achieve. Now, its next iteration is on the cards

Reports about the impending release of the next iteration of a neural network machine learning model created by OpenAI, a San Francisco-based company that counts billionaire Elon Musk among its co-founders, have sparked a buzz in the artificial intelligence community. The language ability demonstrated by GPT-3, which was launched in 2020, means there is great curiosity surrounding the upcoming model. From writing books to creating computer code and poetry, GPT-3 gave a glimpse of what AI could be made to achieve. With GPT-4 expected to improve upon the previous version, here is a look at how far artificial intelligence is seen as having gone down the road of achieving human-like abilities.

WHAT IS GPT-3?

How about this for a satirical piece in imitation of writer Jerome K Jerome of ‘Three Men In A Boat’ fame: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage. I called it an anomaly, and it is." Before you comment on the style and form and parallels with the language of the English humorist, it must be pointed out that this paragraph (and the whole six-page story that it opens) was penned by GPT-3.

Short for ‘Generative Pre-trained Transformer’, GPT-3 is the third generation of a model trained on data gathered by crawling the internet to generate text. GPT-3 can study anything that is structured like a language and then perform tasks centred around that language. For example, it can be trained to compose press releases, tweets and even computer code. In that sense, such AI is described as language predictive and performs what are known as Natural Language Processing (NLP) tasks. That is, “it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user", writes author Bernard Marr in Forbes.
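
To make the idea of ‘language prediction’ concrete, here is a minimal sketch of the prompt-in, continuation-out loop such models perform. GPT-3 itself is accessible only through OpenAI’s API, so the sketch uses the freely downloadable GPT-2 model via the Hugging Face transformers library as a stand-in; the prompt text and generation settings are purely illustrative, not taken from the experiments described in this article.

# A minimal sketch of language prediction: give the model a prompt and it
# returns what it predicts is the most likely continuation.
# GPT-2 (freely available) stands in here for GPT-3, which is API-gated.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "It is a curious fact that the last remaining form of social life"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.9)

print(outputs[0]["generated_text"])  # the prompt followed by the model's continuation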

GPT-3 was not released to the public; reports in October last year said access was given to selected experts, who have since offered a glimpse of the range of tasks it can accomplish.

HOW WILL GPT-4 BE DIFFERENT?

Before GPT-3 there were GPT-2 and GPT-1, launched by OpenAI in 2019 and 2018, respectively. But those were fledgling steps leading up to the launch of GPT-3. Where GPT-2 had 1.5 billion parameters, GPT-3 featured 175 billion, making it the largest artificial neural network created to that point and roughly 10 times more powerful than the model it pipped, Microsoft’s Turing NLG, which had 17 billion parameters.

An artificial neural network (ANN) is a system designed to mimic how the brain functions, and enables “computer programs to recognise patterns and solve common problems in the fields of AI, machine learning, and deep learning", says IBM.
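
As a toy illustration, the sketch below builds a two-layer network by hand with NumPy and counts its parameters. The layer sizes are arbitrary and real language models use a far more elaborate architecture, but the ‘parameters’ that GPT versions are compared by are weights of essentially this kind.

# Toy artificial neural network: layers of simple units connected by
# weighted links. The weights and biases are the model's "parameters".
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary sizes for illustration: 4 inputs -> 8 hidden units -> 2 outputs
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)  # each hidden unit sums its weighted inputs
    return hidden @ W2 + b2        # the output layer does the same

print(forward(rng.normal(size=4)))

# Parameter count, the figure GPT-2 (1.5 billion) and GPT-3 (175 billion)
# are compared by; here it is just 4*8 + 8 + 8*2 + 2 = 58.
print(W1.size + b1.size + W2.size + b2.size)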

‘Pre-trained’, where GPT and other NLP programs like it are concerned, means that such models have been fed on huge loads of data so that it may work out the rules of the language, the variations in the meaning of words, etc. After such a model has been trained, it can generate output from a basic prompt. For example, for the story a la Jerome K. Jerome, the user trying out GPT-3 says on Twitter that “All I seeded was the title, the author’s name and the first “It", the rest is done by #gpt3".
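
To show what seeding a pre-trained model with a prompt looks like in code, here is a sketch that loads GPT-2 (again as a freely available stand-in for GPT-3) and inspects its predictions for the token that follows a seed of the same shape as the one described in that tweet. The title used in the seed is invented for illustration, not the one the user actually supplied.

# What "pre-trained" buys you: the model already assigns probabilities to the
# next token, learned from huge amounts of internet text. GPT-2 stands in for
# GPT-3 here; the title in the seed is invented for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A seed in the shape the user described: a title, the author's name, a first word.
seed = "On Going to the Seaside\nby Jerome K. Jerome\n\nIt"
inputs = tokenizer(seed, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The model's five most likely choices for the token that follows "It"
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {float(p):.3f}")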

Now, coming to GPT-4. Reports say that going by the trend of OpenAI launching a new version every year, it may come up with a version for testing by experts before long. It has thus been suggested that some version of GPT-4 could be out early next year or in 2023. And, it is widely expected that it would be a game-changer.

A Towards Data Science (TDS) report said that GPT-4 could have 100 trillion parameters and will be “five hundred times" larger than GPT-3. “The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses. GPT-4 will have as many parameters as the brain has synapses. The sheer size of such a neural network could entail qualitative leaps from GPT-3 we can only imagine," it added.
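
A quick back-of-the-envelope check of that comparison, using the figures as cited in the report (the GPT-4 number is a rumoured estimate, not anything confirmed by OpenAI):

# Scale comparison using the figures cited above (GPT-4's is a rumoured estimate)
gpt3_params = 175e9            # 175 billion
gpt4_params_rumoured = 100e12  # 100 trillion
print(gpt4_params_rumoured / gpt3_params)  # roughly 571, i.e. "five hundred times" larger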

The TDS report also said that GPT-4 “probably won’t be just a language model", referring to a December 2020 article by Ilya Sutskever, the Chief Scientist at OpenAI, in which he said that in 2021, “language models will start to become aware of the visual world".

However, Sam Altman, the CEO of OpenAI, has been reported as saying GPT-4 will not be bigger than GPT-3 but will use more compute resources.

SO, HOW CLOSE ARE WE TO AI THAT IS AS GOOD AS HUMAN INTELLIGENCE?

The stated goal of OpenAI is the creation of artificial general intelligence, that is, AI that exhibits the same intelligence that a normal human being is assumed to possess. That sounds much simpler than it actually is. As OpenAI itself notes, “AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task."

But it adds that “it is hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly". To that extent, it says that its avowed purpose is to “advance digital intelligence in the way that is most likely to benefit humanity as a whole".

As to the question of how long that will take, OpenAI says that “it’s hard to predict when human-level AI might come within reach". Further, some experts say that the approach it has taken, ‘deep learning’, may not be the best route to cracking the AI code, although the organisation is convinced of the path. It notes that for a long time it was held that if AI could be developed to excel at a game like chess, then such a programme would come close to mimicking human thinking patterns. However, the fact remains that the “solution to each task turned out to be much less general than people were hoping".

But it turned its focus to deep learning because the strategy was found to have “yielded outstanding results on pattern recognition problems, such as recognising objects in images, machine translation, and speech recognition", which it says has now provided a peek into “what it might be like for computers to be creative, to dream, and to experience the world".

That said, GPT-3 is not exactly the perfect text-writing tool that humans can completely rely on. As Altman himself has said, “The GPT-3 Hype is too much. AI is going to change the world, but GPT-3 is just an early glimpse." Pointing to one of its drawbacks, Marr writes that “while it can handle tasks such as creating short texts or basic applications, its output becomes less useful (in fact, described as ‘gibberish’) when it is asked to produce something longer or more complex".

But although it is suggested that future iterations of such AI systems will iron out the chinks of previous generations, not all are convinced. TDS quotes Stuart Russell, a computer science professor at the University of California, Berkeley, and an AI pioneer, as saying that “focusing on raw computing power misses the point entirely… We don’t know how to make a machine really intelligent — even if it were the size of the universe". Which is to say, deep learning alone may not be enough to achieve human-level intelligence.

But it is nonetheless an approach some of the biggest names in tech are pursuing. Microsoft, itself an investor in OpenAI, has taken this route. It has put out a sort of explainer, “generated by the Turing-NLG language model itself", that says “massive deep learning language models… with billions of parameters learned from essentially all the text published on the internet, have improved the state of the art on nearly every downstream natural language processing (NLP) task, including question answering, conversational agents, and document understanding among others".

All this means that scientists are not writing off human-like AI yet, although many say, as pointed out in a report by consultancy firm McKinsey, artificial general intelligence “is nowhere close to reality". But the report says that “many academics and researchers maintain that there is at least a chance that human-level artificial intelligence could be achieved in the next decade".

“Understanding human-level AI will be a profound scientific achievement (and economic boon) and may well happen by 2030 (25 per cent chance), or by 2040 (50 per cent chance) — or never (10 per cent chance),” Richard Sutton, professor of computer science at University of Alberta, is reported to have said.
