Updated: May 9
GPT-3 stands for Generative Pre-trained Transformer 3, a deep learning model that produces human-like text. It is the latest in the GPT-n series created by OpenAI, an artificial intelligence laboratory based in San Francisco. GPT-3, introduced in May 2020 (currently in beta testing), is part of a trend in natural language processing (NLP) toward pre-trained language representations. Before GPT-3, Microsoft's Turing NLG was the largest language model, with 17 billion parameters — less than one-tenth of GPT-3's 175 billion.
GPT-3 generates text of such quality that it can be hard to distinguish from what a human would have written, which is both a success and a risk. Thirty-one OpenAI researchers and engineers presented the original May 28, 2020 research paper introducing GPT-3. The paper warns of its potential dangers, and the GPT-3 team has called for research into mitigating those risks. The Australian philosopher David Chalmers has described GPT-3 as "one of the most interesting and important AI systems ever produced."
On September 22, 2020, Microsoft announced that it had licensed "exclusive" use of GPT-3. Others can still use the public API to receive output, but only Microsoft has access to the underlying source code.
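To make the API access concrete, here's a minimal sketch of how a completion request to GPT-3's public beta API might be assembled. The endpoint URL, engine name ("davinci"), and parameter names below are assumptions based on the beta documentation, and the `"sk-..."` key is a hypothetical placeholder — you'd need your own API key from OpenAI:

```python
import json

# Hypothetical sketch of a GPT-3 completion request (beta API).
# Endpoint and parameter names are assumptions from the public beta docs.
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_completion_request(prompt, api_key, max_tokens=64, temperature=0.7):
    """Assemble the headers and JSON body for a completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # your OpenAI API key
        "Content-Type": "application/json",
    }
    body = {
        "prompt": prompt,           # the text GPT-3 should continue
        "max_tokens": max_tokens,   # upper bound on generated tokens
        "temperature": temperature, # higher values = more varied output
    }
    return headers, json.dumps(body)

# "sk-..." is a placeholder, not a real key.
headers, payload = build_completion_request("Once upon a time", api_key="sk-...")
```

Actually sending the request would be one extra step, e.g. `requests.post(API_URL, headers=headers, data=payload)` — the model's continuation comes back in the JSON response.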
Here's an interview with GPT-3 that clearly shows its remarkable capabilities 👇; the link to the video is below.
Is it a threat to humanity or a huge success? Well, why not hear it from GPT-3 itself? Sounds cool, right? In fact, it's truly astounding that GPT-3 has written an article by itself for theguardian.com. In my opinion, it's must-read stuff — you will be amazed!
Here's the link to the article 👉: click here.