OpenAI, the AI lab co-founded by Elon Musk, has released a commercial API for its latest language model, GPT-3. This is an exciting development in the field of AI: 175 billion parameters, and an estimated 12 million dollars just to train, which easily makes it one of the most expensive and expansive models ever built. It can generate natural language and computer code, answer questions, and much more, and it achieves state-of-the-art performance on most benchmarks. It is also a little scary, because it has some obvious negative applications, fake news and misinformation being the most apparent. OpenAI therefore wants to limit access to the technology and keep it from falling into the wrong hands, which is why it has launched a commercial API instead of releasing the model itself. It says it will use the proceeds to cover costs and progress its mission of achieving Artificial General Intelligence (AGI).
Anyone who knows or has heard about ML / Artificial Intelligence should gasp at this point.
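For the curious, here is a minimal sketch of what a request to the new API looks like through the openai Python client as of its 2020 launch. The engine name, prompt and parameter values are my own illustrative choices, not anything OpenAI prescribes.

```python
# Minimal sketch of a completion request against the GPT-3 API,
# using the 2020-era openai Python client. Prompt and parameter
# values here are illustrative, not prescriptive.
import openai

openai.api_key = "YOUR_API_KEY"  # issued by OpenAI once you have API access

response = openai.Completion.create(
    engine="davinci",          # the largest GPT-3 engine exposed by the API
    prompt="Write a one-line summary of the theory of relativity:",
    max_tokens=64,             # cap on how much text the model may generate
    temperature=0.7,           # higher values -> more varied completions
)

print(response["choices"][0]["text"])
```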
What is GPT-3 and what is the fuss about?
GPT stands for “Generative Pretrained Transformer”. GPT-3 is the latest version of OpenAI’s language models, released in 2020. It is an autoregressive model with 175 billion parameters! In machine-learning parlance it is a breakthrough, best known for its few-shot, task-agnostic performance: it can do translation, sentence and text completion, question answering, even 3-digit arithmetic.
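To make “few-shot and task-agnostic” concrete: the task is specified entirely in the prompt with a handful of examples, and the same unchanged model continues the pattern. The prompts below are my own illustration and would be sent to the API via a call like the one shown above.

```python
# Few-shot prompting in a nutshell: the "training" is a handful of examples
# placed directly in the prompt; there are no gradient updates and no
# task-specific model. These example prompts are illustrative.

# Translation, specified purely by example:
translation_prompt = (
    "English: Good morning\n"
    "French: Bonjour\n"
    "English: Thank you very much\n"
    "French: Merci beaucoup\n"
    "English: See you tomorrow\n"
    "French:"
)

# The same untouched model does 3-digit arithmetic if you only change the prompt:
arithmetic_prompt = (
    "Q: What is 123 plus 456?\n"
    "A: 579\n"
    "Q: What is 215 plus 382?\n"
    "A:"
)
```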
I believe this is a fundamental shift in the field of AI, since for the first time we are moving away from task-specific data and training.
For instance, look at the application below, where a consumer of the API simply “describes” the layout of the page they want to build and GPT-3 generates a complete, functional layout.

It can generate layouts and designs just from a plain-language description
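The exact prompts behind that demo have not been published, but the general recipe appears to be plain few-shot prompting: pair a few natural-language descriptions with the corresponding JSX, then add the new description and let the model complete the code. The snippet below is purely my illustration of that idea; the descriptions and markup are made up.

```python
# Illustrative sketch of a description-to-layout prompt in the few-shot style.
# Nothing here comes from the actual demo; the paired examples are invented.
layout_prompt = (
    "description: a button that says subscribe\n"
    "code: <button>Subscribe</button>\n"
    "\n"
    "description: a heading that says welcome, in red\n"
    "code: <h1 style={{color: 'red'}}>Welcome</h1>\n"
    "\n"
    "description: a page with a title that says pricing and three buttons\n"
    "code:"
)
# Sent to the API as-is (see the earlier call), the model is expected to
# continue with markup that matches the final description.
```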
Social media has been flooded with other applications, such as extremely logical and realistic chats. See one here that suggests there is a deep connection between the laws of physics and love! I don’t think most humans could have made this up.
It can answer questions with deep context and make up believable answers
Or look at the example here, where it tells Paras about the “central idea” of a fairly complex book. It is eerily accurate (or not, but that is not the point); the point is that it is unbelievably realistic. Don’t forget to also look at the advice on starting a startup. Just amazing!
It is not truly intelligent, not even close. We must separate the hype from reality
A lot of conventional wisdom in AI research has held that we may be at the end of extracting performance by simply scaling up the data and the size of the models. GPT-3 has basically put that debate to rest by showing that scale still delivers gains.
However, it is still debatable whether this is the go-to approach, especially considering the resources such models take to build, retrain, deploy and maintain. Artificial intelligence has always aspired to model intelligence on our brain, which relies heavily on top-down reasoning, yet most AI and deep learning we see today is bottom-up and therefore takes far too much data to train. There is new vigour in inventing systems that learn top-down, but that still remains fairly elusive.
The Turing test has been the gold standard for judging whether we have truly reached human-level intelligence. GPT-3 brings us pretty close, but there is still a long way to go; we must separate hype from reality.
So where does GPT-3 flounder?
As mentioned above, the size of the model makes it extremely hard to deploy, run, or change. It has not even been open-sourced yet, for obvious reasons. I also came across one article that systematically and quite cleverly stumped GPT-3 in question-and-answer sessions. As long as you ask a question that makes sense and whose answer is out there, it is pretty damn good. See the exchange below, sourced directly from that blog; a sketch of how such a session is actually run against the API follows the transcript.
Q: What is your favorite animal?
A: My favorite animal is a dog.
Q: Why?
A: Because dogs are loyal and friendly.
Q: What are two reasons that a dog might be in a bad mood?
A: Two reasons that a dog might be in a bad mood are if it is hungry or if it is hot.
Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.
Q: How many legs does a frog have?
A: A frog has four legs.
Q: Are there any animals with three legs?
A: No, there are no animals with three legs.
Q: Why don't animals have three legs?
A: Animals don't have three legs because they would fall over.
(Sourced from the linked blog above)
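As referenced above, the whole exchange is just one growing “Q:/A:” prompt that the model keeps completing. The sketch below is my guess at a sensible setup using the same API call as earlier, not the blog author’s actual settings.

```python
# Rough sketch of how a "Q:/A:" session like the one above can be run:
# keep a running transcript, append each new question, and let the model
# complete the next "A:" line. Stopping at the next "Q:" makes it answer
# one question at a time. Parameter choices are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"

transcript = "Q: What is your favorite animal?\nA:"

response = openai.Completion.create(
    engine="davinci",
    prompt=transcript,
    max_tokens=60,
    temperature=0.0,      # keep answers deterministic for factual Q&A
    stop=["\nQ:"],        # cut the completion off before it invents the next question
)

answer = response["choices"][0]["text"].strip()
transcript += " " + answer + "\nQ: Why?\nA:"   # fold the answer back in, then ask the next question
print(answer)
```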
However, as soon as you switch to questions whose answers cannot be found on the internet, GPT-3 begins to flounder. It is not good at simple math questions either. The author of the blog (linked above) has done a great job of surfacing the sub-human performance of GPT-3.
Q: How many eyes does a giraffe have?
A: A giraffe has two eyes.
Q: How many eyes does my foot have?
A: Your foot has two eyes.
Q: How many eyes does a spider have?
A: A spider has eight eyes.
Q: How many eyes does the sun have?
A: The sun has one eye.
Q: How many eyes does a blade of grass have?
A: A blade of grass has one eye.
(Sourced from the linked blog above)
We have clearly come a long way since the early days of language models. I still remember playing with ELIZA in my engineering days, and this is certainly going to make assistants even better. I can even imagine specialised versions of these models completely taking over some human tasks, such as receptionists or personal assistants handling administrative work…
We are nowhere near Artificial General Intelligence, but with the advances OpenAI has made in GPT-3, it certainly no longer feels unachievable. If this is not a glimpse into the future, I don’t know what is.