Humans are good at analyzing information. Machines are even better. Machine data analysis has many applications: spotting fraud or spam, estimating when your package will arrive, even choosing the next TikTok video to play based on your viewing history. And machines keep getting smarter at these tasks. This kind of classic artificial intelligence is called “Analytical AI.”
But humans are not only good at analysis; we are also good at creating. We write poems, design products, make games, and produce code. Until recently, machines could compete with humans only at analytical and routine cognitive work; they had no chance at the creative work itself. That is now changing: machines are just beginning to get good enough to create things that are both sensible and beautiful. This new category of artificial intelligence is called “Generative AI,” meaning the machine generates something new rather than analyzing something that already exists.
In certain cases, generative AI can already produce work faster and more cheaply than humans can by hand. Every industry that depends on human creativity is open to disruption: social media and video games, advertising and architecture, programming and graphic design, product development and law, marketing and sales. Generative AI should make creation better, faster, and cheaper across all of these end markets. Some tasks may be replaced by AI entirely, while others will benefit more from a tight iterative creative cycle between human and machine. The hope is that by driving the marginal cost of creative and knowledge work toward zero, generative AI will lift overall labour productivity, economic value, and market capitalization.
Knowledge work and creative work, the two areas generative AI aims to improve, employ billions of people worldwide. Generative AI has the potential to make these workers at least 10% more productive, unlocking greater speed and efficiency and, in some cases, entirely new capabilities. On that basis, generative AI could add billions of dollars of value to the global economy.
Why Now?
Better models, more data, and greater processing power are all reasons why generative AI is relevant today. The category is evolving at a rate that is difficult to keep up with, but some background on its recent past will help put the present into perspective.
Wave 1: Small models reign supreme (Pre-2015)
Five or more years ago, small models were state-of-the-art for language understanding. These small models excel at analytical tasks, and they power everything from delivery-time estimation to fraud classification. But they are not expressive enough for general-purpose generative tasks; producing human-level writing or code remained science fiction.
Wave 2: The race to scale (2015-Today)
A landmark paper from Google Research (Attention is All You Need) described transformers, a new neural network architecture for natural language processing that produces higher-quality language models, is more parallelizable, and requires far less time to train. These models can also be adapted to new tasks quickly, using relatively little data.
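To give a feel for what “attention” means here, below is a minimal sketch of the scaled dot-product attention at the heart of the transformer: a simplified, single-head version in plain NumPy, without the learned projection matrices, multiple heads, and masking that real models add. The key property is that the whole sequence is handled with a few matrix multiplications, which is what makes transformers so parallelizable.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays. A single matrix multiply compares
    # every query with every key, so all positions are processed in
    # parallel (a recurrent network would step token by token instead).
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V          # each output is a weighted mix of values

# Toy self-attention: 4 tokens, 8-dimensional embeddings.
x = np.random.randn(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```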
Sure enough, as the models grow in size, they begin to deliver human-level, and then superhuman, results. Between 2015 and 2020, the compute used to train these models grew by roughly six orders of magnitude, and their results surpassed human performance benchmarks in handwriting, speech, and image recognition, as well as in reading and language comprehension. OpenAI’s GPT-3 stands out: its performance is a big leap over GPT-2’s, and it spawned tantalizing Twitter demos on tasks ranging from code generation to sarcastic joke writing.
Despite all this research progress, these models are not yet in widespread use. They are cumbersome to deploy (requiring GPU orchestration), costly to run, and limited in availability (closed beta, or not yet released to the public). Even so, the first generative AI applications are beginning to appear.
Wave 3: Better, faster, cheaper (2022+)
Compute gets cheaper as it becomes more efficient. New techniques, like diffusion models, shrink the costs of training and running inference. The research community continues to develop better algorithms and larger models. And developer access expands from closed beta to open beta, and in some cases even to open source.
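As a rough illustration of how accessible diffusion models have become, here is a sketch of generating an image with the open-source diffusers library; the model checkpoint and settings are illustrative, and a CUDA GPU is assumed.

```python
# Sketch: text-to-image generation with an openly released diffusion model
# via Hugging Face diffusers. Model ID and parameters are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision cuts memory and cost
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# The pipeline iteratively denoises random noise toward the prompt.
image = pipe("an astronaut riding a horse, watercolor").images[0]
image.save("astronaut.png")
```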
For developers and researchers who previously had little or no access to LLMs, the floodgates are open, and the number of applications is starting to explode.
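To make that access concrete, here is a minimal sketch of running a small, openly available language model locally with Hugging Face’s transformers library, no beta invite or GPU cluster required; the model choice and parameters are illustrative.

```python
# Sketch: local text generation with a small open-source model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Generative AI will change creative work because",
    max_new_tokens=40,  # cap the length of the continuation
)
print(result[0]["generated_text"])
```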
Wave 4: Killer applications emerge (Now)
With the platform layer maturing, models continuing to get better, faster, and cheaper, and model access trending toward open and free, the application layer is ripe for an explosion of development.
Just as mobile unleashed new types of applications through new capabilities like GPS, cameras, and on-the-go connectivity, we expect these large models to inspire a new generation of generative AI applications. And just as a handful of killer apps emerged after mobile computing hit its inflexion point a decade ago, we anticipate killer apps emerging for generative AI. The race is on.