It is important to acknowledge that Generative AI is not a completely new technology. The concept was first explored in the 1960s with the development of early chatbots. However, it wasn't until the arrival of Generative Adversarial Networks (GANs) in 2014, a specific type of machine learning architecture, that Generative AI gained the ability to create convincingly realistic images, videos, and audio of real people.
What is Generative AI?
Generative AI is a cutting-edge form of AI that can create diverse content formats, such as text, imagery, audio, and synthetic data. The current surge of interest in generative AI can be attributed to user-friendly interfaces that make it possible to generate high-quality text, graphics, and videos in mere seconds.
How does Generative AI work?
The classic GAN approach to generative AI involves training two neural networks: a generator and a discriminator. The generator creates content, while the discriminator judges whether that content looks real or fake, and the two networks improve by competing against each other.
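The generator-versus-discriminator dynamic can be sketched with a deliberately tiny example. This is not how production GANs are built (those use deep networks and frameworks like PyTorch); it is a minimal toy, assuming a one-dimensional "dataset" drawn from a Gaussian, a generator that is a single linear map, and a discriminator that is plain logistic regression. It exists only to make the alternating update loop concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from N(4, 1). The generator must learn to mimic this.
def real_samples(n):
    return rng.normal(4.0, 1.0, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy generator: maps noise z ~ N(0, 1) to fake samples via fake = w*z + b.
w, b = 1.0, 0.0
# Toy discriminator: score sigmoid(a*x + c); high score means "looks real".
a, c = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    # --- Discriminator update: push real scores toward 1, fake toward 0 ---
    z = rng.normal(size=batch)
    fake = w * z + b
    real = real_samples(batch)
    err_real = sigmoid(a * real + c) - 1.0   # label 1 for real data
    err_fake = sigmoid(a * fake + c) - 0.0   # label 0 for fake data
    a -= lr * np.mean(err_real * real + err_fake * fake)
    c -= lr * np.mean(err_real + err_fake)

    # --- Generator update: try to fool the discriminator ---
    z = rng.normal(size=batch)
    fake = w * z + b
    err = sigmoid(a * fake + c) - 1.0        # want the score to be "real"
    # chain rule through fake = w*z + b
    w -= lr * np.mean(err * a * z)
    b -= lr * np.mean(err * a)

# After training, the generator's offset b should have drifted toward the
# real data's mean (around 4), because that is what fools the discriminator.
print(round(b, 1))
```

The point of the sketch is the loop structure: each iteration first sharpens the judge, then nudges the forger, which is exactly the adversarial training the paragraph describes, just with single-parameter "networks".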
Generative AI models have achieved a significant breakthrough by making use of learning techniques such as unsupervised and semi-supervised learning during the training phase. This has enabled organizations to quickly and easily create foundation models from large volumes of unlabeled data. These foundation models act as the base for AI systems that can perform multiple tasks; GPT-3 and Stable Diffusion are two examples. For instance, popular applications like ChatGPT, which is built on the GPT-3 family of models, can generate essays from short text prompts. Similarly, Stable Diffusion can create photorealistic images from textual inputs.
What does it take to build a generative AI model?
Creating a generative AI model has typically been a complex and demanding task, which is why only a handful of large tech companies with ample resources have taken on the challenge. For instance, OpenAI, the company behind ChatGPT, the earlier GPT models, and DALL-E, has raised billions of dollars in funding from prominent investors.
Training a generative AI model on a vast amount of internet data comes at a significant cost. While OpenAI hasn't disclosed exact figures, publicly available estimates suggest GPT-3 was trained on approximately 45 terabytes of text data, roughly equivalent to one million feet of bookshelf space, or about a quarter of the entire Library of Congress. Training at this scale is estimated to cost several million dollars, an expense many startups cannot afford.
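The bookshelf analogy is easy to sanity-check as back-of-envelope arithmetic. The conversion factor below (about 45 MB of plain text per shelf-foot of books) is an assumption chosen to illustrate the calculation, not a figure from the source:

```python
# Back-of-envelope check of "45 TB of text ~ one million feet of bookshelf".
dataset_tb = 45
mb_per_shelf_foot = 45            # assumed: ~45 MB of plain text per shelf-foot
dataset_mb = dataset_tb * 1_000_000   # 1 TB = 1,000,000 MB (decimal units)
shelf_feet = dataset_mb / mb_per_shelf_foot
print(f"{shelf_feet:,.0f} shelf-feet")  # prints "1,000,000 shelf-feet"
```

Any per-foot estimate in the tens of megabytes lands in the same order of magnitude, which is why figures like "a million feet of shelf space" show up in discussions of GPT-3's training data.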
Conclusion
In this blog post, we mainly talked about:
- What Generative AI is and how it came into existence
- How generative AI works
- What it takes to build a generative AI model