
What Is The Future Of Generative AI?

2022 was the year Generative AI entered the mainstream. Although the underlying technology has existed in some form for years, it has only recently become powerful enough to consistently produce work that resembles that of human creators.


DALL-E can turn text prompts into precisely rendered artworks. ChatGPT mimics human conversation, and can write code, poetry, and prose. 


While those two are probably the most famous, similar tools such as Midjourney and Stable Diffusion have also been released, and Microsoft and Meta are currently working on their own models.


It is difficult to overstate how impressive these new tools are, and they are likely to disrupt many industries.


But to make the best use of a new technology, we need to balance optimism with realism. Some of the current excitement about generative AI is just hype; some of it is justified.

Hype Cycles

The American technology research firm Gartner has a model of technological adoption called the Gartner Hype Cycle.


The Gartner Hype Cycle has five phases:
  • Technological Trigger
  • Peak of Inflated Expectations
  • Trough of Disillusionment
  • Slope of Enlightenment
  • Plateau of Productivity 

When a powerful new technology is announced, this is a Technological Trigger for media and investment attention. Hype grows, a bubble may form, and we reach the Peak of Inflated Expectations. Eventually, these expectations fail to materialize, and we reach the Trough of Disillusionment. Years pass, people find valid use cases for the technology, and we climb the Slope of Enlightenment. Finally, the technology reaches its Plateau of Productivity: now we know what it is actually good for, and keep on using it for that.


While this is by no means a perfect model, it does a good job of describing the adoption of many different technologies. 


Remember the hype about drone delivery about a decade ago? It failed to live up to early promises of mass-market delivery, but is now settling into a valuable niche: transporting crucial healthcare products to remote rural locations.


Applying the Gartner Hype Cycle to current AI trends is tricky, because there are a couple of different points that might be considered the “Technological Trigger”.


First, there is the deep learning breakthrough of 2012, when deep neural networks first decisively won the ImageNet image-recognition competition. Deep Learning is still the dominant paradigm in AI, and improvements since then have mostly been incremental.


But a decade of incremental improvements adds up. DALL-E and ChatGPT could count as technological triggers in their own right.


Either way, it’s pretty clear we are nowhere near the Plateau of Productivity. We still don’t know exactly what generative AI will be used for in the long term. And, for better or for worse, we haven’t yet hit the Trough of Disillusionment.

AI Winter

The lifecycle of AI technology has been studied so much that it has its own name for the Trough of Disillusionment: an AI Winter. The last AI Winter happened in the late 1980s.


Could we be headed into a new AI Winter? The Gartner Hype Cycle suggests that, yes, at some point we will be. What goes up must come down. 

It could be a relatively mild winter. The new generative AI programs are incredible, and they will likely find some important use cases. But, as with any new technology, there are limitations.

Limitations of Deep Learning

Deep Learning is the current paradigm for AI. It is based on neural networks trained on large volumes of data using powerful processors, typically GPUs (a minimal code sketch of this recipe follows the list below). This paradigm has some limitations:
  • Data quantity and quality
  • Imitative nature
  • Inability to check reality
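
To make that recipe concrete, here is a minimal sketch of what “a neural network trained on data” looks like in code, written in Python with PyTorch. It is purely illustrative: the data here is random noise, and real generative models run the same basic loop with billions of parameters and vastly larger datasets.

```python
import torch
import torch.nn as nn

# Toy "dataset": 256 examples with 10 features each, plus a target value.
# Real models train on billions of text or image examples.
X = torch.randn(256, 10)
y = torch.randn(256, 1)

# A tiny feed-forward neural network.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# The training loop: predict, measure the error, adjust the weights, repeat.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```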

Data Quantity and Quality

The single biggest factor in the quality of a Deep Learning model is the data it trains on. More data is better, and in general adding more data is more effective than finding a smarter algorithm.


But a lot of data is poor quality, unstructured, or biased. Generative AI programs tend to pick up the biases inherent in their training data, which is why researchers have found that some systems produce racist or misogynistic output.
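
Because of this, a large part of building a model is cleaning the training corpus. The sketch below is a simplified, hypothetical example of that kind of filtering, removing duplicates and very short documents; real pipelines add language detection, toxicity filters, quality classifiers, and much more.

```python
def clean_corpus(documents: list[str], min_words: int = 20) -> list[str]:
    """Naive corpus filtering: drop near-empty documents and exact duplicates."""
    seen = set()
    cleaned = []
    for doc in documents:
        normalized = " ".join(doc.split()).lower()
        if len(normalized.split()) < min_words:
            continue  # too short to carry much signal
        if normalized in seen:
            continue  # an exact duplicate adds no new information
        seen.add(normalized)
        cleaned.append(doc)
    return cleaned
```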


Even more constraining is the fact that the quantity of data AI researchers have to work with is finite. It might seem as though humanity produces a limitless amount of information, but keep in mind that GPT-3 was trained on roughly 10% of the visible internet. Some researchers predict that, at this rate, we could run out of usable data for AI language models within just a few years.

Imitative nature

DALL-E and ChatGPT are remarkable tools, but they work by imitating human creations rather than coming up with ideas of their own. Essentially, they are optimized for plausibility.

ChatGPT produces prose that can pass a Turing Test because it assembles words in a way that looks like a human wrote it. But that doesn’t mean it can come up with genuinely original thoughts.
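
A toy example makes the point. In the sketch below, a stand-in “model” picks the next word by sampling from a plausibility distribution; the words and probabilities are invented for illustration, and nothing in the process checks whether the resulting sentence is true.

```python
import random

def next_word(context: str) -> str:
    # A real model would compute these probabilities from billions of learned
    # parameters; here they are hard-coded (and the context is ignored) to
    # make the point.
    candidates = {"Paris": 0.6, "Lyon": 0.25, "Mars": 0.15}
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

print("The capital of France is", next_word("The capital of France is"))
# The output is usually correct, but only because the right answer happens to
# be the most plausible continuation -- not because it was verified.
```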

This, unfortunately, leads into what is perhaps the biggest problem: lack of a connection to reality.

Inability to check reality

Obviously, the pictures that DALL-E produces aren’t real. And OpenAI will be the first to tell you that you should be skeptical about anything ChatGPT says. 


The reason is that these programs optimize for plausibility, not for truth. Everything ChatGPT says sounds like it could be true, but the model has no connection to reality against which to check its claims. Instead, it relies on its own vast collection of word associations, learned from text written years ago.

This means that it is now easier than ever to produce false or misleading content. Since generative AI programs don’t back up their claims with references, it will become more and more important for human content producers to document and cite their sources.

Generative AI and Web Development

ChatGPT is flexible enough to write code. It has repeatedly been shown to be capable of writing basic apps and web pages, or at least of showing people how to write them.
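
For instance, a developer can request markup through the OpenAI API. The sketch below is an illustration rather than a recipe: the model name and prompt are assumptions, and whatever comes back still needs human review.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is current
    messages=[
        {"role": "user",
         "content": "Write a minimal HTML landing page for a bakery."},
    ],
)

print(response.choices[0].message.content)  # the generated HTML
```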

Although ChatGPT has some understanding of HTML and related technologies, it knows nothing about your business and cannot align its goals with yours. In fact, it doesn’t really even have goals.


If you want to build a complex product or platform that satisfies the specific needs of your organization, you need real human developers. Seasoned developers can comprehend the intangible needs of your organization and work with you to create tailored solutions that meet your specific requirements. They can also build long-lasting relationships with clients to ensure that the code is updated and maintained over time.

At JetRockets, we recognize the importance of aligning our goals with our clients' objectives, and we are committed to delivering the tools and resources that they require to succeed.
