What can we expect of next-generation generative AI models?

(Credit: Unsplash)

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Andrea Willige, Senior Writer, Forum Agenda


  • A new generation of generative AI models will soon be released, including new versions of OpenAI’s ChatGPT and Meta’s open-source Llama.
  • Developers are focused on optimizing and expanding their capabilities, including reducing bias and errors, enabling reasoning and planning, and addressing ethical challenges.
  • The World Economic Forum’s Presidio AI Framework aims to improve generative AI governance, establishing early guardrails as both AI engines and their applications evolve.

To say that generative AI has taken the world by storm would be understating the veritable avalanche set loose when OpenAI released ChatGPT in late 2022. Now its latest upgrade, GPT-5, is on the way, and competitor Meta is following suit with an upgrade to its open-source Llama AI engine, the Financial Times reports.

Generative AI continues to disrupt and transform

The convergence of AI methodologies, high-performance data processing and cloud computing made what had long been anticipated, by both science and science fiction, a reality. Needless to say, it has been fundamentally transforming how we work and live – with no end in sight.

According to Statista, the generative AI market grew from only $14 billion in 2020 to $900 billion in 2023, and is expected to reach $1.3 trillion by 2032.

As the World Economic Forum points out in its Top 10 Emerging Technologies of 2023 report, the generative AI models that have dominated the headlines over the past year or so are mainly focused on text, programming, images and sound. However, their application could widen over time as the technology progresses.

ChatGPT, Llama, Google’s latest AI offering, Gemini, and Microsoft’s Copilot are all large language models (LLMs). These are algorithms that can analyze, summarize, predict and generate new content using deep learning techniques and large data sets. LLMs are typically trained using around one billion or more parameters, though there is no consensus as to how much data is needed to train an LLM.
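The "predict and generate" step at the heart of an LLM can be illustrated with a deliberately tiny sketch: a word-level bigram counter that learns which word tends to follow which, then generates text one token at a time. This is an illustrative toy, not how the billion-parameter networks above actually work – they replace the frequency table with a deep neural network – but the autoregressive loop is the same idea.

```python
# Toy sketch of autoregressive next-token prediction, the core loop
# behind LLM text generation. Real models learn billions of parameters;
# here the "model" is just a table of word -> next-word counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which in the corpus.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return transitions[word].most_common(1)[0][0]

def generate(start: str, length: int) -> str:
    """Generate text by repeatedly feeding the model its own output."""
    tokens = [start]
    for _ in range(length):
        tokens.append(predict_next(tokens[-1]))
    return " ".join(tokens)
```

Here `predict_next("the")` returns "cat", since "cat" follows "the" most often in the toy corpus; an LLM does the same thing with learned probabilities over a vocabulary of tens of thousands of tokens.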

LLMs get an upgrade

Despite the advantages associated with generative AI – simplifying, speeding up and automating previously manual work – its weak points have also become apparent. Not only are there ethical and security concerns with AI’s application, but as TechTarget points out, there are issues such as bias – which may be hard to identify and remove – and hallucinations – when an AI engine confidently generates an incorrect answer.

An investigation by the Washington Post last year showed that an AI image generator defaulted to outdated Western stereotypes when asked to produce images of attractive people or houses around the world.

Continuous training and evolving LLM algorithms are key to addressing these issues and to advancing generative AI applications.

Alongside reducing bias and hallucinations, the developers’ ambition for both GPT-5 and Llama 3 is to take the engines beyond simple chatbots. To widen the scope of applications to more complex tasks, it will be crucial to enable LLMs to reason, plan and retain information. They will also have to learn to gauge the effects of their actions, the Financial Times points out.

Another development area for both Llama and ChatGPT is multimodality, allowing AI to process not just text but speech, images, code and videos. Moreover, greater levels of personalization are expected to be part of the next-generation offering.

Addressing ethical issues in generative AI

GPT-5 is understood to be undergoing rigorous testing and training, with a strong focus on safety protocols to address ethical concerns. However, in trying to eliminate bias and errors, generative AI companies can inadvertently achieve the opposite, as Google found to its detriment.

When originally launched, the image-generation module of its Gemini platform depicted historically white groups, such as the US Founding Fathers or 1930s German soldiers, as people of colour, and produced female players when prompted for images of a well-known hockey league that is exclusively male. In trying to weed out bias, Google had unintentionally overcorrected its AI engine.

However, such fine-tuning will be a constant feature as generative AI engines continue to evolve. And as the use of AI expands and organizations create their own LLMs or adapt the major platforms for their purposes, the need for solid governance frameworks remains a topical issue. For example, the World Economic Forum has proposed the Presidio AI Framework, which promotes safety, ethics and innovation with early guardrails to guide development.
