Meta Launches Llama 4 AI Models in Bid to Regain Edge in Open AI Race

TEHRAN (Tasnim) – Meta released a new generation of artificial intelligence models over the weekend, introducing the Llama 4 suite as it seeks to compete with top players like OpenAI and Google.

Meta unveiled three new AI models — Scout, Maverick, and Behemoth — in a surprise announcement on Saturday, April 5.

The release marks a major step forward in Meta’s open-source AI ambitions, with the models designed to handle tasks ranging from document summarization to multimodal reasoning across text, images, and video.

The Llama 4 models are built on a “mixture of experts” (MoE) architecture, which improves efficiency by routing each input to a small subset of specialized sub-networks (“experts”) instead of running the entire model for every token.
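As a rough illustration of the idea (a minimal, hypothetical sketch, not Meta’s implementation; the layer sizes and gating scheme here are assumptions for demonstration only):

```python
# Toy mixture-of-experts routing: a learned gate scores the experts and
# only the top-scoring expert(s) process each token, so most weights stay idle.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ToyMoELayer:
    def __init__(self, dim, num_experts=4, top_k=1):
        # One small weight matrix per "expert" plus a gating matrix that scores them.
        self.experts = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(num_experts)]
        self.gate = rng.standard_normal((dim, num_experts)) * 0.1
        self.top_k = top_k

    def forward(self, token_vec):
        scores = softmax(token_vec @ self.gate)    # router scores, one per expert
        chosen = np.argsort(scores)[-self.top_k:]  # keep only the top-k experts
        out = np.zeros_like(token_vec)
        for idx in chosen:                         # only the chosen experts run
            out += scores[idx] * (token_vec @ self.experts[idx])
        return out

layer = ToyMoELayer(dim=8)
print(layer.forward(rng.standard_normal(8)))
```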

Meta claims its flagship model, Maverick, surpasses OpenAI’s GPT-4o and Google’s Gemini 2.0 in several benchmarks related to coding, reasoning, and image interpretation.

However, it falls short when compared to OpenAI’s GPT-4.5 and Google’s Gemini 2.5 Pro, according to TechCrunch.

Scout and Maverick are now available on Meta’s website and through partners such as Hugging Face, though with limitations on use.

Meta has restricted access to the models for users and developers based in the European Union, citing the region’s strict AI regulations and privacy laws.

The company has previously criticized the EU’s regulatory stance, describing it as overly restrictive and harmful to innovation.

The launch comes amid heightened competition in the open-source AI sector, particularly following the rapid progress of Chinese AI lab DeepSeek.

DeepSeek’s models, notably R1 and V3, have matched or surpassed the performance of Meta’s previous flagship Llama models, prompting the company to accelerate development of Llama 4.

In response, Meta reportedly established internal “war rooms” to analyze and replicate DeepSeek’s efficiency gains.

Among the new releases, Scout is the most lightweight, featuring 17 billion active parameters and a context window of 10 million tokens.

This makes Scout well-suited for handling long documents and codebases, with potential applications in academia, legal work, and enterprise data analysis.

It is also optimized to run on a single Nvidia H100 GPU, enabling smaller-scale deployments.

Maverick, which includes 400 billion parameters (with 17 billion active across 128 experts), is designed for general-purpose AI tasks such as language comprehension and creative writing.
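Using only the figures reported above, the gap between total and active parameters shows why MoE models can be cheaper to run per token than their headline size suggests (illustrative arithmetic only; the per-expert breakdown is not part of Meta’s published figures):

```python
# Maverick's reported sizes: only a small share of the model runs for any single token.
total_params = 400e9   # total parameters reported for Maverick
active_params = 17e9   # parameters reported active per token
num_experts = 128

print(f"active share per token: {active_params / total_params:.1%}")  # about 4.3%
```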

Running Maverick requires enterprise-grade computing infrastructure, including Nvidia’s DGX systems.

Behemoth, the third model, is still in training.

According to Meta, the model is expected to outperform competitors on STEM-related benchmarks.

Behemoth includes 288 billion active parameters and nearly two trillion in total, making it one of the largest publicly described AI models to date.

Preliminary testing suggests it may surpass GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro in solving advanced mathematical and scientific problems.

However, Gemini 2.5 Pro reportedly maintains an edge in several key areas.
