The new model’s smaller footprint and higher precision may attract enterprise customers even as licensing remains a concern.

Mistral AI has launched a 123-billion-parameter large language model (LLM) called Mistral Large 2 (ML2), strengthening its position as a significant competitor to OpenAI, Anthropic, and Meta.

In a statement, the company said that ML2 has a 128K context window and supports dozens of languages, including French, German, Spanish, Arabic, Chinese, Japanese, and Korean. It also supports more than 80 programming languages, including Python, Java, C, C++, JavaScript, and Bash.

The announcement follows Meta’s unveiling of the Llama 3.1 family of LLMs, which includes its most advanced model, 405B. Meta claims its models also feature a 128K context length and support eight languages. Last week, OpenAI released GPT-4o mini, its most affordable small AI model.

Mistral AI said that benchmarking shows ML2 performs on par with leading models such as GPT-4o, Claude 3 Opus, and Llama 3 405B in areas like coding and reasoning. On the popular MMLU benchmark, ML2 scored 84%, while Llama 3.1 405B scored 88.6%, GPT-4o scored 88.7%, and GPT-4o mini scored 82%.

Mistral AI models are available on Vertex AI, Azure AI Studio, Amazon Bedrock, and IBM watsonx.ai, the company said.

Key attractions for enterprises

Analysts point out that the AI battle has shifted to conversational and multimodal models, with each vendor striving to excel in complex mathematics, advanced reasoning, and efficient code generation. According to Neil Shah, partner and co-founder at Counterpoint Research, key AI players like Mistral AI are focusing on minimizing hallucinations, enhancing reasoning capabilities, and optimizing the performance-to-size ratio of their models.

“This is where Mistral Large 2 excels in terms of packing more performance per size, requiring just 246GB of memory at full 16-bit precision during training,” Shah said.
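The 246GB figure follows directly from the parameter count: at 16-bit precision each parameter occupies two bytes, so 123 billion parameters work out to roughly 246GB of weights. A back-of-the-envelope sketch of that arithmetic (weights only; activations, KV cache, and optimizer state would add to this):

```python
# Back-of-the-envelope memory footprint of model weights at 16-bit precision.
params = 123e9          # Mistral Large 2's 123 billion parameters
bytes_per_param = 2     # 16-bit (fp16/bf16) precision = 2 bytes per parameter

footprint_gb = params * bytes_per_param / 1e9
print(f"{footprint_gb:.0f} GB")  # prints "246 GB"
```

The same calculation shows why lower-precision quantization matters for deployment: at 8-bit the footprint halves to about 123GB, and at 4-bit it drops to roughly 62GB.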
“Mistral Large 2’s smaller footprint compared to the competition, while maintaining higher precision, is advantageous for enterprises. It allows them to produce more accurate and concise contextual responses faster than other larger models, which require more memory and computing.”

Moreover, enterprises that rely heavily on Java, TypeScript, or C++ will benefit from the superior code-generation performance and accuracy that Mistral’s benchmarks claim, Shah added.

Open-source models like Mistral’s can also enable users to create specialized LLMs tailored to specific industries or regions, according to Faisal Kawoosa, chief analyst at Techarc.

“Eventually, these kinds of specialized LLMs will emerge over time,” Kawoosa said. “While generative AI is useful, in many cases a specialized understanding of the domain is necessary, which can only come from creating such LLMs. Therefore, it is crucial to have an open-source platform that not only provides LLMs to use AI models but also allows for tweaking and further development to create those very specific platforms.”

Charlie Dai, VP and principal analyst at Forrester, noted that ML2’s advanced capabilities in code generation, mathematics, and reasoning, its performance and cost efficiency (it is designed to run efficiently on a single H100 node), and its multilingual support and availability on major cloud platforms will significantly enhance its competitiveness for enterprise clients’ AI initiatives.

Licensing and other concerns

A potential concern for users is that Mistral is releasing ML2 under the Mistral Research License, which allows usage and modification only for research and non-commercial purposes. Commercial use that requires self-deployment calls for a separate Mistral Commercial License from the company.
“Since Mistral AI must have incurred significant data and training costs for Large 2, they have rightly reduced the scope for commercial usage without a license, requiring a strict commercial license, which drives up the pricing and could be an inhibitor,” Shah said. “This may be a deal breaker in certain areas like emerging markets.”

Prabhu Ram, VP of the Industry Research Group at Cybermedia Research, added that while Mistral AI has shown promise and potential, concerns persist around data transparency, model interpretability, and the risk of bias, which remain critical areas for improvement.