The xGen-Sales model, which is based on the company’s open source APIGen and its family of large action models (LAMs), will aid developers and enterprises in automating actions taken by AI agents, analysts say.

Customer relationship management (CRM) software provider Salesforce’s AI research team has come out with a new large language model and two new open source products that will help developers and enterprises not only build agents but also automate the actions those agents take. The new large language model (LLM), dubbed xGen-Sales, is built on a pair of new products, APIGen and xLAM, both of which have been released on Hugging Face.

xGen-Sales, according to the company, is expected to be made available soon via the company’s Einstein 1 Agentforce Platform, a low-code/no-code platform for building autonomous AI agents that Salesforce co-founder, Chairman, and CEO Marc Benioff is expected to discuss in his Dreamforce 2024 keynote this month.

“By fine-tuning xGen-Sales to increase accuracy for relevant industry tasks, enterprises can expect the new model to deliver more precise and rapid responses, automating sales tasks such as generating customer insights, enriching contact lists, summarizing calls, and tracking the sales pipeline,” the company said in a statement.

Last month, Salesforce introduced two new AI agents, Einstein Sales Development Rep (SDR) Agent and Einstein Sales Coach Agent, that push beyond the Einstein copilot to augment sales operations, autonomously executing key tasks. Both AI agents, which are part of the Sales Cloud, were developed on the Agentforce platform. Just a few days before the announcement, the CRM software provider said that it was considering a new pricing model for AI agent conversations.
Decoding Salesforce’s “open source” APIGen, xLAM for enterprises

The two other new products, APIGen and the xLAM family of large action models, have been released under the Creative Commons Attribution Non Commercial 4.0 International license, said Shelby Heinecke, senior AI research manager at Salesforce. The license, as its name suggests, means these products are technically not fully open source, contrary to Salesforce’s claims, as it does not allow commercial usage. They are more akin to the open-weight LLMs released by the likes of Meta and Mistral, which reveal their weights but put caveats on usage.

However, several experts note that the Open Source Initiative has yet to settle on a definition of an open source LLM. One notable development in August was The Linux Foundation taking the Open Model Initiative (OMI) under its wing to develop community-based LLMs and guidelines for interoperability between open source models.

The combined usage of these products, according to Heinecke, will find real-life use cases across applications that run AI agents on portable devices, as well as in automating workflows or agent actions in larger environments with supporting infrastructure.

What is Salesforce’s APIGen?

Salesforce’s AI research team defines APIGen as an automated data generation pipeline designed to produce verifiable, high-quality datasets for function calling applications. Function calling, according to Abhishek Mundra, practice director at Everest Group, is an AI development technique that helps LLMs connect with the external tools and APIs needed to execute a user request. Typically, an LLM such as GPT-4 is fine-tuned to understand when a function needs to be called; it then generates a JSON document specifying the function to invoke and the arguments to pass to it.
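To make this concrete, here is roughly what such an exchange looks like in practice. The tool schema and output format below follow the widely used OpenAI-style convention rather than anything Salesforce has published, and the function name is invented for illustration:

```python
import json

# A hypothetical tool definition in the common OpenAI-style schema
# (the exact format varies by provider; this is illustrative only).
tool = {
    "name": "get_order_status",
    "description": "Look up the shipping status of a customer order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "The order identifier."}
        },
        "required": ["order_id"],
    },
}

# What a fine-tuned model typically emits when it decides the function
# should be called: a JSON document naming the function and its arguments.
model_output = '{"name": "get_order_status", "arguments": {"order_id": "A-1042"}}'

call = json.loads(model_output)
print(call["name"], call["arguments"]["order_id"])  # get_order_status A-1042
```

The application, not the model, then executes the named function and feeds the result back, which is why datasets teaching models when and how to emit these documents matter so much.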
Function calling is generally used in LLM-powered chatbots or agents, such as Rabbit AI, that need to use external tools to answer questions. Other use cases include having an LLM extract and tag data automatically, converting natural language into API calls or database queries, and serving as a conversational knowledge retrieval engine that talks to a knowledge repository.

Salesforce’s rationale for releasing APIGen, according to Dion Hinchcliffe, vice president of the CIO practice at The Futurum Group, is a belief in the industry that there is a lack of sufficiently high-quality, domain-specific function calling datasets.

“The creation and open sourcing of APIGen highlights the challenges in sourcing or generating the necessary data to effectively train models for function calling tasks,” Hinchcliffe said, adding that function calling datasets are crucial because they provide the specific types of interactions that models need to learn in order to execute tasks accurately and autonomously.

Hinchcliffe also pointed out that, because function calling is a specialized and relatively new focus area within AI development, existing datasets are limited in scope, diversity, or quality, and are not specific to a given enterprise’s operations.

“This scarcity can hinder the training and performance of models that rely heavily on understanding and executing function calls in the context of an existing business,” Hinchcliffe said, adding that APIGen can help fill the void by generating synthetic datasets tailored to the contextual needs of an enterprise.

What is Salesforce’s xLAM family of models?

Salesforce’s xLAM is a family of large action models of varied sizes — Tiny (xLAM-1B), Small (xLAM-7B), Medium (xLAM-8x7B), and Large (xLAM-8x22B) — released to fit different latency and infrastructure requirements, the company said.
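Before looking at the models themselves, it is worth pausing on what makes a function calling dataset “verifiable” in APIGen’s framing: each generated call can be checked mechanically before it enters a training set, for instance by parsing it for well-formedness and executing it against a reference implementation. The sketch below illustrates that idea only; the function names are hypothetical and this is not Salesforce’s pipeline:

```python
import json

def format_check(raw: str):
    """Check 1: is the generated call valid JSON with the expected fields?"""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(call, dict) or "name" not in call or "arguments" not in call:
        return None
    return call

def execution_check(call, registry) -> bool:
    """Check 2: does the call actually run against a reference implementation?"""
    if call is None:
        return False
    fn = registry.get(call["name"])
    if fn is None:
        return False
    try:
        fn(**call["arguments"])
        return True
    except Exception:
        return False

# A toy reference API for the dataset to target.
registry = {"add": lambda a, b: a + b}

good = '{"name": "add", "arguments": {"a": 1, "b": 2}}'
bad = '{"name": "add", "arguments": {"a": 1}}'  # missing a required argument

print(execution_check(format_check(good), registry))  # True
print(execution_check(format_check(bad), registry))   # False
```

Only calls that survive such filters would be kept, which is how a synthetic pipeline can still yield high-quality training data.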
The larger xLAM models are built on the mixture of experts (MoE) architecture, which combines several smaller models, each an expert in a certain area or field, into one larger model. While LLMs typically generate content, LAMs are used to invoke software functions via function calling in order to complete actions or tasks in the real world.

The Tiny (xLAM-1B) LAM, according to the company, features 1 billion parameters and is most suitable for on-device applications where larger models are impractical. The Small (xLAM-7B) LAM is designed for swift academic exploration with limited GPU resources, Salesforce’s Heinecke said, adding that the Medium (xLAM-8x7B) LAM is ideal for industrial applications striving for a balanced combination of latency, resource consumption, and performance. The largest model, xLAM-8x22B, allows enterprises with sufficient computational resources to achieve optimal performance, the senior AI research manager said.

In addition, Salesforce said that the xLAM family of models was tested on the Berkeley Function Calling Leaderboard (BFCL), an evaluation framework for assessing LLMs’ function calling capabilities across programming languages such as Java, JavaScript, and Python and across various application domains. The smaller versions of the model, in combination with the synthetic data generated via APIGen, performed better than GPT-4 and were second only to Claude 3.5 Sonnet, both of which are considerably larger models.

However, analysts pointed out that, with the exception of Salesforce, larger or key LLM providers such as Google, OpenAI, Meta, and Anthropic are all working on agent-based models or agent-based features in their existing models. “For instance, Google’s Task-Oriented Dialogue systems and OpenAI’s Codex models are examples of LAMs tailored for specific functions like customer service or code generation, respectively,” said Cameron Marsh, senior analyst at Nucleus Research.
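The MoE design behind the larger xLAM variants can be illustrated with a toy routing function: a gate scores every expert, and each input is handled by a weighted mix of only the top-scoring ones, so most of the model stays idle per token. Everything below is a simplified sketch of the mechanism, not xLAM’s actual implementation; in a real model the gate is learned and each expert is a full feed-forward network rather than a scalar function:

```python
import math

def softmax(xs):
    """Turn raw gate scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_scores, top_k=2):
    """Route input x to the top-k experts and mix their outputs
    by the gate's renormalized weights -- the core MoE idea."""
    weights = softmax(gate_scores)
    ranked = sorted(range(len(experts)), key=lambda i: weights[i], reverse=True)[:top_k]
    total = sum(weights[i] for i in ranked)
    return sum(weights[i] / total * experts[i](x) for i in ranked)

# Toy "experts": each is just a scalar function here.
experts = [lambda x: 2 * x, lambda x: x + 10, lambda x: -x]
gate_scores = [2.0, 1.0, -1.0]  # produced by a learned router in a real model

print(moe_forward(3.0, experts, gate_scores))
```

An “8x7B” name in this scheme means eight 7B-parameter experts sharing one router, which is why such a model can deliver large-model quality at a fraction of the per-token compute.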
Marsh also pointed out that Salesforce’s xLAM family of models, when compared to other large action models, stands out due to its balance of cost-effectiveness and performance.

“The xLAM-1B, for example, offers comparatively good performance at a lower cost and resource consumption than is typically associated with larger models. This makes it particularly attractive for enterprises that need efficient, scalable AI solutions without the overhead of massive computational requirements,” Marsh said. “While other models may excel in specific niches or under certain conditions, xLAM’s versatility and resource efficiency give it a distinct advantage in broader enterprise applications.”

Smaller but well-known examples of action-based AIs include AgentGPT and SuperAGI, Hinchcliffe noted.

Why is a combination of these products important for developers?

The combination of APIGen and the xLAM family of models is important for developers, analysts said. Developers, according to Bradley Shimmin, chief analyst at Omdia, are keen to feed JSON data into LLMs and receive JSON data back from them to support their software development.

“For example, you might have a game developer using an LLM to generate game character descriptions or parameters on the fly. For that to work, the LLM simply ‘must’ generate that information reliably in valid JSON format. Right now, developers have to rely on third-party tools like Pydantic, Zod, LangChain, etc. to do things like iteratively run the same prompt until the inference returns data that’s usable, which is not a great solution,” Shimmin said.

What developers want, the analyst explained, is both in-model function calling, where the model’s syntax incorporates this functionality, and a model that itself knows how to work with APIs and formatting languages like JSON properly, citing the Phi-based model NuExtract as an example.

Why is Salesforce ‘open sourcing’ these products?
Analysts believe that Salesforce is releasing these products as “open source” in order to gain market share for capabilities such as agentic actions.

“By making this tool available to developers and researchers, Salesforce aims to accelerate the development and refinement of function calling models, potentially leading to more robust and reliable AI applications,” Hinchcliffe said, adding that the open-sourcing of APIGen can also lead to more specialized and efficient AI solutions that are better tailored to the unique needs of various industries, enhancing business operations and customer experiences.

From a broader perspective, Omdia’s Shimmin pointed out that the release of these products is a critical step for Salesforce and its rivals, such as Oracle, SAP, and any other firm building line-of-business software. “For these players, the ability to integrate AI into business workflows, both to augment and to automate processes, will drive a great deal of revenue going forward as enterprise customers push for more and more optimization, such as productivity gains and cost reduction,” Shimmin explained.

These products might not have an immediate impact on existing Salesforce customers and developers, Hinchcliffe pointed out, but once the community helps expand the function calling datasets, the models may find their way into several existing Salesforce products, such as the Commerce Cloud.