IBM reports that enterprise customers are adopting a multi-model AI strategy, using different large language models (LLMs) for distinct tasks instead of relying on a single provider. Armand Ruiz, IBM’s VP of AI Platform, explained at VB Transform 2025 that companies might use Anthropic’s models for coding, other providers’ models for reasoning, and IBM’s own Granite series for customized workflows.
To support this, IBM launched a model gateway—a single API that lets enterprises switch between LLMs while enforcing governance and monitoring. Sensitive workloads can run on open-source models on-premises, while less critical tasks call public APIs such as Amazon Bedrock or Google Cloud’s Gemini.
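The article does not detail the gateway’s interface, but the idea—one entry point that routes each task to an approved model and records an audit trail—can be sketched as follows. All names (`ModelGateway`, `Backend`, the registered models) are illustrative assumptions, not IBM’s actual API:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Hypothetical stand-in for a real model backend; fields are illustrative.
@dataclass
class Backend:
    name: str
    on_premises: bool  # sensitive workloads must stay on-prem

@dataclass
class ModelGateway:
    """Single entry point that routes a request to a task-specific model
    and records an audit entry for governance and monitoring."""
    routes: Dict[str, Backend] = field(default_factory=dict)
    audit_log: List[Tuple[str, str, bool]] = field(default_factory=list)

    def register(self, task: str, backend: Backend) -> None:
        self.routes[task] = backend

    def complete(self, task: str, prompt: str, sensitive: bool = False) -> str:
        backend = self.routes[task]
        # Governance check: block sensitive data from leaving the premises.
        if sensitive and not backend.on_premises:
            raise PermissionError(f"{backend.name} not approved for sensitive data")
        self.audit_log.append((task, backend.name, sensitive))
        return f"[{backend.name}] response to: {prompt}"

gateway = ModelGateway()
gateway.register("coding", Backend("claude", on_premises=False))
gateway.register("custom-workflow", Backend("granite-on-prem", on_premises=True))

print(gateway.complete("coding", "write a sort function"))
print(gateway.complete("custom-workflow", "summarize HR policy", sensitive=True))
```

Because callers only ever see `complete()`, swapping the model behind a task is a one-line re-registration rather than an application change—the flexibility Ruiz describes.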
IBM is also developing agent communication standards, including its Agent Communication Protocol (ACP), contributed to the Linux Foundation, which competes with Google’s Agent2Agent protocol. These standards matter because some enterprises are piloting over 100 AI agents, and a shared protocol reduces the need for costly custom integrations between them.
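To see why a shared protocol replaces pairwise custom integrations, consider what such standards typically fix: a common message envelope every agent can parse. The schema below is purely illustrative—it is not the actual ACP or Agent2Agent wire format, and the field names and version tag are assumptions:

```python
import json

def make_message(sender: str, recipient: str, task: str, payload: dict) -> str:
    """Build a generic agent-to-agent message envelope (illustrative only;
    NOT the real ACP or Agent2Agent schema)."""
    envelope = {
        "protocol": "example-agent-protocol/0.1",  # hypothetical version tag
        "sender": sender,
        "recipient": recipient,
        "task": task,
        "payload": payload,
    }
    return json.dumps(envelope)

# Any agent that speaks the shared envelope can talk to any other,
# so N agents need one integration each instead of N*(N-1) pairwise ones.
msg = make_message("hr-agent", "payroll-agent", "verify-employment",
                   {"employee_id": "E123"})
decoded = json.loads(msg)
print(decoded["recipient"])
```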
Ruiz stressed that AI must transform workflows, not just add chatbots. He cited IBM’s internal HR system, where AI agents handle employee questions autonomously and escalate to a human only when necessary.
Enterprises should move beyond simple prompt engineering toward deep process automation, enabling AI agents to execute complex tasks independently.
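The handle-or-escalate pattern behind the HR example can be sketched in a few lines. The topics, answers, and function names here are hypothetical, not IBM’s internal system:

```python
from typing import Tuple

# Illustrative knowledge base the agent can answer from autonomously.
KNOWN_ANSWERS = {
    "vacation balance": "You have 12 days remaining.",
    "payslip": "Payslips are available in the self-service portal.",
}

def handle_request(question: str) -> Tuple[str, bool]:
    """Return (answer, escalated): answer routine questions directly,
    escalate to a human specialist when there is no confident match."""
    for topic, answer in KNOWN_ANSWERS.items():
        if topic in question.lower():
            return answer, False
    return "Routing your question to an HR specialist.", True

answer, escalated = handle_request("What is my vacation balance?")
print(answer, escalated)
```

The design point is the boolean escalation path: the agent owns the whole workflow by default, and human involvement is the exception rather than the interface.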
IBM’s advice: abandon chatbot-first thinking, build multi-model flexibility, and adopt open communication standards to avoid vendor lock-in. Ruiz concluded, “Business leaders need to be AI-first leaders and understand these concepts deeply.”