Elastic brings LLM observability to Azure AI Foundry to optimize AI agents

Elastic, the Search AI Company, has announced a new integration with Azure AI Foundry, bringing advanced observability to large language models (LLMs) and agentic AI applications. The integration enables developers and site reliability engineers (SREs) to gain real-time visibility into model performance, token usage, latency, and costs—all within a unified dashboard.

As enterprises increasingly rely on agentic AI for critical workloads, they face operational challenges such as uncontrolled token consumption, latency spikes, and compliance risks. Elastic’s integration with Azure AI Foundry directly tackles these issues, allowing teams to monitor performance, optimize configurations, and ensure reliability at scale.
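In practice, this kind of telemetry is often captured with OpenTelemetry, which Elastic Observability ingests natively over OTLP. The sketch below is illustrative only and not drawn from the integration itself: the endpoint URL, the call_model() stub, and the attribute names (loosely following the emerging OpenTelemetry GenAI semantic conventions) are all assumptions.

```python
# Minimal sketch: wrap each LLM call in an OpenTelemetry span so that latency
# (the span duration) and token counts (span attributes) flow to an
# OTLP-compatible backend such as an Elastic deployment.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Point the exporter at an OTLP endpoint; the URL here is a placeholder.
provider = TracerProvider(resource=Resource.create({"service.name": "agent-demo"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="https://my-otlp-endpoint:4317"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("llm.instrumentation")

def call_model(prompt: str) -> dict:
    """Hypothetical stand-in for a call to a model hosted in Azure AI Foundry."""
    return {"text": "example response", "input_tokens": 42, "output_tokens": 128}

def observed_call(prompt: str) -> dict:
    # Each call becomes a span; token counts are recorded as attributes so a
    # dashboard can aggregate cost drivers and alert on latency spikes.
    with tracer.start_as_current_span("llm.call") as span:
        result = call_model(prompt)
        span.set_attribute("gen_ai.usage.input_tokens", result["input_tokens"])
        span.set_attribute("gen_ai.usage.output_tokens", result["output_tokens"])
        return result
```

From spans like these, a dashboard can roll up token consumption per agent and flag latency outliers, which is the class of operational question the integration is built to answer.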

“Agentic AI is only as strong as the models and infrastructure that power it,” said Santosh Krishnan, General Manager, Observability & Security at Elastic. “With Elastic and Azure AI Foundry, developers gain clear visibility into how their agents are performing, understand the drivers of cost, and fix performance bottlenecks in real time—scaling AI applications faster without compromising reliability or compliance.”

“This integration with Elastic delivers real-time visibility into token usage, latency, and costs, with built-in safeguards for any model hosted in Azure AI Foundry,” added Amanda Silver, Corporate Vice President at Microsoft Azure CoreAI. “Developers can now build and scale agents with the operational clarity and control they need in production.”

Elastic’s Azure AI Foundry integration is currently available in technical preview within Elastic Observability, a foundational step toward building transparent, cost-efficient, and reliable AI systems.
