Elastic, a leading search analytics company serving over 20,000 customers worldwide, enables organizations to securely harness search-powered AI so anyone can find the answers they need in real time, using all their data, at scale. By integrating AI with search technology, the company helps organizations discover actionable insights from large volumes of both structured and unstructured data, addressing the need for real-time, scalable data processing. Elastic offers cloud-based solutions for search, security, and observability that help businesses deliver on the promise of AI, and recently, with the help of LangChain and LangSmith, the company added an AI Assistant to its security suite.
The Elastic AI Assistant for security is designed as a premium product that supports the security analyst workflow. Specifically, the product helps security teams with tasks such as:
- Alert summarization to explain why an alert was triggered, along with a recommended playbook to remediate the attack. This feature generates a dynamic runbook that brings order to the organization's response during an incident.
- Workflow suggestions to guide users on how to complete tasks such as adding an alert exception or creating a custom dashboard.
- Query generation and conversion to help users migrate from other SIEMs to Elastic more easily. A user can paste a query from another product, or describe one in natural language, and the Elastic AI Assistant will convert it into an Elastic query with proper syntax.
- Agent integration advice to guide users on the best way to collect data in Elastic.
And much more.
This is an enterprise-only feature, and since its initial launch in June 2023, Elastic has seen significant adoption of the Assistant. The AI Assistant has proven to significantly reduce customers' MTTR (mean time to respond) to alerts generated by Elastic Security, and to cut the time it takes to write queries and detection rules, thanks to its ability to craft queries from natural-language descriptions of a use case.
How LangChain and LangSmith supported the product development
Elastic designed the application to be LLM-agnostic from the start: they wanted end users to be able to bring their own model, which meant supporting OpenAI, Azure OpenAI, and Bedrock (among others). Giving users this level of control and flexibility from the beginning was a requirement.
Fortunately, much of the tooling needed to create a RAG application comes natively with LangChain, and because LangChain abstracts the application logic from each of the underlying components, the Elastic team was able to make models and prompts swappable, depending on the user's preferred vendor, without much engineering overhead. As a bonus, LangChain already had a strong integration with Elastic's vector database, making it a natural fit for the job.
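The "bring your own model" pattern described above can be sketched in plain Python. This is a minimal illustration, not Elastic's actual code: the provider names, stub classes, and `get_chat_model` helper are hypothetical, and in a real LangChain application each branch would return a class like `ChatOpenAI` that already shares a common interface.

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """Common interface every provider must satisfy. In LangChain,
    the chat-model classes (ChatOpenAI, AzureChatOpenAI, ChatBedrock)
    play this role, so application code is written only once."""

    @abstractmethod
    def invoke(self, prompt: str) -> str: ...

class OpenAIChat(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[openai] {prompt}"   # stand-in for a real API call

class AzureOpenAIChat(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[azure] {prompt}"    # stand-in for a real API call

class BedrockChat(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[bedrock] {prompt}"  # stand-in for a real API call

# Hypothetical registry: the user's vendor preference picks the class.
PROVIDERS = {
    "openai": OpenAIChat,
    "azure_openai": AzureOpenAIChat,
    "bedrock": BedrockChat,
}

def get_chat_model(provider: str) -> ChatModel:
    """Factory that makes the model swappable without touching app logic."""
    try:
        return PROVIDERS[provider]()
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}")

# The rest of the application depends only on the ChatModel interface:
model = get_chat_model("bedrock")
print(model.invoke("Summarize this alert"))  # [bedrock] Summarize this alert
```

Because only the factory knows about vendors, adding a fourth provider later means adding one class and one registry entry.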
As the team started to add functionality to the AI Assistant, such as the ability to generate queries in Elastic's new query language, ES|QL, LangSmith was critical in helping the Elastic team understand exactly what was sent to the model, how long the full trace took, and how many tokens were consumed in the process. LangSmith also helped the team see how different models can be good at different tasks and at different price points. This visibility allowed the development team to think through tradeoffs and create as consistent an experience as possible across all three supported models. As the team iterated on the application, LangSmith highlighted variances and prevented regressions from reaching production.
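For a LangChain application, this kind of trace visibility is typically enabled through LangSmith's environment variables. A minimal sketch (the API key is a placeholder and the project name is hypothetical):

```python
import os

# With these set, LangChain automatically logs every chain and LLM call
# to LangSmith: the exact prompt sent to the model, the latency of the
# full trace, and the token counts consumed along the way.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "ai-assistant-dev"          # hypothetical project name
```

Grouping runs under a project makes it straightforward to compare the same prompt across different model providers and spot regressions between iterations.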
“Working with LangChain and LangSmith on the Elastic AI Assistant had a significant positive impact on the overall pace and quality of the development and shipping experience. We couldn’t have achieved the product experience delivered to our customers without LangChain, and we couldn’t have done it at the same pace without LangSmith,” says James Spiteri, Director of Security Product Management at Elastic.
LangChain and LangSmith also supported Elastic’s goal of delivering a secure application to the enterprise. Mindful that their users are security experts, and naturally skeptical ones, the Elastic team built data masking into the app to obfuscate sensitive data before it is sent to the LLM, exposed token tracking directly in the product so that users have full visibility into usage, and integrated role-based access control so that admins can limit usage as they see fit.
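The masking step can be illustrated with a small sketch. This is not Elastic's actual implementation; it simply shows the general idea of replacing sensitive values (here, emails and IPv4 addresses, detected with simplified regexes) with labeled placeholders before any text leaves the application:

```python
import re

# Illustrative patterns only; a production masker would cover many more
# field types (hostnames, usernames, API keys, etc.) and use stricter regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def mask(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder
    before the text is sent to the LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

alert = "Failed login for admin@example.com from 10.0.0.42"
print(mask(alert))  # Failed login for <EMAIL> from <IPV4>
```

Keeping the masking in the application layer means the guarantee holds regardless of which model provider the user has configured.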
What’s next with Elastic AI Assistant?
The goal of the AI Assistant is to alleviate as much work as possible for the security analyst and give them more time back in their day. While the product supports three model providers today, the team wants to expand to more models to service an even wider audience.
The next big step for the AI Assistant is to leverage LangChain’s agent framework, so that more work can happen in the background with users approving actions. Moving beyond knowledge assistance will take the application to the next level, and the Elastic team is confident it can deliver with the help of LangChain and LangSmith.
Giving back to the community
In the spirit of open source, the Elastic team has made publicly available much of the code that powers the Elastic AI Assistant. You can see exactly how the team implemented their solution by checking out the repository here. For more educational content on ML development with Elastic, check out Elastic Search Labs. Enjoy!