[Week of 4/15] LangChain Release Notes

🤓 Evaluations

We’ve been hearing from our users that they’re still in the dark about how changes to their application logic move the quantitative metrics (latency, cost) and qualitative metrics they care about. LangSmith can help! We’ve released a new video series showing how LangSmith Experiments provide a structured workflow for improving your application, spotting regressions, and making tradeoffs among latency, cost, and quality.

🦜🔗 New in LangChain

  • Standardized Tool Calling: You can now create agents that work with any tool-calling model. Our new standardized tool calling interface allows you to switch between LLM providers more easily, saving you time and effort. Learn more in our video walkthrough and blog, or see the sketch after this list.
  • Documentation Improvements: We've reorganized and cleaned up content to make it easier to find the information and use cases you’re looking for. We hear you that there’s work to do, and we’re investing heavily in our 🐍 Python documentation and soon TypeScript documentation. If you have ideas or want to help, hit reply to this email!
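
To make the standardized interface concrete, here is a minimal sketch of binding one tool to two different chat models. It assumes langchain-core, langchain-openai, and langchain-anthropic are installed and API keys are set; the specific model names are only examples.

```python
# A minimal sketch of the standardized tool-calling interface (assumes
# langchain-core, langchain-openai, and langchain-anthropic are installed
# and the relevant API keys are set in the environment).
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# The same .bind_tools() call works across providers that support tool calling.
for llm in (
    ChatOpenAI(model="gpt-3.5-turbo"),
    ChatAnthropic(model="claude-3-haiku-20240307"),
):
    llm_with_tools = llm.bind_tools([multiply])
    msg = llm_with_tools.invoke("What is 6 times 7?")
    # Tool calls are surfaced in a provider-agnostic .tool_calls format.
    print(msg.tool_calls)
```

Because the requested tool calls come back in the same .tool_calls format regardless of provider, the downstream agent or parsing logic doesn’t need to change when you swap models.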

🦜🛠️ New in LangSmith

  • 🥳 LangSmith is now SOC 2 Type 1 Certified: Security and compliance are important to us and to our growing list of enterprise customers. We’re now in the observation period for SOC 2 Type 2 and will report back once it’s complete in the next few months.
  • Production Monitoring and Automations: Once you’ve shipped your AI application to production, the hard work begins. In this video series, we show you how to monitor your application and set up automations for online evaluation, dataset construction, and alerts. Your production data and user feedback will be the foundation to help you improve your application, and we show you how on our blog.
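
As a rough sketch of how production traces and feedback fit together, the snippet below traces a function with the langsmith SDK and attaches a feedback score to that run. It assumes the langsmith package is installed with LANGCHAIN_TRACING_V2 and LANGCHAIN_API_KEY set; the function, feedback key, and pre-assigned run ID are illustrative, not code from the video series.

```python
import uuid

from langsmith import Client, traceable


@traceable  # records this call as a run (trace) in LangSmith
def answer(question: str) -> str:
    # ... call your chain or model here ...
    return "42"


# Pre-assign a run ID so we can attach feedback to this exact trace later.
run_id = uuid.uuid4()
answer("What is the meaning of life?", langsmith_extra={"run_id": run_id})

client = Client()
# Feedback scores like this are what online evaluation rules, dataset
# construction, and alerts can key off of in LangSmith automations.
client.create_feedback(run_id, key="user_score", score=1)
```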

🧠 Better RAG

Our RAG From Scratch series breaks down important RAG concepts in short videos with accompanying code. We’ve recently released two more concepts:

  • Indexing w/ RAPTOR: Learn more about hierarchical indexing in this video on RAPTOR, a paper that tackles the challenge of handling both lower-level and higher-level user questions in RAG systems.
  • Feedback + Self-Reflection: RAG systems can suffer from low-quality retrieval and hallucinations. In this video, we introduce the concept of Flow Engineering using LangGraph to orchestrate checks and feedback for more reliable performance.
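
If you want a feel for what that orchestration looks like in code, below is a minimal, hypothetical LangGraph sketch of a generate → grade → retry loop. The node names, state fields, and the placeholder grading check are ours rather than the exact code from the video, and real retrieval and LLM calls would go where the comments indicate. It assumes langgraph is installed.

```python
# A minimal sketch of flow engineering with LangGraph: generate an answer,
# grade it, and loop back on failure (placeholder logic throughout).
from typing import TypedDict

from langgraph.graph import StateGraph, END


class GraphState(TypedDict):
    question: str
    answer: str
    attempts: int


def generate(state: GraphState) -> dict:
    # ... call your retriever + LLM here ...
    return {"answer": "draft answer", "attempts": state["attempts"] + 1}


def grade(state: GraphState) -> str:
    # ... run a hallucination / relevance check here ...
    looks_good = state["attempts"] >= 2  # placeholder check
    return "done" if looks_good else "retry"


workflow = StateGraph(GraphState)
workflow.add_node("generate", generate)
workflow.set_entry_point("generate")
# Route back to "generate" until the grading check passes.
workflow.add_conditional_edges("generate", grade, {"done": END, "retry": "generate"})

app = workflow.compile()
print(app.invoke({"question": "What is RAPTOR?", "answer": "", "attempts": 0}))
```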

👀 In Case You Missed It from LangChain

  • 🔁 Optimization of LLM Systems with DSPy – Webinar: Relying on prompt engineering to improve complex LLM systems can be time-consuming. DSPy helps optimize these systems using automatic evaluation, and we’re starting to incorporate these techniques into LangChain’s suite of products. Learn more in this webinar replay with our CEO Harrison Chase and DSPy creator Omar Khattab.
  • 💻 Adaptive RAG w/ Cohere's New Command-R+: Adaptive-RAG is a recent paper that combines query analysis and iterative answer construction to handle queries of varying complexity. Watch this video to see us implement these ideas from scratch with LangGraph. LangGraph Code. ReACT Agent Code.
  • 🎙️📹 Audio & Video Structured Extraction with Gemini: Gemini 1.5 Pro now supports audio and video inputs, opening the door to new use cases. In this video, we show you how to perform structured extraction on YouTube videos and audio clips using LangChain JS/TS. Docs.
  • 🦜🧑‍🏫 How to Use LangChain to Build With LLMs – A Beginner's Guide: Learn the up-to-date fundamentals of building with LLMs in our new guide on freeCodeCamp. You’ll learn how to use LCEL, streaming, LangSmith tracing, and more. This is a great resource if you’re just starting out!
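
For a taste of what the guide covers, here’s a minimal LCEL sketch: a prompt piped into a chat model and a string parser, with streaming output. It assumes langchain-openai is installed and an OpenAI key is set; the model name is just an example.

```python
# A minimal LCEL sketch: prompt | model | parser, with streaming.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Explain {topic} in one sentence.")
chain = prompt | ChatOpenAI(model="gpt-3.5-turbo") | StrOutputParser()

# .invoke() returns the full string; .stream() yields chunks as they arrive.
for chunk in chain.stream({"topic": "LCEL"}):
    print(chunk, end="", flush=True)
```

Swapping in a different model or turning on LangSmith tracing (via the LANGCHAIN_TRACING_V2 environment variable) doesn’t change the chain definition.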

🤝 From the Community