[Week of 3/18] LangChain Release Notes

Self-learning GPT, and LangSmith crosses 100k signups: version-controlled datasets, PII masking, custom model rates for cost tracking, and the ability to run a prompt over a dataset


🤓 Using Feedback to Improve Your Application: Self-Learning GPT

Everyone is trying to figure out how to build higher-quality LLM applications, and for us, prompt optimization is top of mind. We’ve been experimenting with whether we can use the data and feedback collected in LangSmith to improve prompting strategies, and we’re quite excited by the initial results. Self-learning will be a critical part of quality and, eventually, personalization. Read more about what we’re working on here.

🦜🛠️ New in LangSmith

It’s been a little over a month since we GA’d LangSmith, and we’re so grateful for all the new users. We’ve crossed 100k signups 🥳 Thank you.

If you haven’t checked it out, we’ve shipped some exciting new features:

  • Version-controlled datasets: You can now tag a dataset at a moment in time, so your tests run against the same dataset version in each run, even if examples have since been added, removed, or changed.
  • Run a prompt over a dataset: Now in the Prompt Hub playground, you can run a prompt over all the inputs of a dataset and see the results in the Datasets and Testing tab. This helps ensure your prompt works well over a wide range of inputs.
  • Custom model rates for cost tracking: Head to the Settings tab to add custom rates for a model that you want to track the cost of. We support OpenAI models with no setup, but if you want to add additional rates for other models, this is now possible.
  • PII masking: We now let you mask the input and output of a single trace. See docs. If you detect client side that a trace might contain PII, mask the text before sending it to LangSmith. We want to invest more in PII masking, but this should help you guard against the occasional call that contains PII and shouldn’t be shared with a third party.
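Client-side masking of the kind described above can be sketched with a couple of regexes. This is a minimal illustration, not part of the LangSmith SDK: the `mask_pii` helper and the patterns it uses are hypothetical, and real PII detection would need a much broader ruleset.

```python
import re

# Hypothetical client-side helper: redact obvious PII patterns
# (emails, US-style phone numbers) before text is sent anywhere.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def mask_pii(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)
```

You would apply a helper like this to trace inputs and outputs before they leave your process, so the raw values never reach a third party.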

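For a sense of how custom rates translate into tracked cost, here is a minimal sketch of the arithmetic, assuming per-1K-token rates. The model name and figures are made up for illustration, not LangSmith defaults.

```python
# Hypothetical per-1K-token rates, analogous to what you would
# enter in the Settings tab for a custom model.
RATES = {"my-custom-model": {"prompt": 0.0005, "completion": 0.0015}}

def run_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one call: token counts scaled by per-1K-token rates."""
    r = RATES[model]
    return (prompt_tokens / 1000) * r["prompt"] + (completion_tokens / 1000) * r["completion"]
```

OpenAI models are tracked with no setup; rates like these only need to be supplied for other models.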
📅 Upcoming LangChain Events

👀 In Case You Missed It

  • 🤖 Deploying code agents without all the agonizing pain: Do you want to build a coding agent inspired by Devin or AlphaCodium? Learn how to secure the prototype using Modal Sandboxes and deploy it as a web app with LangServe on Modal in this video by Lance and Charles. Repo.
  • 🪡 Multi Needle in a Haystack Benchmark: As model context windows grow, everyone is questioning what will happen to RAG. RAG often isn't about retrieving just one fact; it's about retrieving multiple facts and reasoning over them. In this blog post, we find that retrieval performance decreases as the number of facts (needles) grows, and that reasoning over retrieved facts is harder than retrieval alone.
  • 🦜🏓 LangServe Chat Playground: We've added a brand-new, chat-focused playground to LangServe. Any chain with chat-style input and output can use it. It supports streaming and message history editing, as well as feedback and sharing traces publicly with LangSmith. Enable it by setting playground_type="chat" when adding your chain as a LangServe route. Docs. YouTube.
  • 🦜🕸️ New LangGraph Quickstarts: We've made it easier to get started building multi-agent apps with LangGraph and LangGraph.js. Check out our quickstarts: Python docs and JavaScript docs.
  • 🚄 Support for NVIDIA NIM: LangChain integrates NVIDIA NIM for GPU-optimized LLM inference in RAG. Read more on the blog.
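The playground_type="chat" switch can be wired up roughly like this. A sketch assuming langserve and fastapi are installed and that you have a chat-capable chain to expose; the path and the placeholder chain are illustrative.

```python
from fastapi import FastAPI
from langserve import add_routes
from langchain_openai import ChatOpenAI  # placeholder chain; any chat-style runnable works

app = FastAPI()
chain = ChatOpenAI()  # swap in your own chain

# Serve the chain with the chat-focused playground instead of the default one.
add_routes(app, chain, path="/chat", playground_type="chat")
```

With this in place, the chat playground is available under the route's playground URL when the app is served (e.g. with uvicorn).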

🤝 From the Community