[Week of 9/4] LangChain Release Notes

New in LangSmith

  • LangChain Hub: a new home for uploading, browsing, pulling, and managing your prompts. Explore the Hub here.
    ◦ Why we built it: Twitter thread here. Blog post here.
    ◦ Need access? If you would like to upload a prompt but don't have access to LangSmith, fill out this form and we will expedite your access.
  • Monitoring: easily track analytics on your project over time. Track feedback, usage, latency, errors, and time-to-first-token. Release thread here.
  • Feedback Recipes: on the heels of last week’s monitoring launch, we’ve been adding examples of effective patterns for leveraging context to generate better, more automated feedback metrics.
    ◦ Fine-Tuning Cookbook: fine-tuning can improve your LLM’s performance, but choosing the right data is hard. A guide on how to fine-tune on real, relevant traces.
    ◦ Algorithmic Feedback Cookbook: set up an automated feedback pipeline, including metrics.
    ◦ Custom evaluation: use any custom Python evaluator to add evaluation metrics to an existing LangSmith test project.
    ◦ Curate fine-tuning data: easy-to-use filters for tags, content, and feedback to help curate better training data for your chat models, making it easier to know when to swap out certain data points in your system.
  • Not using Python or JS? A walkthrough on tracing with LangSmith's REST API here.
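The custom-evaluation pattern above can be sketched in plain Python: an evaluator is just a function that takes a run's prediction (and optionally a reference answer) and returns a feedback dict with a key and score. The `exact_match` name and signature below are illustrative assumptions, not the LangSmith evaluator interface itself; see the custom-evaluation docs for the real one.

```python
# Sketch of a custom evaluator in the spirit of LangSmith's custom
# evaluation: compare a run's prediction against a reference and emit
# a named feedback score. Names and shapes here are illustrative.

def exact_match(prediction: str, reference: str) -> dict:
    """Return binary feedback: 1 if prediction matches reference."""
    score = int(prediction.strip().lower() == reference.strip().lower())
    return {"key": "exact_match", "score": score}

# Applied across a test project's (prediction, reference) pairs:
results = [
    exact_match("Paris", "paris"),   # case-insensitive match -> 1
    exact_match("Lyon", "Paris"),    # mismatch -> 0
]
```

The same shape works for fuzzier metrics (embedding distance, LLM-graded rubrics): only the body of the function changes, while the `{"key": ..., "score": ...}` feedback contract stays constant.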
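For readers outside Python or JS, the REST tracing flow boils down to POSTing run records over HTTP. The sketch below uses only the standard library; the endpoint URL, header name, and payload fields are assumptions based on the walkthrough linked above, so treat that walkthrough as authoritative for the schema.

```python
import json
import os
import uuid
from datetime import datetime, timezone
from urllib import request

# Minimal sketch of logging one run to LangSmith over REST.
# Endpoint and field names are assumptions; consult the REST
# walkthrough for the authoritative schema.
API_URL = "https://api.smith.langchain.com/runs"

def build_run_payload(name: str, inputs: dict) -> dict:
    """Assemble a minimal trace payload for a single run."""
    return {
        "id": str(uuid.uuid4()),
        "name": name,
        "run_type": "llm",
        "inputs": inputs,
        "start_time": datetime.now(timezone.utc).isoformat(),
    }

payload = build_run_payload("my-llm-call", {"prompt": "Hello"})

# Only send if an API key is configured in the environment.
api_key = os.environ.get("LANGCHAIN_API_KEY")
if api_key:
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req)
```

Because it is just JSON over HTTP, the same pattern ports directly to curl, Go, Java, or any other client.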

New in Open Source

  • LangChain Indexing API: Syncing data sources to vector stores is a pain. You need data provenance and record-keeping to ensure that stale docs are deleted without redundant work when pipelines/sources change. The Indexing API makes this easy. Docs here. Blog post here.
  • Privacy and Moderation: We’ve added several integrations to help increase the safety of your LLM usage:
    ◦ Integration with Microsoft Presidio for PII anonymization
    ◦ Integration with Amazon Comprehend to detect and handle PII and toxicity
  • Anthropic Function Calling: Function calling is extremely useful, but still only available for OpenAI models. Our experimental wrapper for using function calling with Anthropic models is now available in JS.
  • We reached our 1500th contributor! Some shoutouts to a handful of our contributors here. If you’re looking to contribute but haven’t yet, here’s a helpful starting point.
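The record-keeping idea behind the Indexing API can be sketched in plain Python: hash each document, compare hashes against what was previously indexed, and only (re)index what changed while deleting what disappeared. This is an illustration of the concept only, not the actual `langchain.indexes` implementation, which handles this via a record manager and your vector store.

```python
import hashlib

# Toy sketch of the bookkeeping the Indexing API automates: track a
# content hash per document so re-running a sync skips unchanged docs,
# re-indexes updated ones, and deletes stale ones. Illustrative only.

def content_hash(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

def sync(docs: dict, records: dict) -> dict:
    """Sync {doc_id: text} against {doc_id: hash}; report what changed."""
    added, skipped = [], []
    for doc_id, text in docs.items():
        h = content_hash(text)
        if records.get(doc_id) == h:
            skipped.append(doc_id)       # unchanged: no re-embedding
        else:
            records[doc_id] = h          # new or updated: (re)index
            added.append(doc_id)
    deleted = [d for d in list(records) if d not in docs]
    for d in deleted:
        del records[d]                   # stale: remove from the store
    return {"added": added, "skipped": skipped, "deleted": deleted}

records = {}
first = sync({"a": "alpha", "b": "beta"}, records)       # both indexed
second = sync({"a": "alpha", "b": "beta v2"}, records)   # only b re-indexed
third = sync({"a": "alpha"}, records)                    # b deleted as stale
```

The payoff is exactly the pain point named above: when a pipeline or source changes, no work is repeated for unchanged documents and stale documents do not linger in the store.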
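To make the privacy integrations concrete, here is a toy illustration of PII anonymization: replace detected entities with typed placeholders before text leaves your system. A couple of regexes stand in for the real entity recognizers that Presidio and Comprehend provide; do not rely on patterns like these in production.

```python
import re

# Toy stand-in for PII anonymization: mask detected entities with
# typed placeholders. Real integrations (Presidio, Comprehend) use
# proper entity recognizers, not these illustrative regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each matched entity with a <TYPE> placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

masked = anonymize("Reach me at jane@example.com or 555-123-4567.")
```

The integrations slot this step in before the prompt reaches the model, so the LLM (and your traces) only ever see the placeholders.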

In case you missed it

  • Harrison on Latent Space Podcast. Talking about the origins and evolution of LangChain.
  • Tabular Data Retrieval Webinar Recording here. If you're wrangling relational databases, cloud data warehouses, query engines, data lakes, etc., this one’s for you. We had the Cube, Delphi Labs, and Pampa Labs teams on to talk about how they actually built this into their LLM apps.
  • More on the topic: this blog post on ‘Incorporating domain-specific knowledge in SQL-LLM solutions’ by Manuel and Francisco from Pampa Labs.
  • Request for Streamlit Hackathon Projects (with prizes): we’re partnering with Streamlit to bring their LLM hackathon to life. It’s not too late to get hacking! Here’s a getting started guide including a list of projects we’d love to see (and send you some unofficial prizes for building)!
  • Fine-Tuning in Your Voice Webinar Recording here. Getting your LLM app to feel and sound more like you. Advice from people actually building!
  • Applications Open for Our (inaugural) Student Hacker in Residence Program: know any students that want to build LLM apps with us this semester? Send them our way. Application link here. Blog post with more details here.

Favorite Prompts on LangChain Hub