[Week of 10/2] LangChain Release Notes

New in LangSmith

  • Fireworks and PaLM Support in the Playground: rapidly workshopping prompts and testing them against a broad range of LLMs is becoming a critical step in the LLM app development process. With the Fireworks and PaLM integrations, you unlock a broader set of LLMs to test with (especially OSS models). For FREE! Blog post here.
  • More collaboration in LangChain Hub: teams want to share and work together on prompts, so we’ve introduced forking to help teams build on each other’s work and commenting to bring more colleagues into the conversation.

Quality of life improvements:

  • Collapsible Traces: runs get big. We made it easier to inspect them by collapsing/expanding each section.
  • Bulk Add to Dataset: Select many runs at once and add them to an existing dataset. Or create a new dataset with them.
  • Split View in Logger: easily preview and navigate between runs. This makes debugging a lot smoother and faster.

New in Open Source

Two new template applications/use-cases: we love building these apps because we believe it’s important to dogfood our own framework and provide templates for the community. Our two latest:

Better support for Runnables: Runnables are a clearer way to assemble components into chains. We’ve made the following fundamental improvements:

  • Input and Output schemas for Runnables: provides a way to enforce a Pydantic schema on input and output (see the sketch after this list). Docs here.
  • Stream Intermediate Steps from any Runnable: docs here.
  • Easy streaming of partial results for OpenAI function calling (parsed into JSON): more details on the two streaming modes here (also illustrated in the sketch after this list).
  • New integrations: support for Mistral, Bedrock, Fireworks chat, and Cohere chat.
  • NOTE: we are continuing to move some chains with CVEs to langchain_experimental. This may cause breaking changes for LLMSymbolicMathChain, LLMBashChain, and jinja templates. See the details here.
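
To make the first and third bullets above concrete, here is a minimal sketch (not from the release notes; it assumes an OpenAI API key and the LangChain version current at the time of writing) of the new Runnable schemas and of streaming partially parsed function-call output:

    from langchain.chat_models import ChatOpenAI
    from langchain.output_parsers.openai_functions import JsonOutputFunctionsParser
    from langchain.prompts import ChatPromptTemplate

    prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
    model = ChatOpenAI()

    # Input and Output schemas: every Runnable (and every chain composed of
    # Runnables) exposes Pydantic models describing what it accepts and returns.
    chain = prompt | model
    print(chain.input_schema.schema())   # schema for {"topic": ...}
    print(chain.output_schema.schema())  # schema for the chat message output

    # Streaming partial results for OpenAI function calling, parsed into JSON:
    # the parser yields progressively more complete dicts as tokens arrive.
    joke_schema = {
        "name": "joke",
        "description": "A short joke",
        "parameters": {
            "type": "object",
            "properties": {
                "setup": {"type": "string"},
                "punchline": {"type": "string"},
            },
            "required": ["setup", "punchline"],
        },
    }
    json_chain = (
        prompt
        | model.bind(functions=[joke_schema], function_call={"name": "joke"})
        | JsonOutputFunctionsParser()
    )
    for partial in json_chain.stream({"topic": "bears"}):
        print(partial)  # {}, {"setup": "..."}, ... up to the full joke dict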

In case you missed it

Webinar Recordings

Blog Posts

  • Handle PII with LangChain Blog Post: Managing PII (personally identifiable information) with LLMs can be especially tricky. Insights from Francisco Ingham on how to do it safely.
  • Retrieve SEC Filings with LangChain Blog Post: built with Kay (an embedding API designed for RAG) and Cybersyn (datasets).
  • Bringing Free OSS Models to the Playground with Fireworks AI Blog Post: making it easy (and free!) to try out prompts with an OSS model.
  • How "Correct" are LLM Evaluators? Blog Post: We tested LangChain's LLM-assisted evaluators on common tasks to provide guidelines on how to best use them in your practice.

Coming Soon

Conferences & Hackathons

  • TED AI Hackathon Kickoff [Oct 14]: we’re offering a prize for the best LLM app! Learn more about the Hackathon and check out project ideas/resources for getting started here.
  • Harrison Chase, LangChain cofounder and CEO, is speaking at AI Engineer’s Summit [10/8-10], IA Summit [10/10-11], and TED AI [10/17-18] about building context-aware reasoning applications with LangChain.

Favorite Prompts

  • YouTube transcript to article: take a given YouTube transcript and transform it into a well-structured and engaging article.
  • WebLangChain search query: Rephrase a conversation into a good standalone question for a search query.
  • RAG Prompt (Mistral): Prompt for retrieval-augmented-generation (e.g., for chat, QA) with Mistral 7B Instruct.
  • Explore more prompts at smith.langchain.com/hub (a quick sketch of pulling a Hub prompt from code follows below). If you need early access to LangSmith to collaborate on prompts with your team, fill out this form.
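
If you want to try one of these from code, here is a minimal sketch (it assumes the langchainhub package is installed, and the prompt handle shown is illustrative): pull a public prompt from the Hub and use it like any other prompt.

    from langchain import hub

    # Pull a public prompt by its Hub handle (the handle below is illustrative).
    prompt = hub.pull("rlm/rag-prompt-mistral")
    print(prompt)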