[Week of 10/2] LangChain Release Notes New in LangSmith * Fireworks and PaLM Support in the Playground: rapidly workshopping prompts and testing them against a broad range of different LLMs is becoming a…
Building (and Breaking) WebLangChain Important Links: * Hosted WebLangChain * Open-source code for WebLangChain Introduction One of the big shortcomings of LLMs is that they can only answer questions about data…
Fine-tune your LLMs with LangSmith and Lilac In taking your LLM from prototype to production, many have turned to fine-tuning models to get more consistent, higher-quality behavior in their applications. Services…
[Week of 9/18] LangChain Release Notes New in LangSmith * Org Support in LangChain Hub: share and collaborate on prompts across your team. Easily pull organizationally-approved prompts into your LangChain code.
Announcing our Student Hacker in Residence Program, Fall '23 Semester Today, we're opening up applications for our inaugural student hacker in residence program. We're looking for 3-5 students to work alongside…
Announcing LangChain Hub Today, we're excited to launch LangChain Hub: a home for uploading, browsing, pulling, and managing your prompts. (Soon, we'll be adding…
[Week of 8/21] LangChain Release Notes New in Retrieval A lot happened in the retrieval space these past two weeks, so we wanted to highlight it explicitly! * MultiVector Retriever:…
Chat Loaders: Fine-tune a ChatModel in your Voice Summary We are adding a new integration type, ChatLoaders, to make it easier to fine-tune models on your own unique writing style. These utilities help…
Using LangSmith to Support Fine-tuning Summary We created a guide for fine-tuning and evaluating LLMs, using LangSmith for dataset management and evaluation. We did this both with an open source…
Benchmarking Question/Answering Over CSV Data This is a bit of a longer post: a deep dive on question answering over tabular data. We discuss (and use) CSV data…
GPT Researcher x LangChain Here at LangChain we think that web research is a fantastic use case for LLMs. So much so that we wrote a blog on it about…
Making Data Ingestion Production Ready: a LangChain-Powered Airbyte Destination A big focus of ours over the past few months has been enabling teams to go from prototype to production, and to take apps they developed…