Announcing Data Annotation Queues 💡 Data Annotation Queues are a new feature in LangSmith, our developer platform aimed at helping bring LLM applications from prototype to production. Sign up for
Query Transformations Naive RAG typically splits documents into chunks, embeds them, and retrieves chunks with high semantic similarity to a user question. But this presents a few
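For readers new to the pattern, here is a minimal sketch of that naive flow using standard LangChain components; the sample document, chunk sizes, and query are illustrative, not taken from the post:

```python
# A sketch of naive RAG: split, embed, retrieve by similarity.
# Assumes an OpenAI API key is configured and faiss-cpu is installed.
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

docs = [Document(page_content="LangSmith helps you trace and evaluate LLM apps. " * 50)]

# 1. Split documents into chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(docs)

# 2. Embed the chunks and index them in a vector store.
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. Retrieve the chunks most semantically similar to the user question.
retrieved = vectorstore.similarity_search("What does LangSmith help with?", k=4)
print(retrieved[0].page_content)
```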
LangChain's First Birthday It’s LangChain’s first birthday! It’s been a really exciting year! We worked with thousands of developers building LLM applications and tooling. We
Beyond Text: Making GenAI Applications Accessible to All Editor's Note: This post was written by Andres Torres and Dylan Brock from Norwegian Cruise Line. Building UI/UX for AI applications is
Robocorp’s code generation assistant makes building Python automation easy for developers Challenge Robocorp was founded in 2019 out of frustration that the promise of developers being able to automate monotonous work hadn’t been realized. Right
Multi-Vector Retriever for RAG on tables, text, and images Summary Seamless question-answering across diverse data types (images, text, tables) is one of the holy grails of RAG. We’re releasing three new cookbooks that
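The multi-vector idea, roughly: index a small summary of each element (table, image description, text section) for retrieval, but hand the full element back to the LLM at answer time. A hedged sketch, not the cookbook code; the summaries and documents below are made up:

```python
# Multi-vector retrieval sketch: summaries go in the vector store,
# full elements live in a docstore keyed by a shared doc_id.
import uuid

from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.schema import Document
from langchain.storage import InMemoryStore
from langchain.vectorstores import FAISS

raw_docs = [
    Document(page_content="<full table of quarterly revenue by region ...>"),
    Document(page_content="<long text section describing the pricing model ...>"),
]
summaries = [
    "Table of quarterly revenue by region.",
    "Section describing the pricing model.",
]

doc_ids = [str(uuid.uuid4()) for _ in raw_docs]
summary_docs = [
    Document(page_content=s, metadata={"doc_id": doc_ids[i]})
    for i, s in enumerate(summaries)
]

# Index the summaries; store the raw elements separately.
vectorstore = FAISS.from_documents(summary_docs, OpenAIEmbeddings())
store = InMemoryStore()
retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=store, id_key="doc_id")
retriever.docstore.mset(list(zip(doc_ids, raw_docs)))

# Retrieval matches against the summaries but returns the raw elements.
print(retriever.get_relevant_documents("How did revenue change last quarter?"))
```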
LangServe Playground and Configurability Last week we launched LangServe, a way to easily deploy chains and agents in a production-ready manner. Specifically, it takes a chain and easily spins
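A minimal example of what that looks like in practice, closely following the pattern in the LangServe README; the joke chain and path name here are placeholders:

```python
# Serve a simple LCEL chain over HTTP with LangServe.
# Assumes an OpenAI API key is configured.
from fastapi import FastAPI
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langserve import add_routes

app = FastAPI()
chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()

# Exposes /joke/invoke, /joke/batch, /joke/stream and the playground UI.
add_routes(app, chain, path="/joke")

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```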
Constructing knowledge graphs from text using OpenAI functions: Leveraging knowledge graphs to power LangChain Applications Editor's Note: This post was written by Tomaz Bratanic from the Neo4j team. Extracting structured information from unstructured data like text has been
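One common way to do this with OpenAI function calling in LangChain is to describe the target graph schema with Pydantic and let the model fill it in. A hedged sketch, not the post's exact code; the schema and example sentence are illustrative:

```python
# Extract (subject, relation, object) triples from free text via function calling.
# Assumes an OpenAI API key is configured.
from typing import List

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.pydantic_v1 import BaseModel, Field


class Triple(BaseModel):
    """A single edge for the knowledge graph."""

    subject: str = Field(description="Entity the statement is about")
    relation: str = Field(description="Relationship between subject and object")
    object: str = Field(description="Entity the subject is related to")


class KnowledgeGraph(BaseModel):
    """All triples extracted from a passage of text."""

    triples: List[Triple]


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Extract knowledge graph triples from the user's text."),
        ("human", "{text}"),
    ]
)
llm = ChatOpenAI(temperature=0)
chain = create_structured_output_chain(KnowledgeGraph, llm, prompt)

graph = chain.run(text="Tomaz works at Neo4j, a graph database company based in Sweden.")
print(graph.triples)
```

The extracted triples can then be written to Neo4j (or any graph store) as nodes and relationships.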
A Chunk by Any Other Name: Structured Text Splitting and Metadata-enhanced RAG There's something of a structural irony in the fact that building context-aware LLM applications typically begins with a systematic process of decontextualization, wherein
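One way to put that context back is to split along the document's own structure and carry the headers forward as metadata on each chunk. A sketch assuming Markdown input; the headers and text are illustrative:

```python
# Structure-aware splitting: each chunk keeps its section headers as metadata.
from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown_text = """# Guide
## Installation
Run pip install langchain to get started.
## Usage
Import the library and build a chain."""

splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "title"), ("##", "section")]
)
chunks = splitter.split_text(markdown_text)

# Each chunk carries the headers it fell under, re-attaching context
# that plain character-count splitting would throw away.
for chunk in chunks:
    print(chunk.metadata, "->", chunk.page_content)
```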
You.com x LangChain Editor's Note: the following is a guest blog post from our friends at You.com. We've seen a lot of interesting
The Prompt Landscape Context Prompt Engineering can steer LLM behavior without updating the model weights. A variety of prompts for different use cases have emerged (e.g., see @dair_
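As a tiny illustration of steering behavior through the prompt alone, here is a few-shot template in LangChain; the examples and wording are made up:

```python
# Few-shot prompting: behavior is shaped entirely by the prompt text.
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)
prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of each word.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(prompt.format(input="bright"))
```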
Test Run Comparisons One pattern I noticed is that great AI researchers are willing to manually inspect lots of data. And more than that, they build infrastructure that