Using Feedback to Improve Your Application: Self-Learning GPTs
We built and hosted a simple demo app to show how applications can learn and improve from feedback over time.

LangChain Integrates NVIDIA NIM for GPU-optimized LLM Inference in RAG
Roughly a year and a half ago, OpenAI launched ChatGPT and the generative AI era really kicked off. Since then, we've seen rapid growth.

Enhancing RAG-based application accuracy by constructing and leveraging knowledge graphs
A practical guide to constructing and retrieving information from knowledge graphs in RAG applications with Neo4j and LangChain.

Benchmarking Query Analysis in High Cardinality Situations
Several key use cases for LLMs involve returning data in a structured format. Extraction is one such use case, which we recently highlighted.

Multi Needle in a Haystack
Interest in long-context LLMs is surging as context windows expand to 1M tokens.

Iterating Towards LLM Reliability with Evaluation Driven Development
Editor's Note: the following is a guest blog post from Devin Stein, CEO of Dosu.

Use Case Accelerant: Extraction Service
Today we're excited to announce our newest OSS use-case accelerant: an extraction service. LLMs are a powerful tool for extracting structured data from unstructured text.

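As a rough sketch of the idea behind such a service (not its actual code), LangChain's `with_structured_output` can bind a Pydantic schema to a chat model so free text comes back as typed fields. The `Person` schema, model choice, and sample text below are illustrative placeholders.

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI  # any chat model with structured-output support works


class Person(BaseModel):
    """Fields we want pulled out of the raw text (illustrative schema)."""
    name: str = Field(description="The person's full name")
    role: str = Field(description="Their job title or role, if mentioned")


# Placeholder model choice; the extraction pattern itself is model-agnostic.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
extractor = llm.with_structured_output(Person)

text = "Alice Chen joined the platform team last year as a staff engineer."
person = extractor.invoke(f"Extract the person mentioned in this text:\n\n{text}")
print(person)  # e.g. Person(name='Alice Chen', role='staff engineer')
```
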
LangGraph for Code Generation
Code generation and analysis are two of the most important applications of LLMs, as shown by the ubiquity of code-assistant products.

Reflection Agents
Reflection is a prompting strategy used to improve the quality and success rate of agents and similar AI systems. This post outlines how to build three reflection techniques using LangGraph, including implementations of Reflexion and Language Agent Tree Search.

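The post builds these techniques as LangGraph graphs; the snippet below is only a minimal plain-Python sketch of the basic generate-critique-revise loop that reflection relies on, with the model choice, prompts, and round count as placeholder assumptions rather than the post's implementation.

```python
from langchain_openai import ChatOpenAI

# Placeholder model; the post wires the same loop together as a LangGraph graph.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)


def reflect(task: str, max_rounds: int = 2) -> str:
    """Generate a draft, critique it, then revise it: the core reflection loop."""
    draft = llm.invoke(f"Write a response to the task:\n{task}").content
    for _ in range(max_rounds):
        critique = llm.invoke(
            f"Task: {task}\n\nDraft:\n{draft}\n\n"
            "Critique this draft: point out factual gaps, unclear reasoning, "
            "and concrete ways to improve it."
        ).content
        draft = llm.invoke(
            f"Task: {task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft, addressing every point in the critique."
        ).content
    return draft


print(reflect("Explain why context window size matters for RAG pipelines."))
```
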
JSON agents with Ollama & LangChain
Learn to implement an open-source Mixtral agent that interacts with the Neo4j graph database through a semantic layer.

Supercharging If-Statements With Prompt Classification Using Ollama and LangChain
Editor's Note: Andrew Nguonly has been building one of the more impressive projects we've seen recently: an LLM co-pilot.

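As a rough sketch of the prompt-classification-drives-an-if-statement pattern (not the project's actual code), a locally served Ollama model can label an input and an ordinary branch then routes on that label. The model name, categories, and routing targets below are assumptions for illustration.

```python
from langchain_community.chat_models import ChatOllama

# Placeholder local model; any model served by Ollama works here.
llm = ChatOllama(model="mistral", temperature=0)


def classify(message: str) -> str:
    """Ask the local model to label the message with one of a few fixed categories."""
    prompt = (
        "Classify the user message into exactly one category: "
        "'question', 'command', or 'chitchat'. Reply with the category only.\n\n"
        f"Message: {message}"
    )
    return llm.invoke(prompt).content.strip().lower()


message = "Summarize the page I'm currently reading."
label = classify(message)

# The classification result "supercharges" a plain if-statement.
if label == "command":
    print("Routing to the tool-execution path")
elif label == "question":
    print("Routing to the retrieval/QA path")
else:
    print("Routing to the small-talk path")
```
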
Winning in AI means mastering the new stack
Authors: Edo Liberty, Guillermo Rauch, Ori Goshen, Robert Nishihara, Harrison Chase
AI is rapidly changing. Too rapidly for most.