Improving core tool interfaces and docs in LangChain

See our latest improvements to our core tool interfaces, which make it easier to turn any code into a tool, handle diverse inputs, enrich tool outputs, and handle tool errors effectively.

4 min read

“Tools” in the context of LLMs are utilities designed to be called by a model: they have well-defined schemas that can be passed to a model, and they generate outputs that can be fed back to the model. Tools are needed whenever you want a model to control parts of your code or call out to external APIs, making them an essential building block of LLM applications.

Over the past few weeks, we’ve focused on improving our core tool interfaces and documentation. These updates make it easier to:

  • Turn any code into a tool
  • Pass different types of inputs to tools
  • Return complex outputs from tools
  • Handle tool call errors with fallbacks and flow engineering

Let’s dive into these improvements for integrating, using, and managing tools in LangChain below.

Simplified tool definitions

Tool integration can be complex, often requiring manual effort like writing custom wrappers or interfaces. At LangChain, we’ve reduced complexity starting from tool definition.

  • You can now pass any Python function into ChatModel.bind_tools(), which allows normal Python functions to be used directly as tools. This simplifies how you define tools: LangChain parses type annotations and docstrings to infer the required schema. Below is an example where a model must pull a list of addresses from an input and pass it along to a tool:
from typing import List
from typing_extensions import TypedDict

from langchain_anthropic import ChatAnthropic

class Address(TypedDict):
    street: str
    city: str
    state: str

def validate_user(user_id: int, addresses: List[Address]) -> bool:
    """Validate user using historical addresses.

    Args:
        user_id: (int) the user ID.
        addresses: Previous addresses.
    """
    return True

llm = ChatAnthropic(
    model="claude-3-sonnet-20240229"
).bind_tools([validate_user])

result = llm.invoke(
    "Could you validate user 123? They previously lived at "
    "123 Fake St in Boston MA and 234 Pretend Boulevard in "
    "Houston TX."
)
result.tool_calls
[{'name': 'validate_user',
  'args': {'user_id': 123,
   'addresses': [{'street': '123 Fake St', 'city': 'Boston', 'state': 'MA'},
    {'street': '234 Pretend Boulevard', 'city': 'Houston', 'state': 'TX'}]},
  'id': 'toolu_011KnPwWqKuyQ3kMy6McdcYJ',
  'type': 'tool_call'}]

The associated LangSmith trace shows how the tool schema was populated behind the scenes, including the parsing of the function docstring into top-level and parameter-level descriptions.

Learn more about creating tools from functions in our how-to guides for Python and JavaScript.

  • Additionally, any LangChain runnable can now be cast to a tool, making it easier to reuse existing LangChain runnables, including chains and agents. Reusing existing runnables reduces redundancy and lets you deploy new functionality faster. For example, below we equip a LangGraph agent with another “user info agent” as a tool, allowing it to delegate relevant questions to the secondary agent.
from typing import List, Literal
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


llm = ChatOpenAI(temperature=0)


user_info_agent = create_react_agent(llm, [validate_user])


class Message(TypedDict):
    role: Literal["human"]
    content: str

agent_tool = user_info_agent.as_tool(
    arg_types={"messages": List[Message]},
    name="user_info_agent",
    description="Ask questions about users.",
)


agent = create_react_agent(llm, [agent_tool])

See how to use runnables as tools in our Python and JavaScript docs.

Flexible tool inputs

Tools must handle diverse inputs coming from varying data sources and user interactions. Validating these inputs can be cumbersome, especially determining which inputs should be generated by the model versus provided by other sources.

  • In LangChain, you can now pass model-generated ToolCalls directly to tools (see Python, JS docs). While this streamlines executing tools called by a model, there are also cases where we don’t want every tool input to be generated by the model. For example, if a tool requires some kind of user ID, that input will likely come from elsewhere in your code rather than from the model. For these cases, we’ve added annotations that mark which tool inputs should not be generated by the model. See docs here (Python, JS).
  • We’ve added documentation on how to pass LangGraph state to tools in Python and JavaScript. We’ve also made it possible for tools to access the RunnableConfig object associated with a run. This is useful for parametrizing tool behavior, passing global params through a chain, and accessing metadata like run IDs, all of which give you more control over tool management. Read the docs (Python, JS).

Enriched tool outputs

Enriching your tool outputs with additional data can help you use these outputs in subsequent actions or processes, increasing developer efficiency.

  • Tools in LangChain can now return results that downstream components need but that shouldn’t be part of the content sent to the model, via an artifact attribute on ToolMessages. Tools can also return ToolMessages directly to set the artifact themselves, giving developers more control over output management. See docs here (Python, JS).
  • We’ve also enabled tools to stream custom events, providing real-time feedback that improves your tools’ usability. See docs here (Python, JS).

Robust handling of tool call errors

Tools can fail for many reasons, so implementing fallback mechanisms and handling these failures gracefully is important for maintaining app stability. To support this, we’ve added:

  • Docs for how to use prompt engineering and fallbacks to handle tool calling errors (Python, JS).
  • Docs for how to use flow engineering in your LangGraph graph to handle tool calling errors (Python, JS).

What’s next

In the coming weeks we’ll continue adding how-to guides and best practices for defining tools and designing tool-using architectures. We’ll also refresh the documentation for our many tool and toolkit integrations. These efforts aim to empower users to maximize the potential of LangChain tools as they build context-aware reasoning applications.

If you haven’t already, check out our docs to learn more about LangChain for Python and JavaScript.