Building ReAct Agents with LangGraph: A Beginner’s Guide

ThinkTools Team

AI Research Lead

Introduction

ReAct agents have quickly become a cornerstone of modern large‑language‑model (LLM) applications. By combining reasoning steps with action calls, these agents can interact with external tools, retrieve information, and produce more accurate, context‑aware outputs than a single pass of a language model. Yet, for many developers, the idea of wiring together a reasoning loop, a set of tools, and a stateful memory can feel daunting. LangGraph, a lightweight yet powerful framework, was designed to lower that barrier. It offers a declarative graph‑based approach to building LLM workflows, allowing you to focus on the logic of your agent rather than plumbing details.

In this post we walk through the entire process of creating a ReAct agent with LangGraph, from understanding the underlying concepts to deploying a working prototype. Whether you’re a researcher experimenting with new prompting strategies or a product engineer looking to embed LLM capabilities into a service, this guide will give you the practical knowledge you need to get started.

Main Content

What Are ReAct Agents?

ReAct, short for “Reasoning and Acting,” is a paradigm that encourages an LLM to alternate between internal reasoning steps and external actions. The model first generates a thought that explains what it intends to do next. It then produces an action—such as calling a search API, querying a database, or performing a calculation. After the action returns a result, the model incorporates that result into its next thought, and the loop continues until a final answer is produced. This iterative process mitigates hallucinations and improves task completion rates, especially for complex, multi‑step problems.

The ReAct framework is typically implemented as a state machine: the agent’s state includes the current prompt, the history of thoughts and actions, and any external data retrieved. By formalizing this flow, developers can reason about the agent’s behavior, debug failures, and extend functionality with new tools.
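The state machine described above can be sketched in a few lines of plain Python. This is a minimal illustration with a stubbed model and a stubbed `lookup` tool (both hypothetical stand-ins, not part of any library); a real agent would replace them with an LLM call and a search API:

```python
def stub_model(history: list[str]) -> str:
    # Stand-in for the LLM: act once, then answer from the observation
    if not any(h.startswith("Observation:") for h in history):
        return "Action: lookup[capital of France]"
    return "Final Answer: Paris"

def lookup(query: str) -> str:
    # Stand-in for an external tool such as a search API
    return "France's capital is Paris."

def react_loop(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = stub_model(history)
        history.append(step)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action: lookup["):
            query = step[len("Action: lookup["):-1]
            history.append(f"Observation: {lookup(query)}")
    return "No answer found."
```

The `history` list is the agent's state: every thought, action, and observation is appended to it, which is exactly the flow LangGraph formalizes as a graph.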

Why LangGraph?

LangGraph builds on top of the ReAct concept by providing a graph‑centric abstraction. Instead of writing imperative code that manually updates state, you declare nodes that represent reasoning steps, tool calls, or data transformations. Edges between nodes capture the flow of control, and the framework automatically handles state propagation, retries, and error handling.

One of the key advantages of LangGraph is its modularity. Each node can be a simple function, a wrapper around an external API, or a more complex sub‑graph. This makes it trivial to swap out a tool or add a new one without touching the rest of the pipeline. Additionally, LangGraph integrates seamlessly with popular LLM providers such as OpenAI, Anthropic, and Hugging Face, allowing you to experiment with different models without changing your graph definition.

Setting Up the Environment

Before you can start building, you’ll need a few prerequisites. First, install Python 3.10 or newer and create a virtual environment. Then, install LangGraph and an LLM client. For example:

pip install langgraph langchain-openai

You’ll also need an API key for your chosen LLM provider. Store it in an environment variable (e.g., OPENAI_API_KEY) so that LangGraph can access it securely.
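Put together, a typical setup looks like the following (the key value is a placeholder; adjust the packages and variable name to your provider):

```shell
python -m venv .venv
source .venv/bin/activate
pip install langgraph langchain-openai
export OPENAI_API_KEY="sk-..."  # keep real keys out of source control
```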

Building a Simple ReAct Agent

Let’s construct a minimal ReAct agent that answers factual questions by searching the web. The graph will consist of three nodes:

  1. Think – The model generates a plan or a question to ask.
  2. Search – A node that calls a search API and returns results.
  3. Answer – The model combines the search results with its prior thoughts to produce a final answer.

In LangGraph, you declare each node as a plain function and wire the nodes together with a StateGraph. The graph carries a shared state (defined here as a TypedDict) that holds the question, intermediate thoughts, and retrieved data; each node returns only the keys it updates.

from typing import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

# Shared state that flows between nodes
class AgentState(TypedDict):
    question: str
    thought: str
    search_results: str
    answer: str

# Define the LLM
llm = ChatOpenAI(model="gpt-4o-mini")

# Node: Think
def think(state: AgentState):
    prompt = f"You are a helpful assistant. Question: {state['question']}\nThink about how to answer."
    thought = llm.invoke(prompt).content
    return {"thought": thought}

# Node: Search
def search(state: AgentState):
    query = state["thought"]
    # Placeholder for an actual search API call
    results = f"Results for '{query}': ..."
    return {"search_results": results}

# Node: Answer
def answer(state: AgentState):
    prompt = f"Using the search results: {state['search_results']}, answer the question: {state['question']}"
    final_answer = llm.invoke(prompt).content
    return {"answer": final_answer}

# Build the graph
builder = StateGraph(AgentState)
builder.add_node("think", think)
builder.add_node("search", search)
builder.add_node("answer", answer)

builder.add_edge(START, "think")
builder.add_edge("think", "search")
builder.add_edge("search", "answer")
builder.add_edge("answer", END)

graph = builder.compile()

Running the compiled graph with an initial state containing the user's question, e.g. graph.invoke({"question": "..."}), produces a final state whose answer key incorporates the search results. Note that this linear graph expresses a single think–act–answer pass; a full ReAct agent adds an edge looping back to the think node until the model signals it is done, which LangGraph supports through conditional edges.

Extending with Custom Tools

LangGraph’s flexibility shines when you need to add domain‑specific tools. Suppose you want your agent to query a SQL database. You can implement a query_db node that receives a SQL string from the LLM, executes it, and returns the result. Because the graph treats nodes uniformly, you can insert this new node anywhere in the flow.

import sqlite3

# Node: Query DB (add a db_result: str field to AgentState)
def query_db(state: AgentState):
    sql = state["thought"]  # SQL proposed by the model -- validate before executing in production
    with sqlite3.connect("app.db") as conn:
        rows = conn.execute(sql).fetchall()
    return {"db_result": str(rows)}

def route(state: AgentState) -> str:
    # Send SQL-looking thoughts to the database, everything else to web search
    return "query_db" if state["thought"].lstrip().lower().startswith("select") else "search"

builder.add_node("query_db", query_db)
# Replace the fixed think -> search edge with a conditional one
builder.add_conditional_edges("think", route, {"search": "search", "query_db": "query_db"})
builder.add_edge("query_db", "answer")

Now the agent can decide whether to perform a web search or a database query based on the content of its thoughts. By inspecting the state after each step, developers can debug why a particular tool was chosen and refine the prompting strategy.

Testing and Debugging

Because LangGraph preserves the entire state at each node, debugging becomes a matter of inspecting the state dictionary. The framework also offers a built‑in visualizer that renders the graph and highlights the current node, making it easier to spot infinite loops or missing edges. When an external API fails, LangGraph can automatically retry or fall back to a default response, ensuring robustness in production.
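Framework tooling aside, the inspect-the-state idea works with any node-based pipeline. A small wrapper (a hypothetical helper, not a LangGraph API) that logs what each node receives and returns makes the flow visible during development:

```python
import functools

def traced(node):
    """Wrap a node so each call prints the state it saw and the update it returned."""
    @functools.wraps(node)
    def wrapper(state):
        update = node(dict(state))  # copy so the node can't mutate the input silently
        print(f"{node.__name__}: in={state} out={update}")
        return update
    return wrapper

@traced
def think(state):
    return {"thought": f"plan for {state['question']}"}

state = {"question": "example"}
state.update(think(state))
```

Applying such a wrapper to every node turns a silent failure into a readable trace of exactly where the state stopped changing.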

Deployment Considerations

Deploying a ReAct agent built with LangGraph is straightforward. The graph can be exported as a lightweight Python module or wrapped in a REST endpoint using frameworks like FastAPI. If you need to scale, consider using a serverless platform or container orchestration. Remember to monitor LLM usage to stay within token limits and cost budgets. LangGraph’s stateful design also allows you to persist conversation history in a database, enabling context‑aware interactions across multiple sessions.
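As a sketch of that persistence idea (the table name and schema here are illustrative, not part of LangGraph, which also ships its own checkpointer abstractions), conversation state can be stored per session in SQLite:

```python
import json
import sqlite3

def save_state(conn, session_id, state):
    # Upsert the serialized agent state for this session
    conn.execute(
        "INSERT INTO sessions (id, state) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET state = excluded.state",
        (session_id, json.dumps(state)),
    )

def load_state(conn, session_id):
    # Return the stored state, or an empty dict for a new session
    row = conn.execute(
        "SELECT state FROM sessions WHERE id = ?", (session_id,)
    ).fetchone()
    return json.loads(row[0]) if row else {}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, state TEXT)")
save_state(conn, "user-1", {"question": "hi", "answer": "hello"})
restored = load_state(conn, "user-1")
```

Loading the saved state before each graph invocation and saving it afterward gives the agent memory across sessions.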

Conclusion

ReAct agents represent a powerful shift in how we harness LLMs for real‑world tasks. By interleaving reasoning with action, they reduce hallucinations and improve task fidelity. LangGraph provides a clean, declarative way to build, extend, and maintain these agents, abstracting away the plumbing while keeping full control over the flow. Whether you’re prototyping a knowledge‑base assistant or integrating LLM capabilities into a commercial product, the combination of ReAct and LangGraph offers a scalable, maintainable foundation.

Call to Action

If you’re ready to experiment, clone the example repository on GitHub, replace the placeholder search API with a real service, and start building your own ReAct agent. Share your results, ask questions in the community forum, or contribute improvements back to LangGraph. By embracing these tools, you’ll be at the forefront of the next wave of AI‑powered applications.
