LangGraph Basics: Part 3 β€” Conditional Edges & Routing Logic


πŸ”€ 1. What are Conditional Edges?

In Part 1 you learned that LangGraph graphs are made of nodes connected by edges. A normal edge β€” add_edge("A", "B") β€” always sends execution from node A to node B, no matter what. That works perfectly for a straight pipeline where every input follows the same steps in the same order.

But real-world AI applications rarely work in straight lines. Sometimes you need to check a condition and go one way if it's true and another way if it's false. Sometimes you want different processing paths depending on what the user sent. This is where conditional edges come in.

A conditional edge doesn't connect two nodes directly. Instead, it says: "after this node runs, call a function to decide where to go next." That function β€” called the router function β€” reads the current state and returns a string naming the next node. LangGraph then routes execution to that node.
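Before touching LangGraph's API, the contract is worth seeing in isolation. Here is a toy sketch of the idea in plain Python β€” this is not LangGraph itself, and all names (check_weather, weather_router, etc.) are made up for illustration:

```python
# A toy illustration of the conditional-edge idea in plain Python --
# not LangGraph's API, just the contract it builds on.

def check_weather(state: dict) -> dict:        # a "node": does work, updates state
    return {**state, "raining": state["forecast"] == "rain"}

def weather_router(state: dict) -> str:        # a "router": reads state, names the next node
    return "take_umbrella" if state["raining"] else "wear_sunglasses"

def take_umbrella(state: dict) -> dict:
    return {**state, "action": "umbrella"}

def wear_sunglasses(state: dict) -> dict:
    return {**state, "action": "sunglasses"}

nodes = {"take_umbrella": take_umbrella, "wear_sunglasses": wear_sunglasses}

state = check_weather({"forecast": "rain"})    # source node runs first
state = nodes[weather_router(state)](state)    # router picks exactly one branch
print(state["action"])                         # -> umbrella
```

LangGraph automates the last two lines for you β€” running the source node, calling the router, and dispatching to the chosen node β€” but the division of labour is exactly this: nodes compute, the router only chooses.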

🚧 1.1 The Problem with Fixed Paths

Imagine you're building an AI customer support system. A customer might write about a billing problem, a technical issue, or a general question. With normal edges, you can only wire one path:

graph.add_node("support", support_node)
graph.add_edge(START, "support")   # always goes to the same node
graph.add_edge("support", END)

Every message β€” billing complaint, app crash report, or opening-hours question β€” hits the same support_node. That one node now has to detect the message type internally, switch its behaviour, and handle all three cases. It becomes a tangled function trying to do everything at once. The graph carries no routing intelligence; it's all buried inside a single node.

⚠️ The cost of a fixed path: when one node handles all cases, it becomes harder to read, test, and improve. A bug in the billing response path can silently break the technical one. Conditional edges let you separate concerns cleanly β€” one specialist node per responsibility.

With conditional edges, you split this into three dedicated nodes β€” billing_support, technical_support, and general_support β€” and let the graph decide which one to call based on the message type. The routing logic moves out of the node and into the graph structure, where it belongs.

☎️ 1.2 A Real-World Analogy

Think about calling a customer support hotline. You don't immediately speak to a billing specialist. First, a recorded menu β€” or a live operator β€” asks what your issue is about. Based on your answer, it routes your call: "Press 1 for billing, press 2 for technical support, press 3 for all other inquiries."

Without that routing step, every call would land on the same desk. One agent would have to handle billing disputes, troubleshoot app crashes, and answer FAQ questions simultaneously β€” doing all of it less effectively than a specialist would. The routing operator exists precisely to direct each caller to the agent best equipped to help them.

πŸ”— In LangGraph terms: the routing operator is the router function, the customer's issue category is stored in state, and the specialist agents are the downstream nodes. add_conditional_edges() is the mechanism that wires this routing logic into the graph.

With that mental model in place, let's get the environment set up and then build each piece of the routing system from scratch.


βš™οΈ 2. Installation & Setup

If you've followed Parts 1 and 2, your environment is already set up β€” you can skip straight to Section 3. If this is your first post in the series, follow the steps below to get everything ready.

Python version. This project requires Python 3.12.

python --version
# Python 3.12.x

Create and activate a virtual environment.

python -m venv langgraph
source langgraph/bin/activate    # macOS / Linux
langgraph\Scripts\activate       # Windows

Install dependencies. All packages for the entire series are in one shared requirements.txt at the root of the langgraph/ folder.

langchain==1.2.17
langgraph==1.1.10
langchain-google-genai==4.2.2
python-dotenv==1.2.2
gradio==6.14.0
pip install -r requirements.txt

Gemini API key. This project uses Google Gemini as the LLM. Get your free API key from Google AI Studio, then create a .env file inside the langgraph/ folder:

GEMINI_API_KEY=your_api_key_here
GEMINI_MODEL_NAME=gemini-2.0-flash
GEMINI_TEMPERATURE=0.7
GEMINI_MAX_RETRIES=2
⚠️ Never commit your .env file to version control. Add it to .gitignore to keep your API key safe.
πŸ”§ 2.1 Configuring the LLM

config.py reads the .env file and exposes the settings as class attributes. All other modules import from Config directly β€” no instantiation needed.

import os

from dotenv import load_dotenv

load_dotenv(dotenv_path=os.path.join(os.path.dirname(__file__), "..", ".env"))


class Config:
    MODEL_NAME = os.getenv("GEMINI_MODEL_NAME", "gemini-2.0-flash")
    TEMPERATURE = float(os.getenv("GEMINI_TEMPERATURE", 0.7))
    MAX_RETRIES = int(os.getenv("GEMINI_MAX_RETRIES", 2))
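One detail worth noting about this pattern: environment variables are always strings, so numeric settings must be cast with float() or int(). A standalone sketch of just that mechanic (the variable names here mirror the config but the snippet is illustrative):

```python
import os

# Environment variables are always strings, so numeric settings must be cast.
# Standalone demo of the pattern Config uses; names are illustrative.
os.environ["GEMINI_TEMPERATURE"] = "0.7"      # simulate a value set in .env

temperature = float(os.getenv("GEMINI_TEMPERATURE", 0.7))   # cast handles str or default
retries = int(os.getenv("GEMINI_MAX_RETRIES", 2))           # unset -> falls back to 2
print(temperature, retries)                                  # -> 0.7 2
```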

llm.py wraps ChatGoogleGenerativeAI with those settings. Every node that needs an LLM instantiates GeminiLLM() once and calls get_llm().

from langchain_google_genai import ChatGoogleGenerativeAI

from config import Config


class GeminiLLM:
    def __init__(self):
        self.llm = ChatGoogleGenerativeAI(
            model=Config.MODEL_NAME,
            temperature=Config.TEMPERATURE,
            max_retries=Config.MAX_RETRIES,
        )

    def get_llm(self):
        return self.llm

With the environment ready, let's look at how conditional edges differ from the normal edges you've already used.


βš–οΈ 3. Normal Edges vs Conditional Edges

You've used add_edge() in every previous post. Here's a side-by-side look at what changes when you switch to add_conditional_edges().

Normal edge β€” hardwired, always the same destination:

# Normal edge: node_a always goes to node_b, every single time
graph.add_edge("node_a", "node_b")

Conditional edge β€” the destination is chosen at runtime by a router function:

# Conditional edge: after node_a runs, call my_router(state)
# and go to whichever node it returns
graph.add_conditional_edges(
    "node_a",        # source node
    my_router,       # router function β€” decides where to go
    {                # path map β€” translates return values to node names
        "path_x": "node_b",
        "path_y": "node_c",
        "path_z": "node_d",
    }
)

The behaviour difference is significant: a normal edge always produces one solid arrow in the graph diagram; a conditional edge produces multiple dashed arrows β€” one per possible destination β€” but only one of them is actually followed during each run.

Feature               | Normal Edge              | Conditional Edge
----------------------|--------------------------|------------------------------------------
API call              | add_edge(src, dst)       | add_conditional_edges(src, fn, map)
Path at runtime       | Fixed β€” always the same  | Dynamic β€” chosen by router function
Possible destinations | One                      | One or more
Decision logic        | None needed              | Router reads state and returns next node
Diagram arrow         | Solid arrow (β†’)          | Dashed arrows (- - β†’) for each branch
Best for              | Linear pipelines         | Classification, branching, routing
βœ… When to choose which: if the next node is always the same regardless of input or state, use add_edge(). If the graph needs to make a decision β€” "which specialist should handle this?" β€” use add_conditional_edges().

The key ingredient that makes conditional edges work is the router function. Let's build a complete understanding of how to write one.


🧭 4. The Router Function

A router function is the brain behind a conditional edge. It looks at the current state and decides which node should run next. LangGraph calls it automatically β€” you just write the logic, and the framework handles the rest.

πŸ“‹ 4.1 What a Router Function Does

A router function has a simple contract β€” it takes the current state as its only argument and returns a string. That string is the name of the next node to run.

def my_router(state: MyState) -> str:
    if state["priority"] == "urgent":
        return "urgent_handler"
    return "standard_handler"

That's the entire contract. There is no special class to inherit from, no decorator to apply, no LangGraph-specific import needed. Any plain Python function that accepts one argument (state) and returns a string qualifies as a router function.

Here's the three-branch pattern from our customer support project:

def route_by_category(state: SupportState) -> str:
    category = state["category"]      # read the classification from state
    if category == "billing":
        return "billing_support"      # name of the node to run next
    elif category == "technical":
        return "technical_support"
    else:
        return "general_support"

Each branch simply returns a string. LangGraph looks up that string in the path map (covered in Section 5.2) and sends execution to the matching node.

πŸ’‘ Keep router functions focused. A router function should only read state and return a string β€” it should not call an LLM, write to state, or produce side effects. All computation belongs in nodes.
πŸ” 4.2 Reading State Inside the Router

Here is something important to understand about timing: LangGraph calls the router function after the source node has already finished running and its state updates have been merged. By the time your router is invoked, the state already contains everything the previous node wrote.

This is why the pattern works so naturally. The source node does the computation β€” classifying the customer's message β€” and stores the result in state. The router then simply reads that result and returns the right node name. No computation in the router, just a lookup.

Here's the step-by-step sequence for our project:

  • Step 1: classify_node runs β†’ asks the LLM to categorise the message β†’ returns {"category": "billing"}
  • Step 2: LangGraph merges {"category": "billing"} into the full state
  • Step 3: LangGraph calls route_by_category(state) β€” state["category"] is now "billing"
  • Step 4: The router returns "billing_support"
  • Step 5: LangGraph runs the billing_support node
πŸ“Œ Key point: the router runs between nodes, not inside them. It is not a node itself β€” it's a gate that LangGraph calls to decide which node comes next. It does not appear in the node registry (add_node()) and it does not modify state.
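The five steps above can be traced in plain Python. This is a sketch of what LangGraph does for you β€” the merge and the dispatch β€” with classify_node stubbed to return a fixed category instead of calling an LLM:

```python
# Plain-Python sketch of the merge-then-route timing. The stub classifier
# stands in for the real LLM-backed classify_node.

def classify_node(state: dict) -> dict:
    # Step 1: the node computes and returns only its update
    return {"category": "billing"}

def route_by_category(state: dict) -> str:
    # Steps 3-4: by now the merged state already contains "category"
    mapping = {"billing": "billing_support", "technical": "technical_support"}
    return mapping.get(state["category"], "general_support")

state = {"message": "I was charged twice.", "category": "", "response": ""}
state = {**state, **classify_node(state)}    # Step 2: LangGraph merges the update
next_node = route_by_category(state)         # Steps 3-4: router reads merged state
print(next_node)                             # -> billing_support
```

The router never had to classify anything β€” by the time it runs, the answer is already sitting in state.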
🏷️ 4.3 Type-Safe Routing with Literal

A router that returns plain str works fine, but annotating the return type with Literal from Python's typing module makes the code much clearer and safer.

from typing import Literal


def route_by_category(
    state: SupportState,
) -> Literal["billing_support", "technical_support", "general_support"]:
    mapping = {
        "billing": "billing_support",
        "technical": "technical_support",
        "general": "general_support",
    }
    return mapping.get(state["category"], "general_support")

Literal["a", "b", "c"] tells Python β€” and anyone reading the code β€” that this function will only ever return one of those three exact strings. Here's why this matters in practice:

  • Self-documenting: a reader can see all possible routing targets just from the function signature β€” no need to read the function body.
  • Typo protection: if you mistype a return value (e.g. "billng_support"), your IDE's type checker will flag it immediately instead of silently producing a routing error at runtime.
  • Graph diagram accuracy: LangGraph can use the Literal annotation to draw all possible branches in the diagram automatically.

Making it a habit to annotate router functions with Literal costs nothing and pays back every time someone (including future you) reads the code. Now let's look at how to connect this router into the graph.
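The diagram benefit is easy to verify yourself: the set of possible destinations can be read straight off the annotation with typing.get_args, which is the kind of introspection a framework can use to draw every branch. A minimal sketch with a stubbed router body:

```python
from typing import Literal, get_args, get_type_hints

def route_by_category(state: dict) -> Literal[
    "billing_support", "technical_support", "general_support"
]:
    return "general_support"  # body irrelevant here; we only inspect the signature

# Enumerate every possible routing target from the return annotation alone
targets = get_args(get_type_hints(route_by_category)["return"])
print(targets)   # -> ('billing_support', 'technical_support', 'general_support')
```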


πŸ”Œ 5. Wiring it Together: add_conditional_edges()

Writing the router function is only half the job. You also need to tell LangGraph which node triggers the router and what to do with the string it returns. That's exactly what add_conditional_edges() does.

🧩 5.1 Syntax and Parameters

graph.add_conditional_edges(source, path, path_map=None)

The method takes three arguments:

  • source (string) β€” the name of the node whose completion triggers the routing decision. After this node runs and its state updates are merged, LangGraph calls the router. In our project this is "classify".
  • path (callable) β€” the router function. LangGraph calls path(state) and expects a string back. In our project this is route_by_category.
  • path_map (dict, optional) β€” maps the router's return values to registered node names. It can be omitted when the router returns node names directly (see Section 5.3).

A complete call for our customer support router:

graph.add_conditional_edges(
    "classify",          # ← source: the node that runs first
    route_by_category,   # ← path: called with state after "classify" finishes
    {                    # ← path_map: maps return values to node names
        "billing_support": "billing_support",
        "technical_support": "technical_support",
        "general_support": "general_support",
    }
)
πŸ—ΊοΈ 5.2 The Path Map

The path map is a Python dictionary where:

  • Keys are the strings your router function can return
  • Values are the names of nodes registered in the graph via add_node()

Think of the path map as a lookup table. When the router returns "billing_support", LangGraph checks the path map, finds the matching entry, and sends execution to the registered node named "billing_support".

# router returns "billing_support"
#        ↓
# path_map["billing_support"] β†’ "billing_support"  (registered node name)
#        ↓
# LangGraph runs the "billing_support" node

The path map decouples the router from the graph's internal node names. If you ever rename the node from "billing_support" to "handle_billing", you only update the path map value β€” the router code stays untouched:

graph.add_conditional_edges(
    "classify",
    route_by_category,
    {
        "billing_support": "handle_billing",   # ← only this value changes
        "technical_support": "technical_support",
        "general_support": "general_support",
    }
)
βœ‚οΈ 5.3 Skipping the Path Map

When your router returns node names directly β€” the return values exactly match registered node names β€” the path map can be omitted entirely:

# These two calls are equivalent when the router returns node names directly:

# With path map (explicit β€” recommended):
graph.add_conditional_edges("classify", route_by_category, {
    "billing_support": "billing_support",
    "technical_support": "technical_support",
    "general_support": "general_support",
})

# Without path map (implicit):
graph.add_conditional_edges("classify", route_by_category)
βœ… Recommendation: include the path map even when it seems redundant. It makes the valid routing targets visible at a glance without having to read the router function, and it protects against silent bugs when node names change later.

Now that you understand every moving part β€” the router function, the Literal annotation, and add_conditional_edges() β€” let's put them all together in a real working project.


🎧 6. Complete Example: Customer Support Router

The project we're building is a Customer Support Router β€” a four-node LangGraph application that classifies an incoming customer message and routes it to a dedicated support node. The classify node runs first and writes category to state; the router reads that category and sends the message to exactly one of three specialist nodes, each powered by a different LLM prompt.

πŸ“ 6.1 Project Structure

basics-3-conditional-edges/
β”œβ”€β”€ config.py            # env variables and model settings
β”œβ”€β”€ llm.py               # Gemini LLM wrapper
β”œβ”€β”€ state.py             # SupportState TypedDict
β”œβ”€β”€ router.py            # route_by_category function
β”œβ”€β”€ nodes.py             # classify + three support node functions
β”œβ”€β”€ graph.py             # graph construction and compilation
β”œβ”€β”€ support_runner.py    # entry point: runs the graph
β”œβ”€β”€ app.py               # Gradio web UI
└── prompts/             # LLM prompt templates, one file per node
    β”œβ”€β”€ classify.txt     # classify the support category
    β”œβ”€β”€ billing.txt      # prompt for billing queries
    β”œβ”€β”€ technical.txt    # prompt for technical issues
    └── general.txt      # prompt for general inquiries

config.py and llm.py handle environment setup (Section 2.1). state.py defines the shared data structure. router.py holds the routing function (Section 4). nodes.py contains all four node functions. graph.py wires everything together using add_conditional_edges() (Section 5). support_runner.py is the entry point (Section 6.3), and app.py provides the Gradio web interface (Section 8).

πŸ“ 6.2 Full Code Walkthrough

state.py β€” shared data structure.

The state has three fields. message holds the customer's original text, category is written by classify_node and read by the router, and response is the final reply from whichever support node runs.

from typing import TypedDict


class SupportState(TypedDict):
    message: str
    category: str   # set by classify_node; drives the conditional edge
    response: str

All three fields use the default last-write-wins behaviour β€” no Annotated reducers needed here. Each field is written by exactly one node and never needs to accumulate across multiple writes.

router.py β€” the routing function.

The router lives in its own file, separate from the nodes. This separation keeps the routing logic easy to find, read, and test on its own. The function maps each category string to the corresponding node name, with a safe fallback in case the LLM returns something unexpected.

from typing import Literal

from state import SupportState


def route_by_category(
    state: SupportState,
) -> Literal["billing_support", "technical_support", "general_support"]:
    mapping = {
        "billing": "billing_support",
        "technical": "technical_support",
        "general": "general_support",
    }
    # Falls back to general_support if the LLM returns an unexpected value
    return mapping.get(state["category"], "general_support")

The mapping.get(key, default) pattern is a defensive measure. If the LLM ever returns something other than the three expected strings, the graph gracefully routes to general_support instead of crashing with a KeyError.
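The fallback is easy to sanity-check in isolation β€” a couple of lookups against the same mapping pattern, stubbed here without the SupportState type:

```python
# Defensive routing: dict.get(key, default) never raises KeyError.
mapping = {
    "billing": "billing_support",
    "technical": "technical_support",
    "general": "general_support",
}

print(mapping.get("billing", "general_support"))    # -> billing_support
print(mapping.get("refunds?!", "general_support"))  # -> general_support (graceful fallback)
```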

nodes.py β€” all four node functions.

SupportNodes initialises the LLM and loads all four prompt templates once in __init__. The classify node determines the category; the three support nodes each respond to one type of customer message using a specialised prompt.

import os

from llm import GeminiLLM
from state import SupportState

_PROMPTS_DIR = os.path.join(os.path.dirname(__file__), "prompts")


def _load_prompt(filename: str) -> str:
    with open(os.path.join(_PROMPTS_DIR, filename), "r") as f:
        return f.read()


class SupportNodes:
    def __init__(self):
        self.llm = GeminiLLM().get_llm()
        self.classify_prompt = _load_prompt("classify.txt")
        self.billing_prompt = _load_prompt("billing.txt")
        self.technical_prompt = _load_prompt("technical.txt")
        self.general_prompt = _load_prompt("general.txt")

    def classify_node(self, state: SupportState) -> dict:
        prompt = self.classify_prompt.format(message=state["message"])
        response = self.llm.invoke(prompt)
        raw = self._extract_text(response).strip().lower()
        category = raw if raw in ("billing", "technical", "general") else "general"
        return {"category": category}  # ← only writes category; router reads it next

    def billing_node(self, state: SupportState) -> dict:
        prompt = self.billing_prompt.format(message=state["message"])
        return {"response": self._extract_text(self.llm.invoke(prompt)).strip()}

    def technical_node(self, state: SupportState) -> dict:
        prompt = self.technical_prompt.format(message=state["message"])
        return {"response": self._extract_text(self.llm.invoke(prompt)).strip()}

    def general_node(self, state: SupportState) -> dict:
        prompt = self.general_prompt.format(message=state["message"])
        return {"response": self._extract_text(self.llm.invoke(prompt)).strip()}

    def _extract_text(self, response) -> str:
        content = response.content
        if isinstance(content, list):
            return " ".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
        return content

Each of the three support nodes receives the full state but only uses message. Because each has its own prompt file, you can tune them independently β€” for example, making the billing node more empathetic or the technical node more step-by-step β€” without touching any other node.

graph.py β€” wiring everything together.

This is where add_conditional_edges() appears. The graph registers all four nodes, then wires them: START β†’ classify with a fixed edge, classify β†’ ? with a conditional edge, and all three support nodes to END with fixed edges.

from langgraph.graph import END, START, StateGraph

from nodes import SupportNodes
from router import route_by_category
from state import SupportState


class SupportGraph:
    def __init__(self):
        self.nodes = SupportNodes()
        self.compiled_graph = self._build()

    def _build(self):
        graph = StateGraph(SupportState)

        # Register all four nodes
        graph.add_node("classify", self.nodes.classify_node)
        graph.add_node("billing_support", self.nodes.billing_node)
        graph.add_node("technical_support", self.nodes.technical_node)
        graph.add_node("general_support", self.nodes.general_node)

        # Fixed edge: every run starts with classify
        graph.add_edge(START, "classify")

        # Conditional edge: route_by_category picks which support node runs
        graph.add_conditional_edges(
            "classify",
            route_by_category,
            {
                "billing_support": "billing_support",
                "technical_support": "technical_support",
                "general_support": "general_support",
            },
        )

        # All three support nodes converge at END
        graph.add_edge("billing_support", END)
        graph.add_edge("technical_support", END)
        graph.add_edge("general_support", END)

        return graph.compile()

    def get_compiled_graph(self):
        return self.compiled_graph

The graph has one fixed entry (START β†’ classify), one conditional fork (classify β†’ ?), and three fixed exits (? β†’ END). The ? is resolved at runtime by the router β€” only one branch runs per invoke.

support_runner.py β€” entry point.

SupportRunner wraps the graph and exposes a simple run(message) method. The initial state passes empty strings for category and response β€” both are filled in by the graph before it ends.

from graph import SupportGraph


class SupportRunner:
    def __init__(self):
        self.support_graph = SupportGraph()
        self.app = self.support_graph.get_compiled_graph()

    def run(self, message: str) -> dict:
        return self.app.invoke({
            "message": message,
            "category": "",   # filled by classify_node
            "response": "",   # filled by whichever support node runs
        })

    def format_output(self, result: dict) -> str:
        return "\n".join([
            f"πŸ“© Message : {result['message']}",
            f"🏷️ Category : {result['category'].upper()}",
            "─" * 60,
            f"πŸ’¬ Response :\n{result['response']}",
        ])
▢️ 6.3 Running &amp; Output

Run the entry point from inside the basics-3-conditional-edges/ directory:

python support_runner.py

The runner tests three messages β€” one billing issue, one technical problem, and one general inquiry β€” and prints the category and response for each:

============================================================
LangGraph Basics β€” Customer Support Router Demo
============================================================
Saving graph architecture...
Graph saved β†’ figure/graph.mmd
Graph saved β†’ figure/graph.png

πŸ“© Message : I was charged twice for my subscription this month.
🏷️ Category : BILLING
────────────────────────────────────────────────────────────
πŸ’¬ Response :
We're sorry about the duplicate charge. Please log in to your account and go
to Billing History to confirm the transactions. If two charges appear, submit
a refund request through the portal and our team will process it within 3–5
business days.

============================================================
πŸ“© Message : My app keeps crashing whenever I try to upload a file.
🏷️ Category : TECHNICAL
────────────────────────────────────────────────────────────
πŸ’¬ Response :
Let's get that fixed. First, make sure you're on the latest app version. Then
try clearing the app cache in Settings β†’ Storage. If the crash persists,
please send us your device model and OS version so our team can investigate
further.

============================================================
πŸ“© Message : What are your customer support hours?
🏷️ Category : GENERAL
────────────────────────────────────────────────────────────
πŸ’¬ Response :
Our support team is available Monday to Friday, 9 AM – 6 PM (EST). For urgent
issues outside these hours, submit a ticket through the Help Center and we'll
respond within 24 hours.
============================================================

Each message lands in exactly the right specialist node. The billing complaint gets an empathetic, resolution-focused reply. The app crash gets clear troubleshooting steps. The hours question gets a direct, informative answer. The graph handled all the routing automatically β€” the individual support nodes never had to check the message type themselves.


πŸ“Š 7. Graph Diagram

LangGraph can export the compiled graph as a Mermaid diagram. Call runner.save_figure() (already wired into support_runner.py) to generate figure/graph.mmd and figure/graph.png. Here's what the branching structure looks like:

%%{init: {"flowchart": {"curve": "linear"}}}%%
graph TD
    S([__start__]):::first
    CL(classify)
    BS(billing_support)
    TS(technical_support)
    GS(general_support)
    E([__end__]):::last
    S  --> CL
    CL -.-> BS
    CL -.-> TS
    CL -.-> GS
    BS --> E
    TS --> E
    GS --> E
    classDef default fill:#f2f0ff,line-height:1.2
    classDef first fill-opacity:0
    classDef last fill:#bfb6fc

Graph architecture of the Customer Support Router. Solid arrows (β†’) are normal edges; dashed arrows (- -β†’) are conditional edges. Only one dashed branch is followed per run.

Two things stand out in this diagram. First, the three dashed arrows leaving classify represent the conditional branches β€” all three are shown in the graph structure, but only one runs per invocation. Second, all three support nodes reconnect to __end__ with solid arrows β€” regardless of which branch ran, execution always ends in the same place.

πŸ’‘ Solid vs dashed arrows: solid arrows (add_edge) mean "always go here". Dashed arrows (add_conditional_edges) mean "go here if the router says so". A glance at the diagram tells you exactly where decisions happen and how many branches are possible.

As graphs grow more complex in later posts β€” with loops, multiple conditional forks, and checkpoints β€” this diagram becomes an essential tool for understanding and debugging the flow.


🌐 8. Web UI with Gradio

The project includes a Gradio chat interface so you can test the router interactively. app.py wraps SupportRunner in a ChatInterface and launches a local web server. Type any customer message into the chat box and the graph will classify it and return the appropriate specialist response.

import gradio as gr

from support_runner import SupportRunner


class SupportApp:
    def __init__(self):
        self.runner = SupportRunner()

    def respond(self, message: str, _history: list) -> str:
        # _history is required by Gradio's ChatInterface but not used here
        if not message.strip():
            return ""
        result = self.runner.run(message)
        return self.runner.format_output(result)

    def launch(self):
        gr.ChatInterface(fn=self.respond, title="🎧 Customer Support Router").launch()


if __name__ == "__main__":
    SupportApp().launch()

Run it with:

python app.py

Gradio will print a local URL (usually http://127.0.0.1:7860). Open it in your browser and try sending a billing question, a technical complaint, and a general inquiry to watch each one route to the right specialist node.


The Customer Support Router running in a Gradio chat interface. The category and routed specialist response are shown for each message.


🏁 9. Conclusion

Conditional edges are one of the most powerful ideas in LangGraph. They move routing decisions out of individual nodes and into the graph structure itself, keeping each node focused and the overall flow easy to understand and maintain.

Here's what you learned in this post:

  • Normal edges (add_edge) are fixed; conditional edges (add_conditional_edges) choose the next node at runtime based on state.
  • A router function is a plain Python function that takes the current state and returns a string naming the next node to run.
  • It runs after the source node has already updated state β€” so it can safely read values written by that node.
  • Annotating the return type with Literal documents valid destinations, catches typos early, and helps LangGraph render accurate graph diagrams.
  • add_conditional_edges(source, path, path_map) wires the router into the graph. The path map translates router return values to registered node names.
  • In the Customer Support Router, classify_node labels each message; the router reads the label and sends execution to the matching specialist node.
βœ… Up next β€” Part 4: Checkpointers, Memory & Streaming. So far, every graph run has been stateless β€” the state lives only for the duration of one invoke() call. In Part 4 you'll learn how to persist state across calls using checkpointers, giving your graph true multi-turn memory, and how to stream LLM output token by token as it's generated.

Technical Stacks

Python Β· LangGraph Β· LangChain Β· Gemini Β· Gradio
Download Source Code

LangGraph Basics β€” Customer Support Router β€” View on GitHub