In recent years, AI agents have become a cornerstone in developing intelligent systems. With advancements in machine learning and natural language processing (NLP), we can now create systems that are capable of complex tasks, from simple text generation to invoking external tools for real-time data processing. One such framework that facilitates the building of powerful AI agents is LangGraph, a tool built on top of LangChain, which enables creating customizable AI agents that can interact with different external tools.
In this blog post, we’ll dive into creating a simple Python-based AI Agent using LangGraph and OpenRouter. The goal is to show how to build an AI agent that integrates tools like weather retrieval and PDF parsing, all while streaming outputs in real-time. We’ll also cover how to set up the environment, implement custom tools, and invoke them in a LangGraph workflow.
What is LangGraph?
LangGraph is a Python-based framework built on top of LangChain, designed to create flexible workflows that can include external tools, APIs, and models. It allows you to design an AI agent that can interact with various services and handle multiple tasks in sequence or parallel.
The power of LangGraph lies in its ability to define state graphs—workflow structures where nodes represent the AI model and external tools, and edges define the transitions between them. These state graphs allow developers to design complex AI workflows by connecting different components logically.
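Conceptually, a state graph boils down to a mapping from nodes to handler functions plus a routing rule that picks the next node. The plain-Python sketch below (no LangGraph involved; all names are illustrative) walks a two-node loop of "model" then "tools", which is the same shape the agent built later in this post uses:

```python
# A toy state graph: nodes are functions that transform state,
# and each node records which node should run next (the "edge").

def model_node(state):
    # Pretend the model asks for a tool on the first pass, then answers.
    if not state["tool_result"]:
        state["next"] = "tools"
    else:
        state["answer"] = f"Based on the tool: {state['tool_result']}"
        state["next"] = None  # terminal: no outgoing edge
    return state

def tools_node(state):
    state["tool_result"] = "It's always sunny in Paris!"
    state["next"] = "model"  # edge back to the model
    return state

NODES = {"model": model_node, "tools": tools_node}

def run(state, entry="model"):
    node = entry
    while node is not None:
        state = NODES[node](state)
        node = state["next"]
    return state

final = run({"tool_result": None, "answer": None})
```

LangGraph handles this plumbing for you (plus message accumulation, streaming, and conditional routing), but the mental model of "nodes transform state, edges pick the next node" carries over directly.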
What is OpenRouter?
OpenRouter is a platform that gives developers access to models from OpenAI and other providers through a single OpenAI-compatible API. By using OpenRouter, you can reach powerful language models such as GPT-4o for text generation and other NLP tasks. It simplifies model access and lets you swap providers without changing your application code or worrying about the underlying infrastructure.
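Because OpenRouter exposes an OpenAI-compatible chat completions endpoint, a raw request is just a JSON body POSTed to its base URL. As a rough sketch (model IDs follow OpenRouter's provider-prefixed convention; the helper name is our own):

```python
import json

# OpenRouter's OpenAI-compatible base URL; model IDs are prefixed
# with the upstream provider, e.g. "openai/gpt-4o".
BASE_URL = "https://openrouter.ai/api/v1"

def build_chat_request(model: str, user_message: str) -> dict:
    """Assemble the JSON body for a chat completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("openai/gpt-4o", "What's the weather in Paris?")
# This string is what would be POSTed to f"{BASE_URL}/chat/completions",
# along with an "Authorization: Bearer <OPENROUTER_API_KEY>" header.
body = json.dumps(payload)
```

In practice you rarely build these requests by hand: the code later in this post points langchain_openai's ChatOpenAI client at the same base URL and lets it handle serialization and authentication.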
Step-by-Step Guide to Building an AI Agent with LangGraph and OpenRouter
Let’s break down how to build an AI agent that interacts with external tools such as a weather service and a PDF parser using LangGraph, Python, and OpenRouter.
1. Set Up Your Environment
Before we start coding, install the required packages. Note that OpenRouter needs no SDK of its own, since it exposes an OpenAI-compatible API that langchain_openai can talk to directly:
pip install langchain langgraph langchain_openai python-dotenv requests
Additionally, you need to set up environment variables for your OpenRouter API keys. Create a .env file and add:
OPENROUTER_API_KEY=your_openrouter_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
Only one of the two is required; as we'll see below, the script falls back from OPENROUTER_API_KEY to OPENAI_API_KEY. These keys are essential for accessing the language models that will power your AI agent.
2. Create External Tools for the Agent
In our example, we’ll create two mock external tools:
- Weather tool: This tool simulates fetching weather information for a city.
- PDF parsing tool: This tool simulates extracting content or metadata from a PDF document.
Let’s first define these tools:
Weather Tool (weather_tool.py)
from langchain_core.tools import tool

@tool
def get_weather(city: str) -> str:
    """Simulate fetching weather data for a city."""
    return f"It's always sunny in {city}!"

__all__ = ["get_weather"]
PDF Parsing Tool (pdf_tool.py)
from langchain_core.tools import tool

@tool
def parse_pdf(url: str, extract: str = "summary") -> str:
    """Simulate extracting data from a PDF document."""
    if extract == "content":
        return f"Extracting content from {url} (mock)..."
    elif extract == "metadata":
        return f"Extracting metadata from {url} (mock): title=Demo, pages=12"
    else:
        return f"Extracting summary from {url} (mock): This is a summary of the PDF about AI agents."

__all__ = ["parse_pdf"]
3. Building the AI Model and Workflow
Next, let’s define the model and create the workflow (state graph) for our AI agent. This agent will interact with the user, invoke the weather tool or PDF parser based on the user input, and stream real-time updates.
Main AI Agent Code (ai_agent.py)
import os
from typing import Dict

from dotenv import load_dotenv
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import MessagesState, StateGraph
from langgraph.prebuilt import ToolNode, tools_condition

from weather_tool import get_weather  # Import weather tool
from pdf_tool import parse_pdf  # Import PDF tool

# Load environment variables
load_dotenv()

# Initialize tools
TOOLS = [get_weather, parse_pdf]

def build_model() -> ChatOpenAI:
    api_key = os.getenv("OPENROUTER_API_KEY") or os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("Please set OPENROUTER_API_KEY or OPENAI_API_KEY.")
    base_url = os.getenv("OPENROUTER_BASE_URL", "https://openrouter.ai/api/v1")
    model_name = os.getenv("OPENROUTER_MODEL", "openai/gpt-4o")
    return ChatOpenAI(
        model=model_name,
        base_url=base_url,
        api_key=api_key,
        temperature=0.2,
        max_retries=2,
    ).bind_tools(TOOLS)

def call_model(state: MessagesState) -> Dict[str, list]:
    ai = build_model().invoke(state["messages"])
    return {"messages": [ai]}

def build_graph() -> StateGraph:
    graph = StateGraph(MessagesState)
    graph.add_node("model", call_model)
    graph.add_node("tools", ToolNode(TOOLS))
    graph.add_conditional_edges("model", tools_condition)
    graph.add_edge("tools", "model")
    graph.set_entry_point("model")
    return graph

def pretty_print_update(node: str, update: Dict):
    if not update:
        return
    msgs = update.get("messages") or []
    for m in msgs:
        if isinstance(m, AIMessage):
            content = m.content if isinstance(m.content, str) else str(m.content)
            print(f"[Model Output] {content}")
        elif isinstance(m, ToolMessage):
            print(f"[Tool Output] {m.name}: {m.content}")

# Main function to run the agent
if __name__ == "__main__":
    system_prompt = input("Enter system prompt: ")
    user_msg = input("Enter your message: ")
    app = build_graph().compile()
    inputs = {"messages": [SystemMessage(content=system_prompt), HumanMessage(content=user_msg)]}
    print("\n=== Streaming Execution Started ===")
    for step in app.stream(inputs, stream_mode="updates"):
        for node, update in step.items():
            print(f"\n--- Node: {node} ---")
            pretty_print_update(node, update)
    print("\n=== Streaming Execution Finished ===")
In this script:
- build_model(): Configures the AI model using OpenRouter (or OpenAI as a fallback).
- build_graph(): Constructs the state graph, which defines the flow of messages and tool invocations.
- pretty_print_update(): Formats the updates and outputs messages as the agent interacts with the user.
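The conditional edge is the interesting part: tools_condition inspects the model's latest message and routes to the "tools" node if it contains tool calls, or ends the run otherwise. A stdlib-only approximation of that routing decision (illustrative, not LangGraph's actual implementation) looks like this:

```python
# Rough approximation of a tools_condition-style router: if the last
# assistant message requested tool calls, go to "tools"; otherwise stop.

END = "__end__"

def route_after_model(messages):
    last = messages[-1]
    return "tools" if last.get("tool_calls") else END
```

Because of the graph.add_edge("tools", "model") line, every tool result flows back into the model, which can then call another tool or produce the final answer — this loop is what makes the agent "agentic".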
4. Running the AI Agent
To run the AI agent, execute the script:
python ai_agent.py
You will be prompted to input a system prompt (e.g., “You are a helpful assistant”) and a user message (e.g., “What’s the weather in Paris?”).
Example Interaction
- User Input: “What’s the weather like in Paris?”
- AI Output: “It’s always sunny in Paris!”
Conclusion
In this blog post, we’ve demonstrated how to create an AI agent using LangGraph and OpenRouter in Python. This agent is capable of interacting with external tools like weather services and PDF parsers, and can stream real-time updates as it processes the input.
By combining LangGraph’s flexible state graph architecture with OpenRouter’s model access, we can build highly interactive AI agents capable of handling complex workflows and tool invocations. Whether you’re building a chatbot or integrating real-time external services, this approach offers a scalable and efficient way to create intelligent systems.
The next step would be to replace the mock tools with actual implementations (e.g., calling a real weather API or parsing PDFs using a library like PyPDF2) to build a production-ready AI agent.
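For instance, a real weather tool might call a free plain-text endpoint such as wttr.in. A minimal sketch, assuming wttr.in's public URL format (error handling deliberately kept minimal):

```python
import urllib.parse
import urllib.request

def build_weather_url(city: str) -> str:
    """wttr.in returns a one-line plain-text forecast when format=3 is set."""
    return f"https://wttr.in/{urllib.parse.quote(city)}?format=3"

def get_weather(city: str) -> str:
    """Fetch a short real forecast; drop-in replacement for the mock tool."""
    with urllib.request.urlopen(build_weather_url(city), timeout=10) as resp:
        return resp.read().decode("utf-8").strip()
```

Keeping the function signature identical to the mock means nothing else in the graph has to change: decorate it with @tool, leave TOOLS as-is, and the agent starts returning live data.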