Feb 26, 2025

Introduction to Agentic Programming Part 2

Take a dive into agentic programming with OpenAI Swarm, LangGraph, and n8n in part 2 of our introduction to agentic programming series.

Picking up where we left off, in Part 1 we reviewed the basic concepts of agentic programming and some common frameworks like ReACT, Plan-and-Execute, and Tree of Thoughts. We then walked through two in-depth examples using LangChain (with ReACT) and CrewAI. In Part 2, we will take a look at OpenAI’s Swarm framework, LangGraph, and n8n. As always, feel free to follow along with the video while you comb through the tutorial.

OpenAI Swarm

OpenAI Swarm is a framework that focuses on orchestrating multiple agents to work together on complex tasks. It allows for the creation of a swarm of agents, each with its own specialization, that can be dynamically called upon based on the requirements of a given task.

First things first: set up a Python virtual environment. Since we are creating a lot of projects, this helps us ensure we aren’t installing incompatible libraries from app to app.

python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate

In our app directory, create a requirements.txt file.


You’ll notice we are using an older version of openai. That’s on purpose for compatibility when running Swarm.
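If you’re building the file yourself, a minimal sketch might look like this (Swarm installs straight from its GitHub repository; pick an openai pin that works with it, since the exact version isn’t reproduced here):

# requirements.txt (sketch)
git+https://github.com/openai/swarm.git
openai        # the tutorial pins an older release for Swarm compatibility
requests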

Then issue the following command to install all of them:

pip install -r requirements.txt
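Both Swarm examples below read their API keys from a config module. One simple approach (an assumption on our part about how you store keys) is a local config.py sitting next to your scripts:

# config.py -- minimal local module holding API keys (keep it out of version control)
OPENAI_API_KEY = "your-openai-api-key"
NEWS_API_KEY = "your-newsapi-org-key"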

Here’s an example of how to set up a simple swarm using OpenAI Swarm:

from swarm import Swarm, Agent
import os
import config

# Load the OpenAI API key from our local config module
os.environ['OPENAI_API_KEY'] = config.OPENAI_API_KEY

client = Swarm()

# Returning another Agent from a function hands the conversation off to it
def transfer_to_agent_b():
    return agent_b

agent_a = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],
)

agent_b = Agent(
    name="Agent B",
    instructions="Only speak in Haikus.",
)

# Start the conversation with Agent A and print the final reply
response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
)
print(response.messages[-1]["content"])

This fun example demonstrates how to create two agents with different roles and instructions, and how to transfer control between them. The message that says, “I want to talk to agent B.” should hand you off to Agent B, where you will be interpreting poetry rather than getting helpful responses.

Let’s work on a second example to test the waters a little bit. Here’s the full block of code:

import os
import config
from swarm import Swarm, Agent
import requests

os.environ['OPENAI_API_KEY'] = config.OPENAI_API_KEY

# Define the Greeting Agent
greeting_agent = Agent(
    name="Greeting Agent",
    instructions="You are a friendly assistant that greets the user and tells a joke.",
)

# Define the News Function
def fetch_news(country="us", category=None):
    """
    Fetches the top news headlines for a given country and category.

    Args:
        country (str): The country code (default is "us").
        category (str): The news category (optional).

    Returns:
        str: A string containing the top news headlines or an error message.
    """
    base_url = "https://newsapi.org/v2/top-headlines"
    params = {
        "country": country,
        "apiKey": config.NEWS_API_KEY
    }

    # Add a category if provided
    if category:
        params["category"] = category

    # Make a GET request to the News API
    response = requests.get(base_url, params=params)
    data = response.json()

    # Check if the request was successful
    if response.status_code == 200 and 'articles' in data:
        # Extract the titles of the first 10 articles
        headlines = [article['title'] for article in data['articles'][:10]]
        return f"Here are the top news for {country.upper()} ({category or 'all categories'}):\n" + "\n".join(headlines)
    else:
        # Return an error message if the request failed
        return f"Sorry, I couldn't fetch the news at the moment: {data.get('message', 'Unknown error')}"

# Define the News Agent
news_agent = Agent(
    name="News Agent",
    instructions="You provide the top news headlines for a given country and category.",
    functions=[fetch_news]
)

# Define function that transfers the task to the agent
def transfer_to_agent(agent_name):
  agents = {
      "Greeting Agent": greeting_agent,
      "News Agent": news_agent
  }
  return agents[agent_name]

# Define the Main AI Agent
main_agent = Agent(
    name = "Main Agent",
    instructions = """
    You are the main assistant.
    Based on the user's request, you decide which specialized agent should handle the task.
    - If the user wants a greeting or a joke, transfer to the Greeting Agent.
    - If the user wants to fetch the news, transfer to the News Agent.
    """,
    functions = [transfer_to_agent]
)

# Start the connection to the OpenAI API
client = Swarm()

# Run the application
response = client.run(
    agent = main_agent,
    messages = [{"role": "user",
                 "content": "give me the first 10 news headlines from the US"}]  #try good morning
)
print(response.messages[-1]['content'])

Let’s break that down a bit. In this example, we have a main agent, a greeting agent, and a news agent. We’ll get to our main agent a little later; it acts as an umbrella over the other two. Our greeting agent is rather simple: it gives you an Alexa-like response with a greeting and a joke (probably a cheesy one). Our news agent is a little more fun. In the code below, we define the news agent, and you’ll notice it uses a tool called fetch_news:

news_agent = Agent(
    name="News Agent",
    instructions="You provide the top news headlines for a given country and category.",
    functions=[fetch_news]
)

fetch_news() grabs the top headlines of the day so they can be fed into the prompt. Passing functions=[fetch_news] gives our agent this function as a resource it can use to construct its response.

You’ll also need to get an API key from newsapi.org, which has a very generous free tier.
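Before wiring it into the agent, you can sanity-check the function on its own by appending a quick call to the script (the category here is just an example):

# Quick standalone test of the news helper, bypassing the agents entirely
print(fetch_news(country="us", category="technology"))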

Now we can use our main agent to effectively route the user to the other two agents when applicable.

def transfer_to_agent(agent_name):
  agents = {
      "Greeting Agent": greeting_agent,
      "News Agent": news_agent
  }
  return agents[agent_name]

# Define the Main AI Agent
main_agent = Agent(
    name = "Main Agent",
    instructions = """
    You are the main assistant.
    Based on the user's request, you decide which specialized agent should handle the task.
    - If the user wants a greeting or a joke, transfer to the Greeting Agent.
    - If the user wants to fetch the news, transfer to the News Agent.
    """,
    functions = [transfer_to_agent]
)

In our example, when we run the Swarm client, it should trigger the news agent.

# Start the connection to the OpenAI API
client = Swarm()

# Run the application
response = client.run(
    agent = main_agent,
    messages = [{"role": "user",
                 "content": "give me the first 10 news headlines from the US"}]  #try good morning
)
print(response.messages[-1]['content'])

Feel free to modify the content of the message to trigger the greeting agent instead. Maybe even try giving it a conflicting message like “Greet me for the day and give me the news” to see how it responds.
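If you want to experiment with several prompts in a row, you can wrap the same client.run() call in a small interactive loop. This is just a sketch built from the calls we have already used, not a built-in Swarm feature:

# A simple chat loop around client.run() -- a sketch, not part of Swarm itself
agent = main_agent
messages = []
while True:
    user_input = input("You ('q' to quit): ")
    if user_input.lower() == 'q':
        break
    messages.append({"role": "user", "content": user_input})
    response = client.run(agent=agent, messages=messages)
    messages.extend(response.messages)  # client.run() returns only the new messages
    agent = response.agent              # keep talking to whichever agent answered last
    print(messages[-1]["content"])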

LangGraph

LangGraph is a powerful tool for creating complex, multi-agent systems with dynamic workflows. It allows developers to define nodes representing individual agents and edges depicting the connections between them.

What’s going on in this workflow? The Analyze node acts almost like an if statement. It qualifies the user query, and that determines which node the flow goes to next. If the query warrants the use of code in the response, the agent will move on to the Code node, and if it’s just a generic type of question, it will pass on to the Generic node.

Here are the components of LangGraph; a minimal code sketch tying them together follows this list:

  • Nodes

    Nodes represent individual agents, each with a specific role. These agents might be responsible for planning, executing tasks, reviewing outputs, or utilizing specialized tools.

  • Edges

    The edges represent the dynamic connections between agents. These connections adapt based on the context, enabling the system to cycle information through different agents as needed.

  • Conditional Nodes

    Conditional nodes are decision-making points within the graph where the flow of execution is determined based on specific conditions or criteria.

  • State

    State is the information that can be passed between nodes in a graph.
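To make those pieces concrete, here is a minimal, self-contained sketch with no LLM involved, just plain Python functions. The node and state names are made up for illustration, not part of the project we build below:

from typing import TypedDict
from langgraph.graph import StateGraph, END

# State: the data passed between nodes
class DemoState(TypedDict):
    input: str
    decision: str
    output: str

# Nodes: plain functions that read the state and return updates to it
def route(state: DemoState) -> dict:
    return {"decision": "shout" if state["input"].isupper() else "whisper"}

def shout(state: DemoState) -> dict:
    return {"output": state["input"] + "!!!"}

def whisper(state: DemoState) -> dict:
    return {"output": state["input"].lower() + "..."}

workflow = StateGraph(DemoState)
workflow.add_node("route", route)
workflow.add_node("shout", shout)
workflow.add_node("whisper", whisper)

workflow.set_entry_point("route")

# Conditional edges: the "decision" field picks the next node
workflow.add_conditional_edges(
    "route",
    lambda state: state["decision"],
    {"shout": "shout", "whisper": "whisper"},
)

# Plain edges: both branches finish at the special END marker
workflow.add_edge("shout", END)
workflow.add_edge("whisper", END)

graph = workflow.compile()
print(graph.invoke({"input": "HELLO GRAPHS"})["output"])  # HELLO GRAPHS!!!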

To get started, create a new environment like we did before. Use the following as your requirements.txt file this time and run pip install -r requirements.txt like before:

langgraph
langchain
langchain-openai 
langchain-community
config

The first thing to make is our agents.py file:

from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate

# Creating the first analysis agent to check the prompt structure
# The "decision" it returns drives the conditional routing in the graph
def analyze_question(state):
    llm = ChatOpenAI()
    prompt = PromptTemplate.from_template("""
    You are an agent that needs to define if a question is a technical code one or a general one.

    Question: {input}

    Analyse the question. Only answer with "code" if the question is about technical development. If not, just answer "general".

    Your answer (code/general):
    """)
    chain = prompt | llm
    response = chain.invoke({"input": state["input"]})
    decision = response.content.strip().lower()
    return {"decision": decision, "input": state["input"]}

# Creating the code agent that could be way more technical
def answer_code_question(state):
    llm = ChatOpenAI()
    prompt = PromptTemplate.from_template(
        "You are a software engineer. Answer this question with step by steps details : {input}"
    )
    chain = prompt | llm
    response = chain.invoke({"input": state["input"]})
    return {"output": response}

# Creating the generic agent
def answer_generic_question(state):
    llm = ChatOpenAI()
    prompt = PromptTemplate.from_template(
        "Give a general and concise answer to the question: {input}"
    )
    chain = prompt | llm
    response = chain.invoke({"input": state["input"]})
    return {"output": response}

This file defines three key agents that form our LangGraph workflow. The analysis agent classifies incoming questions as either technical or general. The code agent handles technical queries with detailed programming responses, while the generic agent provides straightforward answers for general questions. Each agent uses ChatOpenAI and custom prompts to generate appropriate responses. These correspond exactly to the Analyze, Code, and Generic nodes described earlier.
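You can exercise any of these node functions on their own before wiring up the graph. For example, mirroring how app.py loads the key later, a quick check of the classifier might look like this:

# Quick standalone check of the analysis node (run from the same directory as agents.py)
import os
import config
os.environ["OPENAI_API_KEY"] = config.OPENAI_API_KEY

from agents import analyze_question

print(analyze_question({"input": "How do I write a for loop in Python?"}))
# Typically returns something like: {'decision': 'code', 'input': 'How do I write a for loop in Python?'}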

Now that we have our agents defined, we need to connect them with a graph in our graph.py file:

from langgraph.graph import StateGraph, END
from typing import Annotated, TypedDict
from agents import analyze_question, answer_code_question, answer_generic_question

# You can specify the state format here, which can be helpful for multimodal graphs
class AgentState(TypedDict):
    input: str
    output: str
    decision: str

# Here is a simple three-step graph that routes on the "decision" condition below
def create_graph():
    workflow = StateGraph(AgentState)

    workflow.add_node("analyze", analyze_question)
    workflow.add_node("code_agent", answer_code_question)
    workflow.add_node("generic_agent", answer_generic_question)

    workflow.add_conditional_edges(
        "analyze",
        lambda x: x["decision"],
        {
            "code": "code_agent",
            "general": "generic_agent"
        }
    )

    workflow.set_entry_point("analyze")
    workflow.add_edge("code_agent", END)
    workflow.add_edge("generic_agent", END)

    return workflow.compile()

This file structures the workflow between agents using a state-driven graph. It establishes the routing logic where questions flow from the analysis node to either the code or generic agent based on classification.
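At this point you can already test the compiled graph by itself, loading the API key the same way app.py does below:

# One-off test of the compiled graph
import os
import config
os.environ["OPENAI_API_KEY"] = config.OPENAI_API_KEY

from graph import create_graph

graph = create_graph()
result = graph.invoke({"input": "How do I reverse a list in Python?"})
print(result["output"])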

Finally, it all comes together in our app.py:

import os
import config

os.environ["OPENAI_API_KEY"] = config.OPENAI_API_KEY

from graph import create_graph
from typing import Annotated, TypedDict
from langgraph.graph import StateGraph, END

class UserInput(TypedDict):
    input: str
    continue_conversation: bool

def get_user_input(state: UserInput) -> UserInput:
    user_input = input("\nEnter your question (or 'q' to quit): ")
    return {
        "input": user_input,
        "continue_conversation": user_input.lower() != 'q'
    }

def process_question(state: UserInput):
    graph = create_graph()
    result = graph.invoke({"input": state["input"]})
    print("\n--- Final answer ---")
    print(result["output"])
    return state

def create_conversation_graph():
    workflow = StateGraph(UserInput)

    workflow.add_node("get_input", get_user_input)
    workflow.add_node("process_question", process_question)

    workflow.set_entry_point("get_input")

    workflow.add_conditional_edges(
        "get_input",
        lambda x: "continue" if x["continue_conversation"] else "end",
        {
            "continue": "process_question",
            "end": END
        }
    )

    workflow.add_edge("process_question", "get_input")

    return workflow.compile()

def main():
    conversation_graph = create_conversation_graph()
    conversation_graph.invoke({"input": "", "continue_conversation": True})

if __name__ == "__main__":
    main()

This creates our interactive conversation loop. It manages user input, processes questions through our agent system, and handles the conversation flow.
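Run it with:

python app.py

Then ask a mix of technical and general questions to watch the routing in action, and enter 'q' when you want to stop.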

n8n

n8n is a powerful workflow automation platform that can be leveraged for agentic programming by orchestrating agent workflows through a visual, no-code interface. Unlike the code-first approaches we've explored so far, n8n allows you to create complex agent systems by connecting nodes in a visual workflow editor.

Let's set up a simple n8n workflow that integrates with OpenAI to create an agentic system:

First, install n8n:

docker volume create n8n_data

docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

Once n8n is running, navigate to http://localhost:5678 in your browser to access the workflow editor.

Here's how easy it is to create an agentic workflow with n8n:

  1. Create a new workflow: Click on "Create New Workflow" and give it a name like "Agent Workflow"

  2. Add an On chat message node

  3. Add an Advanced AI node

  4. Add agent nodes: Create OpenAI nodes for each agent type you want to use, such as an Analyze Image node

That’s it! Just add nodes as needed.

If you’re having trouble deciding on a path or want to know other possibilities, n8n also provides templates to start from, like a simple AI agent for web search.

What’s great about the n8n interface is you can start testing your inputs right away. Let’s ask our new agentic tool what the weather in Flint, MI is.

Our result was 14 degrees and cloudy. With really no code at all, we provided a prompt, our agent pinged the SERP API to pull the raw result, and then it translated that result back into natural language for the user.

There are almost 1,200 templates as of this writing. Even if you want to build your own application from scratch, n8n can act as a well of inspiration.

n8n represents a different paradigm for creating agentic systems than what we have reviewed before, one that emphasizes connectivity and accessibility while maintaining many of the advanced capabilities of the code-based frameworks. This makes it particularly valuable for team environments where different stakeholders need visibility into how agents operate.

Conclusion

Throughout this two-part tutorial series, we've explored the world of agentic programming from fundamentals to advanced implementations. You've learned how autonomous agents leverage LLMs to make decisions through frameworks like ReACT, Plan-and-Execute, and Tree of Thoughts, with hands-on examples using LangChain and CrewAI demonstrating their ability to perform complex, multi-step tasks independently.

In Part 2, we expanded your toolkit with OpenAI's Swarm framework for orchestrating multiple specialized agents, LangGraph's approach to creating dynamic, conditional workflows between agents, and n8n's visual, low-code take on agent orchestration. You've seen how main agents can intelligently route queries to specialized agents and how to implement sophisticated decision-making networks that analyze queries and direct them through appropriate channels.

You now have practical skills to implement agentic programming in your own projects—whether automating tedious tasks, enhancing customer interactions, or building complex problem-solving systems. As this field continues to evolve, we encourage you to keep experimenting with these frameworks and join communities like the OpenAI Application Explorers Meetup Group to share experiences and continue learning in this exciting domain.

If you had fun with this tutorial, be sure to join the OpenAI Application Explorers Meetup Group to learn more about awesome apps you can build with AI.