Feb 6, 2025

Introduction to Agentic Programming
In the rapidly evolving landscape of artificial intelligence, agentic programming has emerged as a powerful paradigm for creating autonomous systems capable of complex problem-solving and decision-making. At a recent OpenAI Applications Explorers Meetup, Godfrey Nolan, President of RIIS, gave a grand tour of many of the agentic solutions available to developers. This tutorial covers everything he presented at the meetup, and you can follow along with the YouTube video of the meetup.
Understanding Autonomous Agents
At the heart of agentic programming are autonomous agents - intelligent software entities designed to operate independently, make decisions, and take actions to achieve specific goals without direct human intervention. These agents possess several key characteristics that set them apart from traditional software systems:
Autonomy: Agents can act independently, making decisions based on their programming and available information.
Goal-oriented behavior: Each agent is designed with specific objectives in mind, driving their actions towards achieving these goals.
Contextual awareness: Agents can understand and adapt to their environment, using input and data to inform their decision-making process.
Continuous learning: Through interaction with their environment and other agents, autonomous agents can improve their performance over time.
Applications of Autonomous Agents
The versatility of autonomous agents has led to their adoption across various industries and use cases. Notable applications include recruitment screening and scheduling, lead qualification, and 24/7 customer support through intelligent automation. These systems reduce human overhead while maintaining service consistency in HR, sales, and client relations.
Their analytical capabilities also drive innovation in content generation and data processing, from creating dynamic educational materials with real-time updates to extracting insights from large datasets in finance and research. If a task is repetitive and involves producing text or combing through vast stores of data, agents are pushing the limits of what LLMs can do for you.
The Role of Large Language Models When Deploying Agents
When an agent encounters a query or task, it first attempts to leverage its LLM to provide a response. If the LLM doesn't have the necessary information, the agent can then use various tools and external resources to find the answer. It records its observation and returns it to the LLM, which decides whether to continue the loop; once it is satisfied with the answer, it returns the final response.

Other Components
Agent Action: Actions or tasks
Agent Finish: Final response
Intermediate Steps: Processing and reasoning
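Putting the loop and the components above together, a minimal sketch in plain Python might look like the following. Here plan and tools are hypothetical stand-ins for the LLM call and the tool registry, not any particular framework's API.

```python
# Minimal agent loop sketch. `plan` and `tools` are hypothetical stand-ins:
# plan(history) returns either ("finish", answer) or ("act", tool_name, tool_input).
def run_agent(plan, tools, query, max_steps=5):
    history = [("question", query)]
    for _ in range(max_steps):
        step = plan(history)
        if step[0] == "finish":                        # Agent Finish: the final response
            return step[1]
        _, tool_name, tool_input = step                # Agent Action: the LLM picked a tool
        observation = tools[tool_name](tool_input)     # run the tool and capture the observation
        history.append(("observation", observation))   # intermediate step fed back to the LLM
    return "No final answer within the step budget."
```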
Reason and Act - ReAct
You may have heard of Chain of Thought (CoT) if you use web interfaces for LLMs. ReAct is the next step beyond CoT. In ReAct we get the benefit of the Reason loop, where the language model produces its CoT reasoning and combines it with Actions and Observations within our tool environment. Each Observation informs the next tool selection, which is then fed into the next step.
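To see what that looks like from the model's point of view, here is an illustrative (made-up) ReAct-style trace for a simple question; the exact wording and tool names vary by framework.

```python
# Illustrative only: the Thought / Action / Observation scaffold a ReAct agent produces.
REACT_TRACE = """\
Question: How tall is the tallest building in the world?
Thought: I should search for the current tallest building.
Action: Search["tallest building in the world"]
Observation: The Burj Khalifa in Dubai is 828 m tall.
Thought: I now have enough information to answer.
Final Answer: The Burj Khalifa, at 828 m.
"""
print(REACT_TRACE)
```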

Now that we have a basic understanding of how agents work, let’s move on to our first example.
LangChain
LangChain is a widely used framework that provides a comprehensive set of tools for building applications with LLMs. It offers features like state management, agent coordination, and the ability to create stateful graphs.
To test the limits of agents with LangChain, we are going to see if it can handle the multi-stage task of finding the world's tallest building and then determining how many of them, stacked on top of each other, it would take to reach the moon.

Is this the tallest building in the world?
For this task, a ReAct agent can use various tools, such as a search engine (DuckDuckGo) to find the building's height and a calculator (LLMMathChain) to perform the stacking calculation.
Okay, finally, let’s get coding!
This code sets up the basic components needed for an agent, including the language model and tools for math calculations and web searches.
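Something along the following lines wires that up; the model choice and tool descriptions here are illustrative rather than the exact code from the meetup, but the tools themselves (DuckDuckGo search and LLMMathChain) match what was used in the demo.

```python
# Sketch of the agent's building blocks: an LLM, a web search tool, and a math tool.
from langchain_openai import ChatOpenAI
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.chains import LLMMathChain
from langchain.agents import Tool

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # model choice is an assumption

search = DuckDuckGoSearchRun()               # free web search, no API key needed
math_chain = LLMMathChain.from_llm(llm=llm)  # lets the agent do reliable arithmetic

tools = [
    Tool(
        name="Search",
        func=search.run,
        description="Look up current facts on the web, such as the height of a building.",
    ),
    Tool(
        name="Calculator",
        func=math_chain.run,
        description="Answer math questions, such as dividing one distance by another.",
    ),
]
```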
This part creates the agent using the defined tools and prompt, and then executes it with a specific query.
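Again as a sketch under the same assumptions, the agent construction and the query look roughly like this:

```python
# Pull a pre-baked prompt, wire the agent together, and run the two-stage query.
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent

prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

result = agent_executor.invoke({
    "input": "What is the tallest building in the world, and how many of them "
             "stacked on top of each other would it take to reach the moon?"
})
print(result["output"])
```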
If you've done some AI application development before, you might be caught off guard by the use of prompt = hub.pull("hwchase17/openai-functions-agent"). This is basically a pre-baked prompt from LangSmith. Rather than getting into the dark arts of prompt engineering, you can just use LangSmith to find a tried and tested prompt for your use case. There's more to go into about LangSmith, but we'll leave it there for today.
When we run the script, the first part of the response should find the tallest building, which is currently the Burj Khalifa:

Then it will use the math tool to determine how many of them it would take to reach the moon. That's about 462,089 Burj Khalifas. Even though no single step is very hard, it would take a human much longer to do this, so you can see how using agents effectively lets you complete tedious tasks faster.
Other Frameworks
Plan-and-Execute Agents

ReAct is not the only game in town. For example, there are Plan-and-Execute Agents, which are an advanced type of AI agent that separates the planning phase from the execution phase to handle complex tasks. In this framework, task and prioritization agents first use a large language model (LLM) to create a detailed, multi-step plan for accomplishing the given objective. Once the plan is generated, a separate execution agent, typically equipped with various tools, carries out each step of the plan sequentially. As tasks are carried out, a context agent assists the execution agent by enriching the data, and then the results get passed back to the prioritization and task creation agent until the entire task list is completed with no new tasks needing to be added.
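As a rough sketch (not any particular library's API), the planner/executor split boils down to something like this, where planner and executor are hypothetical LLM-backed callables:

```python
# Hypothetical plan-and-execute loop: plan once, then execute each step with prior results as context.
def plan_and_execute(planner, executor, objective):
    plan = planner(objective)  # planner LLM returns an ordered list of step descriptions
    results = []
    for step in plan:
        results.append(executor(step, context=results))  # execution agent handles one step at a time
    return results[-1] if results else None
```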
Tree of Thoughts

The Tree of Thoughts (ToT) framework surpasses traditional prompting techniques by implementing a sophisticated, hierarchical approach to problem-solving. Unlike its predecessor, Chain of Thought (CoT), ToT constructs a branching, tree-like structure of reasoning pathways. At each decision point, multiple thought trajectories are explored concurrently, allowing for a more comprehensive examination of potential solutions.
This parallel processing enables the AI to dynamically evaluate and expand upon the most promising branches, effectively pruning less viable options. This process can be very compute intensive, but the results tend to be more human-like.
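A simple breadth-first version of that idea can be sketched as follows; propose and score are hypothetical stand-ins for LLM calls that generate candidate thoughts and evaluate them, and the beam width controls how aggressively branches are pruned.

```python
# Hypothetical Tree of Thoughts search: branch, score, and keep only the most promising thoughts.
def tree_of_thoughts(propose, score, root, depth=3, branches=3, beam_width=2):
    frontier = [root]
    for _ in range(depth):
        candidates = []
        for state in frontier:
            candidates.extend(propose(state, n=branches))  # explore several trajectories in parallel
        candidates.sort(key=score, reverse=True)           # evaluate each branch
        frontier = candidates[:beam_width]                 # prune the less viable options
    return max(frontier, key=score)
```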
CrewAI
CrewAI is a framework designed to enable AI agents to assume specific roles and collaborate as a cohesive unit. The framework consists of four primary components:
Agents: These are the AI entities that perform tasks within the workflow.
Tasks: The specific jobs or assignments given to agents.
Tools: Resources available to agents to complete their tasks.
Crew: The container that encompasses agents, tools, and tasks.
Tools are particularly interesting for CrewAI because there are just so many of them. Head over to the CrewAI Docs to see the full list, but this image gives you a small taste:

There are scrapers, vision tools, SQL tools, RAG tools, and more. Basically, if you can think of it, someone has built a tool for it.
A Real-World Example
Sometimes it can feel like the concept of agents opens up infinite possibilities, but that's not necessarily helpful when trying to figure out what they are good for. To give you a better understanding, we're going to walk through a real-world example of using CrewAI to automate away some tedious tasks in a sales funnel.
Here’s an outline of the agents we want to deploy:
Lead Research Specialist - Gather comprehensive information about the lead company and their industry | gpt-3.5-turbo
Company and Product Specialist - Analyze our company's offerings and how they align with the lead's needs | gpt-3.5-turbo
Sales Content Strategist - Create compelling content for a one-page sales PDF | gpt-3.5-turbo
If you aren't comfortable with coding, CrewAI does offer a paid version called CrewAI Studio that provides no-code templates.

If you've made it this far, though, you're probably okay with a little code, so let's break down how a typical CrewAI agent-based app works. A basic CrewAI app features three files: agents.py, main.py, and tasks.py.
Let's look at agents.py:
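Here's an illustrative version of agents.py; the helper class name and the role keys are placeholders rather than the exact file from the demo, but the shape follows the structure described below.

```python
# agents.py (illustrative) -- builds CrewAI agents from a small set of role descriptions.
from crewai import Agent

class BusinessAgents:
    def __init__(self, model="gpt-3.5-turbo"):
        # The model every agent will use, just like any other LLM-style app.
        self.model = model

    def create_agent(self, agent_type, business_type):
        # Role descriptions our agents can inhabit.
        descriptions = {
            "lead_research": "Lead Research Specialist",
            "company_product": "Company and Product Specialist",
            "sales_content": "Sales Content Strategist",
        }
        role = descriptions[agent_type]
        # A generalized prompt that feeds the role and the business type back in.
        return Agent(
            role=role,
            goal=f"Support the sales funnel of a {business_type} as a {role}.",
            backstory=f"You are a seasoned {role} working for a {business_type}.",
            llm=self.model,  # recent CrewAI versions accept a model name string here
            verbose=True,
        )
```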
Our __init__ function sets up our model just like any other LLM-style app. In create_agent() we have our descriptions, which define the roles our agents can inhabit. In the return we hand back an Agent() loaded with a generalized prompt that also feeds in our role.
Next, let's look at tasks.py:
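An illustrative tasks.py along the same lines (the task keys and wording are placeholders):

```python
# tasks.py (illustrative) -- maps task types to descriptions and expected outputs.
from crewai import Task

class BusinessTasks:
    def create_task(self, task_type, agent, business_type):
        # Task types paired with expected outputs, which help format the response.
        definitions = {
            "research": (
                f"Gather comprehensive information about the lead company and the {business_type} industry.",
                "A bullet-point research brief on the lead and their industry.",
            ),
            "analysis": (
                f"Analyze how our {business_type} offerings align with the lead's needs.",
                "A short analysis mapping our offerings to the lead's pain points.",
            ),
            "content": (
                "Write compelling copy for a one-page sales PDF using the research and analysis.",
                "A headline, three benefit sections, and a call to action.",
            ),
        }
        description, expected_output = definitions[task_type]
        return Task(description=description, expected_output=expected_output, agent=agent)
```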
Here in our create_task() we have our task types and our expected outputs, which will help format the output response.
Then we have our main.py, sometimes referred to as our Crew, which is our orchestrator:
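An illustrative main.py follows; the BusinessAutomationCrew class name and the kickoff() call match the walkthrough, while the internals are a sketch built on the helper classes above.

```python
# main.py (illustrative) -- wires agents and tasks into a Crew and kicks it off.
from crewai import Crew
from agents import BusinessAgents
from tasks import BusinessTasks

class BusinessAutomationCrew:
    def __init__(self, business_type):
        self.business_type = business_type
        self.agent_factory = BusinessAgents()
        self.task_factory = BusinessTasks()

    def run(self):
        # Create the three agents and their matching tasks.
        researcher = self.agent_factory.create_agent("lead_research", self.business_type)
        analyst = self.agent_factory.create_agent("company_product", self.business_type)
        strategist = self.agent_factory.create_agent("sales_content", self.business_type)

        tasks = [
            self.task_factory.create_task("research", researcher, self.business_type),
            self.task_factory.create_task("analysis", analyst, self.business_type),
            self.task_factory.create_task("content", strategist, self.business_type),
        ]

        # The Crew processes the agents and tasks; kickoff() starts the run.
        crew = Crew(agents=[researcher, analyst, strategist], tasks=tasks, verbose=True)
        return crew.kickoff()

if __name__ == "__main__":
    # Prompt the user for the essential info used to build the agents and tasks.
    business = input("What business do you seek to build today? ")
    print(BusinessAutomationCrew(business).run())
```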
Let's take a look at how this works. The main class, BusinessAutomationCrew, initializes with a business type. Then we create our agents and tasks. The Crew() is going to process them. We assign it to the crew variable, then we call the kickoff() method. The __main__ block at the bottom creates some user prompts to feed in the essential info to build our agents and tasks.
Let's run that main.py file in our terminal.

The command line should then ask you, “What business do you seek to build today?”
We’re going to tell it “Software services company” because that’s what RIIS does, but feel free to put in what your company does.

Okay, we are seeing our agent and task information fed back to us. Now here’s the output.

You’ll see it then queues up the next task and agent. This will continue until all the tasks are completed.
Conclusion
In this tutorial, you've learned the fundamentals of agentic programming, from understanding autonomous agents and their key characteristics to implementing practical examples using frameworks like LangChain and CrewAI. You've seen how agents can leverage LLMs to handle complex tasks through approaches like ReAct, explored the components of agent architecture, and worked through a real-world example of automating sales processes. In Part 2, we'll dive into more advanced frameworks, including OpenAI's Swarm and LangGraph, which enable even more sophisticated multi-agent collaborations and complex workflow orchestration.