The Fundamentals of Agentic AI Workflow That Software Engineers Need to Know
Understanding the agentic AI workflow helps you build and leverage AI agents
Howdy friends 👋,
I hope you are doing well.
It's been a bit longer than the usual two-week wait for this newsletter — life and work have kept me busy this past month! Thank you for your patience. I'm excited to jump back in with this edition on agentic AI, a topic I've been researching extensively and couldn't wait to share with you.
I've packed this issue with insights on how these AI systems are changing software development and practical ways to leverage them in your daily work.
There’s no doubt that Generative AI has created a paradigm shift and changed the way we work. As AI adoption moves forward, a new buzzword in AI is becoming more pervasive: Agentic AI.
I’m not a machine learning expert or an AI engineer, so I can’t give you an expert’s view of agentic AI. But I am a curious software engineer who wants to understand and leverage these systems.
In this post, I’ll help you understand the fundamentals of agentic AI. I’ll also explain how it is being used in a programming context (some of you may already be familiar with this) and how to leverage it as a software engineer to boost your productivity.
First, I’ll briefly explain why agentic AI is important and why you should understand it.
Why should you understand Agentic AI?
In the AI stack, there’s a lot of attention around the foundation models, or Large Language Models (LLMs), like ChatGPT, Claude and Gemini. When these LLM providers add new features or release new versions, it creates a buzz on the internet.
But most opportunities lie in the application layer, according to Andrew Ng, a well-known AI leader and tech entrepreneur. That’s the reason applications like lovable.dev, an AI software development platform for non-technical people, went from 0 to $17m in annual recurring revenue (ARR) within 3 months.
Agentic AI systems sit mostly in the application layer. They take the capabilities of the underlying foundation models and enhance them with specific agentic capabilities. Therefore, it’s worth understanding agentic AI in order to leverage it.
What is Agentic AI?
Agentic AI refers to a workflow in which AI agents autonomously perform a task with little to no human assistance. Unlike a non-agentic workflow, where we give a prompt to a Large Language Model (LLM) and it answers in one shot, an agentic workflow includes:
Planning
Deep thinking
Research
These are done in a feedback loop to improve the final result and performance of the LLM.
How does it differ from Generative AI?
Generative AI is focused on creating content, while Agentic AI is oriented towards performing actions. Agentic AI can also initiate actions on its own. For example, an AI agent can search for information on the web (without being explicitly told to do so) and generate additional prompts to refine its output.
At the core of AI agents, though, are Large Language Models, which handle processing and generating natural language.
Agentic design patterns
Andrew Ng highlights the following four design patterns that improve the performance of Large Language Models. I’ll explain the core ideas from his post in my own words.
Reflection
Tool Use
Planning
Multi-agent collaboration
1. Reflection
Reflection is a process that helps an AI agent critically analyse and improve its own work. LLMs like Claude and ChatGPT do not always produce satisfactory results in one go, and you may have to prompt them multiple times. The reflection pattern automates this process of feeding critique back to the LLM.
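To make the pattern concrete, here is a minimal sketch of a reflection loop in TypeScript. It assumes a callModel function that wraps whatever LLM provider you use; the name and signature are my own placeholder, not a real SDK call.

  // A stand-in type for whatever LLM client you use (OpenAI, Anthropic, etc.); a placeholder, not a real SDK type.
  type CallModel = (prompt: string) => Promise<string>;

  // Reflection loop: draft an answer, ask the model to critique it, then revise.
  async function generateWithReflection(
    callModel: CallModel,
    task: string,
    rounds = 2
  ): Promise<string> {
    let draft = await callModel(`Complete this task:\n${task}`);
    for (let i = 0; i < rounds; i++) {
      const critique = await callModel(
        `Critique this solution to "${task}". List concrete problems:\n${draft}`
      );
      draft = await callModel(
        `Revise the solution to address the critique.\nCritique:\n${critique}\nSolution:\n${draft}`
      );
    }
    return draft;
  }

This is essentially what agentic tools automate for you: the critique and the revision are extra model calls rather than prompts you type by hand.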
2. Tool Use
Tool use is another design pattern that allows an LLM to use tools on your computer (for example, a browser, a terminal, or a code interpreter) to perform tasks. It opens the door to a wide range of opportunities.
With tool use, you can instruct an LLM to perform actions rather than just ask it for information. For example, you can instruct the AI agent to respond to your email, create a task on your calendar, and book a flight and accommodation for your next trip.
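Here is a minimal sketch of what a single tool-use step can look like, reusing the placeholder CallModel type from the reflection sketch above. The plain-text tool-selection format is an assumption for illustration; real frameworks use structured function calling instead.

  // Same placeholder type as in the reflection sketch.
  type CallModel = (prompt: string) => Promise<string>;

  // A tool the agent may call, described so the model knows when to use it.
  interface Tool {
    name: string;
    description: string;
    run: (input: string) => Promise<string>;
  }

  // One tool-use step: let the model pick a tool, run it, and feed the result back.
  async function toolUseStep(callModel: CallModel, task: string, tools: Tool[]): Promise<string> {
    const menu = tools.map((t) => `${t.name}: ${t.description}`).join("\n");
    const choice = await callModel(
      `Task: ${task}\nAvailable tools:\n${menu}\nReply with exactly: <tool name> | <input>`
    );
    const [name, input] = choice.split("|").map((part) => part.trim());
    const tool = tools.find((t) => t.name === name);
    if (!tool) {
      return callModel(`Task: ${task}\nNo tool matched, so answer directly.`);
    }
    const result = await tool.run(input ?? "");
    return callModel(`Task: ${task}\nTool "${name}" returned:\n${result}\nGive the final answer.`);
  }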
3. Planning
Planning is another agentic capability that lets the LLM break a task into a sequence of steps before carrying it out. Adding a planning step helps the agent solve the task more effectively, because LLMs are better at solving smaller, well-scoped tasks.
Andrew Ng notes that planning is a powerful capability, but it can produce less predictable results, since it is hard to predict exactly which steps the LLM will decide to take.
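As a rough illustration, a plan-then-execute loop can be as simple as the sketch below, again using the placeholder CallModel type. Splitting the plan on newlines is an assumption for the example; real agents rely on more robust structured output.

  // Same placeholder type as in the earlier sketches.
  type CallModel = (prompt: string) => Promise<string>;

  // Plan first, then work through the steps one at a time, carrying results forward.
  async function planAndExecute(callModel: CallModel, task: string): Promise<string[]> {
    const plan = await callModel(
      `Break this task into short, numbered steps, one per line:\n${task}`
    );
    const steps = plan.split("\n").map((line) => line.trim()).filter(Boolean);
    const results: string[] = [];
    for (const step of steps) {
      const progress = results.join("\n");
      results.push(
        await callModel(`Task: ${task}\nCompleted so far:\n${progress}\nNow carry out: ${step}`)
      );
    }
    return results;
  }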
4. Multi-agent collaboration
Multi-agent collaboration refers to the pattern of decomposing a complex task into sub-tasks and assigning them to different roles. The different roles can be defined and created based on your requirements.
For a programming task, you may create AI agents like a software engineer, a tester, a designer, and a product manager. The agents can be created by prompting a single LLM or multiple LLMs.
For example, if you need to build something or solve a complex problem, each of the tasks involved in solving the problem can be broken down and assigned to a different AI agent.
A planner agent can take up the role of planning
A coding agent can write the code
A tester agent can test the code to validate if the solution is working
A reviewer agent can review the code for any potential bugs or design errors
I find that the multi-agent design pattern is the most powerful because it allows multiple agents to review and critique each other's work, thus improving the outcome.
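A minimal sketch of that idea, assuming the same placeholder CallModel type as before: each "agent" is simply the same model prompted with a different role, and their outputs are passed to one another.

  // Same placeholder type as in the earlier sketches.
  type CallModel = (prompt: string) => Promise<string>;

  // An "agent" here is just the model wrapped with a role-specific prompt.
  const asAgent = (callModel: CallModel, role: string) => (message: string) =>
    callModel(`You are the ${role} on a software team.\n${message}`);

  // Coder writes, tester and reviewer critique, coder revises based on the feedback.
  async function buildFeature(callModel: CallModel, spec: string): Promise<string> {
    const coder = asAgent(callModel, "software engineer");
    const tester = asAgent(callModel, "tester");
    const reviewer = asAgent(callModel, "code reviewer");

    let code = await coder(`Implement this feature:\n${spec}`);
    const testReport = await tester(`List failing cases or bugs in this code:\n${code}`);
    const review = await reviewer(`Review this code for design problems:\n${code}`);
    code = await coder(
      `Revise the code to address this feedback.\nTest report:\n${testReport}\nReview:\n${review}\nCode:\n${code}`
    );
    return code;
  }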
Agentic AI in programming
The foundation LLMs like Claude and ChatGPT already provide the core capabilities for agentic AI. For example, Anthropic introduced Computer use capability in its Claude 3.5 Sonnet model last year (October 2024). It allows the LLM to use a computer like humans do — moving a cursor, clicking links, typing text, and opening a browser.
Similarly, OpenAI introduced Deep research while Anthropic added Extended thinking to their models in February 2025. These core agentic capabilities allow the LLMs to plan, reason and research deeply, thus helping in producing much better responses.
Now, let’s review some of the AI code assistants that enable the agentic capabilities on top of the ones provided by foundation LLMs.
Cursor IDE
Cursor is currently the most popular Integrated Development Environment (IDE) with agentic AI coding capabilities, widely adopted by software engineers. For those who don’t know, it’s a fork of VS Code with agentic features like code suggestions and tool use. I’ve been using it since last year, and it has hugely boosted my productivity.
Currently, Cursor IDE (version 0.48.7) provides the following key agentic AI features:
Planning (when thinking mode is enabled, it uses supporting LLMs to plan out the steps required to complete the task)
Tool use (read, edit, delete, and search for files; run the terminal; search the web for information)
Reflection (self-monitor and evaluate its work; for example, if a specific file is not found, it looks for other related files)
Multi-agent collaboration (switches between different roles — explorer while searching files, analyst while understanding code, and editor when making targeted changes)
Claude Code
Claude Code is an agentic coding tool developed by Anthropic. It is installed as an npm package and used from the terminal. Because it is terminal-based, the ways you can interact with it are more limited than with an IDE.
However, it also provides the following powerful agentic features:
Thinking/Planning (triggered when you include “think” or “think deeply” in your prompt)
Tool use (create, read, and edit files; search through the codebase and git history; run shell commands; search the web)
The benefit of Claude Code is that you don’t need to feed it context the way you do with Cursor or other tools; it can explore the codebase on its own as needed. However, being terminal-based, it’s not as user-friendly as working inside an IDE.
How to leverage AI agents to their full potential?
It is clear that AI is only going in one direction: toward more capability, more autonomy, and deeper integration into the development workflow. What began as simple code completion has evolved into pair programming assistants.
It is now moving toward truly agentic systems that can tackle complex programming tasks with minimal supervision. Therefore, the more we learn to integrate these tools in our daily workflow, the better we can maximise our productivity as software engineers.
Here are a few tips on how to leverage AI agents as a software engineer:
1. Improve context quality with documentation
AI agents are only as good as the context they're given, so software engineers need to develop the skill of providing high-quality, relevant context. Whether it's doc blocks or inline comments, good documentation provides valuable context to AI agents and helps them understand your codebase thoroughly.
Documentation Practices for AI Readability
Create documentation that serves both humans and AI agents:
Write clear and concise doc blocks for function and file headers that explain their purpose (see the example after this list)
Use consistent naming conventions that convey semantic meaning
Create architectural decision documents and add them to your project git repository
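To illustrate the first point, here is the kind of doc block that gives both a human reader and an AI agent useful context. The function and its parameters are hypothetical, made up purely for the example.

  /**
   * Calculates the total price of a shopping cart in the shop's base currency.
   *
   * Used by the checkout flow. Discounts are expected to already be applied
   * to each line item before this function is called.
   *
   * @param items - Line items with a unit price (in cents) and a quantity.
   * @returns The total in cents, so callers avoid floating-point rounding errors.
   */
  function cartTotal(items: { unitPriceCents: number; quantity: number }[]): number {
    return items.reduce((sum, item) => sum + item.unitPriceCents * item.quantity, 0);
  }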
Organizing AI-Friendly Codebases
Consider how your codebase organization affects AI understanding:
Create clear README files for each directory that explain the purpose and relationships
Maintain a high-level architecture document that agents can reference
Use descriptive file and directory names that convey their purpose
Consider adding an "AI assistant guide" to complex repositories that explains key concepts
If you use Cursor IDE, you can specify Project Rules that help the AI understand your codebase and follow the project’s conventions. In addition, Cursor allows you to index your codebase for better and more accurate responses about it.
2. Prompt Engineering for Engineers
My first attempt at prompting was terrible. After a lot of trial and error, I learned that software engineers need a different approach to prompt engineering than content creators. The goal isn't just to get functioning code, but to leverage the agent's planning and reasoning capabilities.
The Engineering Prompt Framework
Effective engineering prompts generally follow this structure:
Context: Provide relevant background about your codebase and problem
Goal: Clearly state what you're trying to accomplish
Constraints: Specify any technical limitations or requirements
Reasoning Request: Ask the agent to think step-by-step before implementing
Feedback Loop: Review the output and refine with additional prompts
Examples of Effective vs. Ineffective Prompts
❌ Ineffective Prompt:
Write a function that calculates Fibonacci numbers.
✅ Effective Prompt:
I need a function to calculate Fibonacci numbers for a resource-constrained system.
Goal: Implement an efficient function that returns the nth Fibonacci number.
Constraints:
- Must be optimised for speed and memory usage
- Must handle inputs up to n=10,000 without stack overflow
- Must use TypeScript with proper typing
Before writing the code, please think through different implementation approaches (recursive, iterative, and matrix-based) and explain which would be most appropriate given my constraints.
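For comparison, the implementation such a prompt tends to steer the model toward looks roughly like the sketch below: iterative and BigInt-based, so that n = 10,000 neither overflows nor blows the stack. It's one reasonable answer under those constraints, not the only one (a matrix-based approach would be faster for very large n).

  // Iterative Fibonacci using BigInt: O(n) time, O(1) extra memory, no recursion.
  function fibonacci(n: number): bigint {
    if (!Number.isInteger(n) || n < 0) {
      throw new RangeError("n must be a non-negative integer");
    }
    let previous = 0n; // F(0)
    let current = 1n;  // F(1)
    for (let i = 0; i < n; i++) {
      [previous, current] = [current, previous + current];
    }
    return previous; // After n iterations, previous holds F(n).
  }

  console.log(fibonacci(10).toString()); // 55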
The Planning-First Approach
The most powerful agentic capability for engineers is often planning, not code generation. Train yourself to request plans before implementations:
I need to implement a feature that allows users to export their data in multiple formats. Before generating any code, please create a detailed technical plan that covers:
1. The overall architecture
2. Data flow
3. Key functions/components needed
4. Potential edge cases
5. Testing strategy
Review and refine this plan before asking for implementation code. This approach leads to more thoughtful, maintainable solutions than jumping straight to code generation.
3. Building Your AI Toolkit
Different agentic AI tools have different strengths. Building an effective AI toolkit means selecting complementary tools and knowing when to use each one.
Core Toolkit Components
IDE Integration (e.g., Cursor)
Best for: In-line coding assistance, refactoring, documentation
When to use: During active development when you need contextual help
Terminal-Based Agents (e.g., Claude Code)
Best for: Codebase exploration, complex refactoring, broad analyses
When to use: When you need to understand or modify code across multiple files
Chat-Based LLMs (e.g., Claude, ChatGPT)
Best for: Planning, architectural discussions, learning new concepts
When to use: Before coding to plan an approach or when you need to reason through complex problems
Final thoughts
When I first started using AI coding tools, I was skeptical. Like many engineers, I wondered if they would just generate low-quality, buggy code, or if I'd spend more time fixing their mistakes than writing the code myself. I even wrote an article about it on my website last year. But after months of integrating agentic AI into my workflow, I've had to completely rethink that position.
These aren't just smarter autocomplete tools - they're fundamentally changing how we approach software development. The best engineers I know aren't those fighting against this wave but those riding it by learning to collaborate effectively with these AI agents.
There are challenges, of course. We need to be mindful of security when giving agents access to our codebases. The tools are still evolving and sometimes produce frustrating results. And there's a learning curve to using them effectively. My prompting skills today are worlds better than when I started.
But one thing is clear - agentic AI isn't just another passing tech trend. It's becoming as fundamental to software development as version control or package managers. The engineers who thrive in the coming years won't be those who write the most code, but those who become skilled at orchestrating AI agents to solve complex problems.
I encourage you to start small - pick one agentic tool and integrate it into a non-critical part of your workflow. Experiment with the different patterns we've discussed, learn how to communicate effectively with these systems, and build your skills gradually.
Thank you for reading and supporting this newsletter 🙏. You can hit the like ❤️ button at the end of this post to support my work. It means a lot to me!