Agentic AI

Agentic AI represents a significant leap forward in the field of artificial intelligence, moving beyond static models that merely process data to systems capable of independent action, planning, and self-correction. This paradigm shift positions AI not just as a tool for analysis but as an active participant capable of achieving complex goals in dynamic environments. Understanding Agentic AI is crucial as it underpins the next generation of intelligent systems, from advanced robotics to sophisticated automated workflow managers.

The term "agentic" fundamentally speaks to agency—the capacity of an entity to act independently and make its own choices. In the context of AI, this means building models that can formulate high-level objectives, break them down into actionable steps, execute those steps, monitor the results, and adapt their strategy if failures occur, all without constant human intervention. This capability is what distinguishes an intelligent agent from a traditional, reactive algorithm.

As we delve deeper, we will explore the architecture that makes this autonomy possible. This introduction establishes a foundational understanding of what Agentic AI is, contrasts it with previous forms of AI, and sets the stage for an examination of the critical components that allow these digital entities to operate effectively in the real world.

What Exactly is Agentic AI?

Agentic AI refers to artificial intelligence systems designed with the inherent capability to operate autonomously toward a defined set of goals. Unlike the large language models (LLMs) that dominated recent discourse, which excel at generating content based on prompts, an agentic system is built to act on that generation. It possesses the necessary scaffolding to interface with external tools, maintain a persistent memory, and navigate complex decision trees to solve problems in a multi-step fashion.

The key differentiator here is the concept of proactive execution. A standard AI might tell you the best route to a destination; an Agentic AI will book the necessary transport, monitor for delays, and automatically rebook if a flight is canceled, all based on its initial objective. This shift moves the burden of orchestration from the human user to the AI system itself, demanding higher reliability and a more robust internal reasoning loop.

This autonomy is usually structured around a continuous loop of perception, planning, action, and reflection. The system perceives its current state (via sensors or data feeds), plans the next best move, executes an action using its available tools (like APIs or software commands), and then reflects on the outcome to refine its subsequent plan. This iterative process is the engine driving genuine artificial agency.
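The perceive–plan–act–reflect loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a real agent framework: the `perceive`, `plan`, `act`, and `reflect` callables are placeholders you would supply.

```python
# Minimal sketch of the perceive-plan-act-reflect loop. The four callables
# are illustrative stand-ins for real sensors, planners, and tool interfaces.

def run_agent(goal, perceive, plan, act, reflect, max_steps=10):
    """Drive a simple agent loop until the planner reports the goal is met."""
    state = perceive()                            # observe the current state
    for _ in range(max_steps):
        action = plan(goal, state)                # choose the next best move
        if action is None:                        # planner signals completion
            return state
        outcome = act(action)                     # execute via an available tool
        state = reflect(state, action, outcome)   # fold the result back in
    return state
```

The `max_steps` cap is a common safeguard: without it, a mis-planning agent could loop indefinitely.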

Core Components of Autonomous Agents

The functionality of any robust Agentic AI hinges on several interconnected core components that work in concert to enable complex behavior. The first and arguably most crucial component is the Reasoning Engine, often powered by a sophisticated LLM. This engine is responsible for high-level strategic thinking—interpreting the goal, deciding which sub-tasks are necessary, and formulating the logical sequence for execution. It acts as the system’s "brain."
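The goal-decomposition step of a reasoning engine can be stubbed out as follows. The `complete` callable stands in for an LLM completion call; both the prompt wording and the line-per-task output format are assumptions for illustration.

```python
# Illustrative stub of a reasoning engine's planning step: an LLM call
# (represented by the hypothetical `complete` callable) turns a high-level
# goal into an ordered list of sub-tasks.

def decompose_goal(goal, complete):
    """Ask the model for sub-tasks, one per line, and parse them into a list."""
    prompt = (
        "Break the following goal into short, ordered sub-tasks, "
        "one per line:\n" + goal
    )
    raw = complete(prompt)
    # Keep non-empty lines as the ordered sub-task list.
    return [line.strip() for line in raw.splitlines() if line.strip()]
```

In a real system this parse step would be more defensive (structured output, retries on malformed responses), but the shape is the same: goal in, ordered sub-tasks out.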

Secondly, an agent requires Memory and Context Management. This involves both short-term memory (the immediate context of the current task, often managed via the LLM’s context window) and long-term memory (a persistent knowledge base, frequently implemented using vector databases). This memory allows the agent to recall past actions, learned successes or failures, and maintain a consistent understanding of the overall mission over extended periods, preventing repetitive errors.
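The two memory tiers can be sketched with ordinary Python containers. This is a toy model: real agents typically back the long-term tier with a vector database, and the short-term tier with the LLM's context window; a bounded deque and a dict stand in here.

```python
from collections import deque

class AgentMemory:
    """Toy two-tier memory: a bounded short-term window plus a
    persistent long-term store."""

    def __init__(self, window_size=5):
        self.short_term = deque(maxlen=window_size)  # only recent events kept
        self.long_term = {}                          # persistent key -> fact

    def observe(self, event):
        """Record an event; old events fall out of the window automatically."""
        self.short_term.append(event)

    def remember(self, key, fact):
        """Persist a fact so it survives beyond the short-term window."""
        self.long_term[key] = fact

    def recall(self, key):
        return self.long_term.get(key)

    def context(self):
        """What would be packed into the model's context for the next step."""
        return list(self.short_term)
```

The bounded window is what prevents unbounded context growth; anything worth keeping past the window must be explicitly promoted with `remember`.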

Finally, the third essential pillar is Tool Use and Action Execution. An intelligent agent is only as capable as the tools it can wield. This component includes the interface mechanisms that allow the agent to interact with the external world, whether that means running code, querying databases, sending emails, or controlling physical hardware. The agent must possess an accurate catalog of its available tools and the ability to correctly format the necessary calls (tool-use prompting) to achieve the intended action, closing the feedback loop necessary for true agency.
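A tool catalog plus dispatcher can be sketched as below. The registry pattern and the structured-call format (`{"tool": ..., "args": [...]}`) are illustrative choices, not a specific framework's API.

```python
# Hedged sketch of a tool catalog and dispatch layer. Tool names and the
# call format are invented for illustration.

TOOLS = {}

def tool(name):
    """Register a callable under a name the reasoning engine can refer to."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("add")
def add(a, b):
    """Example tool: add two numbers."""
    return a + b

def execute(call):
    """Dispatch a structured tool call like {'tool': 'add', 'args': [2, 3]}."""
    fn = TOOLS.get(call["tool"])
    if fn is None:
        # Returning an error (rather than raising) lets the agent's
        # reflection step observe the failure and re-plan.
        return {"error": f"unknown tool: {call['tool']}"}
    return {"result": fn(*call.get("args", []))}
```

Returning errors as data rather than raising exceptions is deliberate: it closes the feedback loop the text describes, letting the agent inspect a failed call and choose a different action.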

Agentic AI is not just an incremental upgrade; it represents a fundamental paradigm shift toward truly autonomous systems capable of sustained, goal-oriented behavior. By integrating advanced reasoning engines with robust memory systems and reliable tool-use capabilities, we are building digital entities that move from being passive responders to active problem-solvers. While challenges remain concerning safety, alignment, and error handling, the core components discussed—reasoning, memory, and action—provide the blueprint for the next wave of intelligent automation that promises to redefine productivity across industries.

Deepseek 7B LLM with LlamaIndex: Benchmark Results Overview

LlamaIndex Benchmark Results

Query: What is LlamaIndex?
Average response time: 11.04 seconds

LlamaIndex is an open-source project developed by DeepSeek Labs that provides a comprehensive solution for managing and querying large amounts of structured and unstructured data efficiently. It offers vector indexing, parallel processing, query caching, and supports various data sources including text files, PDFs, web pages, databases, and APIs. The platform is designed to be scalable and flexible for high-performance data management in LLM applications.

Query: How does LlamaIndex help with data processing?
Average response time: 24.70 seconds

LlamaIndex improves data processing by providing:

  • Built-in connectors for diverse data sources
  • Advanced chunking strategies to optimize processing
  • Vector indexing for fast retrieval
  • Query optimization based on cached results
  • Support for parallel processing and distributed workloads

These features collectively enhance performance and scalability for handling large datasets in LLM applications.

Query: What are the main features of LlamaIndex?
Average response time: 24.44 seconds

  • Data connectors: For text files, PDFs, web pages, databases, APIs
  • Chunking strategies: Organize unstructured data efficiently
  • Vector indexing: Semantic search and fast retrieval
  • Query optimization: Improves response accuracy and speed
  • Response synthesis: Combines structured and unstructured data
  • Evolving ecosystem: Integrates new technologies continuously

Query: How can LlamaIndex improve response time?
Average response time: 7.79 seconds

  • Efficient indexing techniques
  • Query caching for faster lookup
  • Optimized retrieval parameters
  • Parallel processing for high throughput
  • Vector-based retrieval for semantic matching

These mechanisms enable LlamaIndex to return faster and more accurate results from large datasets.

Query: What data sources does LlamaIndex support?
Average response time: 2.25 seconds

  • Text files
  • PDFs
  • Web pages
  • Databases
  • APIs

Deep Learning – Linear Regression

import numpy as np

# Model: y = w * x + b

def cal_err_linear_given_points(b, w, points):
    """Mean squared error of the line y = w*x + b over all points."""
    totalErr = 0
    for i in range(len(points)):
        x = points[i, 0]
        y = points[i, 1]
        totalErr += (y - (w * x + b)) ** 2
    return totalErr / float(len(points))

def step_grad(b_current, w_current, points, learnRate):
    """Perform one gradient-descent step on the MSE loss."""
    b_grad = 0
    w_grad = 0
    N = float(len(points))

    for i in range(len(points)):
        x = points[i, 0]
        y = points[i, 1]  # fixed: was points[1, 1], which reused one point
        b_grad += -(2 / N) * (y - ((w_current * x) + b_current))
        w_grad += -(2 / N) * x * (y - ((w_current * x) + b_current))
    new_b = b_current - (learnRate * b_grad)
    new_w = w_current - (learnRate * w_grad)
    return [new_b, new_w]

# iterate to optimize
def grad_descent_exe(points, starting_b, starting_w, learnRate, num_iterations):
    b = starting_b
    w = starting_w
    for i in range(num_iterations):
        b, w = step_grad(b, w, np.array(points), learnRate)
        print(b, w)
    return [b, w]

def exe():
    points = np.genfromtxt("data.csv", delimiter=",")
    # print(points)
    learnRate = 0.0001
    initial_b = 0  # initial guess for the y-intercept
    initial_w = 0  # initial guess for the slope
    num_iterations = 1000
    print("Starting grad descent at b = {0}, w = {1}, err = {2}"
          .format(initial_b, initial_w,
                  cal_err_linear_given_points(initial_b, initial_w, points)))
    [b, w] = grad_descent_exe(points, initial_b, initial_w, learnRate, num_iterations)
    print("After {0} iterations b = {1}, w = {2}, err = {3}"
          .format(num_iterations, b, w,
                  cal_err_linear_given_points(b, w, points)))

if __name__ == '__main__':
    exe()
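The per-point loop in `step_grad` can also be restated with NumPy array operations. This sketch computes the same MSE gradients, just vectorized over all points at once, which is both shorter and faster on large datasets.

```python
import numpy as np

# Vectorized restatement of the loop-based gradient step: same math
# (MSE on y = w*x + b), but using array operations instead of a Python loop.

def step_grad_vec(b, w, points, learn_rate):
    x, y = points[:, 0], points[:, 1]
    err = y - (w * x + b)                 # residual for every point at once
    b_grad = -2.0 * err.mean()            # dMSE/db
    w_grad = -2.0 * (x * err).mean()      # dMSE/dw
    return b - learn_rate * b_grad, w - learn_rate * w_grad
```

Because `err.mean()` already divides by the number of points, this matches the `(2 / N)` accumulation in the loop version term for term.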

Python – List Comprehension

Let’s learn about list comprehensions! You are given three integers x, y, and z representing the dimensions of a cuboid, along with an integer n. Print a list of all possible coordinates [i, j, k] on a 3D grid where the sum i + j + k is not equal to n. Please use a list comprehension rather than multiple loops.

Example

x = 1
y = 1
z = 2
n = 3

All permutations of [i, j, k] are:

[[0,0,0], [0,0,1], [0,0,2], [0,1,0], [0,1,1], ...]

Print an array of the elements whose components do not sum to n.

if __name__ == '__main__':
    print('Please enter value x :')
    x = int(input())
    print('Please enter value y :')
    y = int(input())
    print('Please enter value z :')
    z = int(input())
    print('Please enter value n :')
    n = int(input())

    # Keep only the coordinates whose components do not sum to n.
    print([[i, j, k]
           for i in range(x + 1)
           for j in range(y + 1)
           for k in range(z + 1)
           if i + j + k != n])
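Wrapping the same comprehension in a function makes it easy to check against the worked example above; `cuboid_coords` is a name introduced here for illustration.

```python
def cuboid_coords(x, y, z, n):
    """All grid coordinates [i, j, k] with 0 <= i <= x, 0 <= j <= y,
    0 <= k <= z whose components do not sum to n."""
    return [[i, j, k]
            for i in range(x + 1)
            for j in range(y + 1)
            for k in range(z + 1)
            if i + j + k != n]

# For the example above (x=1, y=1, z=2, n=3), the triples [0,1,2],
# [1,0,2], and [1,1,1] are excluded because they sum to 3:
# cuboid_coords(1, 1, 2, 3) returns
# [[0,0,0], [0,0,1], [0,0,2], [0,1,0], [0,1,1],
#  [1,0,0], [1,0,1], [1,1,0], [1,1,2]]
```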