The Future of AI in Security, Explained for Non-Experts
Nobody is truly an AI expert right now. The field is too new, too fast-moving, and we’re all learning as we go. Some folks reading this may be confused by all the hype, conflicting tools, and constant M&A churn around AI. That’s completely normal. Everyone is struggling to keep up with the pace of AI innovation in cybersecurity.
For me, coming out of this year’s Black Hat conference, one thing was clear: AI was the star of the show, yet many people left with more questions than answers.
1. How will AI really change security operations?
2. Will it actually reduce workload or just add more tools?
3. And what does “AI security” even mean?
So, in this post, I'm going to do my best to break down both where we are now with AI in cybersecurity, and where things are headed.
Security teams today are used to working with an array of dashboards and apps: a SIEM for log analysis, SOAR for automated responses, EDR for endpoint defense, and so on. But the truth is that the future of security operations won’t revolve around any single human-driven tool. Instead, you’ll have a general-purpose AI control layer that oversees all your security functions, essentially an army of headless, automated agents running quietly across your infrastructure.
These AI agents will gather signals from everywhere (network traffic, endpoints, cloud logs, etc.), make decisions in real time, and only escalate to humans when they actually need our input or approval.
Unlike today’s human-in-the-loop phase, where AI just runs pre-defined actions or recommends actions and requires human approval, these agents will already be performing loads of tasks autonomously. It will be like having 100,000 smart interns working around the clock, handling most tasks without human intervention.
Eventually, we’ll reach (for certain domains) full AI automation, where systems handle 99% of security workflows on their own and only loop in humans for the toughest edge cases.
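To make the progression concrete, here is a minimal sketch of the escalation logic described above: an agent acts autonomously when it is confident in its proposed action and loops in a human otherwise. All names, fields, and the threshold value are illustrative assumptions, not any real product's API.

```python
from dataclasses import dataclass

# Hypothetical alert record; the fields are illustrative, not from any real tool.
@dataclass
class Alert:
    source: str        # e.g. "endpoint", "cloud_logs", "network"
    summary: str
    confidence: float  # agent's confidence in its proposed action, 0.0-1.0

ESCALATION_THRESHOLD = 0.90  # below this, a human must weigh in

def triage(alert: Alert) -> str:
    """Act autonomously on high-confidence alerts; escalate the rest."""
    if alert.confidence >= ESCALATION_THRESHOLD:
        # Fully automated path: contain, remediate, and log without human input.
        return f"auto-remediated: {alert.summary}"
    # Human-in-the-loop path: queue the alert for analyst approval.
    return f"escalated to analyst: {alert.summary}"
```

The edge cases (low-confidence alerts) are exactly the "toughest 1%" that still reach a human, while the routine bulk is handled silently.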
This progression isn’t just theoretical; Microsoft’s security team, for example, predicts that within months we’ll have AI agents able to reason and deploy tools autonomously, and within a couple of years agents that self-improve to meet high-level goals.
Why Point Solutions Will Feel Like Tool Sprawl (Again)
Right now, a lot of startups are building narrow agentic tools that automate specific security tasks: one tool for phishing analysis, another for cloud misconfiguration, another for compliance checks, and so on.
They might each solve one problem well and win short-term deals, but adopting many of them leads to the same fragmentation and complexity we’ve seen before. It’s déjà vu: just as early SOC tooling resulted in dozens of siloed apps that didn’t play nicely together, an explosion of single-purpose AI tools could recreate that sprawl.
In the next 12–24 months, organizations will shift from buying pre-packaged AI point solutions to building with AI agents as needed. Instead of relying on 30 different little AI apps that don’t talk to each other, companies will prioritize toolkits and platforms that let them develop many agents in house and have them work in concert.
That means you’ll want platform level control, governance, and context sharing across all these agents, not a bunch of isolated automations that each operate in a vacuum. Gartner analysts predict that by 2028, 70% of organizations deploying multiple AI agents or language models will use integration/orchestration platforms to connect and manage them (up from less than 5% doing so in 2024).
The Three Distinct Phases of AI in Security
Let’s add some context and look at the current situation more clearly.
We can roughly divide the overall progress into three main phases:
Phase 1 - Model-Centric Tools (2021-2024)
This describes much of the last 3+ years.
Organizations, often internal research and development teams or eager consultants, were obsessed with the AI models themselves. Many rushed to fine-tune large language models (LLMs) on their own data, or even train custom models from scratch for security use cases. A whole ecosystem of tools sprang up around supporting this model-centric approach:
1. Tools for model training and fine-tuning, vector databases for AI knowledge bases, prompt engineering and testing frameworks, secure model hosting platforms, etc. Huge VC investment flowed here.
2. Specialized offerings to harden model security and ensure compliance (for example, preventing sensitive data leaks or adversarial attacks on ML models) also emerged as companies realized putting AI into production has unique risks.
The problem is that most enterprises never truly needed to train their own AI models for security. Unless you’re a tech giant with genuinely unique data, you can get great results fine-tuning or leveraging existing foundation models. A lot of companies were nudged into phase 1 by overenthusiastic data science teams or consultants, only to end up with complex ML infrastructure to maintain and limited ROI.
Now they’re realizing it’s costly and impractical to keep up with the state-of-the-art on their own. As a result, a lot of the phase 1 tooling is being rethought or scrapped in favor of using reliable models provided by others.
Phase 2 - Security Tools with AI (2024–present)
Over the past 12–18 months, we’ve seen a wave of vendors slapping generative AI features onto existing security products. Think “GPT inside your EDR” or “chatbot for your SIEM”. Every traditional security software company wants to claim it’s now AI powered.
Frankly, this phase feels a lot like the SaaS boom all over again. It’s familiar and thus attractive to users. We’re already seeing legacy security providers acquire smaller AI startups to bolt on some AI capabilities and stay relevant.
The reality, however, is that most of these are AI-washed tools, not truly AI-native. There’s a big difference between integrating AI and reimagining a product around AI from first principles.
A lot of vendors have taken the shortcut of simply calling a public LLM API from their app and marketing it as “AI-driven X”. It might provide nicer summaries or automate a few queries, but it doesn’t rethink how security workflows could work in an AI-first world.
This phase won’t last forever. Expect consolidation as “GenAI + old product” companies get acquired or fade away once the novelty wears off. Security teams will pilot some of these AI-enhanced tools, but many will realize that a built-in chatbot is an incremental improvement, not the step change we actually need.
Phase 3 - Agentic Platforms (The Future)
This is where we (and a few other pioneers) believe the industry is headed in the coming years. Instead of big monolithic security applications with an AI tacked on, phase 3 is about platforms designed from scratch for massive-scale agent orchestration:
1. They support swarms of small, purpose-built AI agents working in concert. Each agent might handle a focused task (e.g. one agent watches login anomalies, another continuously scans configs against best practices), rather than one giant model trying to do everything by calling external APIs for each step.
2. They provide the connective tissue (governance, observability, secure integrations, and approval workflows) to manage these agents safely. Think of it as the operating system for your army of security AI. You define policies and guardrails, and the platform makes sure agents follow them. You can monitor what the agents are doing, get audit logs, and intervene or set checkpoints where human approval is required.
3. They plug in cleanly with the rest of your stack. A phase 3 platform isn’t an island; it’s engineered to integrate with your data sources, identity systems, ticketing, cloud platforms, etc., so that agents can take action and fetch information just like a human analyst would, only faster and at machine scale.
Orchestration alone isn’t the whole story, though: without that governance layer, a swarm of agents is just tool sprawl running at machine speed. The platform is the one place to deploy, coordinate, and monitor millions of lightweight AI workers.
We built Kindo to be exactly this kind of platform: an AI-native automation fabric for technical operations, designed for that agent orchestration mission from the ground up. Our view is that phase 3 systems will ultimately replace the patchwork of phase 2 tools, just as cloud platforms replaced a lot of isolated on-prem software in the last generation.
A Note on Terminology and Confusion
Again, it's normal to get tripped up by AI terminology. For instance, when we say we’re an AI security company, this can cause confusion, because “AI security” can mean completely different things to different people. It’s worth clarifying the distinctions:
1. Some security leaders think it means they first need to implement a heavy data governance framework (like Microsoft Purview) before using AI. That’s not a prerequisite: you can pursue AI-driven security improvements in parallel with improving data governance.
2. Others mix up model security vs. AI for security vs. secure AI operations. These are distinct concepts: model security means protecting AI models from tampering and adversarial attacks; AI for security means using AI to defend your systems; and secure AI ops means safely deploying and managing AI systems in your organization. All important, but not the same.
3. A lot of security practitioners who trained in the SaaS era still expect products to look like traditional apps and dashboards. The idea of an invisible mesh of AI agents running in the background is a bit mind-bending when you’re used to thinking in terms of user interfaces and reports. So there’s a learning curve to understanding the value of an agentic approach.
Here’s a reality check to cut through the confusion: reportedly, more money has been spent on GPUs (the hardware powering AI) in the last three years than was spent on the entire internet buildout of the 1990s and 2000s. That’s an astounding claim, and while exact dollar-for-dollar comparisons are hard to verify, it speaks to the scale of investment pouring into AI.
Nvidia’s revenue from AI chips has skyrocketed, and companies worldwide are investing in AI infrastructure at rates we’ve never seen before. A shift of this magnitude isn’t happening for some marginal gains or a fad. It signals a fundamental change in computing and, by extension, in security. The confusion will clear up as these technologies mature, but by then the leaders (and winners) in the market will be those who anticipated the change early.
The Web Is Already Changing & Software Will Follow
More people are beginning to understand that the web is evolving in an AI-first direction.
Any developer can now plug OpenAI’s agent API into their app to create an intelligent assistant or automate a user task. It’s becoming easy to build AI-driven features into consumer experiences. What fewer people have realized is that all software will eventually be built this way, not just web apps. The next big platforms in tech won’t be traditional applications at all; they’ll be agent ecosystems.
If that’s where we’re headed, it raises a few important questions: do you really want to wire 100,000 OpenAI powered agents directly into your core business processes? Where is the ownership of your logic and data in that scenario? Where’s the governance, the assurance that those agents won’t violate policy or expose you to new risks?
Using third-party AI (via API calls to someone else’s model) for some tasks is fine, but if your entire automation strategy depends on a vendor’s opaque AI, then in a real sense it’s not truly yours: you’re outsourcing your digital workforce to an external party.
The leading companies will instead develop their own agent ecosystems or use platforms that let them retain control.
This doesn’t necessarily mean training custom models from scratch, but it does mean owning the orchestration layer and having the ability to plug in specialized models where needed (including open source or partner-provided models that can run on premises for privacy). It means having visibility and command over how AI is operating within your business, rather than letting it run as a black box in the cloud.
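One way to picture "owning the orchestration layer while keeping models swappable" is a thin interface between your workflows and whatever model serves them. The class and function names below are hypothetical stand-ins, not real SDK calls; each backend's string output is a placeholder for an actual model response.

```python
from typing import Protocol

class ModelBackend(Protocol):
    """Anything that can complete a prompt: hosted API, open-source, on-prem."""
    def complete(self, prompt: str) -> str: ...

class LocalModel:
    """Stand-in for an on-prem open-source model kept private for compliance."""
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"  # placeholder for a real local inference call

class HostedModel:
    """Stand-in for a third-party API model behind the same interface."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"  # placeholder for a real API call

def summarize_incident(backend: ModelBackend, details: str) -> str:
    # The orchestration layer owns the prompt, routing, and logging;
    # the model behind it can be swapped without touching workflows.
    return backend.complete(f"Summarize this incident: {details}")
```

Because the workflow depends only on the interface, moving a sensitive task from a hosted model to an on-prem one is a one-line change, which is the kind of control the text argues for.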
You Don’t Have to Build Everything: Build with AI
Leveraging AI in security doesn’t mean every organization has to invent all the pieces themselves. There will still be specialized AI products worth buying, and foundational infrastructure (like good vector databases for knowledge retrieval, evaluation frameworks for AI outputs, secure prompt handling tools, etc.) that remain essential.
None of this means throwing existing tech away.
But the key is that the future belongs to those who build with AI, who embrace assembling secure, manageable AI agent networks to automate their operations, rather than just slapping “GPT” onto a login screen and calling it a day.
That’s the future we’re building towards at Kindo.
We believe secure, scalable automation through AI agents is the next evolution of security operations. It’s not about buzzwords or one off features; it’s about reimagining how work gets done when humans and AI systems collaborate at scale.
The bottom line for any security leader or business: you don’t need to boil the ocean or reinvent AI from scratch, but you absolutely should be rethinking your architecture with AI in mind. Those who start that journey now will be far better prepared for the AI world that’s fast approaching.