
Escaping the Garbage Can: How AI-Native Ops Replace Tribal Automation
When you think about a large company with thousands of employees, you picture flowcharts, spreadsheets and runbooks that define who does what and when. These invisible threads hold things together, but they also hide a truth many leaders ignore: most business processes are a patchwork of informal rules, duplicate steps and workarounds.
In a study by Ruthanne Huising, employees mapping their own operations found unused activities, unofficial shortcuts and endless duplication. The CEO who reviewed the map admitted the disorder was “even more fucked up than I imagined.” Organizational theory calls this the Garbage Can model, where problems, solutions and decision makers drift in a chaotic mix.
In such an environment, defining “how we do things here” is difficult, and scaling any technology that relies on clean processes becomes nearly impossible. As enterprise teams bring artificial intelligence (AI) into this chaos, they face two paths:
One tries to codify tribal knowledge by mapping and automating every rule. The other follows the Bitter Lesson: systems that learn from outcomes outperform those designed by hand. The first is tribal automation; the second is AI-native. This post explores why the traditional path struggles, how the AI-native approach works, and how we apply it at Kindo.
Why Tribal Processes Hold You Back
Tribal knowledge refers to the unwritten, informal, experience-based know-how that lives in the heads of seasoned engineers, operators and analysts. In a factory it might be the trick that keeps a machine from overheating; in a security team it could be the intuition about which log pattern signals a breach.
This knowledge matters, but because it is rarely documented, it is fragile and hard to transfer. Many transformation projects begin by trying to extract this expertise, draw exhaustive process maps and then build automation around them. The promise is attractive: capture your experts’ experience once, automate it forever. The reality is far more sobering.
1) Mapping the entire operation is slow and expensive. Huising’s teams found so many undocumented, contradictory and unused steps that they became disillusioned. Managers often discover that there is no single correct process, only a tangle of workarounds accreted over time. In our own interactions with enterprise customers we see this every day: no two engineers perform incident response the same way, and runbooks are updated ad hoc. When your foundation is shifting sand, any automation built on top of it inherits that fragility.
2) Tribal automation lacks adaptability. Traditional orchestration systems like SOAR (Security Orchestration, Automation and Response) rely on static logic trees defined in advance. They work well for known scenarios, like blocking a malicious IP or rotating credentials when a certain alert fires, but they break when incidents fall outside the predefined paths (the sketch after this list makes the brittleness concrete).
3) Codifying tribal knowledge doesn’t guarantee success. An MIT study found that despite billions of dollars spent on generative AI, 95% of organizations see no measurable return. The barrier isn’t hardware or algorithms; it’s organizational learning and context. Without clear objectives and coordinated usage, AI becomes another tool tossed into the garbage can: an expensive project in search of a problem.
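To make point 2 concrete, here is a minimal Python sketch of a SOAR-style playbook. The alert fields, rule names and helper functions are hypothetical, not any vendor’s actual API; the point is that every branch must be written by hand, in advance.

```python
# Minimal sketch of a SOAR-style playbook. Alert fields and helpers
# are hypothetical; the point is that every branch is hand-written.

def block_ip(ip: str) -> None:
    print(f"firewall: deny {ip}")  # stand-in for a real firewall call

def rotate_credentials(account: str) -> None:
    print(f"vault: rotate {account}")  # stand-in for a secrets manager

def run_playbook(alert: dict) -> str:
    if alert["type"] == "malicious_ip":
        block_ip(alert["source_ip"])
        return "blocked"
    if alert["type"] == "credential_leak":
        rotate_credentials(alert["account"])
        return "rotated"
    # Anything the authors did not foresee dead-ends here, however
    # much useful context the alert actually carries.
    return "escalate_to_human"

print(run_playbook({"type": "malicious_ip", "source_ip": "203.0.113.7"}))
print(run_playbook({"type": "novel_lateral_movement"}))  # falls through
```

Every new incident type means another hand-written branch; the tree grows, but it never generalizes.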
The Bitter Lesson and AI-Native Methods
Rich Sutton’s Bitter Lesson observes that methods which leverage massive computation and learning consistently outperform techniques that rely on human-encoded knowledge. Chess, Go, speech recognition and computer vision all went through phases where researchers tried to capture expert heuristics, only to see simpler, search-driven or learning-based methods eclipse them.
The lesson is bitter because it suggests that our hard-won expertise and intuition matter less than we think. Yet it is also liberating: instead of obsessing over encoding every nuance of a process, we can focus on defining what good outputs look like and let AI discover how to achieve them.
A blog post by Ethan Mollick illustrates this tension in the context of organizational AI usage. He compares two agentic systems. Manus, a hand-crafted agent built with hundreds of lines of prompt engineering, dutifully follows a predefined to-do list to produce a graph. ChatGPT agent, trained via reinforcement learning on outcomes rather than specific steps, produces the same graph by taking its own route.
Manus encodes hard-won knowledge while ChatGPT agent uses generalized methods. When both are asked to create an Excel file, ChatGPT agent’s version works; Manus’s contains errors. Improving Manus requires more prompt engineering; improving ChatGPT agent comes from more examples and more compute.
Why does this matter for enterprise operations? Because the Bitter Lesson suggests that rather than mapping every process, we should train AI on what a good outcome looks like and let it learn to navigate the garbage can.
The process map that made the CEO despair can still serve a purpose, not as a blueprint for automation but as a diagnostic of the messy environment the AI will face. Our goal shifts from untangling the mess to defining success and giving AI the tools to achieve it. Mollick infers that if reinforcement trained agents can navigate chaotic organizations by focusing on outputs, the despair of untidy processes may be misplaced.
This vision aligns with the rise of vertical AI: specialized models trained on domain-specific data that capture and productize tribal knowledge. Vertical AI doesn’t aim to replicate human heuristics; it absorbs them, learns from them and generalizes beyond them.
Our own model, Deep Hat, demonstrates this approach. Deep Hat is a 7-billion-parameter, open-weight large language model trained on a vast corpus of DevOps and SecOps data, from vulnerability management and incident response to malware analysis and infrastructure-as-code. Rather than refusing offensive or defensive tasks, it embraces dual-use scenarios responsibly, allowing red teams to generate exploits and then propose secure patches.
Its training uses a fine-tuning regime that progressively escalates tasks from simple to complex. The result is a model that works like an engineer, runs on a single GPU or even a high-end laptop, and outperforms larger general-purpose models on DevSecOps tasks.
An AI-native approach also changes how we build automation. Instead of designing static workflows, we create agentic systems that iterate toward a goal. In the context of incident response, a blog post we wrote describes how an agent handles a surge of HTTP 500 errors: it pulls logs, checks traces and metrics, inspects Kubernetes pods for crash loops and examines recent code changes.
From this data it infers whether the root cause is a bad release or a resource shortage and proposes a fix for human approval. When facing widespread “permission denied” errors, the agent inspects IAM policies across cloud and on-prem resources, reviews Vault and Kubernetes configurations and spots mismatches. It can then generate a pull request to correct the misconfiguration and wait for confirmation. The agent is not following a script; it is exploring, reasoning and learning within guardrails.
This is the Bitter Lesson applied to operations.
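As a rough illustration of that loop, here is a Python sketch under stated assumptions: the tool names, canned observations and the ask_model placeholder are ours, not Kindo’s real interfaces.

```python
# Rough sketch of the agent loop described above. Tool names, canned
# outputs and ask_model() are illustrative placeholders.

GOAL = "diagnose and fix the surge in HTTP 500 errors"

TOOLS = {
    "fetch_logs": lambda: "500s concentrated in payments-svc after deploy 4f2c",
    "check_pods": lambda: "payments-svc pod is in CrashLoopBackOff",
    "recent_changes": lambda: "deploy 4f2c changed payments-svc config",
}

def ask_model(goal: str, observations: list[str]) -> str:
    """Placeholder for an LLM call that chooses the next action."""
    for tool in TOOLS:
        if not any(obs.startswith(tool) for obs in observations):
            return tool  # gather evidence we do not yet have
    return "propose_fix: roll back deploy 4f2c"

observations: list[str] = []
while True:
    action = ask_model(GOAL, observations)
    if action.startswith("propose_fix"):
        print(f"{action} -- awaiting human approval")  # human in the loop
        break
    observations.append(f"{action}: {TOOLS[action]()}")
```

The loop has no fixed branches: a different set of observations would steer it down a different path, and better behavior comes from a better model, not more code.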
Building an AI-Native Layer for Technical Operations
At Kindo we have internalized the Bitter Lesson.
Our platform is built AI-first to serve enterprise operations. We aren’t a legacy SOAR tool or a chatbot that sits on top of your existing stack; we are an intelligent automation layer designed to replace the duct-taped software you’ve been holding together.
Here’s how our approach embodies the AI native philosophy:
Entirely agentic by design
Kindo agents can operate semi-autonomously or fully autonomously. They make decisions and take actions, turning intent into impact in real time. When you ask the platform to “diagnose and fix a spike in 500 errors” or “search for exposed secrets and rotate them,” you’re not triggering a predetermined flowchart. You’re giving the AI a goal. It will iterate, gathering data, running tools and refining its plan, until it accomplishes the objective or asks for help. This approach mirrors reinforcement learning: improvement comes from more examples and more computation, not more branching logic.
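In spirit, the interface is a goal plus guardrails rather than a flowchart. A hypothetical sketch (the Goal type and field names are ours, not Kindo’s actual API):

```python
# Hypothetical sketch of goal-driven invocation; names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Goal:
    intent: str                  # what success looks like, not the steps
    autonomy: str = "semi"       # "semi" pauses for approval; "full" acts
    guardrails: list[str] = field(default_factory=list)  # always need a human

goal = Goal(
    intent="search for exposed secrets and rotate them",
    guardrails=["rotate_production_credentials"],
)
# The caller defines the outcome and the limits; the agent supplies the
# plan, and improves with more examples and compute, not more branches.
print(goal)
```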
Domain-specific intelligence through Deep Hat
Our flagship model, Deep Hat, powers these agents. Purpose-trained to understand infrastructure, security and incident response, it transforms natural language into real-time action. Deep Hat thinks like a DevOps or SecOps engineer because it was raised on domain data. Its knowledge base spans web security, malware analysis, container scanning and threat intelligence. It can orchestrate multi-step tasks, parse logs, interpret vulnerability reports and propose next steps. Thanks to its efficient design, it can run on premises without a GPU cluster, keeping your sensitive data under your control.
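Because Deep Hat is open-weight, a common deployment pattern is to serve it behind an OpenAI-compatible endpoint (for example with vLLM) and query it like any other chat model. A sketch under that assumption; the URL and model identifier below are placeholders, not official values:

```python
# Sketch: querying a locally served open-weight model. Assumes an
# OpenAI-compatible endpoint; URL and model name are placeholders.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.chat.completions.create(
    model="deephat-7b",  # placeholder identifier for a local Deep Hat serve
    messages=[{
        "role": "user",
        "content": "Given this vulnerability report excerpt: ... "
                   "summarize the risk and propose remediation steps.",
    }],
)
print(response.choices[0].message.content)
```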
Enterprise-grade control and deployment flexibility
An AI-native platform must be trustworthy. Data loss prevention, audit logging, and role-based access control are built into Kindo. We provide centralized access controls so that only authorized personnel can trigger or approve actions. Our agents monitor data transfers for anomalies, preventing leaks. Every action is logged to meet compliance frameworks like ISO 27001, GDPR, and NIST CSF, and the platform aligns with regional regulations like UAE NESA and SAMA. For organizations with sensitive workloads or strict regulatory requirements, we offer full control through self-managed deployment: you can run Kindo entirely within your infrastructure, on premises, in the cloud, or across hybrid environments with no third-party exposure.
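As an illustration of the pattern (roles, actions and log format are hypothetical stand-ins, not Kindo’s implementation), an approval gate that combines role-based access control with an append-only audit trail can be as simple as:

```python
# Illustrative only: RBAC check plus audit logging. Roles, actions and
# the log format are hypothetical stand-ins.

import json
import time

AUTHORIZED = {
    "rotate_secrets": {"secops_admin"},
    "merge_fix_pr": {"oncall_lead", "secops_admin"},
}

def approve(user: str, role: str, action: str) -> bool:
    allowed = role in AUTHORIZED.get(action, set())
    # Every decision, allowed or denied, lands in the audit trail.
    print(json.dumps({"ts": time.time(), "user": user, "role": role,
                      "action": action, "allowed": allowed}))
    return allowed

approve("alice", "secops_admin", "rotate_secrets")  # allowed, logged
approve("bob", "contractor", "merge_fix_pr")        # denied, logged
```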
Integrating with your existing stack
We understand that most enterprises cannot rip and replace every tool overnight. Our platform integrates with GitHub, ServiceNow, and a wide range of DevOps and security tools. We wire your repos, log streams and cloud APIs into a single command-line conversation, creating an autonomous control loop that diagnoses issues, decides on fixes and deploys them. This integration lets Kindo act as an AI-native terminal for technical operations: one prompt leads to full-stack action.
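A toy sketch of what that wiring looks like; the adapter classes are illustrative stand-ins, not Kindo’s real integrations:

```python
# Toy sketch of wiring existing tools into one control loop. The
# adapters are illustrative stand-ins, not Kindo's real integrations.

class GitHubAdapter:
    def recent_commits(self) -> list[str]:
        return ["4f2c: change payments-svc config"]  # stand-in for the GitHub API

class ServiceNowAdapter:
    def open_incident(self, summary: str) -> str:
        return f"INC-1234 opened: {summary}"         # stand-in for ServiceNow

def control_loop(prompt: str) -> None:
    gh, snow = GitHubAdapter(), ServiceNowAdapter()
    suspect = gh.recent_commits()[0]                           # diagnose
    print(snow.open_incident(f"{prompt}; suspect {suspect}"))  # record
    print("fix prepared -- awaiting approval to deploy")       # act

control_loop("spike in checkout 500s")
```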
Take Your Next Steps With Kindo
Organizations face a fork in the road.
One path attempts to codify every process, capture every nuance of tribal knowledge and build static automation. This approach can deliver incremental benefits but struggles to scale and adapt; it often falls victim to the garbage can of uncoordinated initiatives. The other path accepts the Bitter Lesson: humans cannot out-engineer the complexity of their own organizations, but they can define what success looks like and empower AI to find the way. An AI-native approach relies on specialized models trained on domain data, on agents that iterate toward goals and on platforms that combine autonomy with human oversight.
At Kindo we believe the present and future belong to AI-native technical operations.
We have built a platform that unifies agentic AI and security into one layer, enabling enterprises to move faster, stay secure and scale without limits. Our agents don’t memorize your messy processes; they learn from outcomes. Our models capture the deep tribal knowledge of DevSecOps and productize it so that every operator can benefit. Our platform runs where you need it, on premises or in the cloud, and keeps you in control. If you’re ready to make the leap from the tribal to the AI-native, our team is here to help. Request a demo today to get started on your AI transformation.