Summary
SOAR promised to automate the SOC. Instead, it created a new job title (SOAR Engineer), a new category of technical debt (playbook rot), and a coverage ceiling that left 40% of alerts untouched. Gartner placed SOAR in the "trough of disillusionment" on its 2024 ITSM Hype Cycle, and the successor tools crowding the market, Tines and Torq among them, still follow the same fundamental architecture: scripted workflows with a friendlier interface. They lower the playbook barrier. They do not eliminate it.
xpander.ai represents a different path. It is a general-purpose agentic development platform, not a security product, that gives AI agents the ability to navigate an existing F500 security stack (CrowdStrike, Splunk, Sentinel, Okta, ServiceNow, and dozens more) without requiring anyone to write a playbook. The distinction is structural: while SOAR follows a script, xpander.ai follows a goal. With air-gapped and self-hosted deployment, native human-in-the-loop approval gates, and sandbox execution with guardrails, xpander.ai addresses the governance requirements that make F500 CISOs lose sleep over autonomous AI. The agentic SOC is not a vendor pitch. It is the operating model that a threat landscape demands when attackers move from initial access to lateral movement in under 30 seconds.
The SOAR Promise vs. the SOAR Reality
Gartner did not kill SOAR. SOAR killed itself. The 2024 trough-of-disillusionment placement just made the obituary official. What went wrong is worth examining carefully, because the same failure modes are lurking inside every successor tool that ships with the word "automation" in its tagline.
Playbook Debt
The original SOAR value proposition was straightforward: codify your analysts' decisions into playbooks, run them at machine speed, free up humans for the hard stuff. In practice, every SOAR deployment became a maintenance project. Static Python playbooks broke on every upstream API change, every schema migration, every vendor update. Industry data from D3 Security frames the problem bluntly: "One architect, 200 static playbooks, zero backup plan."
The coverage numbers tell the rest of the story. Most SOAR deployments handle only 30-40% of alerts. According to Torq's industry research, 40% of alerts are never investigated at all. That is not an automation program. That is a partial solution with a full-time maintenance cost.
The Integration Tax
SOAR vendors sold "500+ integrations" on their marketing pages. Building and maintaining custom connectors against a messy F500 stack (three clouds, legacy on-prem tools, fragmented identity providers) turned integration into a specialist's burden, not an analyst's workflow.
The SOAR Engineer Bottleneck
The cruelest irony: a technology sold as analyst empowerment required a new, rare, expensive role just to keep the lights on. SOAR Engineers became the single point of failure for the entire automation program. When that person left, the playbooks went with them.
What Came Next: Two Paths, One Wrong Turn
The market responded to SOAR's failures by splitting in two directions. Gartner's 2025 Hype Cycle for Security Operations added "AI SOC Agents" as an emerging category, signaling where the analyst community sees the future. The global SOAR market ($1.2B in 2024, projected $3.5B by 2032) is not shrinking. The spend is migrating toward AI-native successors.
The two directions: hyperautomation workflow tools with AI layered on top, and agentic AI SOC platforms built from scratch around goal-following agents.
No-Code Is Still Script-Following
Tines and Torq represent the first path. They are well-built products that made real improvements over legacy SOAR.
Best for: Mid-market and enterprise SOC teams that want lower-friction playbook creation without deep Python expertise.
Pros:
- Faster playbook authoring through visual, no-code builders that reduce time-to-first-automation from weeks to days
- Security-domain focus with pre-built templates and connectors for common SOC workflows like phishing triage and alert enrichment
- Lower barrier to entry for analysts who are not software engineers, broadening the pool of people who can build automations

Cons:
- Still script-following at core, meaning every new threat type or edge case requires a new workflow to be authored and maintained
- Playbook debt persists because the underlying model is still "if X then Y," just with a drag-and-drop interface instead of Python
- Security-tool lock-in constrains orchestration to the security domain, leaving gaps when investigations span IT ops, identity, or cloud infrastructure
Torq's own marketing validates the gap: they frame their AI SOC platform as handling "threat scenarios no playbook exists for." That framing is correct. The question is whether an architecture built on scripted workflows can deliver on that promise, or whether it requires something structurally different.
The Agentic Difference: Goal-Following vs. Script-Following
An agentic SOC platform does not execute step 1, then step 2, then step 3. It receives a goal ("determine whether this alert represents a real threat and take appropriate action") and reasons through the investigation, selecting which tools to query, what context to gather, and how to escalate, based on what it finds at each step. When the Cloud Security Alliance (CSA) states that "incident response must operate at machine speed" because breakout happens in 27 seconds (the time between an attacker gaining initial access and moving laterally to other systems), the response architecture needs to be adaptive, not pre-scripted.
The distinction matters most for gray-area investigations: the alerts that do not match any existing playbook, the novel TTPs, the incidents that span multiple systems and require judgment about what to check next. Those are the 40% that never get investigated.
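The contrast is easiest to see in code. The sketch below is illustrative only: every function and tool name is hypothetical and does not correspond to any real xpander.ai or SOAR API. It shows the structural difference between a playbook that runs the same steps for every alert and an agent that chooses its next action from the evidence gathered so far.

```python
# Illustrative sketch only. All names are hypothetical; this is not a
# real xpander.ai or SOAR API, just the shape of the two architectures.

def scripted_playbook(alert):
    """Script-following: a fixed sequence, regardless of what each step finds."""
    steps = ["query_edr", "check_threat_intel", "open_ticket"]
    return steps  # the same sequence for every alert

def goal_driven_agent(alert, findings=None):
    """Goal-following: pick the next action from the evidence so far."""
    findings = findings or {}
    if "process_tree" not in findings:
        return "query_edr"                    # first establish what ran on the host
    if findings["process_tree"] == "suspicious" and "intel" not in findings:
        return "check_threat_intel"           # enrich only when warranted
    if findings.get("intel") == "known_malware":
        return "escalate_to_analyst"          # high severity: route to a human
    return "close_as_benign"                  # nothing actionable found

# A benign alert takes a different path than a malicious one:
benign = goal_driven_agent({"id": 1}, {"process_tree": "clean"})
malicious = goal_driven_agent({"id": 2}, {"process_tree": "suspicious",
                                          "intel": "known_malware"})
```

The scripted version cannot branch on what it learns; the goal-driven version terminates differently for a clean workstation than for a confirmed-malware host, without anyone authoring a new workflow for each case.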
xpander.ai as the SOC World Model
xpander.ai is not a security tool. It does not ship with a library of phishing response templates. It is an agentic development platform that happens to be exceptionally well-suited to security because F500 security stacks are exactly the kind of messy, heterogeneous environment where agents need both hands and context.
Best for: Security Engineering leads and SOC Transformation teams at Fortune 500 companies that need AI agents to operate across a fragmented security stack without ripping out existing tools.
The Enterprise World Model Applied to Security
xpander.ai maintains a persistent shared representation of organizational state: systems, data, processes, and decisions. In a security context, that means agents operating on xpander.ai carry awareness of the organization's security posture, tool inventory, alert history, and response patterns. An agent triaging an alert at 3 AM does not start from zero. It knows which hosts are critical, which users have elevated privileges, and what the organization's escalation thresholds look like.
Cross-run memory makes correlation possible across incidents. If an agent investigated a suspicious login from a Brazilian IP on Tuesday and then sees the same IP in a DNS exfiltration alert on Thursday, that context carries forward. Legacy SOAR treated every playbook execution as stateless. xpander.ai treats investigations as episodes in an ongoing narrative.
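The Tuesday-to-Thursday correlation above reduces to a simple idea: indicators observed in one run persist and are queryable in the next. The sketch below simulates that behavior with an in-memory store; nothing here is the real xpander.ai cross-run memory API, and the class and field names are assumptions for illustration.

```python
# Hypothetical sketch of cross-run memory. Not the real xpander.ai API:
# it illustrates how carrying indicators forward between investigations
# enables correlation that stateless playbook executions cannot provide.
from collections import defaultdict

class CrossRunMemory:
    """Persists indicators (here: source IPs) across investigations."""
    def __init__(self):
        self.sightings = defaultdict(list)  # ip -> list of alert ids

    def record(self, ip, alert_id):
        self.sightings[ip].append(alert_id)

    def prior_incidents(self, ip):
        return list(self.sightings[ip])

memory = CrossRunMemory()
# Tuesday: a suspicious login is investigated and remembered.
memory.record("203.0.113.7", "ALERT-1041")
# Thursday: the same IP appears in a DNS exfiltration alert.
history = memory.prior_incidents("203.0.113.7")
correlated = len(history) > 0   # the Thursday agent starts with context
```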
Connecting the F500 Security Stack
xpander.ai orchestrates the tools already deployed: CrowdStrike Falcon for EDR, Splunk or Microsoft Sentinel for SIEM, Okta for identity, ServiceNow and Jira for ticketing, VirusTotal and Recorded Future for threat intelligence, Wiz or Prisma Cloud for cloud security posture. The connective tissue spans all of these without replacing any of them.
Pros:
- Air-gapped and self-hosted deployment means data never leaves the customer's perimeter, whether on-prem, in a VPC, or in a classified environment. For F500 CISOs, this is not a feature. It is a prerequisite.
- AI-native workflow resolution eliminates manual data mapping between tools. Steps are defined in natural language with runtime field resolution, making integrations resilient to upstream API changes (the exact failure mode that generated SOAR's playbook debt).
- Stateful long-running execution with checkpointing and retries supports investigations that take minutes, hours, or days, not just the sub-second automations that workflow tools optimize for.
- Cross-run memory enables alert correlation across incidents, giving agents pattern-detection capability that isolated playbook runs cannot provide.
- Multi-trigger invocation allows agents to fire from SIEM alerts, webhooks, Slack messages, cron schedules, or other agents, fitting into whatever operational surface the SOC already uses.
- Developer-first with a business-user layer means security engineers build production-grade agents with full control, while SOC managers can configure and invoke without writing code.

Cons:
- General-purpose, not security-specific means xpander.ai does not ship with out-of-the-box SOC playbook templates. Teams build agents against their specific stack and processes, which requires upfront design work.
- Agentic architecture is newer and demands a shift in mental model from "define every step" to "define the goal and the guardrails." Teams accustomed to deterministic SOAR workflows will need to build trust in the agent's reasoning incrementally.
What an Agent Does That a Playbook Cannot
A concrete example makes the difference tangible. Consider a standard alert triage scenario:
An xpander.ai agent ingests a SIEM alert from Splunk and queries CrowdStrike for the process tree and parent-child relationships on the affected endpoint. It enriches the findings with VirusTotal and Recorded Future threat intelligence, then scores severity against the organization's risk context (is this a developer workstation or a domain controller?). Finally, it either opens a ServiceNow ticket for low-severity findings or escalates to an on-call analyst in Slack for high-severity ones.
No playbook was written. No workflow was drawn. The agent received a goal (triage this alert), had access to the relevant tools through xpander.ai's orchestration layer, and reasoned through the investigation. When the next alert comes in with a slightly different shape, involving a different endpoint type, a different malware family, a different set of indicators, the agent adapts. A playbook would need to be rewritten.
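The triage flow above can be sketched as follows. The tool calls are stubbed out: a real deployment would query Splunk, CrowdStrike, and threat-intelligence sources through the platform's orchestration layer, and the asset list, function names, and routing logic here are illustrative assumptions, not product behavior.

```python
# Hedged sketch of the triage flow described above. Tool calls are stubs;
# CRITICAL_ASSETS stands in for the organization's risk context. All names
# are hypothetical, not a real xpander.ai or vendor API.

CRITICAL_ASSETS = {"dc01", "payroll-db"}   # assumed high-value hosts

def enrich(alert):
    """Stub for EDR process-tree lookup plus threat-intel enrichment."""
    return {"malicious_hash": alert.get("hash") == "bad"}

def triage(alert):
    intel = enrich(alert)
    critical = alert["host"] in CRITICAL_ASSETS
    if intel["malicious_hash"] and critical:
        return {"action": "escalate_slack", "severity": "high"}
    if intel["malicious_hash"]:
        return {"action": "open_ticket", "severity": "medium"}
    return {"action": "close", "severity": "low"}

# The same indicator lands differently on a workstation vs. a domain controller:
workstation = triage({"host": "dev-laptop-17", "hash": "bad"})
domain_ctrl = triage({"host": "dc01", "hash": "bad"})
```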
Safe Enough for the F500 CISO
If you have been in security leadership for any length of time, your first reaction to "autonomous AI agents in the SOC" is probably: "What happens when the agent isolates a production database server during a false positive?" That is the right question. Governance in xpander.ai is structural, not aspirational.
Infrastructure Isolation First
The primary governance pillar is deployment architecture. xpander.ai runs air-gapped, self-hosted, or in the customer's own VPC. No telemetry, no alert data, no investigation context leaves the network perimeter. For organizations operating in regulated industries or classified environments, this is the conversation that either opens or closes the door. Microsoft's Cyber Pulse report (February 2026) found that 80% of Fortune 500 companies now use active AI agents, with observability, governance, and security as the defining concerns. Infrastructure isolation is where governance starts.
Human-in-the-Loop by Design
xpander.ai's Wait nodes are native to the execution model, not bolted on as an afterthought. When an agent reaches a high-consequence action (isolating a host, revoking credentials, blocking an IP range), the workflow pauses and routes an approval request to the designated human through Slack, Teams, or any configured surface. Execution resumes only after explicit approval.
The design is deliberate: autonomous for low-risk, high-volume decisions. Human-gated for irreversible ones. The threshold between those two categories is configurable per organization, per use case, per severity level.
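The gate logic can be sketched in a few lines. This is a simulation of Wait-node semantics, not the actual mechanism: the action names, the approver callback standing in for a Slack or Teams approval, and the high-consequence set are all assumptions for illustration.

```python
# Illustrative approval-gate sketch. The "Wait node" is simulated with a
# callback; the real routing (Slack, Teams, etc.) and API are not shown.

HIGH_CONSEQUENCE = {"isolate_host", "revoke_credentials", "block_ip_range"}

def execute_with_gate(action, approver):
    """Auto-run low-risk actions; pause high-consequence ones for a human."""
    if action in HIGH_CONSEQUENCE:
        if not approver(action):           # simulated human approval request
            return "blocked_pending_approval"
    return f"executed:{action}"

# Enrichment runs autonomously; host isolation waits for an explicit yes.
auto = execute_with_gate("enrich_alert", approver=lambda a: False)
gated = execute_with_gate("isolate_host", approver=lambda a: False)
```

The configurable threshold the text describes corresponds to the membership of the high-consequence set: each organization decides which actions cross the line from autonomous to human-gated.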
Sandbox Execution and Guardrails
Agents run in isolated workspaces, preventing accidental impact on production systems during investigation or remediation. Before any action fires, Guardrails nodes perform AI-powered output validation, checking that the proposed action is consistent with the goal, within the defined boundaries, and safe to execute. Think of Guardrails as a second agent reviewing the first agent's homework before it gets submitted.
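A guardrail check of this kind can be sketched as a second validation pass over a proposed action before it fires. The rule set, field names, and host list below are assumptions for illustration, not the actual Guardrails node implementation.

```python
# Hypothetical guardrail sketch: validate a proposed action against the
# defined boundaries before execution. Names and rules are illustrative.

ALLOWED_ACTIONS = {"open_ticket", "escalate_slack", "isolate_host"}
PRODUCTION_HOSTS = {"payroll-db", "erp01"}   # assumed protected systems

def guardrail_check(proposal):
    """Return (ok, reason): is the proposed action in scope and safe?"""
    if proposal["action"] not in ALLOWED_ACTIONS:
        return False, "action outside defined boundaries"
    if proposal["action"] == "isolate_host" and proposal["target"] in PRODUCTION_HOSTS:
        return False, "isolating a production host requires human approval"
    return True, "ok"

ok, _ = guardrail_check({"action": "open_ticket", "target": "dev-laptop-17"})
blocked, reason = guardrail_check({"action": "isolate_host", "target": "payroll-db"})
```

This is the "second agent reviewing the first agent's homework" pattern reduced to its essentials: the proposal is inspected against explicit boundaries, and the failure reason is surfaced rather than silently swallowed.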
Comparison Table
| Capability | Cortex XSOAR / Splunk SOAR | Tines / Torq | xpander.ai |
|---|---|---|---|
| Architecture | Script-following playbooks | No-code script-following workflows | Goal-following agentic AI |
| Playbook requirement | Yes, Python-heavy | Yes, visual builder | No playbook needed |
| API change resilience | Low, manual remapping | Low-moderate | High, runtime AI field resolution |
| Air-gapped deployment | Varies by vendor | Limited | Native, standalone |
| Human-in-the-loop | Bolt-on or manual | Partial | Native Wait nodes |
| Cross-incident memory | None | None | Cross-run memory |
| Domain scope | Security-only | Security-only | Any enterprise system |
| Alert coverage model | Static, 30-40% of alerts | Broader but still script-bound | Adaptive, goal-driven |
Who Should Be Reading This
The buyer for agentic SOC infrastructure is not the CISO alone, and it is not the Tier 1 analyst. It is the Security Engineering lead or the Head of SOC Transformation: the person responsible for closing the gap between the SOC's current tooling and its actual operational needs.
The Stack You Already Have
xpander.ai does not ask you to rip out CrowdStrike, migrate off Splunk, or abandon your ServiceNow investment. It sits between the AI model and the security stack, providing the interface layer that lets agents operate across all of them. If you have spent the last five years building a defensible security architecture, xpander.ai makes that architecture accessible to AI without unwinding it.
The Outcome: Time to Certainty
The metric that matters in the AI-native SOC is time to certainty: how quickly can you determine whether an alert represents a real threat and take the appropriate action? An xpander.ai agent can complete in roughly 2 minutes an investigation that would take an analyst 40. It does so without a fragile playbook that breaks when CrowdStrike ships a new API version. And it does so inside your perimeter, with human approval gates on every action that could cause harm.
The SOAR era taught F500 security teams an expensive lesson: automation that creates more maintenance than it eliminates is not automation. It is overhead with a dashboard. The agentic SOC replaces that overhead with AI that reasons, adapts, and operates within the governance boundaries your organization requires. xpander.ai is the infrastructure layer that makes that transition possible without starting over.