Artificial intelligence
The Aurora® Superintelligence Platform is our cybersecurity platform built for the AI era. The platform combines agentic AI, generative AI, machine learning (ML), and customer-specific context to improve detection, investigation, and response across endpoint, network, cloud, and identity environments.
AI features
Aurora AI is Arctic Wolf®'s portfolio of AI capabilities embedded across the Aurora Superintelligence Platform.
- Aurora Agentic SOC — An agentic framework with hundreds of specialized AI agents that lead security investigations, analyze evidence, and escalate to human experts when required.
- Aurora Security Assistant — A generative AI interface that helps customers and analysts ask security questions, summarize investigations, and quickly access insights about their environment in natural language.
- ML and detection models — ML models trained on more than a trillion security events per day across over 10,000 customer environments. These models support behavioral analytics, anomaly detection, and threat classification across endpoint, network, cloud, and identity telemetry.
- Customer Context — A persistent AI layer that learns each customer's environment, including alerting preferences, escalation paths, and what is normal for their users and devices, so that every detection and investigation decision is tailored to the customer.
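The anomaly detection mentioned above can be illustrated with a minimal sketch. This is hypothetical and not Arctic Wolf's actual modeling: real behavioral analytics are far richer, but the core idea of learning a baseline and flagging large deviations is the same.

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.5):
    """Flag values more than `threshold` standard deviations from the mean.

    A toy stand-in for behavioral anomaly detection: learn what is
    "normal" for a user or device, then surface strong deviations.
    """
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # no variation in the baseline, nothing to flag
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Example: daily login counts for one user, with one outlier day
logins = [12, 9, 11, 10, 13, 8, 11, 95, 10, 12]
print(find_anomalies(logins))  # → [95]
```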
AI and the Concierge Delivery Model
AI is not replacing the Concierge Security® Team (CST). We use AI for repetitive work such as triage, enrichment, summarization, and ticket preparation. This improves the speed and quality of investigations, reduces unnecessary back-and-forth between customers and the SOC, and helps customers gain value faster. It also gives both Arctic Wolf experts and customer teams more time to focus on proactive security work and on progressing along the customer's security journey.
Model training
Arctic Wolf uses ML algorithms that are trained on open-source data sets, commercial data sets, and certain executables collected from customers who opted in.
At this time, none of our generative AI features are trained on customer information. However, Arctic Wolf does use relevant customer and security data when invoking large language models (LLMs). We provide the LLMs with context based on years of experience operating a SOC, including a broad foundation of security data from our open platform architecture.
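The distinction above, providing context at query time rather than training on customer data, can be sketched in outline. The function and prompt wording below are hypothetical illustrations, not Arctic Wolf's actual implementation.

```python
def build_prompt(question, context_snippets):
    """Assemble an LLM prompt that includes relevant security context at
    query time, so the model never needs to be trained on that data."""
    context = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "You are a security analyst assistant.\n"
        f"Relevant context:\n{context}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt(
    "Is this login unusual?",
    ["User jdoe normally logs in from Berlin", "Login observed from Sydney"],
)
print(prompt)
```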
Model testing
Predictive ML detections are validated through backtesting, which runs new models against previously identified malware. Our threat research teams also run purple team simulations to validate the efficacy of the models. Updated models are run in parallel with current models for final validation before they are deployed in production. The foundational LLMs used by our generative AI tools are tested by the developers of those LLMs for transparency and bias.
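The backtesting gate described above can be sketched as follows. The names and the promotion rule are hypothetical; Arctic Wolf's actual validation pipeline is not public. The idea is simply to score a candidate model against a corpus of previously identified samples and only promote it if it matches or beats the current model.

```python
def detection_rate(model, labeled_samples):
    """Fraction of known-malicious samples the model flags as malware.

    `model` is any callable returning True for "malware";
    `labeled_samples` is an iterable of previously confirmed samples.
    """
    hits = sum(1 for s in labeled_samples if model(s))
    return hits / len(labeled_samples)

def should_promote(candidate, current, corpus):
    """Promote the candidate only if it does at least as well as the
    current model on the historical corpus (a simple backtest gate)."""
    return detection_rate(candidate, corpus) >= detection_rate(current, corpus)

# Toy example: samples are risk scores, "malware" if above a threshold
corpus = [60, 72, 85, 91, 55]       # scores of previously identified malware
current = lambda s: s > 70          # current model misses low-scoring samples
candidate = lambda s: s > 50        # candidate catches all of them
print(should_promote(candidate, current, corpus))  # → True
```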
Data collection
Data sent to Arctic Wolf's AI tools does not leave Arctic Wolf's AWS environment. For more information about how Arctic Wolf collects and uses data, see the following resources:
- The applicable product agreement
- Arctic Wolf Subprocessor List
- Privacy Notice for Customers
- How Aurora Endpoint Security products collect and use data
Customer data and customer context are isolated by tenant and are not shared across environments. Customer-specific data is not surfaced in generative AI outputs to another customer.
Impact and security
An impact assessment was performed on all AI tools implemented at Arctic Wolf. No third-party evaluations have been conducted on these tools. The use of AI at Arctic Wolf is considered minimal risk, according to the definitions outlined in the European Union AI Act.
Our ML algorithms classify files as malware, and your individual customer policy determines whether files that are classified as malware are blocked. The ML algorithms themselves do not make any automated decisions.
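The separation described above, model classification versus policy-driven blocking, can be illustrated with a minimal sketch. The function and policy keys are hypothetical, not the actual product logic.

```python
def handle_file(classify, file, policy):
    """Classification and enforcement are separate steps: the model only
    labels the file; the customer's policy decides whether to block it."""
    verdict = classify(file)                  # e.g. "malware" or "benign"
    if verdict == "malware" and policy.get("auto_block", True):
        return "blocked"
    if verdict == "malware":
        return "alert_only"                   # detected but not blocked
    return "allowed"

# Toy classifier: real models score file contents, not file names
classify = lambda f: "malware" if f.endswith(".bad") else "benign"
print(handle_file(classify, "invoice.bad", {"auto_block": False}))  # → alert_only
print(handle_file(classify, "invoice.bad", {"auto_block": True}))   # → blocked
```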
Aurora Security Assistant does not make any automated decisions, but might suggest steps or other actions that a human can choose to take.
Our internal generative AI tools make automated decisions to support triage workflows, based on defined logic. As part of the AI Trust Engine™, our AI agents have bounded autonomy and can only support response actions within their designated area of expertise. Agents can access only the data, tools, and actions required for their specific function, and those permissions are enforced centrally so agents cannot operate outside their intended role. Customer data is also kept logically separated, so that if an agent is investigating an incident for one customer, it cannot inadvertently access or respond with information from another customer. Humans retain authority over irreversible, high-impact, or low-confidence actions, and customer-facing escalations currently require human approval.
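The bounded-autonomy model described above can be sketched as a centrally enforced permission check. The role names, tools, and registry below are hypothetical illustrations; the AI Trust Engine's internals are not public.

```python
# Central registry of what each agent role may do. Permissions live in
# one place, so an agent cannot grant itself new capabilities.
PERMISSIONS = {
    "triage_agent": {"read_alerts", "enrich_indicators"},
    "endpoint_agent": {"read_alerts", "isolate_host"},
}

def invoke_tool(agent_role, tool, agent_tenant, data_tenant):
    """Allow a tool call only if the agent's role permits the tool and
    the data belongs to the tenant the agent is working for."""
    if tool not in PERMISSIONS.get(agent_role, set()):
        raise PermissionError(f"{agent_role} may not use {tool}")
    if agent_tenant != data_tenant:
        raise PermissionError("cross-tenant data access denied")
    return f"{tool} executed for {agent_tenant}"

print(invoke_tool("triage_agent", "read_alerts", "acme", "acme"))
# invoke_tool("triage_agent", "isolate_host", "acme", "acme")  → PermissionError
# invoke_tool("triage_agent", "read_alerts", "acme", "globex") → PermissionError
```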
Opt out
Our ML algorithms and internal AI tools are a core part of our service delivery, so they cannot be disabled. You can choose not to automatically block files that the ML algorithms classify as malware, but we recommend keeping this functionality enabled for optimal protection.
Aurora Security Assistant is an optional tool that you can choose not to use. Aurora Security Assistant in the Unified Portal is not enabled unless your organization joins the beta program using the Aurora Security Assistant Opt-In page.