Artificial intelligence
Artificial intelligence (AI) is an important component of service delivery at Arctic Wolf®.
We use generative AI to deliver tools that can answer general security questions and provide insights about your environment, including Aurora Security Assistant in the Unified Portal and the Aurora Endpoint Defense console.
We also use generative AI to deliver internal tools that assist the Arctic Wolf security team with investigating and triaging security events, and machine learning (ML) algorithms to improve malware detection.
Model training
Arctic Wolf uses ML algorithms that are trained on open-source data sets, commercial data sets, and certain executables collected from customers who opted in. The algorithms are tested in a malware and attack simulation lab, and then reviewed by a security researcher or analyst.
Our generative AI tools use foundational large language models (LLMs) that are trained by the developers of those LLMs. Arctic Wolf develops cybersecurity-specific skills that invoke the LLMs as needed. These LLMs are not trained on customer information.
Model testing
Predictive ML detections are validated through backtesting, which runs new models against previously identified malware. Our threat research teams also run purple team simulations to validate the efficacy of the models. Updated models run in parallel with current models for final validation before they are deployed in production.
The foundational LLMs used by Arctic Wolf's generative AI tools are tested for transparency and bias by the developers of those LLMs. Aurora Security Assistant in the Unified Portal collects feedback that the Arctic Wolf R&D team uses to make improvements, such as refinements to prompt engineering, but the LLM itself is not retrained.
Data collection
Data sent to Aurora Security Assistant does not leave Arctic Wolf's AWS environment. For more information about how Arctic Wolf collects and uses data, see these resources:
- The applicable product agreement
- Arctic Wolf Subprocessor List
- Privacy Notice for Customers
- How Aurora Endpoint Security products collect and use data
Impact and security
An impact assessment was performed on all AI tools that are implemented at Arctic Wolf. No third-party evaluations have been conducted on the tools used. The use of AI at Arctic Wolf is considered minimal risk, according to the definitions outlined in the European Union AI Act.
Our ML algorithms classify files as malware, and your individual customer policy determines whether files classified as malware are blocked. The ML algorithms themselves do not make any automated decisions.
Aurora Security Assistant in the Aurora Endpoint Defense console does not make any decisions, and instead explains alerts and other artifacts in natural language.
Aurora Security Assistant in the Unified Portal also does not make any automated decisions, but might suggest steps or other actions that a human can choose to take.
Arctic Wolf's internal generative AI tools make automated decisions to support triage workflows, based on defined logic.
Opt out
ML algorithms and Arctic Wolf's internal usage of AI tools are a core part of our service delivery, so they cannot be disabled. You can choose not to automatically block files that the ML algorithms classify as malware, but we recommend keeping this functionality enabled for optimal performance.
Aurora Security Assistant is an optional tool that you can choose not to use. Aurora Security Assistant in the Unified Portal is not enabled unless your organization joins the beta program using the Aurora Security Assistant Opt-In page.