What is Anvilogic?
Anvilogic is a multi-data platform SIEM that enables detection engineers and threat hunters to detect, hunt, and investigate seamlessly across disparate data lakes and tools without needing to centralize data, learn new languages, or deploy new agents.
Anvilogic helps enterprise security operations centers implement more accurate detections and hunt more efficiently and effectively with its low-code detection builder, automated workflows, and AI-powered insights for tuning, maintenance, and critical alert escalations. You can go from threats to detections in minutes, with thousands of curated detections available to deploy with a single click. The platform provides continuous SOC maturity scoring so you can assess and get a real-time view into your detection posture against your highest-priority threats.
Notable new features, enhancements, and bug fixes.
The Anvilogic platform releases continuously. This list is periodically updated with the latest functionality and changes.
March 13, 2025
This release includes the following enhancements:
Pipeline updates for the CrowdStrike EDR and Proofpoint vendor alerts, as well as push alerts.
March 6, 2025
This maintenance release includes various bug fixes.
February 27, 2025
This maintenance release includes various bug fixes.
February 20, 2025
This release provides support for integrating Databricks on AWS as a Beta feature for all customers.
February 13, 2025
This maintenance release includes the following enhancements:
If you don't have Splunk configured, EOI drilldown links are no longer active.
Updates to the algorithm for recommending rules in the Armory.
Bug fixes in the Azure and Databricks integrations.
February 6, 2025
This release includes enhancements to the Databricks integration, including creating a group and adding the service principal to the group, along with a variety of bug fixes.
January 30, 2025
This release includes the following enhancements:
Enhancements to the Azure UD integration.
Bug fixes and enhancements to the Databricks integration.
Enhancements to some tuning recommendations, including priority tuning for some threat identifiers.
January 23, 2025
This release enables you to configure rule enrichment macros used to customize the Enrich component in the Unified Detect builder.
January 16, 2025
This release includes the refactored workflow for creating threat identifiers. The new workflow integrates the Unified Detect rule builder for various data repositories.
January 9, 2025
This release enables you to quickly identify macros with available updates from the home page, list of macros, and macro details.
December 19, 2024
This release includes the following new features and enhancements:
Deobfuscator tool for Monte Copilot.
Updated Monte Copilot API endpoints to provide entity analysis in your investigations.
December 12, 2024
This release includes various bug fixes and enhancements related to data onboarding.
November 21, 2024
This release includes various bug fixes.
November 14, 2024
This release includes a new hunting insight for unusual IP location.
November 7, 2024
This release introduces the following enhancements:
Enhancements to Azure UD data feeds, the ability to configure the ADX cluster size, and the inclusion of Azure as a data logging platform in the first-time onboarding.
Enhancements to SSO SAML configuration:
Password changes and resets are prevented for SAML-enabled accounts.
Invitations are no longer sent to new users for SAML-enabled accounts.
Accounts with SAML group mapping enabled reflect the corresponding roles in Anvilogic, and the roles are also included in audit events.
Google Cloud Platform (GCP) logs Snowflake integration.
Previously, you could only view diffs for rules imported from the Armory. This release expands this capability so that you can also view diffs for custom imported rules.
October 24, 2024
This release introduces a new Snowflake integration to onboard the data in your Amazon S3 buckets and generate detections on that data.
October 17, 2024
An architectural adjustment so that threat scenarios run directly on the Anvilogic platform, rather than being deployed in your Snowflake environment. This change simplifies the management of threat scenarios, reducing the overhead involved in maintaining custom code for multiple data repositories. This adjustment also ensures minimal delays or data loss when gathering EOIs from various log repositories, leading to more effective threat detection.
September 26, 2024
This release introduces the following feature enhancements:
The threat scenario deployment workflow is updated so that threat scenarios are first added to the Workspace before they are deployed.
The Search and Unified Detect page is enhanced to support search across Azure data feeds and macros on the Anvilogic platform.
September 19, 2024
This release introduces Azure as a supported data logging platform.
August 22, 2024
This release includes the following enhancements:
The integrations workflow to get data sources into Snowflake is enhanced to provide self-managed pipeline options in the UI when available.
The QnA tool in Monte Copilot is enriched with information from the Anvilogic Armory so that it can now pull information about threat identifiers and threat scenarios, in addition to its existing capability of pulling data from Google searches and Anvilogic Forge Threat Reports.
August 8, 2024
This release makes Monte Copilot generally available under a licensing model.
This release also addresses a variety of bug fixes, including the following:
The ability for users to validate a rule via API.
The ability to sync feeds when an onboarding task is incomplete and pending on data feed sync from Snowflake.
July 29, 2024
This release includes the following enhancements:
MonteAI Copilot is enhanced with additional tools, such as IoC, which checks whether a URL or IP address is an indicator of compromise (IoC), and AnvilogicAllowlistProcessRegexGenerator, which helps you generate regex patterns for allowlisting benign processes.
The UI libraries are updated to provide an enhanced experience. In some cases, you may notice a slight difference in the look and feel of the page or component.
July 11, 2024
This release includes the following new features and enhancements:
On-demand sync for data feeds.
Additional event types are supported for the Lacework vendor alert integration.
The existing Snowflake Custom Data integration for Anvilogic-managed pipelines is replaced by separate Cribl Stream and Forward Events integrations.
June 27, 2024
This release provides the ability to create and manage your own techniques and sub-techniques outside of the MITRE ATT&CK framework.
June 13, 2024
The Threat Priorities page is updated so that when you are viewing your prioritized threat techniques, the default view is now a list of prioritized techniques. Previously, you saw a matrix view of your prioritized techniques by default. You can click List View or Matrix View to switch between the views.
May 30, 2024
This release includes the following features and enhancements:
Ability to auto accept tuning insights.
Upgrade to version 15.1 of the MITRE ATT&CK framework. This upgrade introduces additional data categories for alerts on the Anvilogic platform.
May 16, 2024
This release includes enhancements to the alert ingestion pipeline with machine learning-based enrichments and improved performance.
May 9, 2024
This maintenance release provides support for Microsoft Security Alerts and Incidents vendor alert integration.
May 2, 2024
This release includes the following features and enhancements:
(Beta) MonteAI Copilot, your SOC assistant trained on the common personas within the SOC to assist anyone in the SOC. MonteAI Copilot has access to the commonly used tools and data sets that enable these personas to perform their day-to-day activities.
(Beta) Auto Investigate automatically populates the Hypothesis and Resolution in hunting insights generated after May 2, 2024 to help you perform more efficient investigations.
The CrowdStrike FDR integration for self-managed pipelines is enhanced to support additional data types.
April 18, 2024
This release includes a variety of bug fixes, and the addition of the edit_hunting_insight_automation privileges to the Content Developer role.
April 4, 2024
This release provides Google Workspace Snowflake integration to get your admin, drive, and login events into Snowflake to generate detections on the Anvilogic platform.
March 21, 2024
This release introduces the beta version of the ability to create native Snowflake threat scenarios.
March 7, 2024
This is a maintenance release to address performance issues and includes several bug fixes.
February 22, 2024
This release includes back-end enhancements in the Unified Search area, along with a variety of bug fixes.
February 8, 2024
This release provides the following new features and enhancements:
A redesigned investigation experience, featuring a new timeline that makes it easier to pivot from the timeline and add EOIs and notes to the timeline.
The EOI Summary dashboard is moved under Detect in the navigation bar.
Saved investigations are now called cases, which can be managed and shared by your team of analysts.
The workflow for deploying trending topics and detection packs is updated to align with the threat scenario deployment workflow, where macro verification happens at the end of the workflow instead of at the beginning.
February 1, 2024
Introducing Ask MonteAI, enabling you to interact with the product documentation using MonteAI from any page on the Anvilogic platform.
January 25, 2024
This release provides the ability to create and manage your own platforms outside of the MITRE ATT&CK framework.
Threat identifiers created using Unified Detect now show the update icon when rule updates are available. Previously, update notifications were not available for threat identifiers created from Unified Detect.
New features and enhancements for the 5.x Anvilogic platform releases.
See What's New for a summary of the most recent releases and their new features and enhancements.
August 17, 2023
This release introduces health insights to notify you when detections fail to run.
August 10, 2023
This release provides the following new features and enhancements:
Ability to ingest custom raw data into Snowflake. Each field from the raw log is extracted as a table column in Snowflake.
Enhancements to the Hunt experience, including the following:
Ability to add multiple events of interest (EOIs) to evidence with a single step.
Improved performance, addressing issues related to timeouts and the maximum number of entries.
July 27, 2023
This release provides the following new features and enhancements:
A guided onboarding workflow so new customers can integrate the Anvilogic platform with Splunk, perform key actions, and start getting detections.
Added the following third-party vendor alert integrations:
GitHub Dependabot Alerts
Wiz.io Alerts
Ability to add to an allow list from the event viewer in Unified Detections.
Validation for threat scenario deployment to make sure that any dependent threat identifiers and their dependent macros are also available and deployed.
June 29, 2023
This release provides the following new features and enhancements:
A guided onboarding workflow so new customers can integrate the Anvilogic platform with Splunk or Snowflake, perform key actions, and start getting detections.
A scratchpad is added to the Unified Detections canvas so you can create custom SQL queries and save your own macros.
MITRE mapping for third-party vendor integration data is improved to consider the data categories mapped in your environment.
June 15, 2023
This release provides the following new features and enhancements:
Ability to ingest Google Security Command Center vendor alerts.
After you integrate the Anvilogic platform with Snowflake, you can view details of the integration, such as the connected Snowflake instance and search commands.
Improvements to Hunt and Investigate.
Usability enhancements and bug fixes.
June 2, 2023
This release provides the following new features and enhancements:
When you deploy a rule in a threat identifier, a new workflow validates and checks for dependent macros before the rule is deployed.
The Anvilogic platform provides recommended follow-up tasks upon completion of certain workflows, such as building and deploying content, maturity score updates, and investigations.
May 18, 2023
This release provides the following new features and enhancements:
Ability to ingest Tanium Cloud endpoint vendor alerts.
Usability enhancements and bug fixes.
May 4, 2023
This release adds the ability to ingest Microsoft Defender logs into Snowflake and enable Anvilogic detections.
April 20, 2023
This release includes the following new features and enhancements:
New Hunt pages to help you drill down and investigate your hunting insights, or start a new hunt from scratch.
Add evidence to any hunt to create a threat trail.
Generate a PDF report of any hunt.
Save and revisit your hunts at any time.
Ability to ingest AWS Cloudtrail logs into Snowflake and enable Anvilogic detections.
The tactics, techniques, and threat groups on the Anvilogic platform are updated to support the latest version of the MITRE ATT&CK framework.
This may impact your threat priorities, maturity score, and MITRE mappings within existing threat identifiers and threat scenarios. Any impacted threat scenarios are queued for deployment. Ensure that you deploy them if you do not have auto-deploy turned on.
April 6, 2023
This release includes the following new features and enhancements:
The home page is updated to highlight prioritized trending topics, personalized tasks and recommended content for the user who is logged in, and actionable insights.
Stylistic enhancements on the Maturity Score history page.
March 30, 2023
This release provides a unified searching capability so you can create threat identifiers across multiple connected data repositories.
After you log in, use the guided onboarding experience to define your company's threat profile.
Use the guided onboarding to define your company's threat profile and make the Anvilogic platform work according to your needs and priorities.
Anvilogic provides prioritized content recommendations based on the following factors:
Your threat priorities
Market and industry trends
Your trusted group activity
Popular search terms
Activity from organizations similar to you
Gather your organization’s specific threat priorities to help Anvilogic recommend use cases specific to your organization rather than generic recommendations based on external factors.
To build your company profile, provide the information listed in the table. This information helps to filter the MITRE techniques most applicable to you, so that the most relevant recommended content is generated.
Region: Select the geographical region in which your company operates. If you operate in multiple regions, select Global.
Industry: Select the industry vertical that best represents your company. You can select more than one industry.
Infrastructure: Select the infrastructure used within your organization. Select as many as apply to your organization.
As your organization matures over time, you can revisit and update your threat profile to accommodate changes to your infrastructure, including platforms, threat groups, techniques, and data categories.
After you define your threat profile, select your data repository and get data in.
High-level steps for downloading and installing the Anvilogic App for Splunk on Splunk Cloud Platform.
Perform the following tasks to download and install the Anvilogic App for Splunk on Splunk Cloud Platform:
Integrate Splunk with the Anvilogic platform using the Anvilogic App for Splunk.
The Anvilogic App for Splunk provides triage, allow list and suppressions management, and analytics used by the data feed and productivity scores on the maturity score pages.
You can also enable automated threat detection in the Anvilogic App for Splunk, which is required to generate tuning insights and some hunting insights.
Snowflake-only customers can get tuning insights without the Anvilogic App for Splunk.
If you are already using Splunk Enterprise or Splunk Cloud Platform, follow the instructions in the documentation to download and install the Anvilogic App for Splunk.
Next step
Select one of the following to continue:
If you don't have Splunk, and you want the capabilities provided by the Anvilogic App for Splunk, Anvilogic will provision a Splunk instance for you and manage the installation and upgrade of the Anvilogic App for Splunk.
Next step
After the Anvilogic platform is connected to a hosted Splunk instance, Review data feeds.
Log in for the first time and set your password on the Anvilogic platform.
Your welcome packet email from Anvilogic contains a link to log in to the Anvilogic platform for the first time. You must change your password the first time you log in to the Anvilogic platform.
Make sure you are a user with administrator privileges on the Anvilogic platform.
Click Set Password in the welcome packet email. You are directed to the set password page.
Enter the email address with which you have registered. This email address must match the email address that the welcome email was sent to.
Enter a password meeting the password requirements.
Re-enter the password for confirmation.
Review the Master service agreement and the privacy policy and click on the check box indicating your consent.
Click Submit.
After you log in, you will see the first screen of the guided onboarding.
Click Let's Start to begin.
Congratulations and welcome to Anvilogic!
This guide will help you log in, complete the guided onboarding to set threat priorities and integrate a data repository, get data in, and deploy detections.
The following flowchart summarizes the tasks you will complete to get started.
The Anvilogic platform is supported on the highest versions of Google Chrome and Mozilla Firefox.
If you're ready, Log in and set your password to get started with your Anvilogic onboarding.
Integrate the Anvilogic platform with your Splunk Enterprise or Splunk Cloud Platform instance.
After defining your company profile in the guided onboarding, select Splunk as the data logging platform.
New features and enhancements for the 6.x Anvilogic platform releases.
See What's New for a summary of the most recent releases and their new features and enhancements.
January 9, 2024
This release provides the following new features and enhancements:
Continuous assessment of data feeds to provide assurance to the SOC team that the underlying data is being logged, collected, and extracted as expected.
Ability to create custom threat groups.
Ability to automatically escalate hunting insights by type.
December 21, 2023
This release introduces the following new features and enhancements:
Ability for Snowflake users to create additional enrichments in your Unified Detect queries.
Ability to ingest Orca Security vendor alerts.
December 14, 2023
This release provides the ability to push ExtraHop vendor alerts directly to the Anvilogic platform.
November 30, 2023
This release provides bug fixes and performance enhancements.
November 16, 2023
This release provides enhanced health insight error details from MonteAI so that you can understand an error without knowing all the details about the error codes and error snippets.
November 2, 2023
This release provides the following new features and enhancements on list view pages such as the Data Feeds, Threat identifiers, Threat Scenarios, and Macros:
Searches are now applied to all content on the Anvilogic platform. Only filters apply to local content.
On each page, the top 100 results are listed instead of the top 10, making it easier to find results using Cmd-F on Macs or Ctrl-F on Windows.
Various performance and usability enhancements related to count results when applying filters and how data is fetched when searches are issued.
October 19, 2023
October 5, 2023
This release provides the following new features and enhancements:
Ability to ingest Crowdstrike IDP vendor alerts.
Ability to view the history of actions taken on any health insight.
September 21, 2023
This release introduces the ability to automatically deploy new recommended threat identifiers that meet a minimum score threshold and specific data categories.
September 7, 2023
This release introduces the ability to change the owner of any saved hunt on the Anvilogic platform.
August 25, 2023
This release provides the following new features and enhancements:
Productivity metrics in the maturity scoring algorithm take into account analyst activity along with health, tuning, and hunting insights.
Usability enhancements and bug fixes.
If you run into any issues, see for information about how you can contact us.
This release provides bug fixes, performance enhancements, and significant updates to the navigation in the and the in-product user guides.
High-level steps for downloading and installing the Anvilogic App for Splunk on Splunk Enterprise.
Perform the following tasks to download and install the Anvilogic App for Splunk on Splunk Enterprise:
Select the data repository where you store your logs.
After defining your company profile in the guided onboarding, select a data repository:
Follow the instructions for your data repository.
Verify the requirements on this page before you download and install the Anvilogic App for Splunk.
You can integrate the Anvilogic platform with Splunk Enterprise versions 9.0.x and 8.0 - 8.3.x.
Splunk Enterprise Security (ES) versions 5.0 - 7.0.x are supported.
Install the Anvilogic App for Splunk on your Splunk search head. The server where you install the Anvilogic App for Splunk must meet the following requirements:
If you have multiple Splunk Enterprise instances, install the Anvilogic App for Splunk in only one of those environments.
For performance considerations, review the following factors in your Splunk Enterprise deployment:
The number of concurrent users.
The number of concurrent searches.
The types of searches used.
When you deploy threat identifiers on the Anvilogic platform, saved searches are created in your Splunk deployment. You can use cron scheduler recommendations on the Anvilogic platform to manage the load on your Splunk deployment.
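The load-management idea behind those cron recommendations can be sketched with a small helper. This is an illustration only, not an Anvilogic or Splunk API: it spreads saved-search schedules across the hour so deployed detections don't all fire at minute 0.

```python
# Illustrative helper (not an Anvilogic API): stagger saved-search cron
# schedules so concurrent searches are spread evenly across the interval.
def staggered_cron(n_searches, interval_minutes=60):
    """Return one cron expression per search, offset to balance search load."""
    step = interval_minutes / n_searches
    return [f"{int(i * step)} * * * *" for i in range(n_searches)]

print(staggered_cron(4))
# → ['0 * * * *', '15 * * * *', '30 * * * *', '45 * * * *']
```

Four searches scheduled this way each run hourly, but fifteen minutes apart, so they never compete for the same scheduler slot.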
The server must be able to connect to over port 443. This is required to download Splunk code and rules metadata.
The server must be able to connect to over port 443.
The server must be able to connect to over port 443 to send events for third party vendor alert integrations.
See in the Splunk Enterprise Capacity Planning Manual.
Resource and hardware considerations for the Anvilogic App for Splunk match the recommendations for your Splunk Enterprise deployment. See in the Splunk Enterprise Capacity Planning Manual.
Verify the requirements on this page before you download and install the Anvilogic App for Splunk.
You can integrate the Anvilogic platform with Splunk Cloud Platform versions 8.0.x and higher. Splunk Enterprise Security (ES) versions: 5.0 - 7.0.x are supported.
If you are using the Splunk Cloud Platform Classic experience, you won't be able to accept tuning insights.
If you are installing the Anvilogic App for Splunk on Splunk Enterprise Security (ES) search heads in Splunk Cloud Platform, and you also have search heads that are not on Splunk ES, you must allow all IPs to send to the Splunk Cloud HTTP event collector (HEC) endpoint on port 443 since Splunk Cloud Platform does not assign static IPs to the Splunk Cloud Platform search heads.
If your environment includes Splunk ES running on Splunk Cloud Platform Victoria and Splunk Enterprise, the Anvilogic App for Splunk is installed in both environments. You must submit a support ticket with Splunk Support to remove the Anvilogic App for Splunk from one of those environments.
After verifying the requirements, Install the Anvilogic App for Splunk.
This page provides instructions for downloading the Anvilogic App for Splunk.
Perform the following steps to download the Anvilogic App for Splunk:
Click Login and log in with your Splunk account.
Type Anvilogic in the Search for apps field. Click on Anvilogic App for Splunk in the results.
Click Download.
If you don't have the permissions to download the app, you will see Download Restricted when you try to download the app.
If this happens, you must provide Anvilogic with your Splunk.com or Splunkbase username to satisfy Splunk's access control requirements. You must provide this username for each user who requires download access for the Anvilogic App for Splunk.
Perform the following tasks to find your Splunkbase username:
Click your user profile photo or avatar, then select My Profile.
Find your username at the top of the screen, such as kevin.hwang@anvilogic.com in the following example:
If needed, you can download the Anvilogic App for Splunk from the Anvilogic platform:
In the Anvilogic Splunk App field, click Download.
The downloaded file is an SPL (Splunk application package) file that can be installed in your Splunk environment.
See for more information about the differences between Splunk Cloud Platform Classic Experience and Splunk Cloud Platform Victoria Experience.
This setting requires an HEC token for authentication and is often used to send data to Splunk Cloud Platform from multiple devices with dynamic IPs, such as mobile devices. See in the Splunk Cloud Platform Admin Config Service Manual for instructions.
Access .
Make sure you are logged in to .
In the Anvilogic platform, click Settings.
Create the required custom indexes on the Splunk platform.
The Anvilogic App for Splunk requires custom Splunk indexes used by the HTTP Event Collector (HEC) collector command for auditing, metrics, and reporting:
Create an index named <your-org-name>_anvilogic for storing Anvilogic rule output and auditing the app. See in the Splunk Enterprise Managing Indexers and Clusters of Indexers manual.
Create a metrics index named <your-org-name>_anvilogic_metrics for storing the output of baselining rules. See in the Splunk Enterprise Managing Indexers and Clusters of Indexers manual.
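If you manage index configuration in files rather than Splunk Web, the two indexes above might look like the following sketch in indexes.conf. The organization name acme and the storage paths are placeholders; adjust retention and path settings to match your deployment.

```ini
# Hypothetical indexes.conf stanzas -- replace "acme" with your organization name.

# Event index for Anvilogic rule output and app auditing
[acme_anvilogic]
homePath   = $SPLUNK_DB/acme_anvilogic/db
coldPath   = $SPLUNK_DB/acme_anvilogic/colddb
thawedPath = $SPLUNK_DB/acme_anvilogic/thaweddb

# Metrics index for the output of baselining rules
[acme_anvilogic_metrics]
homePath   = $SPLUNK_DB/acme_anvilogic_metrics/db
coldPath   = $SPLUNK_DB/acme_anvilogic_metrics/colddb
thawedPath = $SPLUNK_DB/acme_anvilogic_metrics/thaweddb
datatype   = metric
```

The `datatype = metric` setting is what distinguishes the metrics index from the regular event index.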
After you install the Anvilogic App for Splunk, you must configure the app to connect to the Anvilogic platform.
Perform the following steps to complete your initial configuration and connect the Anvilogic App for Splunk to the Anvilogic platform:
In Splunk Web, select Apps > Anvilogic to access the Anvilogic App for Splunk.
If this is your first time installing the Anvilogic App for Splunk, you are prompted to set up the app. Click Continue to app setup page. To access the app configuration settings after the initial configuration, go to Settings > App Configuration.
Complete the general settings.
On the Anvilogic platform, select Settings > Generate API Key. Copy the generated API key.
Navigate to the Anvilogic App for Splunk.
Select Settings > App Configuration.
Click and expand the General Settings section.
Click and expand the API Settings section.
Paste the API key you copied earlier into the API Key field.
If your network requires a proxy to connect to Anvilogic, configure the proxy settings in the Anvilogic App for Splunk configuration page.
Click Save.
In your Splunk instance, run the following Splunk search to verify your app's connection with the Anvilogic platform:
You can view your connection status along with other system health information in the Health Monitoring dashboard in the Anvilogic App for Splunk.
Upload your existing detections using a CSV file.
If you have existing detections, you can export them to a CSV file, then import the CSV into Anvilogic. Doing this helps you get an idea of what your MITRE coverage looks like, so you can address and strengthen the areas where you need additional coverage.
Review the category mappings and quality of your data feeds.
Your data feeds are automatically categorized and synchronized to the Anvilogic platform every 7 days. When you add a data feed, you can view it on the Data Feeds page within 7 days.
To change or add categories to a data feed:
Click on the name of the data feed.
Click Tags.
In the Data Categories field, enter the data categories you want associated with this data feed.
Click Update when you are finished.
An initial feed quality assessment is made by the Anvilogic platform for any new data feed added to the platform.
Perform your own evaluation of the timeliness, logging level, field extraction, and monitoring scope for each data feed so you can assign a proper data feed quality. Feed quality is important because only Good quality feeds are used to generate recommendations on the Anvilogic platform.
To manually change the quality of a data feed:
Click on the name of the data feed.
Select one of the qualities from the Feed Quality dropdown.
Click Update when you are finished.
The CSV file must have the title, description, and search of the existing detection. See for instructions to import the CSV file into Anvilogic. This document also describes how to properly format the CSV file when you create it.
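As an illustration of the required columns, a minimal detections CSV could be generated like this. The sample detection row is hypothetical, and the exact column layout should be taken from the linked formatting instructions; this sketch only assumes title, description, and search columns as stated above.

```python
import csv
import io

# Hypothetical existing detection -- illustrative values only.
detections = [
    {
        "title": "Suspicious PowerShell Encoded Command",
        "description": "Detects powershell.exe launched with an encoded command.",
        "search": 'index=endpoint process="powershell.exe" cmdline="*-enc*"',
    },
]

# Write the CSV with the three required columns.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "description", "search"])
writer.writeheader()
writer.writerows(detections)
print(buf.getvalue())
```

Using the csv module (rather than string concatenation) ensures search strings containing commas or quotes are escaped correctly.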
Verify that the category of your data feeds matches what you expect, as this affects your MITRE coverage. Select Maturity Score > Data Feeds from the navigation bar, then review the categories for each data feed:
Select Maturity Score > Data Feeds from the navigation bar, then review the quality for each data feed:
Auto-computed feed qualities are available for Windows event logs in Splunk. See .
Assign the avl_admin role to your admin users.
avl_admin
avl_senior_developer
avl_developer
avl_senior_triage
avl_triage
avl_readonly
You can customize the avl_senior_developer, avl_developer, avl_senior_triage, and avl_triage roles. The avl_admin and avl_readonly roles can't be modified.
For example, perform the following tasks to customize the capabilities allowed or restricted by the AVL Senior Developer role:
In the Anvilogic App for Splunk, select Settings > App Configuration.
Click User Settings to expand the section.
Click Customize AVL Senior Developer Role to expand the section for that role.
Deselect any capabilities you want to remove for this role, or select a capability to add it to the role.
Click Save.
The Anvilogic App for Splunk provides the privileges listed below, grouped by category. You can customize the privileges enabled for each role as desired.
Allowlist privileges:
avl_add_al_global_entry
avl_remove_al_global_entry
avl_modify_al_global_entry
avl_manage_rule_al
avl_manage_global_al
Triage privileges:
avl_change_first_alert_status
avl_change_all_alert_status
avl_change_alert_status_to_new
avl_bulk_alert_status
avl_add_observation
avl_remove_observation
avl_rate_rule
avl_add_rule_feedback
avl_create_case
avl_suppress_alert
avl_suppress_global_alert
Content deployment privileges:
avl_deploy_content
avl_write_hec
avl_post_rest_platform
avl_post_rest
avl_get_rest
avl_rest_config_access_get
avl_rest_config_access_post
Get your data into Snowflake, where it can be used to generate detections on the Anvilogic platform.
This document assumes you have completed the guided onboarding:
You have defined your company threat profile.
You have integrated Snowflake as your data repository.
Before you continue, make sure you are a user with administrator privileges on the Anvilogic platform.
The following flowchart summarizes the process for getting your data into Snowflake.
Pick one of the following next steps, depending on your infrastructure:
Use Splunk Web to assign the avl_admin role to app administrators. See in the Securing Splunk Enterprise manual for instructions.
The following roles are available in the Anvilogic App for Splunk. See for a summary of the privileges provided by each role.
Before you begin, make sure you read . This document contains important information for optimizing your data onboarding for the best performance.
After you review the best practices, see for supported data sources and onboarding instructions for each data source.
See for a list of supported data sources. Click on the name of a data source and follow the instructions to get the data into Snowflake. Anvilogic manages the pipelines for these data sources once you have the data source integrated.
If you have a data source that is not listed here, use your own pipeline to get your data in. Cribl Stream is the recommended way to get your data sources into Snowflake. If you don't use Cribl Stream, you can use your own pipelines to Snowflake.
Create a HEC token that can write to the custom indexes you just created.
The Anvilogic App for Splunk contains a custom Splunk command that uses the HTTP Event Collector (HEC) to send results from threat identifiers into the events of interest index. This command is critical to the framework's ability to store events for advanced correlation, and it manages auditing on all objects.
Perform the following steps to create inputs on a single search head. Some steps may vary if you are managing a search head cluster.
In Splunk Web, select Settings > Data inputs.
Select HTTP Event Collector > New Token.
Fill in relevant information:
Specify a name of avl_hec_token.
Leave the Source Name Override blank.
Enter HEC Input for Anvilogic Detection Framework as the description.
Leave the Output Group as none.
Leave the Enable indexer acknowledgement box unchecked.
Click Next to configure the input settings:
Source type = Automatic
App Context = Anvilogic (anvilogic)
Allowed Indexes = anvilogic and anvilogic_metrics
Default Index = anvilogic
Click Review, then click Submit.
Copy the token value.
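Once the token is created, you can sanity-check it from the command line. The sketch below is hedged: the host name and token value are placeholders, and the command is echoed rather than executed so you can review it before sending a real event. The endpoint path and `Authorization: Splunk` header are the standard Splunk HEC event API on port 8088.

```shell
# Hypothetical host and token -- replace with your search head and the
# token value you copied in the previous step.
HEC_HOST="splunk.example.com"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# A sample event destined for the anvilogic index.
PAYLOAD='{"event": "hec smoke test", "sourcetype": "avl:test", "index": "anvilogic"}'

# Build the curl invocation against the standard HEC event endpoint.
CMD="curl -k https://${HEC_HOST}:8088/services/collector/event -H 'Authorization: Splunk ${HEC_TOKEN}' -d '${PAYLOAD}'"

# Echo the command for review; run it manually to send the test event.
echo "$CMD"
```

A successful request to a live HEC endpoint returns a JSON body of the form `{"text":"Success","code":0}`.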
Perform the following steps to update the global settings and enable the tokens:
In Splunk Web, select Settings > Data inputs.
Select HTTP Event Collector > Global Settings.
Ensure the following settings are enabled:
All Tokens = Enabled
Enable SSL = Checked
HTTP Port Number = 8088 (default)
More information on the HEC and how to set it up can be found in the Splunk Enterprise Getting Data In manual.
Anvilogic implementation with Azure (Data Explorer, Log Analytics, and Fabric).
Below is the generic architecture diagram showing how Anvilogic works on top of Azure.
Currently, we do not support querying a Log Analytics Workspace in a different tenant than the Anvilogic Azure Data Explorer Cluster.
We can only install the Anvilogic integration in one Azure tenant.
We can only support querying Log Analytics (LA) if it's within the same Azure tenant that Anvilogic is installed in.
We can support querying ADX across multiple tenants, even if the ADX clusters are in different tenants than Anvilogic's resource group.
Diagram:
PDF Download:
Review and deploy a variety of detections on the Anvilogic platform.
The Anvilogic platform generates recommended content for you to deploy based on your threat priorities and good quality data feeds.
The following table defines additional types of recommended content on the Anvilogic platform and how you can deploy them.
The following infrastructure will be created in the resource group you create for Anvilogic. We use an to deploy the infrastructure.
Visit , then type in "azure data explorer" under products.
Azure Data Explorer offers:
The will then be able to query the KQL database.
We connect to your ADX cluster and then use the Microsoft to initiate a query to any other LA, ADX cluster, or Fabric workspace that our app service principal has access to.
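Cross-cluster access of this kind uses KQL's `cluster()` and `database()` functions. A hedged sketch follows; the cluster URI, database, and table names are placeholders, not values from your deployment:

```kql
// Illustrative only -- cluster URI, database, and table are placeholders.
cluster('othercluster.region.kusto.windows.net')
    .database('SecurityLogs')
    .SigninEvents
| where TimeGenerated > ago(1h)
| take 10
```

The calling service principal must be granted viewer access on the target database for the query to succeed.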
You can view recommended content on the Home page and in the Armory, which shows you all available detections not yet deployed in your system. See .
Perform to set up user access and authentication.
See for instructions on how to add users who can access the Anvilogic platform. When you add a new user, you assign them roles which grant certain privileges to the user when they access the platform. See for a full list of platform roles.
You can configure additional authentication settings for access to the Anvilogic platform, such as multi-factor authentication (MFA) or single sign-on (SSO). See and for more information.
Threat identifiers
Recommended threat identifiers can be viewed on the Home page and in the Armory. See for an example of how to deploy a recommended threat identifier from the Home page.
Trending topics
Trending topics are in-product versions of the Forge Threat Detection Report emails sent to existing customers. Trending topics can be found on the Home page and the Armory. See for an example of how to deploy all the content in a trending topic.
Detection packs
Detection packs are collections of threat identifiers, threat scenarios, and macros that address a specific security issue. Detection packs can be viewed in the Armory. See for an example of how to deploy all the content in a detection pack.
The process to get the Anvilogic App for Splunk differs depending on whether you are using Splunk Cloud Platform Classic Experience or Splunk Cloud Platform Victoria Experience.
File a service ticket to have Splunk install the Anvilogic App for Splunk for you.
Follow the instructions in the Splunk Cloud Platform Admin Manual to install the Anvilogic App for Splunk.
Yes, if you have a streaming tool (e.g., Cribl, Apache NiFi) you can send custom data sources directly to Anvilogic's ingestion pipeline. Anvilogic also has some out-of-the-box support for raw data ingestion sources in our integrations armory.
Install the Anvilogic App for Splunk in your Splunk Enterprise environment.
Follow the instructions in the Splunk documentation to install the Anvilogic App for Splunk in your environment:
You must restart Splunk to complete the installation.
If you have a distributed Splunk Enterprise deployment, use the deployer to install the app on your search heads. See in the Splunk Supported Add-ons manual.
If you have a single-instance Splunk Enterprise deployment, install the app on the search head. See in the Splunk Supported Add-ons manual.
Thank you for taking a free trial of Anvilogic. During this 30-day trial you will be given access to a shared Anvilogic platform account. While this account does not contain any production data or connect to your security analytics environment, you can use this free trial environment to get a feel for a few of the available features and workflows in Anvilogic.
A pilot or product deployment will have far more capabilities and features, but using this environment will enable you to experience:
Anvilogic’s detection engineering workflow, including adding detections to the workspace, creating tasks, and editing content
Security detection content provided by Anvilogic’s purple-team-as-a-service (the Forge) in the Anvilogic Armory
Building your own custom high-fidelity detections using Anvilogic’s no-code scenario builder
Reviewing MITRE ATT&CK coverage and gaps
If you don’t have access to Free Trial,
ML-driven for streamlining maintenance
ML- and human-driven for high fidelity detections you might otherwise never detect
To test all of the features and benefits of Anvilogic, ask for a no-cost and no-obligation full production pilot. In addition to the above features, you will also get to test out:
and security vendor alert integration for a truly integrated view of detections regardless of where your data lives
and testing of Anvilogic’s detection content in your environment on your data
Anvilogic and 3rd-party workflow integration
The process of walking through the interface is described in step-by-step detail throughout the rest of this guide. Alternatively, if you'd rather have a video walk-through that you can work along with, videos are embedded throughout the following pages.
Problem: Detection engineering using current processes and tools is often slow, manual, and requires multiple people and systems to accomplish and track. Prior to Anvilogic, our customers said a single detection could take weeks to operationalize. This results in a time lag between threats in the wild and your ability to detect these threats and deliver high-quality, correlated alerts to the SOC, without generating noisy false positives.
Anvilogic Solution: We provide customers with both high-quality, correlated detection content and an integrated, automated platform to research, test, deploy, and maintain all of your detection content regardless of the data platforms and security tools in the environment.
The heart of the Anvilogic platform is to enable detection engineers to go from threats to detections in minutes, instead of days or weeks. In this section we will deploy a recommended detection in just a few clicks. In a full pilot, this would actually create a scheduled detection search against your data to look for suspicious activity. Let’s start in the main armory view (which will be covered more in depth in the next section).
From the main navigation on the left, hover over and click on "Armory."
Scroll down if necessary to see the “recommended detections” list. Click "See all" on the right, then click the Threat Identifiers button on the top. Choose any one of the recommended detections and click on its name. This will take you into a detailed view of the threat identifier use case.
Within this view you will see general information about the detection near the top including the name, data domain, MITRE ATT&CK techniques, killchain mappings, and a description of the detection. You will also see a list of the individual detection rules within the threat identifier use case. Every threat identifier will have 1 or more rules, where each rule is mapped to a particular combination of data repository and data category. There are expand/collapse arrows on the left side of each rule, which you can press to hide or show the details of the rule.
Notice that at least one of these rules will have a star icon on the left side near the rule number. This indicates that this particular rule is recommended for your environment based on the good quality data in your data stores. If it isn't already expanded, expand this rule by clicking on the expansion arrow on the left.
This expanded view of the rule contains summary data, the rule logic (including multiple versions of it if there is more than one), and tabs to see things such as threat examples, data source validator searches, risk scores, performance analytics, custom and standard tags, and default triage steps. Much of this metadata is used to enrich the alerts and warning signals that fire from this detection. You can also take a look at the rule logic, which contains the detection patterns, data manipulations, as well as many macros used to retrieve data, normalize it, and enrich it. At the end of the logic, a description of the event is written and all the important fields are written to the events of interest index for further correlation.
Since we found this detection in the armory, in order to fully deploy it we will complete 2 basic steps: add it to our workspace as a local copy, then deploy it to our detection platform.
First, we need to make a local copy of it from the global copy in the Anvilogic armory, which we do by adding it to our workspace. Click on the blue “+ Workspace” button in the upper right corner. This will bring up the add to workspace dialog.
From here, you can check or uncheck additional boxes for different rules (recommended ones will be checked). You can and should also create an associated task in the lower part of the dialog (the default). You can change the status and priority or add a comment if you like, but leave the assignee set to your user name. Click the Add button to continue. You will then be taken to the local copy of the threat identifier in your workspace based on the armory copy.
You can click on the Edit button in the upper right corner if you want to make any changes to the rule at this point. In a full Anvilogic pilot you would be able to click a “Test” button here as well to work with and test the detection logic in your production environment before deploying the rule, though this feature is not available in our test drive environment.
Before deploying the rule, however, let’s take a look at the task you created in the previous step. From the main navigation on the left hover and then click on “Tasks.” This brings up the task view.
From here you should see the task to deploy the threat identifier detection you just created when you added it to your workspace. If you click on the Task ID you will see the details and history of the task itself.
To deploy the detection and get back to where we were previously, in the column entitled “Related Content ID #s” click on the rule ID (AVL_RXXXXX) on the right side of that column. You will then see the rule again with the Deploy button ready to go.
Click on the deploy button. This will bring up the rule deployment dialog. You will see that it offers some information as well as dependency checks (which you can ignore in this test drive).
You will notice that Anvilogic automatically recommends a cron expression that optimizes the best time to run this detection logic based on the current workload on the detection engine. You can drill into this further if you like by clicking on the “Suggest Optimal Schedules” link, but note that you will need to be in a pilot environment to see any meaningful data here. Now click the “Deploy” button and you will briefly see a message that the rule has been queued for deployment. In a full pilot environment, this could actually fully deploy the detection rule, or you can add another layer of testing and approval within the universal search engine platform for additional separation of duties.
You have now gone from threat to detection in minutes, without having to write a single line of code and all from a single, integrated platform that supports full auditing and separation of duties. Feel free to repeat this process for other detections that interest you, or even for trending topics and scenario-based detections.
This page is designed to help customers leverage the Forward Events integration within their Anvilogic account for FluentBit.
Anvilogic account
Snowflake data repository connected to your Anvilogic account
Anvilogic will provide an S3 bucket and the corresponding access keys/IDs (note these change for each integration) when you create a forward events integration in your Anvilogic deployment.
Create a credential file on the machine that Fluent Bit can read from, for example /home/<username>/creds. Inside the file, paste the following configuration with your specific access key/ID.
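The credential file uses the standard AWS shared-credentials format. The values below are placeholders for the access key ID and secret key that Anvilogic provides for the integration:

```ini
[default]
aws_access_key_id     = <access key id from Anvilogic>
aws_secret_access_key = <secret access key from Anvilogic>
```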
Since our credentials are now in the /home/<username>/creds file, we need to configure the service config file for Fluent Bit and set the path to this credential file (see image for reference). To do that, open your favorite text editor, edit the fluent-bit.service file located at /usr/lib/systemd/system/fluent-bit.service, and add the following line to the [Service] section:
Environment="AWS_SHARED_CREDENTIALS_FILE=/home/<username>/creds"
Then run the following commands in a terminal window
sudo systemctl daemon-reload
sudo systemctl start fluent-bit
Next we need to configure Fluent Bit to read our logs and send them to S3. In this example, we will be reading apache2 access logs and sending them to S3.
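As an illustration, a minimal fluent-bit.conf for this scenario might look like the following. The bucket name and region are assumptions; substitute the bucket and region provided by your Anvilogic forward events integration. The apache2 parser ships with Fluent Bit's default parsers.conf.

```ini
[SERVICE]
    flush        5
    parsers_file parsers.conf

[INPUT]
    name   tail
    path   /var/log/apache2/access.log
    tag    apache.access
    parser apache2

[OUTPUT]
    name             s3
    match            apache.*
    # Assumed placeholders -- use the bucket and region from your
    # forward events integration.
    bucket           <anvilogic-provided-bucket>
    region           us-east-1
    use_put_object   On
    total_file_size  50M
    upload_timeout   60s
    store_dir        /tmp/fluent-bit/s3
```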
Once you have pasted the above config into your fluent-bit.conf file (typically located at /etc/fluent-bit/fluent-bit.conf), restart the Fluent Bit service: sudo systemctl restart fluent-bit
NOTE: You can also edit or add your own custom parsers for logs by editing the parsers.conf file in /etc/fluent-bit/.
You can validate that your config is working by heading to /tmp/fluent-bit/s3/ and looking inside that folder.
You can now confirm that data has landed in your Snowflake account.
Please update the input section of this example config to fit your exact needs.
Integrate the Anvilogic platform with Snowflake.
After defining your company profile in the guided onboarding, select Snowflake as the data logging platform.
You must have admin privileges in Snowflake in order to complete the integration.
Perform the following steps to complete the integration with Snowflake:
Input your Snowflake account identifier to establish a connection between your Snowflake instance and the Anvilogic platform.
Click Copy Code, then click Go to Snowflake to go to your Snowflake instance and run the copied SQL commands. This set of SQL commands creates the necessary Snowflake components, the anvilogic_service Snowflake user used by the Anvilogic platform, and assigns the necessary permissions to the anvilogic_admin role for the anvilogic_service user.
Perform the following tasks in your Snowflake instance:
Open a new worksheet.
Change the role from PUBLIC to ACCOUNTADMIN.
Paste the copied SQL commands into the new worksheet.
Click the All Queries checkbox to run all the commands.
Click Run.
Look for the Statement executed successfully message.
Return to the Anvilogic platform, then click Next.
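The exact SQL is generated for you by the platform and copied in the previous step. As a rough, hedged sketch, the commands are of the following shape; every object name other than anvilogic_service and anvilogic_admin is illustrative, and you should run the copied script rather than this block:

```sql
-- Illustrative sketch only; run the SQL copied from the Anvilogic
-- platform, not this block.
USE ROLE ACCOUNTADMIN;

-- Role and service user that the Anvilogic platform connects as.
CREATE ROLE IF NOT EXISTS anvilogic_admin;
CREATE USER IF NOT EXISTS anvilogic_service
  DEFAULT_ROLE = anvilogic_admin;

-- Example grants; the copied script assigns the precise permissions.
GRANT ROLE anvilogic_admin TO USER anvilogic_service;
GRANT CREATE DATABASE ON ACCOUNT TO ROLE anvilogic_admin;
```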
Click Copy Code, then click Go to Snowflake to go to your Snowflake instance and run the copied SQL commands. This set of SQL commands creates the S3 storage integration and allows access to the anvilogic_service user so that a connection to your managed S3 bucket where Snowflake retrieves the data can be made.
Perform the following tasks in your Snowflake instance:
Open a new worksheet.
Change the role from PUBLIC to ACCOUNTADMIN.
Paste the copied SQL commands into the new worksheet.
Click the All Queries checkbox to run all the commands.
Click Run.
Look for the Statement executed successfully message.
Return to the Anvilogic platform, then click Add.
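Again, run the SQL copied from the platform. For orientation, a Snowflake S3 storage integration is created with statements of roughly this shape; the integration name, role ARN, and bucket location below are assumed placeholders:

```sql
-- Illustrative sketch only; run the SQL copied from the Anvilogic
-- platform. Names and locations here are assumptions.
CREATE STORAGE INTEGRATION anvilogic_s3_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = '<role arn supplied by Anvilogic>'
  STORAGE_ALLOWED_LOCATIONS = ('s3://<anvilogic-managed-bucket>/');

-- Allow the Anvilogic role to use the integration.
GRANT USAGE ON INTEGRATION anvilogic_s3_integration TO ROLE anvilogic_admin;
```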
After you have defined your company's threat profile and connected Snowflake as a data repository, it's time to Get data into Snowflake.
When you are signed in you will be taken to the Anvilogic Platform home page.
This page provides you with an overview of your current maturity score, MITRE ATT&CK coverage, detection summary, use cases in your workspace, and recommendations for additional detections to deploy. Feel free to click around.
Please note that some items in the free trial are intentionally disabled as they depend on integration with your security data lake. In a full Anvilogic pilot, you will have full functionality across all features. The intent of the free trial is to give you a taste of the capabilities of the Anvilogic platform without any configuration whatsoever. The rest of this guide will walk you through some of the common activities our customers use the Anvilogic platform for.
Clicking the 3 bars in the upper left of the interface, or simply hovering over the icons on the left side will bring up the navigation menu. From here you can go back to the home page, or access many of the other areas of the platform. Feel free to explore the items here, but the rest of this guide will walk you through several of the key features enabled in this free trial environment.
This page is designed to help customers leverage the Forward Events integration within their Anvilogic account for FluentBit.
Anvilogic account
Snowflake data repository connected to your Anvilogic account
Anvilogic will provide an S3 bucket and the corresponding access keys/IDs (note these change for each integration) when you create a forward events integration in your Anvilogic deployment.
Create a credential file on the machine that Fluent Bit can read from, for example /home/<username>/creds. Inside the file, paste the following configuration with your specific access key/ID.
Since our credentials are now in the /home/<username>/creds file, we need to configure the service config file for Fluent Bit and set the path to this credential file (see image for reference). To do that, open your favorite text editor, edit the fluent-bit.service file located at /usr/lib/systemd/system/fluent-bit.service, and add the following line to the [Service] section:
Environment="AWS_SHARED_CREDENTIALS_FILE=/home/<username>/creds"
Then run the following commands in a terminal window
sudo systemctl daemon-reload
sudo systemctl start fluent-bit
Next we need to configure Fluent Bit to read our logs and send them to S3. In this example, we will be receiving logs via Syslog and sending them to S3.
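As an illustration, a minimal fluent-bit.conf for a Syslog listener might look like the following. The bucket name, region, and listener port are assumptions; substitute the bucket and region provided by your Anvilogic forward events integration. The syslog-rfc3164 parser ships with Fluent Bit's default parsers.conf.

```ini
[SERVICE]
    flush        5
    parsers_file parsers.conf

[INPUT]
    name   syslog
    mode   udp
    listen 0.0.0.0
    port   5140
    parser syslog-rfc3164
    tag    syslog.messages

[OUTPUT]
    name             s3
    match            syslog.*
    # Assumed placeholders -- use the bucket and region from your
    # forward events integration.
    bucket           <anvilogic-provided-bucket>
    region           us-east-1
    use_put_object   On
    total_file_size  50M
    upload_timeout   60s
    store_dir        /tmp/fluent-bit/s3
```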
Once you have pasted the above config into your fluent-bit.conf file (typically located at /etc/fluent-bit/fluent-bit.conf), restart the Fluent Bit service: sudo systemctl restart fluent-bit
NOTE: You can also edit or add your own custom parsers for logs by editing the parsers.conf file in /etc/fluent-bit/.
You can validate that your config is working by heading to /tmp/fluent-bit/s3/ and looking inside that folder.
You can now confirm that data has landed in your Snowflake account.
Please update the input section of this example config to fit your exact needs.
You should have received an email from Anvilogic Admin with a link to your login credentials. Click on the “Set Password” button or alternative link to be taken to the Anvilogic secure portal, where you can set your password. From there you will be logged in automatically. For future access, simply navigate to and use these credentials. Note that your username is your registered business email address.
This page is designed to help customers leverage the Forward Events integration within their Anvilogic account for FluentBit.
Anvilogic account
Snowflake data repository connected to your Anvilogic account
Anvilogic will provide an S3 bucket and the corresponding access keys/IDs (note these change for each integration) when you create a forward events integration in your Anvilogic deployment.
Following the steps of the AWS CLI install, once you have completed the installation, run aws configure and paste in the access key and ID provided. Once this is completed, validate that the credentials file has been created, usually at C:\Users\YourUsername\.aws\credentials.
Once that has been validated, we need to create a system variable so that Fluent Bit can read and use these credentials. To do so:
Open the Start Menu and search for “Environment Variables.”
Select Edit the system environment variables.
In the System Properties window, click the Environment Variables button.
Under System variables, click New.
Enter the following:
Variable name: AWS_SHARED_CREDENTIALS_FILE
Variable value: C:\Users\YourUsername\.aws\credentials
Next we need to configure Fluent Bit to read our logs and send them to S3. In this example, we will be ingesting the Windows event logs. You can change which channels are collected by simply adding or removing them.
Please note, the bucket will be the bucket name/path.
This could mean that it is sdi_customer_data-1 or -2 or -3.
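As an illustration, a minimal fluent-bit.conf for Windows event logs might look like the following. The bucket name, region, and local store directory are assumptions; use the exact bucket name/path shown in your integration, and adjust the channel list as needed.

```ini
[SERVICE]
    flush        5
    parsers_file parsers.conf

[INPUT]
    name         winlog
    channels     Security,System,Application
    interval_sec 1

[OUTPUT]
    name             s3
    match            *
    # Assumed placeholders -- use the exact bucket name/path from your
    # integration (for example with a -1, -2, or -3 suffix).
    bucket           <anvilogic-provided-bucket>
    region           us-east-1
    use_put_object   On
    total_file_size  50M
    upload_timeout   60s
    store_dir        C:\fluent-bit\s3
```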
Once you have pasted the above config into your fluent-bit.conf file (typically located at C:\Program Files\fluent-bit\conf), restart the Fluent Bit application.
NOTE: You can also edit or add your own custom parsers for logs by editing the parsers.conf file in the same conf directory.
You can now confirm that data has landed in your Snowflake account.
Please update the input section of this example config to fit your exact needs.
Anvilogic implementation with Splunk & Snowflake.
Quickly create SQL queries using the Anvilogic low-code SQL builder for Snowflake
Problem: Detection engineering today mostly relies on simple patterns or string-based detection logic. These types of detections usually result in a lot of downstream noise in the form of false positive alerts delivered to the SOC triage team. This creates alert fatigue and added expense, which ultimately results in a weakened security posture.
Anvilogic Solution: Anvilogic’s approach to alerting involves a 2-tiered method of first collecting warning signals (like traditional pattern based detections looking for suspicious activity), and then correlating those warning signals into risk-based or scenario-based detections. This allows for a much higher fidelity of actionable alert to be delivered to the SOC triage and IR teams without all the noisy false positives. We provide out-of-the-box threat scenarios in our Armory, as well as give customers the ability to create their own scenarios without having to write a single line of detection code.
In a previous section we deployed a threat identifier. In most cases you would configure that threat identifier to collect warning signals as “events of interest” in an Anvilogic data store, but not necessarily generate an actionable alert. In order to ensure that the SOC triage team receives high quality alerts without a lot of noise, Anvilogic customers take advantage of our threat scenario detections. These can be used for risk-based alerting as well as more sophisticated and extremely flexible scenario-based detections, modeled on real-world behavioral attack patterns seen in the wild, or even based on custom patterns you can create in Anvilogic without writing a single line of code.
Let’s start by looking at an out-of-the-box threat scenario provided by Anvilogic. From the main navigation on the left hover and then click on “Armory,” taking you back to the main armory menu. On the top, click on the number below the “Threat Scenarios” label to see a filterable list of provided threat scenarios in the armory. Feel free to create a filter, drill into any of these, and explore the content.
Now click into the search box at the top of the screen, type “malicious file delivering malware,” and click the matching threat scenario at the top of the results. It will take you to the detailed view of the threat scenario.
You can explore the details of this threat scenario. You can see metadata about the kill chain phases and threat groups that use this scenario at the top, along with an expandable description. If you scroll down you will see a graphical representation of the logic and other metadata fields used for enrichment.
This particular scenario consists of 3 stages, 2 of which have multiple groups of events that could trigger it. The entire scenario is correlated based on a common host. It will fire when threat identifier rules matching group 1 criteria are followed by group 2 and group 3 identifiers for a common host within the described time frames. You can click on any group to see details about the threat identifiers.
You can also click the bracket icon (< >) on the upper right of the scenario logic definition section to see the underlying logic as code.
At this point, you could potentially deploy this scenario as long as you have a minimum set of underlying threat identifiers deployed (we make it easy to check this and get them deployed if you need to do so). Instead, let’s explore how easy it is to actually create one of these threat scenarios from scratch without writing any of that underlying code.
From the main navigation menu on the left, click the + icon for New Content near the top.
Select “Threat Scenario” then hit “Proceed.” This will open the new threat scenario no-code builder with a blank threat scenario.
Give your use case a title in the box near the top. Now let's create a scenario. You can follow my example or choose different criteria, but it is best to select conditions from the dropdown lists that have a non-zero number in parentheses on the left, indicating that you have at least 1 deployed detection that matches that condition.
Click into stage 1 and rename it to “initial access.” Click inside of Group 1 to pull up the definition dialog.
For the first condition, leave “filter by” set to MITRE ATT&CK and hit the drop down for “choose conditions.” From there, select “Initial Access.” You should see at least 1 threat identifier that meets this criteria. Click “add more conditions,” leave the “and” operator, and this time change the “filter by” to “use case categories” and “choose conditions” to “Hacking/Unauthorized Access.” Click the “add” button to save it.
Now look above stage 1 and click into the “entities of interest” box. Leave “correlated host” checked, and check the box for “correlated IP” also.
Now let’s add another stage. Click “Add Stage” then click the diamond with the time in between the stages. You can change it, e.g. to 60 minutes between the first 2 stages correlated across a common host or IP, then hit “update.”
Now change the name of stage 2 to “Execution,” click inside its group 1 box, and set the condition to MITRE ATT&CK execution phase. You should see at least 1 threat identifier selected. Hit “add” to save this stage.
Repeat the process one more time by adding another stage, renaming and drilling into group 1, filtering by MITRE ATT&CK privilege escalation and Data Domain endpoint, clicking “add” to save it.
Your final threat scenario should look something like this.
Click on the bracket icon (< >) on the right side of the scenario definition to see the underlying code that the scenario builder created for you.
At this point, you can hit Save in the upper right corner. From the main navigation menu on the left, click the “Scenarios” icon. This takes you to a list of scenarios in your workspace. Reverse sort on the Last Modified column on the right and you should see the threat scenario you just created at the top.
Problem: Detection engineering using current processes requires specialized sets of skills including security domain expertise as well as data platform query skills across potentially multiple data platforms and languages. It is difficult to create good-quality content quickly enough to keep up with emerging threats.
Anvilogic Solution: Our expert team of security researchers acts like a purple-team-as-a-service, providing our customers with a constant flow of high-quality, correlated detection content. It is developed in labs using real-world attack tools modeled on real-world attack patterns. This content is delivered in a very timely manner (particularly for severe and urgent threats), and spans different data platforms, log sources, and security tools in the environment.
There are many ways to explore and deploy the detections Anvilogic’s Forge team provides to our customers. We have deployed content in the previous section, but now let’s take some time to delve more deeply into the content in the Armory. Let’s start at the main Armory page.
From the main navigation on the left, hover over and click on "Armory."
As you can see, there is a large amount of content available in the armory, and there are many ways to find what you want. Anvilogic’s purple-team-as-a-service is constantly creating new content, both to deepen coverage of existing known threats, and to address new and emerging threats, campaigns, and vulnerabilities. We are able to roll out detections to the armory for instant deployment very quickly for critical vulnerabilities, ensuring our customers can protect their organizations in minutes when these events occur.
The actual detection content falls into 3 major categories:
Threat Identifiers - These are specific detections looking for patterns or strings in log events that indicate something suspicious has occurred. Alerts from your vendor security products (e.g. an EDR) can also be fed into Anvilogic directly, generating their own threat identifiers. Note that most threat identifiers don’t generate alerts to your triage team on their own - rather they generate warning signals (“events of interest”) that are then used as parts of higher fidelity detections based on risk or real-world threat scenarios. You can, however, generate alerts for higher-fidelity threat identifiers as well.
Threat Scenarios - These are correlated detections based on the warning signals created by Threat Identifiers and security vendor alerts. These correlations are based on real-world attack patterns and known adversary behaviors, which yields much better results than merely looking for indicators of compromise or simple pattern-based detections. This is how Anvilogic helps detection engineers deliver much better, actionable results to the SOC without a lot of noisy false positives.
Macros - These are the building blocks of data collection, normalization, and enrichment used within threat identifiers. They allow our detections to gather, normalize, enrich, and tune detection rules easily and inline with your detection searches, creating more useful warning signals and alerts without the need for additional backend enrichment through a SOAR.
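To make the relationship between Threat Identifiers and Threat Scenarios concrete, here is a minimal, hypothetical sketch of the correlation idea: events of interest (EOIs) emitted by threat identifiers are grouped by host, and a scenario alert fires only when multiple distinct techniques appear within a time window. The field names (host, technique, ts) are illustrative stand-ins, not the actual Anvilogic schema.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical sketch: correlate "events of interest" (EOIs) from threat
# identifiers into a higher-fidelity scenario alert when two or more
# distinct techniques fire on the same host within a time window.
# Field names are illustrative, not the AVL detection schema.

def correlate(eois, window=timedelta(hours=1), min_distinct_techniques=2):
    by_host = defaultdict(list)
    for e in sorted(eois, key=lambda e: e["ts"]):
        by_host[e["host"]].append(e)

    alerts = []
    for host, events in by_host.items():
        for i, anchor in enumerate(events):
            in_window = [e for e in events[i:] if e["ts"] - anchor["ts"] <= window]
            techniques = {e["technique"] for e in in_window}
            if len(techniques) >= min_distinct_techniques:
                alerts.append({"host": host, "techniques": sorted(techniques)})
                break  # one alert per host keeps this sketch simple
    return alerts
```

This is why a single noisy EOI rarely pages the SOC: it only contributes to an alert when it lines up with other warning signals in an adversary-like sequence.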
You can click on any of the numbers at the top to see the detection content under each label: the full Armory, the content types described above, or content broken down by MITRE ATT&CK tactic, platform coverage, or domain coverage. You can further filter any of these views using the filters on the left for a very granular way to find content.
In addition, you can use the search bar at the top of any Anvilogic window to instantly search for detection content using any search criteria.
Going back to the main Armory page (back button on the browser, or main navigation - armory), as you scroll down the page you will see a few key groupings of detections in addition to the counts and breakdowns near the top. These include:
Here you can see if you are covered for this trending topic, read the threat intelligence summary, mass deploy the content, or drill into and deploy individual detection content.
Recommended Detections - Anvilogic will automatically recommend specific detection content based on your custom priorities, coverage gaps, and available good quality data. Each recommended detection is marked with a star and a recommendation score on a scale of 1 to 100. If you hover over the score, you will see why the particular detection is recommended and what contributed to that score.
Recommended Detection Packs - Detection packs are collections of detections mapped to a specific data platform (e.g. Splunk, Snowflake, Azure, or Devo), specific data category (e.g. Windows Events), and specific MITRE ATT&CK tactic (e.g. Initial Access) that you can deploy en masse. These are a great way to generate a lot of relevant warning signals when getting started with the platform, without overwhelming the SOC with alerts.
Monte Copilot comes equipped with specific tools to help answer questions you are asking in real-time. Tools are used to collect information across different security resources so that each answer is as accurate as possible.
Ask Monte Copilot questions about IP addresses, URLs, domains, processes and commands with and without arguments, encoded and plain text processes and commands, and file hashes.
For example, you can ask whether or not a specific IP address is suspicious, ask for an explanation of a particular PowerShell command, or ask Monte Copilot to write a specific command for you.
Current tools used by Monte Copilot, in alphabetical order:
Details about how Monte Copilot is licensed and enforced.
Monte Copilot's licensing model is Metered-Based, measuring the number of questions per day. Due to the non-deterministic nature of its compute requirements—depending on the number of queries, active users, and integrations with external tools (e.g., VirusTotal, Shodan)—pricing based on CPU usage would be complex and unpredictable. By contrast, the "questions per day" metric offers a more predictable and consistent way to measure usage, allowing for better alignment with day-to-day work and past usage patterns.
Anvilogic offers three licensing tiers based on the number of questions per day, allowing you to select the most suitable option for your organization:
Tier 1 (Lowest)
Tier 2
Tier 3
The tiers, defined by the number of questions per day, are recommended based on your estimated usage—taking into account the size of your organization and the number of analysts. However, you are free to select a different daily question limit than the one suggested, based on your specific needs.
Usage of Monte Copilot is tracked as described below:
Anvilogic tracks the number of questions asked by all of your analysts during interactions with Monte Copilot. These metrics are presented to you for monitoring usage.
The question counter resets daily at midnight UTC.
Any question that Monte Copilot cannot answer will not be counted towards your usage.
The following pricing models are available:
Free Trial:
By default, when you get started with Anvilogic, you will receive a 30-day trial license. During this period, you will be allocated up to 15 questions per day.
Paid Plan:
You can purchase any of the available licensing tiers, with pricing determined based on the number of questions processed. Higher tiers benefit from reduced pricing per question compared to lower tiers, reflecting the economies of scale from larger volumes.
Anvilogic offers standard billing cycles of 1-year and 3-year terms, giving clients the flexibility to choose between annual or multi-year commitments depending on their operational needs.
Once you reach your daily question limit, Anvilogic temporarily locks further access to Monte Copilot for the remainder of the day. You will regain access after the question counter resets at midnight UTC.
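The metering behavior described above can be sketched in a few lines. This is an illustrative model only (not Anvilogic's implementation): answered questions count against a daily limit, unanswered questions do not, and the counter resets at midnight UTC.

```python
from datetime import timezone

# Illustrative sketch of questions-per-day metering: count answered
# questions per UTC day, lock out at the daily limit, reset at midnight UTC.

class QuestionMeter:
    def __init__(self, daily_limit=15):  # 15/day matches the trial allocation
        self.daily_limit = daily_limit
        self.count = 0
        self.day = None

    def _roll(self, now):
        today = now.astimezone(timezone.utc).date()
        if today != self.day:      # midnight UTC has passed
            self.day = today
            self.count = 0

    def allow(self, now):
        self._roll(now)
        return self.count < self.daily_limit

    def record(self, now, answered=True):
        self._roll(now)
        if answered:               # unanswered questions are not counted
            self.count += 1
```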
The following is Anvilogic's reference architecture to support your environment.
Navigate to the left navigation panel -> click search
Trending Topics - These are collections of detection content that are routinely put out by the Forge team to address emerging trends, new vulnerabilities, campaigns, techniques, etc. This content is available on a timely and ongoing basis for Anvilogic customers directly in the platform, and anyone can subscribe to our rollup email on emerging threats by subscribing to . Click on one of these, or click on See All to see a larger, filterable list of available trending topics.
If you have tools in mind that we do not cover, add your feedback in .
Monte Copilot is an AI-powered assistant designed for members of the SOC (Security Operations Center). It helps them with tasks like threat research, threat detection, query generation, threat investigations, and more.
If you reach your daily question limit or your license expires, you will be provided with a form to request an upgrade. After submitting the request, our team will contact you to guide you through the next steps. You can also proactively request an upgrade before reaching the limit or expiration by using the same form.
If you wish to downgrade your Monte Copilot license, please contact our sales team at for assistance.
For any licensing or billing inquiries, please contact our sales team at for assistance.
- Compute for ad-hoc queries to assist search, hunt, and IR
(Default) or - Run 24/7 executing workflow jobs (detection use cases) on a cron.
Detections execute as jobs within a . Rules built on the Anvilogic platform are converted from a user-friendly SQL builder to PySpark functions that run on a defined schedule.
Python Notebooks are used to collect & ingest data from storage and transform raw events into the AVL detection schema using .
Yes, if you have a streaming tool (ex. Cribl, , Apache NiFi) you can send custom data sources directly to your primary storage servers (ex. S3, Blob, etc.) and Anvilogic can orchestrate the ETL process into the correct schema and tables required for detection purposes.
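As a minimal sketch of the normalization step described above, the following maps a raw vendor event into a common detection schema. The target field names here are illustrative stand-ins for the AVL detection schema, which is defined by the platform, and the source fields are invented for the example.

```python
import json

# Hypothetical normalization: map raw vendor fields to a common schema.
# Both the source and target field names are illustrative.
FIELD_MAP = {
    "Hostname": "host",
    "ImagePath": "process",
    "CommandLine": "process_args",
    "EventTime": "event_time",
}

def normalize(raw_event: dict) -> dict:
    out = {dst: raw_event.get(src) for src, dst in FIELD_MAP.items()}
    # keep the original event alongside the normalized fields for triage
    out["raw"] = json.dumps(raw_event, sort_keys=True)
    return out
```

In practice this kind of mapping is what lets one detection run unchanged across data from different vendors.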
AnvilogicAllowlistProcessRegexGenerator
Generates regex patterns for allowlisting benign processes to reduce alert volume.
Base64Decoder
Used to decode Base64-encoded strings.
CommandAnalyzer
Explains the details of full operating system command calls and analyzes malicious activity.
Deobfuscator
A powerful, open-ended custom deobfuscator that can decode arbitrary inputs, from Base64 to hex to binary and more. It can even unravel nested combinations of obfuscation used by bad actors.
DomainReputation
Checks the reputation and popularity of a domain based on the Cisco Umbrella Popularity List.
Entity Analyzer
IoC
Checks if a URL or IP address is listed as an Indicator of Compromise (IoC) against multiple sources, including:
PhishTank
Feodo Tracker
VirusTotal
Phishstats
TOR Exit Nodes
FireHOL
URLHaus
IPInfo
Provides information about IP addresses, including geolocation, autonomous system information, and more.
LOLBAS
Provides insights regarding binaries, scripts, and libraries that are part of the Windows OS.
QnA
Offers details on a specific question, topic, or keyword. Utilizes:
Google search APIs (SerpAPI)
Anvilogic Forge Threat Reports
Anvilogic Armory Content
Threat Identifiers
Threat Scenarios
You can ask questions about Threat Actors, Vulnerabilities, Exploits, TTPs and more.
Shodan
Offers insights into listening services and ports associated with a given IP address.
Threat Identifier Alert Analyzer
VirusTotal
Analyzes URLs, domains, IP addresses, and files for threats like viruses, worms, trojans.
Whois
Retrieves and parses WHOIS data about a URL.
WindowsCommands
Provides information regarding Windows OS commands.
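To illustrate the idea behind the AnvilogicAllowlistProcessRegexGenerator tool listed above, here is a minimal sketch (not Anvilogic's implementation): escape known-benign process paths and combine them into one anchored, case-insensitive alternation that a detection can use to filter events before alerting. The example paths are invented.

```python
import re

# Illustrative allowlist-regex generation: each benign path is escaped and
# the set is joined into a single anchored, case-insensitive pattern.
def build_allowlist_regex(benign_paths):
    escaped = sorted(re.escape(p) for p in set(benign_paths))
    return re.compile(r"(?i)^(?:%s)$" % "|".join(escaped))

allow = build_allowlist_regex([
    r"C:\Windows\System32\svchost.exe",
    r"C:\Program Files\Vendor\agent.exe",
])
```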
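The "nested layers" idea behind the Deobfuscator tool can be sketched as an iterative loop: keep peeling Base64 or hex layers until no decoder applies. The real tool handles far more encodings; this sketch shows only the core mechanism.

```python
import base64
import binascii

# Illustrative nested deobfuscation: try each decoder in turn and repeat
# until nothing decodes further (with a layer cap to guarantee termination).

def try_base64(s: str):
    try:
        out = base64.b64decode(s, validate=True).decode("utf-8")
        return out if out != s else None
    except (binascii.Error, UnicodeDecodeError):
        return None

def try_hex(s: str):
    try:
        return bytes.fromhex(s).decode("utf-8")
    except (ValueError, UnicodeDecodeError):
        return None

def deobfuscate(s: str, max_layers=10):
    for _ in range(max_layers):
        decoded = try_base64(s) or try_hex(s)
        if decoded is None:
            return s
        s = decoded
    return s
```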
The following page will help you understand how you can use Fluent Bit to send data to Anvilogic for ingestion into Snowflake.
What is Fluent Bit?
Fluent Bit is an open-source streaming tool that can be used to send data to Anvilogic for ingestion into Snowflake.
Data Type Examples:
Thank you for taking the time to work through this guide and exploring just some of what Anvilogic offers security operations teams. We invite you to continue to play with and explore this environment for up to 30 days from your initial invitation. You can explore more of the armory content, get more of a feel for the detection engineering workflow, or create more threat scenarios. We are here to help if you have any questions - just reach out to the Anvilogic representative who gave you access to the test drive.
A full deep dive demo of the platform, including components not covered in the test drive such as insights and triage
A workshop to explore specific components more deeply
A pilot of the Anvilogic platform in your environment and with your real data
Anvilogic Solution: Anvilogic provides industry- and geography-specific templates for understanding the threat landscape, including how to prioritize adversary groups and attacker techniques. These are fully customizable and can evolve with your organization as your footprint and the threat landscape change. These priorities will form the basis of using Anvilogic to provide continuous assessment of your data and detection coverage maturity.
One of the main features Anvilogic provides is continuous assessment and an improvement framework around the state of your data feeds and detections as measured against a customized subset of the MITRE ATT&CK Framework. We help you understand and tailor priorities for your organization, industry, and geography. We have already done the initial priorities setup in the test drive account, but you can view, explore, and edit this.
From the main navigation on the left, hover over “Maturity Score” -> click Threat Priorities.
This will open the Threat Priorities view.
You can view or edit the Platform Priorities from the initial screen. You can then click on Threat Groups to bring up the list of prioritized MITRE ATT&CK Threat Groups.
Again, you can view (multiple pages), edit priorities, search for specific groups, and check/uncheck the filter to hide unprioritized groups.
From here, you can click on Techniques in the Outputs section and see how the Platforms and Threat Groups selected in the previous steps are automatically reflected in a prioritized list of MITRE ATT&CK techniques. You can hover and click on the arrow on a given technique to view or change the sub-techniques and priority level of each technique and sub-technique.
Lastly you can click on Data Categories to see a paginated view of MITRE ATT&CK data categories required to cover the platforms and techniques defined in the previous steps. The selected categories and their priorities are automatically derived from the platforms and techniques, but you can view and edit their priorities in this screen as well.
At this point, you have reviewed and edited your priorities, setting the bar for required high-quality data sources and detection technique coverage. This will impact your maturity score and give you guidance and recommendations on how to improve detection coverage. If you made any changes, you can use the Save button in the upper right, or simply navigate away without saving them.
Anvilogic Solution: Anvilogic provides a platform for continuous assessment of an organization’s security posture measured against the MITRE ATT&CK Framework. We give you a platform for setting priorities specific to your organization, and then baselining your existing data feeds and technique coverage. On an ongoing basis, as you use Anvilogic to deploy detections and connect to your data platforms, it will automatically keep an updated assessment of your data feeds and technique coverage, showing you where you are strong as well as highlighting your most critical gaps in data and detection coverage, giving you a framework for improvement.
Now that we’ve set our organization-specific priorities, we can see how our existing data feeds and detections measure up to the standards we need to achieve, and where we can improve with the most impact. This type of continuous assessment and improvement framework is available in Anvilogic through our Maturity Score functionality.
From the main navigation on the left, hover over and click on “Maturity Score”
You will see an overall maturity score here, as well as Contributing Scores based on Feed, Detection, and Productivity. Note that you will not see much change over time in this test drive environment, and the Productivity score will only become relevant in an environment connected to your production data stores, as in a full Anvilogic pilot.
You can set a custom date range for the maturity score history by clicking the calendar icon in the upper left corner of the chart. Expanding this out to a wide range will enable you to see some of the changes to the environment that have impacted the score by also scrolling down to see details for the time period.
Scroll back to the top and Click on the Feed Score under contributing scores. This will give you details on how the feed score is derived and how to improve it.
Note that the circle chart on the left indicates where you have coverage and where you lack coverage based on your custom prioritized data categories. Click on the darkened part of the circle or the number above the “Add Missing Feeds” label to see the data feeds you are missing, which provides customers with a framework for prioritizing the onboarding of their most critically needed feeds.
You can also click on the number above the “Enhance Feed Quality” label to view data feeds where the quality is below the “good” threshold.
Click on one of the numbers in the “Data Feeds” column to see a list of data feeds in that category which should include at least 1 data feed with quality that is not “good.” Click on the name of a data feed to see how Anvilogic can help you assess the quality of your data feeds and the dimensions on which data quality is measured. Cancel out when done.
Navigate back to the main maturity score view by clicking the main navigation on the left, hovering over and clicking on “Maturity Score.” This will bring up the main maturity score view as before. Now click on the detection score near the top to bring up the detection score detail view.
From here you can see how many of your prioritized MITRE ATT&CK techniques and sub-techniques you have deployed detections for (including legacy detections as well as detections deployed through the Anvilogic platform), as well as where your high-, medium-, and low-priority technique gaps are. In the Technique Coverage section, click the slider under Recommendations and set it to “On” and then scroll down a bit. You will see a filterable matrix of techniques with a star icon indicating ones where there are recommendations. You can hover over any square in the matrix and see the current coverage state.
The green color of a square represents the depth of coverage. As you hover over a square you can see how many rules you have deployed for that technique and its sub-techniques, as well as how many rules are recommended. Click on the highlighted “X rules recommended” for one of the squares with a star to see how easy it is to find detections you can deploy to have an immediate impact on your technique coverage. This will take you to a filtered view of the armory showing rules you can deploy right now that will cover that technique.
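The coverage idea above can be sketched with a simple summary function. This is an illustration only, not Anvilogic's scoring: it reports what fraction of prioritized techniques have at least one deployed rule and lists the uncovered techniques so recommendations can target them first.

```python
# Illustrative coverage summary: prioritized is a list of technique IDs,
# deployed_rules maps technique ID -> number of deployed rules.
def coverage_summary(prioritized, deployed_rules):
    covered = {t for t in prioritized if deployed_rules.get(t, 0) > 0}
    gaps = sorted(set(prioritized) - covered)
    pct = round(100 * len(covered) / len(prioritized), 1) if prioritized else 0.0
    return {"covered_pct": pct, "gaps": gaps}
```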
Anvilogic Labs is an environment where you can play around with a limited set of Anvilogic features without having to connect your own data sources.
Thank you for signing up for Anvilogic Lab. During this trial you will be given access to a shared Anvilogic platform account.
You will get instant access to a full Anvilogic environment loaded with sample data in Snowflake without the need to configure or connect anything in your environment. You will be able to create and test your own custom security detections against common high-value data sources using our Unified Detect Builder with our MonteAI Assistant.
Here are some cool things you can try out in Anvilogic Labs:
Use Monte Copilot to get answers to your questions about entities involved in your investigation.
Monte Copilot is fully integrated with internal and external tools to help you make informed decisions and aid every phase of the investigation, threat hunting, and detection building cycle.
Click Ask MonteAI on the top navigation bar within the Anvilogic platform.
Watch the introductory video to learn how you can use Monte Copilot, or skip it. If you skip the video, you can replay it later by clicking Play the Tutorial.
Feedback on the accuracy of responses is critical to us. Please help to improve by providing feedback in the following manner:
Take note of the following limitations in Monte Copilot at this time:
The context within any conversation is maintained for 15 questions. After 15 questions, clear the conversation and begin a new conversation.
Question input is capped at 2,000 characters.
Monte Copilot is currently not able to search or query any data sets; this functionality is coming soon.
Monte Copilot is not yet trained with Anvilogic-based information, such as your detections, priorities, and data feeds. This improvement is coming soon.
The majority of Monte Copilot's beta features are tools that help with the explainability of events in a log or alerts, including entities within alerts such as IPs, hashes, and command line parameters.
List of frequently asked questions around privacy and security controls for Monte Copilot.
Analyze a string, such as a process or IP address, and determine a verdict (benign, suspicious, malicious) with documentation.
Analyze the outputs from a threat identifier (ex. an EOI) and determine a verdict (benign, suspicious, malicious) with documentation.
Navigate to the left navigation panel -> click search
Select the icon for "Ask MonteAI to generate SQL"
If you feel like Anvilogic might be able to bring value to your organization, reach out to take the next step. This can include:
Problem: Most security operations organizations use the MITRE ATT&CK Framework to get an understanding of what types of adversary behavior they need to detect for, including the underlying data to support those detections. This is important because failing to do so can leave major visibility gaps, weakening security posture. A key requirement of using the framework effectively is understanding what platforms, adversaries, data platforms, and adversary techniques are actually in scope for your particular industry and geographic footprint, and what techniques you can de-prioritize or ignore.
Problem: Most security operations organizations use the MITRE ATT&CK Framework to get an understanding of what types of adversary behavior they need to detect for, including the underlying data to support those detections. This is important because failing to do so can leave major visibility gaps, weakening security posture. The problem is that assessing this is very difficult and time consuming, involving a lot of manual mapping and tracking in a spreadsheet, if at all. And as the environment changes and the threats evolve, it is nearly impossible to keep this assessment updated.
Want to see more? today!
Click if you think the answer was good.
Click if you think the answer needs improvement.
For detailed security control information, see .
With Anvilogic you can easily download and deploy hundreds of new SQL-based detections in a matter of minutes.
Homepage -> Use Cases -> View Recommended Detections
Select one of the recommended use cases
Adding to your workspace creates your own private branch of the use case that can be fully version controlled and deployed to your Snowflake environment
Once added, your new rule ID will be created and your use case can be modified
Once you have added the use case to your workspace, you will be able to EDIT or CLONE the use case. You are free to edit the logic or the tags.
Once you have added the use case to your workspace, you will be able to TEST and/or DEPLOY the use case.
Test - will execute the job on the connected Snowflake instance, looking back over the last 60 minutes. This can be used to help you understand the potential volume of events returned before you deploy.
Deploy - will create a scheduled task on your connected Snowflake database that will execute on the schedule you choose.
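For a feel of what a Deploy might produce, here is an illustrative sketch of Snowflake scheduled-task DDL generated from a detection query. The warehouse, task naming, cron schedule, and query are placeholders; Anvilogic's actual generated objects may differ.

```python
# Illustrative only: build the kind of Snowflake CREATE TASK statement a
# Deploy action could generate. All names here are hypothetical.
def scheduled_task_ddl(rule_id: str, detection_sql: str, cron: str = "*/15 * * * *"):
    return (
        f"CREATE OR REPLACE TASK detect_{rule_id}\n"
        f"  WAREHOUSE = DETECTION_WH\n"
        f"  SCHEDULE = 'USING CRON {cron} UTC'\n"
        f"AS\n{detection_sql};"
    )

ddl = scheduled_task_ddl(
    "r1001",
    "SELECT * FROM edr_events WHERE event_time >= DATEADD(minute, -60, CURRENT_TIMESTAMP())",
)
```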
You can easily use the platform to set threat priorities and track your detection engineering progress over time.
Platforms - these are the platforms you want to scope for technique detection
Techniques - these are the MITRE ATT&CK techniques that are prioritized for detection based on the two inputs above
Threat Groups - these are the relevant threat groups you may want to track that target your industry or business
Data Categories - these are the relevant data sources you will need to add to Snowflake to detect those techniques
Keep track of all the detections you are deploying and how they improve your Maturity Score.
View a history of all of your activity to date here. MS History gives you an easy way to track your progress and how you have improved your detection coverage over time.
Goal - try to improve your maturity score by 10 points in the lab by deploying recommended detections from the homepage.
Navigate to the Detection section to see a MITRE Heatmap of your deployed detections.
Want to improve your score?
Click on the Donut chart and select uncovered techniques
Flip the recommendations switch to ON
The Anvilogic detection armory has thousands of out-of-the-box use cases you can deploy to your environment to immediately begin detecting suspicious activity aligned to those techniques.
Select Threat Identifiers (select see more on fields to filter by)
Filter by Rule Format -> Snowflake
Quickly create SQL queries using the Anvilogic low-code SQL builder for Snowflake
MonteAI is an LLM dedicated to helping you build complex SQL queries in a matter of minutes
Select Snowflake and PROCEED
Drag GATHER DATA component from the right components list
Select avl_get_snowflake_data_edr
Drag Code Block or Filter component from the right components list to begin building queries
Begin to type in a question, or leverage any of the example questions to get started
Click +WORKSPACE
Click ADD to save the private copy
Click EDIT
Navigate to the left navigation panel -> Maturity Score -> Threat Priorities
Navigate to left navigation panel -> Maturity Score
Any technique with a BLUE STAR ICON means there is a recommendation in our armory to cover one of your gaps.
Navigate to left navigation panel -> Maturity Score
Navigate to the left navigation panel -> click search
Select the icon for "Ask MonteAI to generate SQL"
This page summarizes the AI security controls and measures in place on the Anvilogic platform.
The table summarizes the security controls in place for AI on the Anvilogic platform.
Control Category
Controls Applied
Context is established and understood.
Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related test, evaluation, verification, and validation (TEVV) and system metrics.
The organization’s mission and relevant goals for AI technology are understood and documented.
The business value or context of business use has been clearly defined or, in the case of assessing existing AI systems, re-evaluated.
Organizational risk tolerances are determined and documented.
System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
Categorization of the AI system is performed.
The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).
Scientific integrity and test, evaluation, verification, and validation (TEVV) considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.
AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.
Potential benefits of intended AI system functionality and performance are examined and documented.
Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness, as connected to organizational risk tolerance, are examined and documented.
Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.
Processes for operator and practitioner proficiency with AI system performance and trustworthiness, and relevant technical standards and certifications, are defined, assessed, and documented.
Processes for human oversight are defined, assessed, and documented in accordance with organizational policies.
Risks and benefits are mapped for all components of the AI system including third-party software and data.
Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.
Impacts to individuals, groups, communities, organizations, and society are characterized.
Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.
Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.
Manage deployment environment governance.
When developing contracts for AI system products or services, consider deployment environment security requirements.
Ensure a robust deployment environment architecture.
Establish security protections for the boundaries between the IT environment and the AI system.
Identify and protect all proprietary data sources the organization will use in AI model training or fine-tuning. Examine the list of data sources, when available, for models trained by others.
Harden deployment environment configurations.
Apply existing security best practices to the deployment environment. This includes sandboxing the environment running ML models within hardened containers or virtual machines (VMs), monitoring the network, configuring firewalls with allow lists, and other best practices for cloud deployments.
Review hardware vendor guidance and notifications (e.g., for GPUs, CPUs, memory) and apply software patches and updates to minimize the risk of exploitation of vulnerabilities, preferably via the Common Security Advisory Framework (CSAF).
Secure sensitive AI information (e.g., AI model weights, outputs, and logs) by encrypting the data at rest, and store encryption keys in a hardware security module (HSM) for later on-demand decryption.
Implement strong authentication mechanisms, access controls, and secure communication protocols, such as by using the latest version of Transport Layer Security (TLS) to encrypt data in transit.
Ensure the use of phishing-resistant multifactor authentication (MFA) for access to information and services. [2] Monitor for and respond to fraudulent authentication attempts.
Protect deployment networks from threats.
Use well-tested, high-performing cybersecurity solutions to identify attempts to gain unauthorized access efficiently and enhance the speed and accuracy of incident assessments.
Integrate an incident detection system to help prioritize incidents. Also integrate a means to immediately block access by users suspected of being malicious or to disconnect all inbound connections to the AI models and systems in case of a major incident when a quick response is warranted.
Continuously protect the AI system.
Models are software, and, like all other software, may have vulnerabilities, other weaknesses, or malicious code or properties. Continuously monitor the AI system.
Validate the AI system before and during use.
Store all forms of code (e.g., source code, executable code, infrastructure as code) and artifacts (e.g., models, parameters, configurations, data, tests) in a version control system with proper access controls to ensure only validated code is used and any changes are tracked.
Secure exposed APIs.
If the AI system exposes application programming interfaces (APIs), secure them by implementing authentication and authorization mechanisms for API access. Use secure protocols, such as HTTPS with encryption and authentication.
Enforce strict access controls.
Prevent unauthorized access or tampering with the AI model. Apply role-based access controls (RBAC), or preferably attribute-based access controls (ABAC) where feasible, to limit access to authorized personnel only. Distinguish between users and administrators. Require MFA and privileged access workstations (PAWs) for administrative access.
Ensure user awareness and training.
Educate users, administrators, and developers about security best practices, such as strong password management, phishing prevention, and secure data handling. Promote a security-aware culture to minimize the risk of human error. If possible, use a credential management system to limit, manage, and monitor credential use to minimize risks further.
Conduct audits and penetration testing.
Engage external security experts to conduct audits and penetration testing on ready-to-deploy AI systems.
Implement robust logging and monitoring.
Establish alert systems to notify administrators of potential oracle-style adversarial compromise attempts, security breaches, or anomalies. Timely detection and response to cyber incidents are critical in safeguarding AI systems.
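One common building block for such alerting is a sliding-window counter over security events. The sketch below is a simplified illustration (the class name, threshold, and window are assumptions): it fires when failed authentication attempts exceed a threshold within a time window, the kind of signal that can feed an administrator alert.

```python
from collections import deque
import time

class FailedAuthMonitor:
    """Flag when failed authentication attempts exceed a threshold
    within a sliding time window (hypothetical, illustrative values)."""

    def __init__(self, threshold: int = 5, window_seconds: float = 60.0):
        self.threshold = threshold
        self.window = window_seconds
        self.events = deque()  # timestamps of recent failures

    def record_failure(self, timestamp: float) -> bool:
        """Record one failed attempt; return True if an alert should fire."""
        self.events.append(timestamp)
        # Discard events that have aged out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold

monitor = FailedAuthMonitor(threshold=3, window_seconds=10.0)
now = time.time()
print(monitor.record_failure(now))      # False
print(monitor.record_failure(now + 1))  # False
print(monitor.record_failure(now + 2))  # True (3 failures within 10 s)
```

In production this logic typically lives in a SIEM or log pipeline rather than application code, but the principle is the same: correlate events over time and alert when a pattern crosses a defined threshold.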
The Measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the MAP function.
Measure
Measure Subcategories
MEASURE 1: Appropriate methods and metrics are identified and applied.
MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.
MEASURE 1.2: Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and potential impacts on affected communities.
MEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.
MEASURE 2: AI systems are evaluated for trustworthy characteristics.
MEASURE 2.1: Test sets, metrics, and details about the tools used during test, evaluation, verification, and validation (TEVV) are documented.
MEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.
MEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.
MEASURE 2.4: The functionality and behavior of the AI system and its components – as identified in the MAP function – are monitored when in production.
MEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.
MEASURE 2.6: The AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
MEASURE 2.7: AI system security and resilience – as identified in the MAP function – are evaluated and documented.
MEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and documented.
MEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the MAP function – to inform responsible use and governance.
MEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented.
MEASURE 2.11: Fairness and bias – as identified in the MAP function – are evaluated and results are documented.
MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP function – are assessed and documented.
MEASURE 2.13: Effectiveness of the employed test, evaluation, verification, and validation (TEVV) metrics and processes in the MEASURE function are evaluated and documented.
MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.
MEASURE 3.1: Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.
MEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.
MEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.
MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.
MEASURE 4.1: Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.
MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.
MEASURE 4.3: Measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context relevant risks and trustworthiness characteristics are identified and documented.
Frequently asked questions around privacy and security controls for Monte Copilot and AI used within the Anvilogic platform.