What is Anvilogic?
Anvilogic is an AI SOC solution and multi-data platform that enables detection engineers and threat hunters to detect, hunt, and investigate seamlessly across disparate data lakes and SIEMs without the need to centralize data, learn new languages or deploy new sensors.
Anvilogic empowers enterprise SOCs to rapidly mature their detection programs with a dual approach: instantly deployable, curated detections and a powerful low-code builder for crafting correlated custom alerts. With thousands of expert-built detections ready to deploy in a single click, teams can accelerate threat coverage from day one. Anvilogic’s platform also features automated workflows and AI-driven insights for tuning, triage, maintenance, and critical alert escalation—helping SOCs hunt threats with greater speed and precision. Real-time SOC maturity scoring gives teams continuous visibility into their detection posture, mapped against their most critical threats.
Congratulations and welcome to Anvilogic!
This guide will help you log in, complete the guided onboarding to set threat priorities and integrate a data repository, get data in, and deploy detections.
The following flowchart summarizes the tasks you will complete to get started.
The Anvilogic platform is supported on the latest versions of Google Chrome and Mozilla Firefox.
If you're ready, log in to get started with your Anvilogic onboarding.
If you run into any issues, see the support page for information about how to contact us.

Integrate Splunk with the Anvilogic platform using the Anvilogic App for Splunk.
The Anvilogic App for Splunk provides triage, allow list and suppressions management, and analytics used by the data feed and productivity scores on the maturity score pages.
You can also enable automated threat detection in the Anvilogic App for Splunk, which is required to generate tuning insights and some hunting insights.
Snowflake-only customers can get tuning insights without the Anvilogic App for Splunk.
If you are already using Splunk Enterprise or Splunk Cloud Platform, follow the instructions in the documentation to download and install the Anvilogic App for Splunk.
Next step
Select one of the following to continue:
If you don't have Splunk, and you want the capabilities provided by the Anvilogic App for Splunk, Anvilogic will provision a Splunk instance for you and manage the installation and upgrade of the Anvilogic App for Splunk.
Next step
After the Anvilogic platform is connected to a hosted Splunk instance, continue to the next step.
Install the Anvilogic App for Splunk in your Splunk Enterprise environment.
Follow the instructions in the Splunk documentation to install the Anvilogic App for Splunk in your environment:
If you have a distributed Splunk Enterprise deployment, use the deployer to install the app on your search heads. See Install an add-on in a distributed Splunk Enterprise deployment in the Splunk Supported Add-ons manual.
If you have a single-instance Splunk Enterprise deployment, install the app on the search head. See Install an add-on in a single-instance Splunk Enterprise deployment in the Splunk Supported Add-ons manual.
You must restart Splunk to complete the installation.
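If you prefer to install from the command line on a single-instance deployment, a sketch of the equivalent CLI steps follows; the package file name and credentials are placeholders, so adjust them to match your download.

$SPLUNK_HOME/bin/splunk install app /tmp/anvilogic-app-for-splunk.spl -auth <admin>:<password>
$SPLUNK_HOME/bin/splunk restart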
High-level steps for downloading and installing the Anvilogic App for Splunk on Splunk Cloud Platform.
Perform the following tasks to download and install the Anvilogic App for Splunk on Splunk Cloud Platform:
After you log in, use the guided onboarding experience to define your company's threat profile.
Create the required custom indexes on the Splunk platform.
The Anvilogic App for Splunk requires custom Splunk indexes used by the HTTP Event Collector (HEC) collector command for auditing, metrics, and reporting:
Create an index named <your-org-name>_anvilogic for storing Anvilogic rule output and auditing the app. See the Splunk Enterprise Managing Indexers and Clusters of Indexers manual.
Create a metrics index named <your-org-name>_anvilogic_metrics for storing the output of baselining rules. See the Splunk Enterprise Managing Indexers and Clusters of Indexers manual.
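For Splunk Enterprise, one way to create these indexes is with indexes.conf stanzas on your indexers (Splunk Cloud Platform customers create indexes through the UI or the Admin Config Service instead). The sketch below assumes an example org name of acme; substitute your own prefix.

# indexes.conf
[acme_anvilogic]
homePath   = $SPLUNK_DB/acme_anvilogic/db
coldPath   = $SPLUNK_DB/acme_anvilogic/colddb
thawedPath = $SPLUNK_DB/acme_anvilogic/thaweddb

[acme_anvilogic_metrics]
homePath   = $SPLUNK_DB/acme_anvilogic_metrics/db
coldPath   = $SPLUNK_DB/acme_anvilogic_metrics/colddb
thawedPath = $SPLUNK_DB/acme_anvilogic_metrics/thaweddb
# datatype = metric makes this a metrics index rather than an events index
datatype   = metric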
By default, Anvilogic bases recommendations on external factors such as:
Market and industry trends
Your trusted group activity
Popular search terms
Activity from organizations similar to you
Gather your organization's specific threat priorities to help Anvilogic recommend use cases tailored to your organization rather than generic recommendations based on these external factors.
To build your company profile, provide the information listed in the table. This information helps to filter the MITRE techniques most applicable to you, so that the most relevant recommended content is generated.
Region
Select the geographical region in which your company operates. If you operate in multiple regions, select Global.
Industry
Select the industry vertical that best represents your company. You can select more than one industry.
Infrastructure
Select the infrastructure used within your organization. Select as many as apply to your organization.
As your organization matures over time, you can revisit and update your threat profile to accommodate changes to your infrastructure, including platforms, threat groups, techniques, and data categories.
After you define your threat profile, select your data repository and get data in.
As an admin user, grant additional users access to the Anvilogic platform, or set up more secure authentication settings.
See Add a new user for instructions on how to add users who can access the Anvilogic platform. When you add a new user, you assign them roles which grant certain privileges to the user when they access the platform. See User roles and privileges (RBAC) for a full list of platform roles.
You can configure additional authentication settings for access to the Anvilogic platform, such as multi-factor authentication (MFA) or single sign-on (SSO). See the MFA and SSO documentation for more information.
Review and deploy a variety of detections on the Anvilogic platform.
The Anvilogic platform generates recommended content for you to deploy based on your threat priorities and good quality data feeds.
You can view recommended content on the Home page and in the Armory, which shows you all available detections not yet deployed in your system.
After you install the Anvilogic App for Splunk, you must configure the app to connect to the Anvilogic platform.
The following page will help you understand how you can use Fluentd to send data to Anvilogic to ingest into Snowflake.
What is Fluentd?
Fluentd is an open source streaming tool that can be used to send data to Snowflake to leverage with Anvilogic.
The following page will help you understand how you can use FluentBit to send data to Anvilogic to ingest into Snowflake.
What is FluentBit?
Fluent Bit is an open source streaming tool that can be used to send data to Snowflake to leverage with Anvilogic.
Data Type Examples:
High-level steps for downloading and installing the Anvilogic App for Splunk on Splunk Enterprise.
Perform the following tasks to download and install the Anvilogic App for Splunk on Splunk Enterprise:
Upload your existing detections using a CSV file.
If you have existing detections, you can export them to a CSV file, then import the CSV into Anvilogic. Doing this helps you get an idea of what your MITRE coverage looks like, so you can address and strengthen the areas where you need additional coverage.
The CSV file must include the title, description, and search of each existing detection. See the import instructions for how to import the CSV file into Anvilogic; the same document describes how to properly format the CSV file when you create it.

In Splunk Web, select Apps > Anvilogic to access the Anvilogic App for Splunk.
If this is your first time installing the Anvilogic App for Splunk, you are prompted to set up the app. Click Continue to app setup page. To access the app configuration settings after the initial configuration, go to Settings > App Configuration.
Complete the general settings.
On the Anvilogic platform, select Settings > Generate API Key. Copy the generated API key.
Navigate to the Anvilogic App for Splunk.
Select Settings > App Configuration.
Click and expand the General Settings section.
Click and expand the API Settings section.
Paste the API key you copied earlier into the API Key field.
If your network requires a proxy to connect to Anvilogic, configure the proxy settings in the Anvilogic App for Splunk configuration page.
Click Save.
In your Splunk instance, run the following Splunk search to verify your app's connection with the Anvilogic platform:
| avlmanage command=check_app_health
You can view your connection status along with other system health information in the Health Monitoring dashboard in the Anvilogic App for Splunk.
The table defines additional types of recommended content on the Anvilogic platform and how you can deploy them.
Threat identifiers
Recommended threat identifiers can be viewed on the Home page and in the Armory. See for an example of how to deploy a recommended threat identifier from the Home page.
Trending topics
Trending topics are in-product versions of the Forge Threat Detection Report emails sent to existing customers. Trending topics can be found on the Home page and the Armory. See for an example of how to deploy all the content in a trending topic.
Detection packs
Detection packs are collections of threat identifiers, threat scenarios, and macros that address a specific security issue. Detection packs can be viewed in the Armory. See for an example of how to deploy all the content in a detection pack.
Perform Additional tasks to set up user access and authentication.
Verify the requirements on this page before you download and install the Anvilogic App for Splunk.
You can integrate the Anvilogic platform with Splunk Enterprise versions 9.0.x and 8.0 - 8.3.x.
Splunk Enterprise Security (ES) versions 5.0 - 7.0.x are supported.
Install the Anvilogic App for Splunk on your Splunk search head. The server where you install the Anvilogic App for Splunk must meet the following requirements:
The server must be able to connect to over port 443. This is required to download Splunk code and rules metadata.
The server must be able to connect to over port 443.
The server must be able to connect to over port 443 to send events for third party vendor alert integrations.
If you have multiple Splunk Enterprise instances, install the Anvilogic App for Splunk in only one of those environments.
For performance considerations, review the following factors in your Splunk Enterprise deployment:
The number of concurrent users.
The number of concurrent searches.
The types of searches used.
See the Splunk Enterprise Capacity Planning Manual.
When you deploy threat identifiers on the Anvilogic platform, saved searches are created in your Splunk deployment. You can use cron scheduler recommendations on the Anvilogic platform to manage the load on your Splunk deployment.
Resource and hardware considerations for the Anvilogic App for Splunk match the recommendations for your Splunk Enterprise deployment. See the Splunk Enterprise Capacity Planning Manual.
Integrate the Anvilogic platform with Snowflake.
After defining your company profile in the guided onboarding, select Snowflake as the data logging platform.
You must have admin privileges in Snowflake in order to complete the integration.
Perform the following steps to complete the integration with Snowflake:
Input your Snowflake account identifier to establish a connection between your Snowflake instance and the Anvilogic platform.
Click Copy Code, then click Go to Snowflake to go to your Snowflake instance and run the copied SQL commands. This set of SQL commands creates the necessary Snowflake components, the anvilogic_service Snowflake user used by the Anvilogic platform, and assigns the necessary permissions to the anvilogic_admin role for the anvilogic_service user.
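The exact SQL is generated for you by the platform, but as a rough sketch of the kinds of objects it creates (everything other than the anvilogic_service user and anvilogic_admin role named above, such as the warehouse grant, is an illustrative assumption):

-- Illustrative sketch only; run the SQL copied from the Anvilogic platform.
USE ROLE ACCOUNTADMIN;
CREATE ROLE IF NOT EXISTS anvilogic_admin;
CREATE USER IF NOT EXISTS anvilogic_service DEFAULT_ROLE = anvilogic_admin;
GRANT ROLE anvilogic_admin TO USER anvilogic_service;
-- The generated script assigns the specific privileges the integration needs, for example:
GRANT USAGE ON WAREHOUSE <your_warehouse> TO ROLE anvilogic_admin;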
Perform the following tasks in your Snowflake instance:
After you have defined your company's threat profile and connected Snowflake as a data repository, it's time to get your data in.
Learn how to configure Azure Lighthouse to enable cross-tenant searches in Microsoft Log Analytics.
In order to execute cross-tenant queries against a Microsoft Azure Log Analytics Workspace, the proper permissions first need to be configured. This can be done using Azure Lighthouse, a free service that assists customers in managing multiple Azure tenants. In this case, it is used to assign role-based access control (RBAC) permissions to grant service principals permissions across tenants.
What follows are the instructions to set up Azure Lighthouse to enable the Anvilogic Azure integration to query across Log Analytics Workspaces in different Azure tenants.
Provider - The tenant that is providing the service (in which the Anvilogic ADX cluster was deployed).
Customer - The tenant that the provider needs access to. This contains the Log Analytics Workspaces that will be searched.
There is only one provider, but there can be many customers.
At the moment, Microsoft does not support resource-level permissions. Their guidance is to place active DENY permissions for the Anvilogic service principal on any resources in the Customer Resource Group that you don't want it to be able to access.
Alternatively, you can move the Log Analytics Workspace to its own resource group. This is a non-destructive change and does not impact the workspace (it can be done while in production).
Microsoft also recommends that customers have only one Log Analytics Workspace per region; using multiple workspaces per region is an anti-pattern from Microsoft's perspective. For more information, see the Microsoft documentation.
Select the data repository where you store your logs.
After defining your company profile in the guided onboarding, select a data repository:
Follow the instructions for your data repository.
The process to get the Anvilogic App for Splunk differs depending on whether you are using Splunk Cloud Platform Classic Experience or Splunk Cloud Platform Victoria Experience.
Create a HEC token that can write to the custom indexes you just created.
The Anvilogic App for Splunk contains a custom Splunk command that uses the HTTP Event Collector (HEC) to send results from threat identifiers into the events of interest index. This command is critical to the framework's ability to store events for advanced correlation, and it manages auditing on all objects.
More information on the HEC and how to set it up can be found in the Splunk Enterprise Getting Data In manual.
Perform the following steps to create inputs on a single search head. Some steps may vary if you are managing a search head cluster.
In Splunk Web, select Settings > Data inputs.
Select HTTP Event Collector, then click New Token.
Verify the requirements on this page before you download and install the Anvilogic App for Splunk.
You can integrate the Anvilogic platform with Splunk Cloud Platform versions 8.0.x and higher. Splunk Enterprise Security (ES) versions 5.0 - 7.0.x are supported.
If you are using the Splunk Cloud Platform Classic experience, you won't be able to accept tuning insights.
See the Splunk documentation for more information about the differences between Splunk Cloud Platform Classic Experience and Splunk Cloud Platform Victoria Experience.
Log in for the first time and set your password on the Anvilogic platform.
Your welcome packet email from Anvilogic contains a link to log in to the Anvilogic platform for the first time. You must change your password the first time you log in to the Anvilogic platform.
Make sure you are a user with administrator privileges on the Anvilogic platform.
Click Set Password in the welcome packet email. You are directed to the set password page.
Enter the email address with which you have registered. This email address must match the email address that the welcome email was sent to.
Fill in relevant information:
Specify a name of avl_hec_token.
Leave the Source Name Override blank.
Enter HEC Input for Anvilogic Detection Framework as the description.
Leave the Output Group as none.
Leave the Enable indexer acknowledgement box unchecked.
Click Next to configure the input settings:
Source type = Automatic
App Context = Anvilogic (anvilogic)
Allowed Indexes = anvilogic and anvilogic_metrics
Default Index = anvilogic
Click Review, then click Submit.
Copy the token value.
Perform the following steps to update the global settings and enable the tokens:
In Splunk Web, select Settings > Data inputs.
Select HTTP Event Collector > Global Settings.
Ensure the following settings are enabled:
All Tokens: Enabled
Enable SSL: selected
HTTP Port Number: 8088 (default)
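Optionally, you can verify the token end to end with a quick HEC test. The host and token value below are placeholders; on Splunk Cloud Platform the HEC endpoint typically uses the http-inputs-<stack> hostname on port 443 rather than port 8088.

curl -k "https://<your-splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <avl_hec_token value>" \
  -d '{"event": "anvilogic hec connectivity test", "index": "anvilogic"}'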
If you are installing the Anvilogic App for Splunk on Splunk Enterprise Security (ES) search heads in Splunk Cloud Platform, and you also have search heads that are not on Splunk ES, you must allow all IPs to send to the Splunk Cloud HTTP Event Collector (HEC) endpoint on port 443, because Splunk Cloud Platform does not assign static IPs to its search heads.
This setting requires an HEC token for authentication and is often used to send data to Splunk Cloud Platform from multiple devices with dynamic IPs, such as mobile devices. See Configure IP allow lists for Splunk Cloud Platform in the Splunk Cloud Platform Admin Config Service Manual for instructions.
If your environment includes Splunk ES running on Splunk Cloud Platform Victoria and Splunk Enterprise, the Anvilogic App for Splunk is installed in both environments. You must submit a support ticket with Splunk Support to remove the Anvilogic App for Splunk from one of those environments.
After verifying the requirements, Install the Anvilogic App for Splunk.

Frequently asked questions around privacy and security controls for Monte Copilot and AI used within the Anvilogic platform.
Change the role from PUBLIC to ACCOUNTADMIN.
Paste the copied SQL commands into the new worksheet.
Click the All Queries checkbox to run all the commands.
Click Run.
Look for the Statement executed successfully message.
Return to the Anvilogic platform, then click Next.
Click Copy Code, then click Go to Snowflake to go to your Snowflake instance and run the copied SQL commands. This set of SQL commands creates the S3 storage integration and allows access to the anvilogic_service user so that a connection to your managed S3 bucket where Snowflake retrieves the data can be made.
Perform the following tasks in your Snowflake instance:
Open a new worksheet.
Change the role from PUBLIC to ACCOUNTADMIN.
Paste the copied SQL commands into the new worksheet.
Click the All Queries checkbox to run all the commands.
Click Run.
Look for the Statement executed successfully message.
Return to the Anvilogic platform, then click Add.
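For reference, the generated commands for this step typically revolve around a Snowflake storage integration. A simplified sketch is shown below, with the integration name, role ARN, and bucket path as placeholder assumptions; always run the exact SQL copied from the platform.

-- Illustrative sketch only; run the SQL copied from the Anvilogic platform.
CREATE STORAGE INTEGRATION IF NOT EXISTS anvilogic_s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'S3'
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = '<role-arn-provided-by-anvilogic>'
  STORAGE_ALLOWED_LOCATIONS = ('s3://<anvilogic-managed-bucket>/');
GRANT USAGE ON INTEGRATION anvilogic_s3_int TO ROLE anvilogic_admin;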

Type Anvilogic in the Search for apps field. Click on Anvilogic App for Splunk in the results.
Click Download.
If you don't have the permissions to download the app, you will see Download Restricted when you try to download the app.
If this happens, you must provide Anvilogic with your Splunk.com or Splunkbase username to satisfy Splunk's access control requirements. You must provide this username for each user who requires download access for the Anvilogic App for Splunk.
Perform the following tasks to find your Splunkbase username:
Make sure you are logged in to Splunkbase.
Click your user profile photo or avatar, then select My Profile.
Find your username at the top of the screen, such as [email protected] in the following example:
If needed, you can download the Anvilogic App for Splunk from the Anvilogic platform:
In the Anvilogic platform, click Settings.
In the Anvilogic Splunk App field, click Download.
The downloaded file is an SPL (Splunk application package) file that can be installed in your Splunk environment.
Enter a password meeting the password requirements.
Re-enter the password for confirmation.
Review the Master Service Agreement and the Privacy Policy, then select the check box to indicate your consent.
Click Submit.
After you log in, you will see the first screen of the guided onboarding.
Click Let's Start to begin.
This page is designed to help customers leverage the Forward Events integration within their Anvilogic account for FluentBit.
Anvilogic account
Snowflake data repository connected to your Anvilogic account
Anvilogic provides an S3 bucket and the corresponding access key ID and secret access key (these change for each integration) when you create a Forward Events integration in your Anvilogic deployment.
After installing the AWS CLI, run aws configure and paste in the access key ID and secret access key provided. Once this is complete, validate that the credentials file has been created, usually at C:\Users\YourUsername\.aws\credentials.
Paste the example config shown below into your fluent-bit.conf file (typically located at C:\Program Files\fluent-bit\conf).
NOTE: You can also edit or add your own custom parsers for logs by editing the parsers.conf file in your Fluent Bit conf directory.
Once you have edited your fluent-bit.conf, restart the Fluent Bit application.
You can then confirm that data has landed in your Snowflake account.
Please update the input section of this example config to fit your exact needs.
Get your data into Snowflake, where it can be used to generate detections on the Anvilogic platform.
This document assumes you have completed the guided onboarding:
You have defined your company threat profile.
You have integrated Snowflake as your data repository
Before you continue, make sure you are a user with administrator privileges on the Anvilogic platform.
The following flowchart summarizes the process for getting your data into Snowflake.
Pick one of the following next steps, depending on your infrastructure:
Before you begin, make sure you read the data onboarding best practices. That document contains important information for optimizing your data onboarding for the best performance.
After you review the best practices, see the supported data sources for onboarding instructions for each data source.
See the list of supported data sources, click the name of a data source, and follow the instructions to get the data into Snowflake. Anvilogic manages the pipelines for these data sources once the data source is integrated.
If you have a data source that is not listed there, use Cribl Stream to get your data in. Cribl Stream is the recommended way to get your data sources into Snowflake. If you don't use Cribl Stream, you can use your own pipelines to Snowflake.
Anvilogic implementation with Splunk & Snowflake.










Once that has been validated, create a system environment variable so that Fluent Bit can read and use these credentials. To do so:
Open the Start Menu and search for “Environment Variables.”
Select Edit the system environment variables.
In the System Properties window, click the Environment Variables button.
Under System variables, click New.
Enter the following:
Variable name: AWS_SHARED_CREDENTIALS_FILE
Variable value: C:\Users\YourUsername\.aws\credentials
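As an alternative to the steps above, you can set the same system variable from an elevated command prompt with setx; the path assumes the example location used above.

setx AWS_SHARED_CREDENTIALS_FILE "C:\Users\YourUsername\.aws\credentials" /M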
Next, configure Fluent Bit to read your logs and send them to S3. In this example, we ingest the Windows event logs; you can change which channels are collected by adding or removing them.
Please note that the bucket value is the bucket name and path provided by Anvilogic.
This could mean that it is sdi_custom_data-1, -2, or -3.



[INPUT]
    Name winlog
    Channels Security, Application, System
    Interval_Sec 1
[OUTPUT]
    Name s3
    Match *
    bucket avl-raw-prod-s3-221-24243202/sdi_custom_data-1
    region us-east-1
    use_put_object On
    Store_dir C:\Windows\Temp\fluent-bit\s3
    s3_key_format /$TAG/%Y/%m/%d/%H-%M-%S
Anvilogic implementation with Azure (Data Explorer, Log Analytics, and Fabric).
Below is the generic architecture diagram for how Anvilogic works on top of Azure.
We support querying a Log Analytics Workspace in a different tenant than the Anvilogic Azure Data Explorer Cluster.
In order to execute cross-tenant queries against a Microsoft Azure Log Analytics Workspace, the proper permissions first need to be configured.
This can be done using Azure Lighthouse, a free service that assists customers in managing multiple Azure tenants.
This page is designed to help customers leverage the Forward Events integration within their Anvilogic account for FluentBit.
Anvilogic account
Snowflake data repository connected to your Anvilogic account
Anvilogic provides an S3 bucket and the corresponding access key ID and secret access key (these change for each integration) when you create a Forward Events integration in your Anvilogic deployment.
Create a credentials file on the machine that Fluent Bit can read from, for example /home/<username>/creds. Inside the file, paste the following config with your specific access key ID and secret access key.
With the credentials saved in /home/<username>/creds, configure the Fluent Bit service to use this credentials file by setting its path in the service unit. To do that, open the fluent-bit.service file located at /usr/lib/systemd/system/fluent-bit.service in a text editor and add the following line to the [Service] section:
Environment="AWS_SHARED_CREDENTIALS_FILE=/home/<username>/creds"
Then run the following commands in a terminal window
sudo systemctl daemon-reload
sudo systemctl start fluent-bit
Paste the example config shown below into your fluent-bit.conf file (typically located at /etc/fluent-bit/fluent-bit.conf).
NOTE: You can also edit or add your own custom parsers for logs by editing the parsers.conf file at /etc/fluent-bit/.
Once you have edited your fluent-bit.conf, restart the Fluent Bit service: sudo systemctl restart fluent-bit
You can validate that your config is working by heading to /tmp/fluent-bit/s3/ and looking inside that folder.
You can now confirm that data has landed in your Snowflake account.
Please update the input section of this example config to fit your exact needs.
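To spot-check that events are arriving, you can run a query like the following in Snowflake; the database, schema, and table names are placeholders, since the actual landing objects depend on your Anvilogic integration.

-- Placeholder object names; substitute the database/schema/table created by your integration.
SELECT *
FROM <database>.<schema>.<landing_table>
LIMIT 10;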
Review the category mappings and quality of your data feeds.
Your data feeds are automatically categorized and synchronized to the Anvilogic platform every 7 days. When you add a data feed, you can view it on the Data Feeds page within 7 days.
Verify that the category of your data feeds matches what you expect, as this affects your MITRE coverage. Select Maturity Score > Data Feeds from the navigation bar, then review the categories for each data feed:
To change or add categories to a data feed:
ADX cluster, database, and tables
Azure Container App/environment/jobs/instance
Log analytics workspace for the container app
Azure Container app registry & cache
Visit Azure Pricing Page -> Type in "azure data explorer" under products
In the Instance section, type "E8ads"
Refer to Azure Costs Estimates for more details.
Create an Anvilogic resource group
Go through our integration set up on the Anvilogic Platform









[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[INPUT]
    Name tail
    Tag apache
    Path /var/log/apache2/access.log
    Parser apache2
    Mem_Buf_Limit 50MB
[OUTPUT]
    Name s3
    Match *
    bucket avl-raw-prod-s3-111-12345678/sdi_custom_data-0
    region us-east-1
    use_put_object On
    Store_dir /tmp/fluent-bit/s3
    s3_key_format /$TAG/%Y/%m/%d/%H/%M/%S
ETL Parsing & Normalization Process

Click on the name of the data feed.
Click Tags.
In the Data Categories field, enter the data categories you want associated with this data feed.
Click Update when you are finished.
Select Maturity Score > Data Feeds from the navigation bar, then review the quality for each data feed:
The Anvilogic platform makes an initial feed quality assessment for any new data feed added to the platform.
Perform your own evaluation of the timeliness, logging level, field extraction, and monitoring scope for each data feed so you can assign a proper data feed quality. Feed quality is important because only Good quality feeds are used to generate recommendations on the Anvilogic platform.
To manually change the quality of a data feed:
Click on the name of the data feed.
Select one of the qualities from the Feed Quality dropdown.
Click Update when you are finished.
Auto-computed feed qualities are available for Windows event logs in Splunk. See Data feed quality auto computation.

Unified Detect for Azure supports Azure Log Analytics, Azure Data Explorer (ADX), and Microsoft Fabric.
Installing Anvilogic's UD for Azure creates a new Azure Data Explorer cluster in your environment that is used to manage the objects that run the Unified Detect framework.
During the setup process, a VM is created to manage the Data Explorer cluster. The default size of this VM in our automated installation is Standard_E2ads_v5 (Medium 8vCPUs) in a production cluster with SLA. This can be changed at any time if the number of detections you have running requires more compute resources.
Review your billing configuration and ADX pricing tiers, which control cluster management, to ensure proper scaling expectations and to prevent the Anvilogic service from being terminated.
The table below assumes that each deployed job run averages 1 minute and that every deployed rule has the specified job run frequency. In reality, you could have a mix of how long jobs take to run and how often they run. The table below is a guideline for estimating capacity and is based on the concurrency limits, which are the number of cores multiplied by 10.
Three concurrent job runs are reserved for ad hoc jobs executed from the Azure TI Builder view when creating or editing a threat identifier. The remaining job runs are available for deployed rules.
Cluster size           Concurrency limit   Job run frequency (minutes)   Maximum deployed rules
Standard_E2ads_v5      20                  5                             80
Standard_E2ads_v5      20                  15                            240
Standard_E2ads_v5      20                  30                            480
Standard_E2ads_v5      20                  60                            960
Standard_E4ads_v5      40                  5                             180
Standard_E4ads_v5      40                  15                            540
Standard_E4ads_v5      40                  30                            1,080
Standard_E4ads_v5      40                  60                            2,160
Standard_E8ads_v5      80                  5                             380
Standard_E8ads_v5      80                  15                            1,140
Standard_E8ads_v5      80                  30                            2,280
Standard_E8ads_v5      80                  60                            4,560
Standard_E16ads_v5     160                 5                             780
Standard_E16ads_v5     160                 15                            2,340
Standard_E16ads_v5     160                 30                            4,680
Standard_E16ads_v5     160                 60                            9,360
Standard_D32d_v4       320                 5                             1,580
Standard_D32d_v4       320                 15                            4,740
Standard_D32d_v4       320                 30                            9,480
Standard_D32d_v4       320                 60                            18,960

The following table shows the estimated monthly cost for various cluster sizes.

Cluster size           Cores   Estimated monthly cost   Estimated annual cost
Standard_E2ads_v5      2       $512                     $6,144
Standard_E4ads_v5      4       $1,024                   $12,288
Standard_E8ads_v5      8       $2,050                   $24,600
Standard_E16ads_v5     16      $4,099                   $49,188
Standard_D32d_v4       32      $7,781                   $93,372

Assign the avl_admin role to your admin users.
Use Splunk Web to assign the avl_admin role to app administrators. See Create and manage roles with Splunk Web in the Securing Splunk Enterprise manual for instructions.
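If you prefer the CLI, a sketch of the equivalent command is below; the username and credentials are placeholders. Note that splunk edit user replaces the user's existing role list, so include every role the user should keep.

$SPLUNK_HOME/bin/splunk edit user jdoe -role avl_admin -role user -auth <admin>:<password>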
The following roles are available in the Anvilogic App for Splunk. See the privileges table below for a summary of the privileges provided by each role.
avl_admin
avl_senior_developer
avl_developer
avl_senior_triage
avl_triage
avl_readonly
You can customize the avl_senior_developer, avl_developer, avl_senior_triage, and avl_triage roles. The avl_admin and avl_readonly roles can't be modified.
For example, perform the following tasks to customize the capabilities allowed or restricted by the AVL Senior Developer role:
In the Anvilogic App for Splunk, select Settings > App Configuration.
Click User Settings to expand the section.
Click Customize AVL Senior Developer Role to expand the section for that role.
Deselect any capabilities you want to remove for this role, or select a capability to add it to the role.
Click Save.
The following table lists the roles in the Anvilogic App for Splunk and the privileges granted by each role. You can customize the privileges enabled for each role as desired.
This page is designed to help customers leverage the Forward Events integration within their Anvilogic account for FluentBit.
Anvilogic account
Snowflake data repository connected to your Anvilogic account
Anvilogic provides an S3 bucket and the corresponding access key ID and secret access key (these change for each integration) when you create a Forward Events integration in your Anvilogic deployment.
Create a credentials file on the machine that Fluent Bit can read from, for example /home/<username>/creds. Inside the file, paste the following config with your specific access key ID and secret access key.
With the credentials saved in /home/<username>/creds, configure the Fluent Bit service to use this credentials file by setting its path in the service unit. To do that, open the fluent-bit.service file located at /usr/lib/systemd/system/fluent-bit.service in a text editor and add the following line to the [Service] section:
Environment="AWS_SHARED_CREDENTIALS_FILE=/home/<username>/creds"
Then run the following commands in a terminal window
sudo systemctl daemon-reload
sudo systemctl start fluent-bit
Paste the example config shown below into your fluent-bit.conf file (typically located at /etc/fluent-bit/fluent-bit.conf).
NOTE: You can also edit or add your own custom parsers for logs by editing the parsers.conf file at /etc/fluent-bit/.
Once you have edited your fluent-bit.conf, restart the Fluent Bit service: sudo systemctl restart fluent-bit
You can validate that your config is working by heading to /tmp/fluent-bit/s3/ and looking inside that folder.
You can now confirm that data has landed in your Snowflake account.
Please update the input section of this example config to fit your exact needs.
Allowlist privileges
avl_add_al_rule_entry ✓ ✓ ✓ ✓
avl_remove_al_rule_entry ✓ ✓ ✓
avl_modify_al_rule_entry ✓ ✓ ✓ ✓
avl_add_al_global_entry ✓ ✓ ✓
avl_remove_al_global_entry ✓ ✓ ✓
avl_modify_al_global_entry ✓ ✓ ✓
avl_manage_rule_al ✓ ✓ ✓
avl_manage_global_al ✓ ✓ ✓
Triage privileges
avl_change_first_alert_status ✓
avl_change_all_alert_status ✓ ✓ ✓
avl_change_alert_status_to_new ✓ ✓ ✓
avl_bulk_alert_status ✓ ✓ ✓
avl_add_observation ✓ ✓ ✓ ✓
avl_remove_observation ✓ ✓ ✓
avl_rate_rule ✓ ✓ ✓ ✓
avl_add_rule_feedback ✓ ✓ ✓ ✓
avl_create_case ✓ ✓ ✓ ✓
avl_suppress_alert ✓ ✓ ✓ ✓
avl_suppress_global_alert ✓ ✓ ✓ ✓
Content deployment privileges
avl_deploy_content ✓
avl_write_hec ✓ ✓ ✓
avl_post_rest_platform ✓ ✓
avl_post_rest ✓ ✓ ✓
avl_get_rest ✓ ✓ ✓ ✓
avl_rest_config_access_get ✓ ✓ ✓ ✓
avl_rest_config_access_post





[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[INPUT]
    Name syslog
    Mode udp
    Listen 0.0.0.0
    Port 1515
    Parser syslog-rfc3164
    Mem_Buf_Limit 10MB
[OUTPUT]
    Name s3
    Match *
    bucket avl-raw-prod-s3-221-24243202/sdi_custom_data-0
    region us-east-1
    use_put_object On
    Store_dir /tmp/fluent-bit/s3
    s3_key_format /$TAG/%Y/%m/%d/%H/%M/%S

This page summarizes the AI security controls and measures in place on the Anvilogic platform.
The table summarizes the security controls in place for AI on the Anvilogic platform.
Control Category
Controls Applied
Context is established and understood.
Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related test, evaluation, verification, and validation (TEVV) and system metrics.
The organization’s mission and relevant goals for AI technology are understood and documented.
The business value or context of business use has been clearly defined or– in the case of assessing existing AI systems– re-evaluated.
Organizational risk tolerances are determined and documented.
System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
The Measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the MAP function.
Categorization of the AI system is performed.
The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).
Scientific integrity and test, evaluation, verification, and validation (TEVV) considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.
AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.
Potential benefits of intended AI system functionality and performance are examined and documented.
Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness– as connected to organizational risk tolerance– are examined and documented.
Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.
Processes for operator and practitioner proficiency with AI system performance and trustworthiness– and relevant technical standards and certifications– are defined, assessed, and documented.
Processes for human oversight are defined, assessed, and documented in accordance with organizational policies.
Risks and benefits are mapped for all components of the AI system including third-party software and data.
Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.
Impacts to individuals, groups, communities, organizations, and society are characterized.
Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.
Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.
Manage deployment environment governance.
When developing contracts for AI system products or services, consider deployment environment security requirements.
Ensure a robust deployment environment architecture.
Establish security protections for the boundaries between the IT environment and the AI system.
Identify and protect all proprietary data sources the organization will use in AI model training or fine-tuning. Examine the list of data sources, when available, for models trained by others.
Harden deployment environment configurations.
Apply existing security best practices to the deployment environment. This includes sandboxing the environment running ML models within hardened containers or virtual machines (VMs), monitoring the network, configuring firewalls with allow lists, and other best practices for cloud deployments.
Review hardware vendor guidance and notifications (e.g., for GPUs, CPUs, memory) and apply software patches and updates to minimize the risk of exploitation of vulnerabilities, preferably via the Common Security Advisory Framework (CSAF).
Secure sensitive AI information (e.g., AI model weights, outputs, and logs) by encrypting the data at rest, and store encryption keys in a hardware security module (HSM) for later on-demand decryption.
Implement strong authentication mechanisms, access controls, and secure communication protocols, such as by using the latest version of Transport Layer Security (TLS) to encrypt data in transit.
Ensure the use of phishing-resistant multifactor authentication (MFA) for access to information and services. [2] Monitor for and respond to fraudulent authentication attempts.
Protect deployment networks from threats.
Use well-tested, high-performing cybersecurity solutions to identify attempts to gain unauthorized access efficiently and enhance the speed and accuracy of incident assessments.
Integrate an incident detection system to help prioritize incidents. Also integrate a means to immediately block access by users suspected of being malicious or to disconnect all inbound connections to the AI models and systems in case of a major incident when a quick response is warranted.
Continuously protect the AI system.
Models are software and, like all other software, may have vulnerabilities, other weaknesses, or malicious code or properties. Continuously monitor the AI system.
Validate the AI system before and during use.
Store all forms of code (e.g., source code, executable code, infrastructure as code) and artifacts (e.g., models, parameters, configurations, data, tests) in a version control system with proper access controls to ensure only validated code is used and any changes are tracked.
Secure exposed APIs.
If the AI system exposes application programming interfaces (APIs), secure them by implementing authentication and authorization mechanisms for API access. Use secure protocols, such as HTTPS with encryption and authentication.
Enforce strict access controls.
Prevent unauthorized access or tampering with the AI model. Apply role-based access controls (RBAC), or preferably attribute-based access controls (ABAC) where feasible, to limit access to authorized personnel only. Distinguish between users and administrators. Require MFA and privileged access workstations (PAWs) for administrative access.
Ensure user awareness and training.
Educate users, administrators, and developers about security best practices, such as strong password management, phishing prevention, and secure data handling. Promote a security-aware culture to minimize the risk of human error. If possible, use a credential management system to limit, manage, and monitor credential use to minimize risks further.
Conduct audits and penetration testing.
Engage external security experts to conduct audits and penetration testing on ready to-deploy AI systems.
Implement robust logging and monitoring.
Establish alert systems to notify administrators of potential oracle-style adversarial compromise attempts, security breaches, or anomalies. Timely detection and response to cyber incidents are critical in safeguarding AI systems.
Measure
Measure Subcategories
MEASURE 1: Appropriate methods and metrics are identified and applied.
MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not– or cannot– be measured are properly documented.
MEASURE 1.2: Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and potential impacts on affected communities.
MEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.
MEASURE 2: AI systems are evaluated for trustworthy characteristics.
MEASURE 2.1: Test sets, metrics, and details about the tools used during test, evaluation, verification, and validation (TEVV) are documented.
MEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.
MEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.
MEASURE 2.4: The functionality and behavior of the AI system and its components– as identified in the MAP function– are monitored when in production.
MEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.
MEASURE 2.6: The AI system is evaluated regularly for safety risks– as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
MEASURE 2.7: AI system security and resilience– as identified in the MAP function– are evaluated and documented.
MEASURE 2.8: Risks associated with transparency and accountability– as identified in the MAP function– are examined and documented.
MEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context as identified in the MAP function– to inform responsible use and governance.
MEASURE 2.10: Privacy risk of the AI system– as identified in the MAP function– is examined and documented.
MEASURE 2.11: Fairness and bias– as identified in the MAP function– are evaluated and results are documented.
MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities– as identified in the MAP function– are assessed and documented.
MEASURE 2.13: Effectiveness of the employed test, evaluation, verification, and validation (TEVV) metrics and processes in the MEASURE function are evaluated and documented.
MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.
MEASURE 3.1: Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.
MEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.
MEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.
MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.
MEASURE 4.1: Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.
MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.
MEASURE 4.3: Measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context relevant risks and trustworthiness characteristics are identified and documented.