Splunk to Sentinel Migration – Part V – Reports and Dashboards

Now that we have the basics in place, it's time to approach some of the harder topics you will run into during a migration. The first of these falls into the category of reports and dashboards.

This is where things become very much like migrating a non-SharePoint CMS to SharePoint (something I have done many times with a high-performance tool I created called PowerStream).

Splunk dashboards are very much like web part pages in SharePoint and ASP.NET: they are broken into various sections and stored in an XML/HTML file. On the other side of the equation, you have workbooks in Azure Sentinel / Log Analytics / Azure Monitor, which are also similar to web part pages with sections.

Sections tend to have variables and queries tied to them, plus some kind of context that tells the section how to render. This pattern exists on both sides, so a mapping has to be done between source and target.

Splunk Sections

In Splunk, a dashboard is made up of a handful of section and panel types, such as HTML/text panels, charts, tables, and inputs.

When searching for these sections, you will look at each of the XML files that are exported for each dashboard. Inside these XML files you will find an s:key element with the name value of “eai:data”. This contains the actual form that will be displayed as part of the dashboard to users. Inside this form are rows and panels.

Inside these panels are the sections that matter most; things like HTML text and the search queries will be lurking there.

The search query is embedded in the panel along with the filters and the options that need to be passed to it. These items need to be extracted and then converted to their Azure equivalents.
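If you are scripting this extraction, a rough PowerShell sketch of the step might look like the following. The export path is hypothetical, and the XPath is intentionally namespace-agnostic since the s: prefix binding can differ between exports:

# Sketch: pull the simple XML out of an exported dashboard and list its panels
$atom = [xml](Get-Content ".\exports\dashboards\my_dashboard.xml" -Raw)   # hypothetical export path

# Find the s:key element named "eai:data" (namespace-agnostic lookup)
$eaiData = $atom.SelectNodes("//*[local-name()='key'][@name='eai:data']") | Select-Object -First 1

# The value of eai:data is itself the dashboard/form XML
$form = [xml]$eaiData.InnerText

# Walk the rows and panels, pulling out HTML blocks and search queries
foreach ($panel in $form.SelectNodes("//*[local-name()='panel']")) {
    $html  = $panel.SelectNodes(".//*[local-name()='html']")  | ForEach-Object { $_.InnerXml }
    $query = $panel.SelectNodes(".//*[local-name()='query']") | ForEach-Object { $_.InnerText }
    [pscustomobject]@{ Html = $html -join "`n"; Query = $query -join "`n" }
}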

Sentinel Sections

In order to do the mapping to Azure, you will need to understand how to create the workbooks and then upload them properly so they display in Azure Sentinel. Azure workbooks are defined in JSON and have up to 11 different types of “items”. Here are some examples:

  1. Type 1: KqlItem – HTML/Text
  2. Type 3: KqlItem – Query
  3. Type 9: KqlParameterItem – Parameter
  4. Type 10: MetricsItem – Metric from Log Analytics
  5. Type 11: LinkItem – A URL/link

Again, as part of the migration, you must take the extracted items from the Splunk dashboard and convert them to the workbook version. Once you have this figured out, you then need to upload the workbook.
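To give a feel for the target format, here is a minimal sketch of a serialized workbook body built in PowerShell, with one HTML/text item (type 1) and one KQL query item (type 3). The schema versions and property names are taken from workbooks I exported by hand rather than from a documented contract, so treat them as a starting point and diff against a workbook you export yourself:

# Minimal sketch of a workbook body with an HTML/text item and a KQL query item
$workbook = @{
    version = "Notebook/1.0"
    items   = @(
        @{
            type    = 1                                   # KqlItem - HTML/Text
            content = @{ json = "## Migrated from Splunk dashboard" }
            name    = "text - 0"
        },
        @{
            type    = 3                                   # KqlItem - Query
            content = @{
                version      = "KqlItem/1.0"
                query        = "SecurityEvent | summarize count() by Computer"   # converted KQL goes here
                size         = 0
                queryType    = 0
                resourceType = "microsoft.operationalinsights/workspaces"
            }
            name    = "query - 1"
        }
    )
}
$workbookJson = $workbook | ConvertTo-Json -Depth 10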

Uploading Workbooks

Once you have a converted workbook, it's time to upload it to Azure Sentinel. Because Sentinel simply builds on top of a Log Analytics workspace, most things (but definitely not all) are stored in that workspace. To upload, you will make a POST to the following REST endpoint:

$url = "/subscriptions/$subscriptionId/resourcegroups/$resourceGroupName/providers/microsoft.operationalinsights/workspaces/$workspaceName"

The POST body has some very specific items that must be set in order for the workbook to show up in Sentinel. As it took me some time to determine these parameters, I'll leave it to you and your favorite tool, Fiddler, to get to the same point 🙂
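Without giving away the body, the call itself is a plain ARM request, so a skeleton (assuming the Az PowerShell module for the token) looks roughly like the sketch below. The serialized workbook JSON from earlier goes inside the payload, and the Sentinel-specific properties are exactly the ones you capture with Fiddler as described above:

# Skeleton of the upload call - the required body properties are intentionally omitted; capture them with Fiddler
$token   = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }

$body = @{
    # Sentinel/workbook-specific properties captured with Fiddler go here,
    # including the serialized workbook JSON built in the previous section
} | ConvertTo-Json -Depth 20

# Append the remaining path segments and ?api-version=... that you captured with Fiddler
$uri = "https://management.azure.com$url"
Invoke-RestMethod -Method Post -Uri $uri -Headers $headers -Body $body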

In the next post, we will explore how permissions work (well, they kind of work; as I mentioned in earlier posts, it's the weakest part of Sentinel at the moment).

Splunk to Sentinel Blog Series

  1. Part I – SPL to KQL, Exporting Objects
  2. Part II – Alerts and Alert Actions
  3. Part III – Lookups, Source Types and Indexes
  4. Part IV – Searches
  5. Part V – Dashboards and Reports
  6. Part VI – Users and Permissions
  7. Part VII – Apps

References:

  1. My Twitter
  2. My LinkedIn
  3. My Email

Splunk to Sentinel Migration – Part IV – Searches

In Splunk, searches are saved queries. These queries can be used in any number of places, or simply run when needed. Queries are built in the Search Processing Language (SPL).

In Part I of this series we discussed exporting the objects in the Splunk instance. Search queries, as you will discover, are at the center of everything.

We have already discussed Alerts and how they use queries, but as we will see in the next post in this series, other things like Dashboards and Reports also use them.

Searches

As we discussed in Part I of this series, we have to export the objects in Splunk as the first step. One of the things you will realize is that “Searches” is an overloaded term. In order to get dashboards and reports out, you have to hit the “searches” REST endpoint.

$url = "$apiUrl/servicesNS/-/search/saved/searches..."

When these objects are exported, there are a few properties you will need to interrogate to determine what the search was created for (a quick check is sketched after the list):

  1. Report = {entry}.content.is_scheduled
  2. Dashboard = {entry}.content.isdashboard
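Here is a rough sketch of that check in PowerShell, using the searches endpoint above with Splunk's JSON output mode. The host, session key, and authentication style are assumptions; adjust them for your instance (and add -SkipCertificateCheck on PowerShell 7 if it uses a self-signed certificate):

# Sketch: pull saved searches and classify them as report, dashboard-backed, or plain search
$apiUrl  = "https://your-splunk-host:8089"                       # assumed management port
$headers = @{ Authorization = "Splunk $sessionKey" }             # or use -Credential with basic auth

$url      = "$apiUrl/servicesNS/-/search/saved/searches?output_mode=json&count=0"
$response = Invoke-RestMethod -Method Get -Uri $url -Headers $headers

foreach ($entry in $response.entry) {
    $kind = "search"
    if ($entry.content.is_scheduled -eq 1) { $kind = "report" }
    if ($entry.content.isdashboard)        { $kind = "dashboard" }
    [pscustomobject]@{ Name = $entry.name; Kind = $kind; Query = $entry.content.search }
}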

We will explore these in the next posts, as they have other higher-level components that must be considered. For the purposes of this post, we simply want to discuss the possible places we can “save” these queries in Azure Sentinel:

  1. Sentinel Hunting Queries
  2. Sentinel Analytics (Scheduled Rule)
  3. Log Analytics (Saved Query)

Each of the items above has a different endpoint to create it:

#Hunting Query (Sentinel)
$url = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/$workspaceName/savedSearches/$($id)?api-version=2017-04-26-preview"
#Scheduled Rule (Sentinel)
$url = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/$workspaceName/providers/Microsoft.SecurityInsights/alertRules/$($ruleId)?api-version=2021-03-01-preview";
#Saved Query (Log Analytics)
$url = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/microsoft.insights/scheduledqueryrules/$($name)?api-version=2018-04-16";

Each of these endpoints requires a request with very specific properties to tell Sentinel or Log Analytics what to create and what it is for. For example, Azure Sentinel Hunting queries can be targeted at MITRE framework tactics, so you'd include those; however, Splunk has no concept of MITRE that you can pull from, so you'd have to add that from a reference file.
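As an illustration, pushing a converted query in as a Sentinel hunting query against the savedSearches endpoint above looks roughly like the sketch below. The body shape (the Hunting Queries category and the tactics tag) reflects what worked for me rather than a documented contract, so verify it against the API version you target; note that ARM generally expects a PUT for resource creation:

# Sketch: create a hunting query from a converted KQL search (tactics come from your own reference file)
$token   = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }

$id  = [guid]::NewGuid().ToString()
$url = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/$workspaceName/savedSearches/$($id)?api-version=2017-04-26-preview"

$body = @{
    properties = @{
        Category    = "Hunting Queries"
        DisplayName = "Migrated - Suspicious sign-in burst"              # illustrative name
        Query       = $convertedKql                                      # output of the SPL-to-KQL conversion
        Tags        = @(@{ Name = "tactics"; Value = "InitialAccess" })  # pulled from your MITRE reference file
    }
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Method Put -Uri $url -Headers $headers -Body $body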

As was discussed in the first post in this blog series, the above items are the easy part; it's the tokenization of the Splunk query and the conversion to KQL that is the hard part.

Splunk to Sentinel Blog Series

  1. Part I – SPL to KQL, Exporting Objects
  2. Part II – Alerts and Alert Actions
  3. Part III – Lookups, Source Types and Indexes
  4. Part IV – Searches
  5. Part V – Dashboards and Reports
  6. Part VI – Users and Permissions
  7. Part VII – Apps

References:

  1. My Twitter
  2. My LinkedIn
  3. My Email

Splunk to Sentinel Migration – Part III – Lookups, Source Types and Indexes

In the previous two articles in this series, we explored exporting objects, alerts, and alert actions. We also looked at the concept of converting SPL to KQL. As part of that conversion process, there are some special things that need to be present in order for the queries to run properly on either the Splunk or the Azure Sentinel side.

These items include lookups, source types and indexes.

Lookups (Splunk)

Lookups are an important part of any SIEM/SOAR system. Typically these make up special lists of users, or more commonly Indicators of Compromise such as IP Addresses, Domain Names, etc.

When it comes to Splunk, you can do lookups in a number of different ways. This includes a file upload, which can be a plaintext CSV file, a gzipped CSV file, or a KMZ/KML file.
The maximum file size that can be uploaded through the browser is 500MB.

You can also do script- or command-based lookups.

So how do lookups translate into Azure Sentinel? There are several ways:

  1. Log Analytics Table
  2. Watchlists
  3. Azure Storage

Picking between the three options depends on the size and use of the actual lookup table. Note, as we will see in a later post, Sentinel doesn't have a very robust permissions scheme (yet), so that is something to consider as well. In my opinion, this is probably the largest factor to weigh when considering moving to Azure Sentinel from another SIEM.

Log Analytics Table

You can use the Data Collector API to bring in Lookups into Azure Sentinel / Log Analytics.

https://docs.microsoft.com/en-us/azure/azure-monitor/logs/data-collector-api
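A condensed sketch of that call is below. The HMAC signing follows the pattern described in the Data Collector API documentation linked above; the workspace ID, shared key, export path, and table name are placeholders:

# Sketch: push an exported lookup CSV into Log Analytics as a custom log via the Data Collector API
$workspaceId = "<log-analytics-workspace-id>"
$sharedKey   = "<workspace-primary-key>"
$logType     = "SplunkLookup_IOC"                                  # becomes SplunkLookup_IOC_CL in the workspace

$rows = Import-Csv ".\exports\lookups\ioc_list.csv"                # hypothetical export path
$json = ConvertTo-Json -InputObject @($rows)
$body = [Text.Encoding]::UTF8.GetBytes($json)

$date         = [DateTime]::UtcNow.ToString("r")
$stringToHash = "POST`n$($body.Length)`napplication/json`nx-ms-date:$date`n/api/logs"
$hmac         = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key     = [Convert]::FromBase64String($sharedKey)
$signature    = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToHash)))

$headers = @{
    Authorization = "SharedKey ${workspaceId}:${signature}"
    "Log-Type"    = $logType
    "x-ms-date"   = $date
}
Invoke-RestMethod -Method Post -Uri "https://$workspaceId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" `
    -ContentType "application/json" -Headers $headers -Body $body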

Once imported, the lookups show up as custom log tables in the workspace.

Watchlists

Watchlists are an almost identical mapping of a Splunk lookup. As an example, I programmatically imported the CSV file exported from Splunk into a watchlist in Azure Sentinel.
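The programmatic import goes through the Sentinel watchlists ARM endpoint. A sketch follows; the api-version and exact property set are what I had working at the time, so double-check them against the current API:

# Sketch: create a Sentinel watchlist from a lookup CSV exported out of Splunk
$token   = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }

$csvRaw = Get-Content ".\exports\lookups\ioc_list.csv" -Raw                 # hypothetical export path
$alias  = "splunk-ioc-list"
$url    = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.OperationalInsights/workspaces/$workspaceName/providers/Microsoft.SecurityInsights/watchlists/$($alias)?api-version=2021-03-01-preview"

$body = @{
    properties = @{
        displayName    = "Splunk IOC lookup"
        provider       = "Custom"
        source         = "ioc_list.csv"
        itemsSearchKey = "ip"                   # the column you will join/search on
        contentType    = "text/csv"
        rawContent     = $csvRaw
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put -Uri $url -Headers $headers -Body $body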

Azure Storage

Kusto Query Language (KQL) allows for the runtime creation of a table based on an external file. In this case, we can export the CSV files from Splunk to Azure Storage and then use KQL to bring them into queries:

https://docs.microsoft.com/en-us/azure/data-explorer/kusto/query/externaldata-operator?pivots=azuredataexplorer
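For example, a query along the lines of the sketch below (wrapped here in a call to the Log Analytics query API) treats a CSV sitting in a storage account as a table at query time. The column names, storage URL, and SAS token are placeholders:

# Sketch: reference an exported Splunk lookup directly from Azure Storage with externaldata
$kql = @'
externaldata (ip:string, category:string, addedOn:datetime)
[ @"https://<storageaccount>.blob.core.windows.net/lookups/ioc_list.csv?<sas-token>" ]
with (format="csv", ignoreFirstRecord=true)
| where category == "malware"
'@

$token   = (Get-AzAccessToken -ResourceUrl "https://api.loganalytics.io/").Token
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }
$body    = @{ query = $kql } | ConvertTo-Json

Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -Uri "https://api.loganalytics.io/v1/workspaces/$workspaceId/query"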

Source Types

In addition to lookup data, you need log data to run your queries against. This log data comes in all kinds of shapes and formats. There are about 150 out-of-the-box source types in Splunk.

Source types define how data is formatted so that the indexers can index it. It is very much an ingest-time concern.

Indexes

You can think of indexes as tables in a database: they have a name, rows, and typed columns. Splunk stores log data in an “index”, and this data is ingested using the source types mentioned above.

In order to determine whether a query is going to run, you have to gain access to all the indexes and then export their schemas. This all feeds into the tokenizer/compiler, which saves the graph of the query for comparison against the “tables” in Azure Sentinel.

As you can imagine, it is not going to be a one to one mapping on the other side. Some things are pretty common, others, not so much.
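Enumerating the indexes is just another REST call; exporting each index's field schema can then be layered on top with a metadata-style search per index. A minimal sketch of the first half, with the same host and auth assumptions as before:

# Sketch: enumerate Splunk indexes so their schemas can be exported and compared to Sentinel tables
$url      = "$apiUrl/services/data/indexes?output_mode=json&count=0"
$response = Invoke-RestMethod -Method Get -Uri $url -Headers @{ Authorization = "Splunk $sessionKey" }

$response.entry | ForEach-Object {
    [pscustomobject]@{ Index = $_.name; EventCount = $_.content.totalEventCount }
}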

As we will see in Part VII, Apps can import new source types, which in turn create new indexes. Apps hold some similarity to “Connectors” in Azure Sentinel.

Ultimately, the goal is to ensure that we have the same Source Type -> Data Source mappings so that the queries are converted properly. In many cases, you'll need to create a mapping file that ties the properties on the Splunk side of the equation to the properties on the Sentinel side.

Splunk to Sentinel Blog Series

  1. Part I – SPL to KQL, Exporting Objects
  2. Part II – Alerts and Alert Actions
  3. Part III – Lookups, Source Types and Indexes
  4. Part IV – Searches
  5. Part V – Dashboards and Reports
  6. Part VI – Users and Permissions
  7. Part VII – Apps

References:

  1. My Twitter
  2. My LinkedIn
  3. My Email

Splunk to Sentinel Migration – Part II – Alerts and Alert Actions

Continuing where we left off in Part I, I will explore how to convert Splunk Alerts and Alert Actions into the Sentinel equivalents.

Alerts

Alerts have the same core problem as everything else in Splunk: they are built on SPL queries and must be converted before anything meaningful can be accomplished. This means we have to export the alert object, grab the query, convert it, check that it actually executes in Azure Sentinel, and then perform the migration.

Alert Actions

Once you have the alert query converted, the alert very likely has an action tied to it. Splunk provides several out-of-the-box actions (email, log event, output to lookup, SMS, webhook, and script, plus custom app actions).

Migrating these can be a daunting task if you are doing it manually. Not something I’d suggest trying to do in a large Splunk instance. The first step is to export the alert actions into their JSON form:

JSON Export

And then interrogate the various properties for the Alert Action.

Now that you know how to convert the Alert Actions, you need to find all the alerts that are using the actions so you can create them on the Azure side.

Alerts

Alerts are queries that, when the results meet a set of trigger conditions, execute one or more Alert Actions. The first step to migrating them is to export them.

Reviewing the exported JSON object, you will find several interesting properties.

The “QualifiedSearch” and “Search” properties contain the query that the alert is based on.

The “Actions” property tells you which actions have been activated for that alert query.

Once you have the list of activated actions, you have to figure out what information you will need to create the corresponding item in Azure Sentinel or Log Analytics.

In the example above, the logevent action has been enabled for the alert. Reviewing the remainder of the properties, you will find a serialization pattern of Action.{ActionName}.{PropertyName}.{SubPropertyName0}….
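In PowerShell, walking that serialization out of the exported JSON looks roughly like the sketch below. The export path is hypothetical; the dotted keys (action.email.to and friends) follow the Action.{ActionName}.{PropertyName} pattern just described:

# Sketch: list the enabled actions on an exported alert and dump each action's settings
$alert   = Get-Content ".\exports\alerts\brute_force_alert.json" -Raw | ConvertFrom-Json
$entry   = $alert.entry | Select-Object -First 1
$actions = $entry.content.actions -split ",\s*" | Where-Object { $_ }     # e.g. "email, logevent"

foreach ($action in $actions) {
    Write-Host "Action enabled: $action"
    $entry.content.PSObject.Properties |
        Where-Object { $_.Name -like "action.$action.*" } |
        ForEach-Object { Write-Host ("  {0} = {1}" -f $_.Name, $_.Value) }
}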

Once you get the hang of it, you can then take the exported data and start to build out the migration tasks that need to occur for each Alert Action Type. Here is a basic mapping based on the items above:

  1. Email -> Action Group
  2. LogEvent -> Log to Log Analytics
  3. Lookup -> Insert into Lookup (more to come on lookup migration later in this series)
  4. SMS -> Action Group
  5. Webhook -> Action Group
  6. Script -> Runbook
  7. Custom / App -> Logic App / Runbook

Everything above can be accomplished using Azure Management REST calls (I know, as I have done the mappings successfully). Some of these have corresponding Azure CLI or PowerShell commands; however, some do not, so the best approach is to implement everything using simple REST calls.
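As one example of those REST calls, creating the Action Group that an email or SMS action maps to looks roughly like this sketch. The api-version and receiver properties are from my notes rather than a guarantee, so confirm them against the current Microsoft.Insights API:

# Sketch: create an Action Group to stand in for a Splunk email alert action
$token   = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }

$url = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Insights/actionGroups/migrated-email-ag?api-version=2021-09-01"

$body = @{
    location   = "Global"
    properties = @{
        groupShortName = "migrated"
        enabled        = $true
        emailReceivers = @(@{
            name                 = "SOC mailbox"
            emailAddress         = "soc@contoso.com"          # placeholder address
            useCommonAlertSchema = $true
        })
    }
} | ConvertTo-Json -Depth 10

Invoke-RestMethod -Method Put -Uri $url -Headers $headers -Body $body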

Splunk to Sentinel Blog Series

  1. Part I – SPL to KQL, Exporting Objects
  2. Part II – Alerts and Alert Actions
  3. Part III – Lookups, Source Types and Indexes
  4. Part IV – Searches
  5. Part V – Dashboards and Reports
  6. Part VI – Users and Permissions
  7. Part VII – Apps

References:

  1. My Twitter
  2. My LinkedIn
  3. My Email

Splunk to Sentinel Migration – Part I

I have been a busy bee the last few years. Solliance has kept me focused on a myriad of projects and clients using various technologies such as Machine Learning, AI, Security, IoT and much more.

Recently, I have been focused on building internal training for a whole new breed of security professionals inside Microsoft called Security Cloud Solution Architects (CSAs). As part of their training, they must learn about Azure Defender, Microsoft Defender, Azure Sentinel etc.

One of the topics I added to the training focuses on moving customers to Azure Sentinel, which means you would need to retire your old SIEM/SOAR system in favor of the new one. Some folks would say that this is an easy transition; however, if you have been tasked with a migration project of this kind, you know better.

At a high level, most SIEMs share the same myriad of concepts, such as alerts, alert actions, search queries (along with the tool's SIEM language), dashboards, etc. Each of these presents a different set of challenges when migrating to another platform such as Azure Sentinel. In the following blog series, I will run through how I have solved some of these challenges and how I have automated the process.

First, let’s talk about the artifacts that exist in Splunk:

  1. Alerts
  2. Alert Actions
  3. Source Types
  4. Searches
  5. Dashboards
  6. Reports
  7. Namespaces
  8. Permissions

Splunk Query Language to Kusto Query Language

Splunk customers put a lot of work into adding log sources based on source types and then building queries from the data that is ingested and indexed from them. Once built, these queries tend to be used in multiple places such as Alerts, Dashboards, and Reports. Splunk uses the Search Processing Language (SPL), which is vastly different from Azure Sentinel's Kusto Query Language (KQL). It is not easy to map 1:1 from Splunk to Sentinel, as it requires you to understand the SPL syntax and the corresponding KQL syntax. For example, take the following query in SPL:

| rest splunk_server=$splunk_server$ /services/server/info
| fields splunk_server version
| join type=outer splunk_server [rest splunk_server=$splunk_server$ /services/server/status/installed-file-integrity
    | fields splunk_server check_ready check_failures.fail]
| eval check_status = case(isnull('check_failures.fail') AND isnotnull(check_ready), "enabled", 'check_failures.fail' == "check_disabled", "disabled", isnull(check_ready), "feature unavailable")
| eval check_ready = if(check_status == "enabled", check_ready, "N/A")
| fields version check_status check_ready
| rename version AS "Splunk version" check_status AS "Check status" check_ready AS "Results ready?"

Would become something like this in KQL:

let table0 = externaldata (data:string) [@"/services/server/status/installed-file-integrity"]| project splunk_server,check_ready,check_failures.fail;
externaldata (data:string) [@"/services/server/info"]| project splunk_server,version
| join kind = outer (table0) on splunk_server
| extend check_status  = case (isnull(check_failures.fail) and isnotnull(check_ready),"enabled",'check_failures.fail' == "check_disabled","disabled",isnull(check_ready),"feature unavailable")
| extend check_ready  = iff (check_status == "enabled",check_ready,"N/A")
| project-rename version = "Splunk version",check_status = "Check status",check_ready = "Results ready?"

If you thought manually transforming queries was a challenge, think through the complexities of automating the process (which is how the above was produced). It requires writing a tokenizer, then creating a two-pass compiler to build out the language graph. From there, I have to transform the graph of SPL into a graph of KQL, then emit the KQL query text from the final converted graph. Can anyone say “Dragon Book”?!? I knew you could!

There are some quirks to all this, however: I have to know what is a string, a variable, a table, and/or a column name, which requires me to interrogate the Splunk indexes for that information. Once I have all that metadata in hand (err, memory), I can more easily determine what each token in the query is when building the graph and the subsequent KQL queries.
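To make the idea concrete, the very first pass of such a converter is little more than a command map that the tokenizer feeds; all the hard parts just described (the graph, the metadata, the type resolution) come after it. A toy illustration, not the actual tool:

# Toy illustration: a first-pass map of SPL commands to their closest KQL operators
$splToKql = @{
    "eval"   = "extend"
    "fields" = "project"
    "rename" = "project-rename"
    "stats"  = "summarize"
    "sort"   = "order by"
    "where"  = "where"
    "head"   = "take"
}

# Naive per-pipe translation of only the command word - a real converter needs a tokenizer and graph
("index=main | stats count by host | sort -count | head 10" -split "\|") | ForEach-Object {
    $segment = $_.Trim()
    $command = ($segment -split "\s+")[0]
    if ($splToKql.ContainsKey($command)) { $segment = $segment -replace "^$command", $splToKql[$command] }
    $segment
}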

Some may find this task daunting; if you do a search, you may find a tool called https://uncoder.io. Although it shows great promise when you load the site, it doesn't really work (it is broken most of the time), and thus I had to build my own, as you can see above.

Apps and Source Types – Oh My.

But…indexes are based on the source types that have been added in Splunk. Splunk provides several out-of-the-box source types, but new ones can be added by Apps. Think things like F5, CheckPoint, etc. These source types provide log data that is then indexed and available for querying. Apps have to be mapped from the Splunk-based ones to the Azure Sentinel ones, ouch. The indexes created from Splunk Apps have to be transformed into the corresponding table and column structure on the Azure Sentinel side.

Once you solve these problems, you can easily automate the query transformations from SPL to KQL.

Exporting Artifacts

In order to automate the entire process, you have to export all the objects in the Splunk instance. This can be done using the REST APIs provided by Splunk. Splunk has a very object-oriented way of storing these objects, which are then serialized to JSON in the REST responses. This provides a very convenient way of dumping the artifacts that would then be used in a migration tool (such as the one I have written).
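A stripped-down version of that export loop is sketched below. The endpoint list covers the main artifact types from the list above, and the host, authentication, and output folder are assumptions:

# Sketch: dump the main Splunk artifact types to disk as JSON for the migration tool to consume
$apiUrl    = "https://your-splunk-host:8089"
$headers   = @{ Authorization = "Splunk $sessionKey" }
$endpoints = @{
    "saved-searches" = "/servicesNS/-/-/saved/searches"          # searches, reports, alerts
    "dashboards"     = "/servicesNS/-/-/data/ui/views"           # dashboard simple XML lives in eai:data
    "lookups"        = "/servicesNS/-/-/data/lookup-table-files"
    "apps"           = "/services/apps/local"
}

New-Item -ItemType Directory -Path ".\exports" -Force | Out-Null
foreach ($name in $endpoints.Keys) {
    $response = Invoke-RestMethod -Method Get -Headers $headers `
        -Uri "$apiUrl$($endpoints[$name])?output_mode=json&count=0"
    $response | ConvertTo-Json -Depth 20 | Set-Content ".\exports\$name.json"
}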

Next Post – Alerts and Alert Actions

Join me later this month when I write about how to convert Splunk Actions and Alert Actions to Azure Sentinel via automated means!

Splunk to Sentinel Blog Series

  1. Part I – SPL to KQL, Exporting Objects
  2. Part II – Alerts and Alert Actions
  3. Part III – Lookups, Source Types and Indexes
  4. Part IV – Searches
  5. Part V – Dashboards and Reports
  6. Part VI – Users and Permissions
  7. Part VII – Apps

References:

  1. Search Processing Language (SPL)
  2. Kusto Query Language (KQL)
  3. Splunk REST API
  4. Splunk Download
  5. Azure Sentinel
  6. My Twitter
  7. My LinkedIn
  8. My Email

GDPR Platform (High Level)

So what will happen when I release the high level core platform?

All hell.

It has the implementations of all the Microsoft platforms, every CRM platform (generic, real estate, etc.), and all your lovely marketing platforms that you so love dearly (and that you don't realize do shit that is so not #privacy-oriented).

The beans will be spilled if/when the higher level platform is released. California and EU lawyers will rejoice.

GDPR.SDK

The GDPR.Common GitHub repository is the basis for your privacy journey. Examples of how to use it are provided in the SDK. It gives you the ability to do the push/pull models through encrypted messages to your Event Hubs.

The Controller is the abstract web interface that you would implement to accept requests from your core #privacy platform over PGP-secured messages.

You can find the source here : https://github.com/givenscj/GDPR.SDK

GDPR.Common – How to use it.

Implementing Privacy in your applications is easy. You don’t need to spend a ton of money on outside platforms to do it. I have made the GDPR core public on GitHub. This post will help you understand how to use it (a little bit anyway).

I have not yet released the main core platform to the public, but I have released the lower layer, and that's what this post is about.

In order to implement GDPR, you must implement the interfaces I have defined, and when I say implement, I mean implement every interface for every application you have. This will take 2-3 days for an expert (me), 7-14 days for a great coder, and 4 weeks for a good coder, FOR EACH APPLICATION YOU HAVE.

The core is based on an application implementing certain interfaces. An abstract class *GDPR.Applications.GDPRApplicationBase* gives you the starting point for this. It implements GDPRApplicationCore and IGDPRDataSubjectActions. IGDPRDataSubjectActions (based on my Data Subject Action pattern definition) is the minimal set of actions you must implement in order for your application to be FULLY GDPR/privacy compliant.

It has the following methods:

void ProcessRequest(BaseApplicationMessage message, EncryptionContext ctx);
void ValidateSubject(GDPRSubject subject);
void AnonymizeRecord(Record r);
void AnonymizeSubject(GDPRSubject subject);
List GetAllRecords(GDPRSubject subject);
List SubjectSearch(GDPRSubject search);
void SubjectNotify(GDPRSubject subject);
bool SubjectCreateIn(GDPRSubject subject);
bool SubjectCreateOut(GDPRSubject subject);
RecordCollection SubjectDeleteIn(GDPRSubject subject);
bool SubjectDeleteOut(GDPRSubject subject);
bool SubjectUpdateIn(GDPRSubject subject);
bool SubjectUpdateOut(GDPRSubject subject);
bool SubjectHoldIn(GDPRSubject subject);
bool SubjectHoldOut(GDPRSubject subject);
bool RecordCreateIn(Record r);
bool RecordCreateOut(Record r);
bool RecordDeleteIn(Record r);
bool RecordDeleteOut(Record r);
void RecordHold(Record r);
bool RecordUpdateIn(Record old, Record update);
bool RecordUpdateOut(Record r);
List GetAllSubjects(int skip, int count, DateTime? changeDate);
List GetChanges(DateTime changeDate);
ExportInfo ExportData(List records);
ExportInfo ExportData(string applicationSubjectId);
ExportInfo ExportData(string applicationSubjectId, GDPRSubject s);
void Discover();
bool Consent(string applicationSubjectId);
bool Consent(string applicationSubjectId, List types);
bool Consent(GDPRSubject subject);
bool Consent(GDPRSubject subject, List types);
bool Unconsent(string applicationSubjectId);
bool Unconsent(string applicationSubjectId, List types);
bool Unconsent(GDPRSubject subject);
bool Unconsent(GDPRSubject subject, List types);
bool GetConsentTypes();
bool PhoneNormalization();
bool GetSubjectConsents(GDPRSubject subject);

You MUST implement all of those methods for EVERY APPLICATION in order to be privacy compliant.

Many CRMs do NOT allow you to do all of these. Many Microsoft platforms DO NOT allow you to do all of these through “public” APIs; only through hidden ones can you get to 100% compliance.

There is NOT A SINGLE COMPANY ON THE PLANET that has implemented this across their entire organization. Every lawyer on the planet can sue and win if your application developers did not follow the above pattern.

Automating GDPR Request for Microsoft Dynamics

And just like Office 365, On-Premises SharePoint and any other application that has a Privacy Central application stub sitting in front of it, you can also automate your Microsoft Dynamics privacy requests.

Check out the video here:

Privacy Central – Privacy Score – What is it?

Not all applications or companies will take advantage of the new Privacy Platform. The reason being, they may be large social media companies such as Facebook (et al.), Google (et al.), Apple, Microsoft, etc.

Why? Because they implement something on their own. It's their best attempt at giving you something to appease the regulators in Europe, California, or wherever else you may have regulations.

OK, good to know they won't use the platform, so what's the point?

The SaaS platform, www.privacycentral.com, allows for what are called “Redirect Applications”. These are applications that redirect to the company's data request form or link. These links are usually hidden or only visible to a logged-in user. These are the applications we are targeting for Privacy Scores. That said, there are some instances, such as Salesforce, Dynamics, etc., that will also get a Privacy Score.

What is the Privacy Score? It is a proprietary set of evaluation points (over 75) that evaluate a company, or a specific company's application, to tell you how safe it is. The specifics of the score are a closely held secret, but a couple of things I can share are that some of the points are evaluated against the privacy policy and the terms of use.

The goal is to add 2-3 Privacy Scores every week, post them to the PrivacyInc Twitter handle, and publish the scores to the SaaS platform. Eventually, all major applications will be scored and you will have a full view of which applications are safe and which ones are incredibly lacking.

You can get a view of the current set of scores here. Notice Instagram is the worst application you could possibly utilize. And to think of all the children that are using that thing…not good.