What is the Splunk Essentials for Predictive Maintenance add-on?
The Splunk Essentials for Predictive Maintenance add-on is an interactive tutorial created for engineers and maintenance and reliability leaders. It shows how these teams can improve their preventive maintenance programs by applying machine learning techniques to process data.
The Splunk Essentials for Predictive Maintenance guide consists of four key stages. The first stage, data collection and ingestion, uses Splunk software to collect, store, and structure asset metrics. The second stage, data exploration, covers pre-processing and exploring the data to understand the dataset's characteristics. The third stage, analysis, teaches three options for performing predictive maintenance analysis. The final stage, operationalization, shows how to apply the model in a broader implementation, creating reports and alerts that drive operational actions.
Add-on – Prerequisites
The add-on depends on several other apps: the Splunk Machine Learning Toolkit, Python for Scientific Computing, and 3D Scatterplot – Custom Visualization. Follow our Splunk Machine Learning Preparation Guide to install them.
Installing the Splunk Essentials for Predictive Maintenance add-on
You can install it by following our Install Splunk Add-ons Guide. We recommend installing the Splunk Essentials for Predictive Maintenance app on a test instance separate from your production infrastructure, since the app injects a sample jet dataset into your Splunk indexes.
More about Splunk Machine Learning Toolkit
The Splunk Machine Learning Toolkit (MLTK) app is essential for anyone interested in predictive analytics. With MLTK, users can train machine learning models and use them to derive insights from their data. The workflow is similar to using lookups: if you can create a lookup from your data, you can also apply machine learning to it.
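To make the lookup analogy concrete, here is a minimal sketch (the field names and the model name `my_model` are hypothetical, chosen only for illustration). Where `outputlookup` saves a table that `lookup` later uses to enrich events, MLTK's `fit ... into` saves a trained model that `apply` later uses to score new events:

```spl
... | fit LinearRegression temperature from vibration rpm into my_model

... | apply my_model
```

The first search trains a model on historical sensor fields and saves it; the second runs new events through the saved model, adding a `predicted(temperature)` field to each result, much as `lookup` would add fields from a lookup table.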
Splunk provides MLTK tutorials that guide users through common use cases. For example, you can train a model that estimates how much data a particular source type generates at a specific time of day, compare that estimate to the actual data volume, and set up a dashboard or an alert based on the comparison.
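A hedged sketch of that data-volume use case follows; it is adapted from the general MLTK `fit`/`apply` workflow rather than taken from the add-on itself, and the model name `volume_model` and the alert threshold are hypothetical. First, train on historical hourly volumes per source type:

```spl
| tstats count WHERE index=_internal BY _time span=1h sourcetype
| eval HourOfDay=strftime(_time, "%H")
| fit LinearRegression count from HourOfDay sourcetype into volume_model
```

Then score recent data against the saved model and flag large deviations:

```spl
| tstats count WHERE index=_internal BY _time span=1h sourcetype
| eval HourOfDay=strftime(_time, "%H")
| apply volume_model
| eval residual=abs(count - 'predicted(count)')
| where residual > 1000
```

Saving the second search as a scheduled alert would notify you when a source type's actual volume drifts well away from its expected volume, which is exactly the estimate-versus-actual comparison described above.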