This is a question from someone who is mostly ignorant of the possibilities of ML and AI in general, so please keep that in mind.
I’m working on a project that ingests timeseries data from various sensors (think: temperature, pressure, etc.). One of the core responsibilities of the product is to define specifications for these streams of values and to trigger notifications when things go out of specification. In other words, it’s all about alarms. But the specifications can be more complex than simple threshold values: for example, there are windowed rules where no more than n out of the last N values may exceed some threshold. Other, fairly straightforward rules can also be specified.
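To make the kind of windowed rule I mean concrete, here is a minimal sketch (function name, parameters, and thresholds are all made up for illustration):

```python
from collections import deque

def n_out_of_n_alarm(values, window=4, max_violations=2, threshold=100.0):
    """Yield True whenever, within the last `window` samples, more than
    `max_violations` values exceed `threshold`.

    All names and numbers here are illustrative, not from a real product.
    """
    recent = deque(maxlen=window)  # sliding window of pass/fail flags
    for v in values:
        recent.append(v > threshold)
        yield sum(recent) > max_violations

# Example: pressure readings around a threshold of 100
readings = [98, 99, 101, 102, 97, 103, 104, 105]
alarms = list(n_out_of_n_alarm(readings))
```

With these settings the alarm only fires once three of the last four readings are above the threshold, so a single outlier does not trigger it.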
Now, I was wondering whether there are ML-related strategies that can detect anomalies in such timeseries data, maybe even without defining thresholds (or in addition to comparing against thresholds). For example, it would be nice to detect drifting values, which are harder to catch because the change is so slow. Or, when the temperature varies according to some recurring pattern during a production process (it doesn’t always stay constant), it would be nice to detect deviations from the patterns that usually occur.
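For the drift case, I gather that classic change-detection methods like CUSUM are a common starting point even before reaching for ML. A rough one-sided sketch (all parameter names and values are illustrative assumptions on my part):

```python
def cusum_drift(values, target, drift_allowance=0.5, alarm_threshold=5.0):
    """One-sided CUSUM change detector.

    Accumulates how far each sample sits above `target` beyond a small
    `drift_allowance`; the running sum stays near zero for in-spec data
    but grows steadily under a slow upward drift, flagging an alarm once
    it crosses `alarm_threshold`. Parameters are illustrative only.
    """
    s = 0.0
    flags = []
    for v in values:
        s = max(0.0, s + (v - target - drift_allowance))
        flags.append(s > alarm_threshold)
    return flags

# Example: a sensor slowly drifting upward from its target of 20.0
drifting = [20.0 + 0.2 * i for i in range(30)]
flags = cusum_drift(drifting, target=20.0)
```

The appeal here is that each individual reading stays close to the target, so a plain threshold would never fire, yet the accumulated deviation exposes the drift.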
I hope my question is clear. I have high hopes for machine learning and AI in this domain, but zero knowledge or intuition about how to tackle or explore the possibilities.
TL;DR: any tips or references for detecting anomalies in timeseries data? Is ML even a good fit for such an analysis?