
motivation for opscotch

opscotch was born out of an observability managed-services consultancy with two constant customer demands: "keep our various observability platforms running" and "help me capture some awkward data".

As a managed service provider for monitoring platforms (AppDynamics, New Relic, Sumo Logic, Splunk, Elastic, etc.), the consultancy faced a constant requirement for "daily health checks" and "best practice audits". At best, these were monotonous chores; at worst, they were error-prone, forgotten, or inaccessible.

The data points needed for these tasks were often not designed to be queried: they were available only through a webpage, such as a settings or administrative page. Much of the time, the data also needed to be correlated with another data source inside the customer's corporate network.

The observability industry's traditional approach is to use simple collectors to ship all the data off to big-data platforms and analyse it there. When this worked, it produced a large volume of data to process, which suited the software vendors and their ingest-based pricing nicely.

However, more often than not, the traditional approach failed, either because of non-trivial multi-step authentication or because of a flat-out refusal from customers with a predilection for security and privacy.

In short:

  • Repetitive tasks needed to be performed by a human.

  • The data points might be on a webpage or in an unhelpful format.

  • Data points might be distributed across multiple sources.

  • Customers were wary of data exfiltration vectors.

  • Traditional approaches generated excess data of low value.

  • Some data points required multiple authentication steps.

The consultancy didn't want to waste people's time on monotonous chores, and opscotch is excellent at solving all of these problems.

Rubric 1: The traditional observability industry message is "collect everything, analyse later"; the opscotch paradigm is the opposite: "collect only what you need, analyse at the point of collection". This paradigm guides users to think about the problem, identify and collect only the required data, perform the necessary transformations and calculations, and then emit the smallest amount of distilled data as signals or metrics.

Rubric 2: The opscotch agent can run inside the customer's network and executes workflows: directed graphs of steps that codify tasks, call out to multiple services, and trigger further tasks. Retrieved data can be transformed and processed to produce the input for the next step, or to emit a signal or metric.
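To make Rubric 2 concrete, here is a minimal, hypothetical sketch in Python of the shape such a workflow might take. It illustrates the paradigm only and is not opscotch's actual workflow format; the service URL, endpoints, credentials, and field names are all invented placeholders. The workflow authenticates, fetches an administrative page that has no queryable API, parses out the single data point needed, and emits one small metric:

# A hypothetical four-step workflow, sketched as plain Python.
# The step functions mirror the directed-graph idea: each step's
# output feeds the next, and only the final distilled value is emitted.

import json
import re
import urllib.request

BASE_URL = "https://monitoring.example.com"  # invented placeholder service


def step_authenticate() -> str:
    # Step 1: multi-step auth, reduced here to a single token exchange.
    req = urllib.request.Request(
        f"{BASE_URL}/api/login",
        data=json.dumps({"user": "auditor", "password": "secret"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]


def step_fetch_admin_page(token: str) -> str:
    # Step 2: fetch a settings page that was never designed to be queried.
    req = urllib.request.Request(
        f"{BASE_URL}/admin/licenses",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()


def step_transform(html: str) -> int:
    # Step 3: transform -- parse out the one data point we actually need.
    match = re.search(r"Licenses in use:\s*(\d+)", html)
    return int(match.group(1)) if match else 0


def step_emit(value: int) -> None:
    # Step 4: emit a single small signal; the raw page never leaves the network.
    print(json.dumps({"metric": "licenses_in_use", "value": value}))


if __name__ == "__main__":
    token = step_authenticate()
    page = step_fetch_admin_page(token)
    step_emit(step_transform(page))

The point is the shape, not the code: each step produces the input for the next, and only a tiny, distilled metric ever crosses the network boundary.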

Overlaying these two simple rubrics onto the consultancy's problems:

  • Repetitive tasks can be codified into workflows.

  • Webpages or API responses can be easily parsed into useful formats.

  • Workflows can call out to multiple services and merge and process the results.

  • Data does not need to leave the customer network.

  • Large volumes of data are reduced into small signals or metrics.

  • Workflows can execute as many steps as required to perform complex authentication flows.

The consultancy was able to codify all of its daily checks and best-practice audits into a set of workflows that could be effortlessly applied to multiple customers' monitoring systems. The signals emitted are small and discrete enough to be easy to process, and devoid of any customer-sensitive data.

opscotch was born.


