Hostile observability

In an increasingly connected world we are starting to see sophisticated attacks being commoditized and sold much like you would buy a word processing suite. Because of this, there is a surge of movement toward removing trusted networks and apps and toward more isolation and separate security domains. While that is a great start, I honestly think we could do better: we should not only avoid trusting anything in the stack, we need ubiquitous, low-impact monitoring of every part of the system.

This post is just a stream of thoughts right now and will contain no whiz-bang how-tos, just my ramblings on why system observability is important.

There is an adage that goes “who watches the watchmen,” and this is becoming a larger concern in modern systems. As we rely more and more on the operating system to help us isolate everything, all we have managed to do is move the trust boundary back instead of eliminating it. Now the OS is the watcher, but how do we know for sure that it is working for us? The answer, for a lot of systems, is that we really don’t. Even open source systems like Linux are only now starting to come around to this kind of self-observation, and because of that you are not seeing it used very much in production yet.

Honestly, technologies like DTrace and eBPF are not only miracles for profiling systems, they also help us look under the hood. With tools like that you can write scripts and live instrumentation that tell you things like what has been writing to your password file, in real time, and fire off alerts when something unexpected does so. Another thing you might do with this is plug it into SIEMs that do behavioral analysis. Say Eve works on Mallory’s team and is not part of Alice’s skunkworks team. Eve has managed to figure out Bob’s password, and now, while Bob is on vacation, “Bob” is logged in from the office, accessing top secret files and changing configuration by hand on a production server, outside of Bob and Alice’s configuration management and change control system. The SIEM would pick up the various signals here, including the ones gleaned from not trusting any part of the system, and let your agents know that someone at Eve’s terminal is logged in as Bob and is messing with secrets.
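
To make the first example concrete (a tiny sketch, not a how-to, and assuming a Linux box with bpftrace installed), an eBPF one-liner along these lines can report every process that opens /etc/shadow with write access, which you could then feed into whatever alerting you already have:

    # Report any process that opens /etc/shadow for writing.
    # O_WRONLY is 1 and O_RDWR is 2, so (flags & 3) != 0 means write access.
    # A sketch only: a real deployment would also catch openat2, renames, etc.
    bpftrace -e '
      tracepoint:syscalls:sys_enter_openat
      /str(args->filename) == "/etc/shadow" && (args->flags & 3) != 0/
      {
        printf("%s (pid %d, uid %d) opened /etc/shadow for writing\n",
               comm, pid, uid);
      }'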

In addition, if we start watching and logging things like process launches, system file modifications, and other bits of system state, we can glean things like where a piece of malware came from: what were the first machines, how were they infected, and are there any anomalies we can use to spot similar activity as it happens. As an example, when NotPetya appeared, Microsoft was able to use the telemetry it collects into its own black box, which gathers a lot of similar information, to find the first machine infected with the malware and to develop a detection signature based on the first few things it did. Without this info they would have eventually figured it out by infecting machines and watching them in a controlled environment, but that cycle is getting longer and longer. The various groups making fortunes selling commodity malware have wised up to that game and have been trying to detect and lock out researchers.
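
In the same spirit, a minimal sketch of logging process launches (again assuming bpftrace on Linux; a real deployment would ship these events somewhere durable rather than printing them) might look like this:

    # Log every execve() on the box: who launched what, with arguments.
    # Sketch only; does not cover execveat or short-lived argument tricks.
    bpftrace -e '
      tracepoint:syscalls:sys_enter_execve
      {
        printf("%s (pid %d, uid %d) exec: %s ", comm, pid, uid,
               str(args->filename));
        join(args->argv);
      }'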

To sum this up: we need more observable systems. Stop trusting anything in your environment, or at the very least trust but verify. Collect the data, and create alarms and monitoring based on it. If your monitoring misses something, add to it. Above all, once you have this data, share as much as your risk profile allows. By working together we can protect ourselves and others, and make the interconnected world a bit safer.

Discuss...