If you’ve been around systems long enough, you know that the opportunity for performance gains goes up dramatically the further up the stack you look.. Continue reading “Look Up the Stack!”
Author: bduncan
(Ab)use of the R Language
For years I’ve done most of my log scraping and analysis with the usual suspects: bash, sed, awk, even perl. The log scraping still uses those tools, but lately I’ve been toying around with “R” for the analysis. Continue reading “(Ab)use of the R Language”
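To give a flavour of why (this snippet isn’t from the post, and the file name and column layout are made up for illustration), here’s about all it takes in R to pull a scraped log into a data frame and summarize the latency distribution:

# Hypothetical two-column log: timestamp, then latency in ms.
logs <- read.table("access.log", col.names = c("ts", "latency_ms"))

# A one-line distribution summary -- the part awk makes you work for.
quantile(logs$latency_ms, probs = c(0.50, 0.90, 0.99))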
Averages Mostly Suck at Almost Everything..
..unless you’re dealing with baseball. In systems work, many of us think “Average” is a measure of “Typical” or “Normal“. Many systems people will also use averages to look for “Abnormal“. However, the average (or mean) doesn’t represent either “normal” or “abnormal” very well.. Continue reading “Averages Mostly Suck at Almost Everything..”
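A quick illustration in R, with toy numbers rather than real measurements:

latency <- c(rep(10, 99), 2000)  # 99 fast requests and one slow outlier, in ms

mean(latency)    # 29.9 -- triple the typical request, yet nowhere near the outlier
median(latency)  # 10   -- what a "typical" request actually looks like
max(latency)     # 2000 -- the pain one user actually felt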
WSMeter: Performance Evaluation for Warehouse-Scale Computers
Many of us have dealt with making changes in production environments, possibly against hundreds or thousands of systems, and we’d like to know how the change impacted performance. It was with this in mind that I eagerly read through the paper describing WSMeter. Continue reading “WSMeter: Performance Evaluation for Warehouse-Scale Computers”
Friday the 13th One Liner
Just for fun, how many combinations of months are there where the 13th falls on a Friday? This one-liner prints a table of month combinations along with the years, for a given range. Continue reading “Friday the 13th One Liner”
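The post has its own one-liner; as a hedged sketch of the general idea (and decidedly not one line), here’s one way to list the Friday-the-13th months for each year in a range in R:

for (y in 2020:2025) {
  d <- as.Date(sprintf("%d-%02d-13", y, 1:12))  # the 13th of every month
  fri <- d[as.POSIXlt(d)$wday == 5]             # wday 5 == Friday
  cat(y, ":", format(fri, "%b"), "\n")
}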
Deep Dive, EPL Dotplots
While at RIM, I had the privilege of working with some brilliant engineers. During that time I developed several of the techniques I’ll be describing; the EPD (Event-Pair-Difference) graph described in my previous post and the EPL (Event-Pair-Latency) Dotplot are two of them. Continue reading “Deep Dive, EPL Dotplots”
Shades of Grey
System failures are often not black and white, but shades of grey (gray?)..
Detecting and alerting on “performance-challenged” system components is a lot more difficult than detecting black-or-white (catastrophic) failures. The metrics used are usually of the “time vs. latency” or “time vs. event count” variety, often aggregated, and often by averaging. All of these tend to obscure what we are looking for, leaving a very low “signal-to-noise ratio“.
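To make the signal-to-noise problem concrete, here’s a small R sketch on synthetic data (the numbers are invented): per-minute means barely register a brief slowdown that the 99th percentile flags clearly.

set.seed(1)
minute  <- rep(1:10, each = 100)
latency <- rexp(1000, rate = 1/10)   # baseline latencies, ~10 ms mean
latency[minute == 7][1:2] <- 150     # a short "grey" slowdown in minute 7

round(tapply(latency, minute, mean), 1)                    # shifts a couple of ms -- lost in the noise
round(tapply(latency, minute, quantile, probs = 0.99), 1)  # minute 7 stands out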