Many of us have dealt with making changes in production environments, possibly against hundreds or thousands of systems, and we’d like to know how the change impacted performance. It was with this in mind that I eagerly read through the paper describing WSMeter.
You can read an excerpt here:
WSMeter: A performance evaluation methodology for Google.
That page has a link at the top for the full paper from ACM.
Quoted from the text:
“The goal is to boil the performance of the whole WSC down to a single number..”
My first red flag. Boiling performance down to a single number just isn’t reasonable. I’ll have more to say about that in an upcoming article.
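To illustrate the concern with some made-up numbers (mine, not the paper’s): a fleet-wide score computed as a usage-weighted average can report an overall “improvement” even while a latency-critical service regresses.

```python
# Hypothetical numbers, purely to illustrate the "single number" concern.
# Each job: (share of fleet CPU, relative score before, relative score after).
jobs = {
    "batch-logs":   (0.70, 1.00, 1.10),  # large batch job, 10% faster
    "web-frontend": (0.20, 1.00, 0.85),  # latency-critical job, 15% slower
    "ads-backend":  (0.10, 1.00, 1.00),  # unchanged
}

def weighted_score(index):
    """Usage-weighted average across jobs; index 1 = before, 2 = after."""
    return sum(entry[0] * entry[index] for entry in jobs.values())

before = weighted_score(1)  # 1.000
after = weighted_score(2)   # 0.70*1.10 + 0.20*0.85 + 0.10*1.00 = 1.040
print(f"fleet-wide score: {before:.3f} -> {after:.3f}")  # looks like a win
```

The single number goes up, yet everyone depending on the latency-sensitive job is worse off.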
It’s also not clear exactly which metric they are using. IPC? This is the description they give:
“Instructions per Cycle (IPC) as the performance metric. For Google jobs, this correlates well with user-level performance metrics.”
What user-level performance metrics? How do they calculate and derive IPC? When we speak of “performance”, we usually mean either “throughput” or “latency”, and being good at one doesn’t necessarily translate to being good at the other. Their description of jobs makes it sound like “batch jobs”, which would imply throughput is the factor they want to measure and improve. Yet “user-level performance metrics” are often measuring latency. It’s not clear from the paper what they are measuring.
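For what it’s worth, IPC is normally derived from hardware performance counters: retired instructions divided by core cycles over some interval. The sketch below is my own assumption about how such a calculation looks, not anything taken from the paper; on Linux the same counters can be read with, for example, `perf stat -e instructions,cycles -p <pid>`.

```python
from dataclasses import dataclass

@dataclass
class CounterSample:
    instructions: int   # retired-instruction counter reading
    cycles: int         # core-cycle counter reading

def ipc(start: CounterSample, end: CounterSample) -> float:
    """IPC over an interval = delta(instructions) / delta(cycles)."""
    return (end.instructions - start.instructions) / (end.cycles - start.cycles)

# Hypothetical samples: ~3.2 billion instructions over ~2.1 billion cycles.
print(ipc(CounterSample(0, 0), CounterSample(3_200_000_000, 2_100_000_000)))
# -> ~1.52. This is a throughput-style number: it says nothing about how
#    request latency (let alone tail latency) was distributed.
```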
Thinking that some of the answers might be “in the code”, I went looking. It doesn’t appear that Google has open-sourced any of this work.
Where’s the beef?