Archive for January, 2010

While looking at Ontario Power Generation’s official web site, I noticed this number in the bottom right corner of the page:

It contains the amount of power being generated as well as the date/time of the last update. I refreshed a few times and realized that updates occur every five minutes. Curious, I thought I’d whip up a quick module to scrape this information from the web site and produce some nice graphs with RRDTool. I used the open source RRDTool::OO module to do this, which is freely available on the CPAN.

Recognizing that web scraping is not the most reliable way to get data from a web site, I contacted OPG via e-mail and requested an API for this data. As of the latest iteration of WWW::OPG (version 1.004, already on CPAN), the module reads a smaller machine-readable text file instead of the HTML page; thanks to someone I know only as “Rose” from OPG for providing this file, which is much easier to parse and less likely to change.
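
For the curious, usage looks roughly like the sketch below. The poll, power and last_updated method names are my recollection of the WWW::OPG interface rather than something confirmed above, so treat this as a sketch and check the module’s documentation on CPAN before copying it:

use strict;
use warnings;
use WWW::OPG;

# Sketch only: the method names (poll, power, last_updated) are assumed
# from memory of the WWW::OPG interface; consult the CPAN documentation.
my $opg = WWW::OPG->new();

if (eval { $opg->poll() }) {
    printf "OPG is currently generating %s MW (as of %s)\n",
        $opg->power, $opg->last_updated;
}
elsif ($@) {
    warn "Could not retrieve data from OPG: $@";
}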

As OPG supplies roughly 70% of Ontario’s electric power demand, the consumption statistics provide a relatively good reflection of our behaviour patterns over time. In the course of this project, I learned how to work with Round Robin Databases (and wrote an article about it) and was able to observe some interesting trends even in the first week of operation:

Power generation for week of 2009-12-25

The graph begins on Saturday, December 26th, 2009 (Boxing Day) and continues through the week leading into the new year, 2010. These trends are interesting because, while two peaks are observable each day, overall power consumption (including the 95th percentile) seems much lower than usual.

By comparison, consider this graph of the week ending 14 January 2010 (there were some rather long-lasting outages in the data collection, which I’m still trying to track down, but it still gives a sense of the general trends):

Power generation for week of 2010-01-07

In this case, the 95th percentile consumption is much higher, at about 14 GW rather than 10 GW. Note that the 95th percentile gives a rather good approximation of an infrastructure’s utilization rate, since it indicates the peak power that remains after removing the highest 5% of data points. This means that 95% of the time, power consumption was at or below the given line.

The 95th percentile matters more than the average because it indicates the minimum infrastructure required to satisfy demand most of the time (95% of it), which gives us a simple way to determine whether more capacity is needed.

In the specific case of electric power utilities, because electricity is so important for both industrial and commercial use, legal requirements stipulate that demand must always be met, barring exceptional circumstances such as failures of distribution transformers. For that kind of planning, maximum power consumption is the more useful measure.
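
To make the two measures concrete, here is a small Perl sketch (with made-up sample values, not actual OPG data) that computes the 95th percentile by the nearest-rank method described above, alongside the maximum:

use strict;
use warnings;
use List::Util qw(max);
use POSIX qw(ceil);

# Made-up data: one week of five-minute power readings (7 * 288 = 2016
# samples), in MW. Real data would come from the RRD, not rand().
my @samples = map { 10_000 + rand 4_000 } 1 .. 2016;

# Nearest-rank 95th percentile: sort ascending and take the value at rank
# ceil(0.95 * N), i.e. the largest value remaining once the top 5% of data
# points have been discarded.
sub percentile_95 {
    my @sorted = sort { $a <=> $b } @_;
    return $sorted[ ceil(0.95 * @sorted) - 1 ];
}

printf "95th percentile: %.0f MW\n", percentile_95(@samples);
printf "Maximum:         %.0f MW\n", max(@samples);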


A specialized storage system known as a Round Robin Database allows one to store large amounts of time series information such as temperatures, network bandwidth and stock prices with a constant disk footprint. It does this by taking advantage of changing needs for precision. As we will see later, the “round robin” part comes from the basic data structure used to store data points: circular lists.

In the short term, each data point is significant: we want an accurate picture of every event that has occurred in the last 24 hours, which might include small transient spikes in disk usage or network bandwidth (which could indicate an attack). However, in the long term, only general trends are necessary.

For example, if we sample a signal at 5-minute intervals, then a 24-hour period will have 288 data points (24hrs*60mins/hr divided by 5 minutes per sample). Considering each data point is probably[1] only 4 (float), 8 (double) or 16 (quad) bytes, it’s not problematic to store roughly three hundred data points. However, if we continue to store each sample, a year would require about 105120 (365*288) data points; multiplied over many different signals, this can become quite significant.
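
Put another way, the storage arithmetic looks like this in Perl (assuming 8 bytes per sample, i.e. a double):

use strict;
use warnings;

my $samples_per_day  = 24 * 60 / 5;             # 288 five-minute samples
my $samples_per_year = 365 * $samples_per_day;  # 105120 samples
my $bytes_per_sample = 8;                       # assuming a double

printf "%d samples/day, %d samples/year: about %.0f KiB per signal per year\n",
    $samples_per_day, $samples_per_year,
    $samples_per_year * $bytes_per_sample / 1024;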

To save space, we can compact the older data using a Consolidation Function (CF), which performs some computation on many data points to combine them into a single point covering a longer period. Imagine that we take an average of those 288 samples at the conclusion of every 24-hour period; we would then need only 365 data points to store an entire year, albeit at an irrecoverable loss of precision. Though we no longer know what happened at exactly 5:05pm on the first Tuesday three months ago, the data is still tremendously useful for demonstrating general trends over time.
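
As an illustration of the idea (not of how RRDtool implements consolidation internally), a minimal AVERAGE consolidation function in Perl might look like this, collapsing a day of made-up five-minute readings into a single point:

use strict;
use warnings;
use List::Util qw(sum);

# A simple AVERAGE consolidation function: collapse every $cpoints raw
# samples into one consolidated data point.
sub consolidate_average {
    my ($cpoints, @samples) = @_;
    my @consolidated;
    while (my @chunk = splice @samples, 0, $cpoints) {
        push @consolidated, sum(@chunk) / @chunk;
    }
    return @consolidated;
}

# One day of made-up five-minute readings (288 samples) becomes one point.
my @day = map { 10_000 + rand 4_000 } 1 .. 288;
my ($daily_average) = consolidate_average(288, @day);
printf "288 samples consolidated into one point: %.0f MW\n", $daily_average;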

Though perhaps not the easiest tool to learn, RRDtool seems to hold the majority of market share (without having done any research, I’d estimate somewhere between 90% and 98%, to account for those who create their own solutions in-house), and for good reason: it gets the job done quickly, produces appealing and highly customizable charts, and is free and open source software (licensed under the GNU General Public License).

In a recent project, I learned to use RRDTool::OO to maintain a database and produce some interesting graphs. Since I was sampling my signal once every five minutes, I decided to replicate the archiving parameters used by MRTG, notably:

  • 600 samples store 2 days and 2 hours of data (at full resolution)
  • 700 samples store 14 days and 14 hours of data (where six samples become a 30-minute average)
  • 775 samples store 64 days and 14 hours of data (2-hour average)
  • 797 samples store 797 days of data (24-hour average)

For those interested, the following code snippet (which may be rather easily adapted for RRD bindings in languages other than Perl) defines the appropriate archives:

archive => {
 rows    => 600,
 cpoints => 1,
 cfunc   => 'AVERAGE',
},
archive => {
 rows    => 700,
 cpoints => 6,
 cfunc   => 'AVERAGE',
},
archive => {
 rows    => 775,
 cpoints => 24,
 cfunc   => 'AVERAGE',
},
archive => {
 rows    => 797,
 cpoints => 288,
 cfunc   => 'AVERAGE',
},
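
For context, here is a sketch of how those archive definitions sit inside a complete RRDTool::OO setup; the file name, data source name and sample value below are placeholders of my own rather than details from the original project:

use strict;
use warnings;
use RRDTool::OO;

# Placeholder file and data source names, not from the original setup.
my $rrd = RRDTool::OO->new(file => 'power.rrd');

$rrd->create(
    step        => 300,                     # one sample every five minutes
    data_source => {
        name => 'power',
        type => 'GAUGE',                    # store each reading as-is
    },
    archive => { rows => 600, cpoints => 1,   cfunc => 'AVERAGE' },
    archive => { rows => 700, cpoints => 6,   cfunc => 'AVERAGE' },
    archive => { rows => 775, cpoints => 24,  cfunc => 'AVERAGE' },
    archive => { rows => 797, cpoints => 288, cfunc => 'AVERAGE' },
);

# Then, at every five-minute poll:
$rrd->update(value => 12_345);              # e.g. megawatts being generated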

There are plenty of other examples of this technique in action, mostly related to computing, but there are also some interesting applications such as monitoring voltage on an uninterruptible power supply or indoor/outdoor temperature using an IP-enabled thermostat.

Footnotes

1. This may, of course, vary depending on the particular architecture
