The smart grid is born of modern necessity; this article presents a brief history and establishes the practical relevance of a smarter grid.

History

The term smart grid has been in use since at least 2005, when the article “Toward a Smart Grid,” written by S. Massoud Amin and Bruce F. Wollenberg, appeared in the September-October issue of Power and Energy Magazine. For decades, engineers have envisioned an intelligent power grid with many of the capabilities found in formal definitions of today’s smart grids. Indeed, while the development of modern microprocessor technologies has only recently made it economical for utilities to deploy smart measurement devices at large scale, the smart grid’s humble beginnings can be traced as far back as the late 1970s, when Theodore Paraskevakos patented the first remote meter reading and load management system [1].

Relevance

For the next several decades, our global energy strategy will inevitably involve upgrading to a more intelligent grid. Three fundamental motivators are driving this change: existing bulk generation facilities are reaching their limits; utilities must maximize operational efficiency today in order to postpone the costly addition of new transmission and distribution infrastructure; and they must do all of this without compromising the reliability of the power system. Indeed, many governments and regulators, including the Essential Services Commission (ESC) of Victoria, Australia [2], are adopting legislation to make crucial components of a smarter grid mandatory. In Canada, Hydro One’s distribution system already has millions of smart meters installed [3] in preparation for time-of-use rates slated to become mandatory by 2011 [4].

Capacity

Over the next several decades, consumer advocacy groups and public environmental concerns will prevent the construction of new centralized generation plants as a means of meeting rapidly growing demand for electric power. Yet global electricity demand will require the addition of 1000 MW of generation capacity, along with all related infrastructure, every week for the foreseeable future [5]. Traditional bulk generation plants are also becoming prohibitively expensive to construct under cap-and-trade legislation, which places severe financial penalties on processes that continue to emit carbon dioxide and other harmful greenhouse gases. In conjunction with the higher economic cost, there are social pressures and widespread concerns about long-term sustainability.

Reliability

With the exception of hydroelectric and geothermal power, renewable energy sources such as wind and solar present unique challenges: their power output can vary significantly with rapidly changing external conditions that are difficult to predict. Consequently, we must retrofit the existing power grid to ensure that it maintains system stability despite these fluctuations in output. Furthermore, utilities must be able to monitor key indicators of system reliability on a continual basis, particularly as we approach the grid’s maximum theoretical capacity.

Efficiency

A smarter grid can also improve operational efficiency by intelligently routing different sources of energy. Because we currently send electricity from distant generation facilities to customers across hundreds of kilometres of transmission lines, approximately eight percent of the total generated electric power is lost as waste heat [6]. Moreover, we can make better use of existing generation infrastructure by reducing peak demand; in fact, the International Energy Agency found that a 5% demand response capability can reduce wholesale electricity prices by up to 50% [7].

[1] Theodoros G. Paraskevakos and W. Thomas Bushman, “Apparatus and method for remote sensor monitoring, metering and control,” USPTO#4241237, December 30, 1980.
[2] Essential Services Commission, “Mandatory Rollout of Interval Meters for Electricity Customers,” Essential Services Commission, Melbourne, Victoria, Draft Decision.
[3] Hydro One. (2009, June) One Million Smart Meters Installed – Hydro One Networks and Hydro One Brampton Reach Important Milestone. [Online]. http://www.hydroone.com/OurCompany/MediaCentre/Documents/NewsReleases2009/06_22_2009_smart_meter.pdf
[4] Ontario Energy Board. (2010, February) Monitoring Report: Smart Meter Deployment and TOU Pricing – 2009 Fourth Quarter. [Online]. http://www.oeb.gov.on.ca/OEB/Industry/Regulatory+Proceedings/Policy+Initiatives+and+Consultations/Smart+Metering+Initiative+%28SMI%29/Smart+Meter+Deployment+Reporting
[5] The ABB Group. (2010, March) Performance of future [power] systems. [Online]. http://www.abb.com/cawp/db0003db002698/c663527625d66b1dc1257670004fb09f.aspx
[6] Hassan Farhangi, “The Path of the Smart Grid,” Power and Energy Magazine, vol. 8, no. 1, pp. 18-28, January-February 2010.
[7] International Energy Agency, “The Power to Choose: Demand Response in Liberalised Electricity Markets,” International Energy Agency, Paris, France, 2003.

I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario.

Historical events provide the greatest indication of our need for a more flexible, more intelligent and more reliable power system. In the Western world, the Tennessee Valley Authority’s bulk transmission system achieved five nines of availability over the ten years ended 2009 [1], which corresponds to fewer than 5.26 minutes of outage annually. However, while the grid is generally robust to disturbances, catastrophic events like the 2003 Northeast Blackout serve as a solemn reminder of the fragility of the system, which remains susceptible to cascading outages originating from a handful of preventable failures in key locations. More concerning is the increasing incidence of widespread outages: in the US, 58 outages each affected more than 50,000 customers from 1996 to 2000 (an average of 409,854 customers per incident), compared to 41 such occurrences between 1991 and 1995 [2].
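
As a quick back-of-the-envelope check, “five nines” (99.999%) of availability over a year works out to the 5.26-minute figure quoted above:

use strict;
use warnings;

# Downtime implied by "five nines" (99.999%) availability over one year
my $availability     = 0.99999;
my $minutes_per_year = 365.25 * 24 * 60;                    # about 525,960 minutes
my $downtime         = (1 - $availability) * $minutes_per_year;
printf "Permissible outage: %.2f minutes per year\n", $downtime;   # ~5.26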

The essence of smart grid technology is the provision of sensors and computational intelligence to power systems, enabling monitoring and control well beyond our current capabilities.  A vital component of our smart grid future is the wherewithal to detect a precarious situation and avert crisis, either by performing preventative maintenance or by reducing the time needed to locate failing equipment.  Moreover, remotely monitoring the infrastructure provides the possibility of improvements to the operational efficiency of the power system, perhaps through better routing of electric power or by dynamically determining equipment ratings based on external conditions such as ambient temperature or weather.

In the face of changing requirements due to environmental concerns as well as external threats, it is becoming extraordinarily difficult for utilities to maintain the status quo. As the adoption of plug-in [hybrid] electric vehicles intensifies, utilities must be prepared for a corresponding increase in power consumption. The transition to a more intelligent grid is an inevitable consequence of our ever-increasing appetite for electricity as well as our continued commitment to environmental sustainability.

The deregulation of the electric power system also presents new and unique challenges, since an unprecedented number of participants must coordinate grid operations using more information than ever before. If we are to maintain the level of reliability that customers have come to expect from the power system, we must be able to predict problems effectively, rather than simply react to them after the fact.

As the grid expands to serve growing customer demands as well as a changing society, we must proceed cautiously to ensure the system preserves its reputation of reliability.  It is incumbent upon us to carefully analyze past events and implement appropriate protection and control schemes using modern technologies.  It is clear that the power system of tomorrow will depend upon the design and preparation we conduct today.

[1] Tennessee Valley Authority. (2010, March) TVA Transmission System. [Online]. http://www.tva.gov/power/xmission.htm
[2] M. Amin, “North America’s electricity infrastructure: are we ready for more perfect storms?,” IEEE Security and Privacy, vol. 1, no. 5, pp. 19-25, September-October 2003.

I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario.

Last year, I had a great time participating in the Google Summer of Code with the Debian project. I worked on a neat project with some rather interesting implications for helping developers package and maintain their work. It’s still a work in progress, of course, as many open source projects are, but I accomplished quite a bit and am proud of my work. I also learned a great deal about coding in C and working with Debian, and I met some very intelligent people.

My student peers were also very intelligent and great to learn from. I enjoyed meeting them virtually and discussing our various projects on the IRC channel as the summer progressed and the Summer of Code kicked into full swing. The Debian project also helps arrange travel grants for students to attend the Debian Conference (this year, DebConf10 is being held in New York City!). DebConf provides a great venue to learn from other developers, both in official talks and in unofficial hacking sessions. As the social aspect is particularly important to Debian, DebConf helps people meet those with whom they work the most, creating lifelong friendships and making open source fun.

I have had several interviews for internships, and the bit of my work experience most asked about is my time doing the Google Summer of Code. I really enjoyed seeing a project go from the proposal stage, setting a reasonable timeline with my mentor, exploring the state of the art, and most importantly, developing the software. I think this is the sort of indispensable industry-type experience we often lack in our undergrad education. We might have an honours thesis or presentation, but much of the work in the Google Summer of Code actually gets used “in the field.”

Developing software for people rather than for marks is significant in a number of ways, but most importantly it means there are real stakeholders that must be considered at all stages. Proposing brilliant new ideas is important; however, without highlighting the benefits they can have for various users, the reality is that they simply will not gain traction. Learning how to write proposals effectively is an important skill, and working with my prospective mentor (at the time – he later mentored my project once it was accepted) to develop mine was tremendously useful for my future endeavours.

The way I see it, the Google Summer of Code is in many ways similar to an academic grant (and the stipend is about the same as well). It provides a modest salary (this year it’s US$5000) but, more importantly, personal contact with a mentor. Mentors are typically veterans of software development or the Debian project and act in the same role as supervisors for post-graduate work: they help monitor your progress and propose new ideas to keep you on track.

The Debian Project is looking for more students and proposals. We have a list of ideas as well as application instructions available on our Wiki. As I will be going on internship starting in May, I have offered to be a mentor this year. I look forward to seeing your submissions (some really interesting ones have already begun to filter in as the deadline approaches).

Introducing WWW::OPG

While looking at Ontario Power Generation’s official web site, I noticed this number in the bottom right corner of the page:

It contains the amount of power being generated as well as the date/time of the last update. I refreshed a few times and realized that updates occur every five minutes. Curious, I thought I’d whip up a quick module to scrape this information from the web site and produce some nice graphs with RRDTool. To do this, I used the open source RRDTool::OO module, which is freely available on the CPAN.

Recognizing that web scraping is not the most reliable means of getting data from a web site, I contacted OPG via e-mail and requested an API for this data. In the latest iteration of WWW::OPG (version 1.004, already on CPAN), a smaller machine-readable text file provides the same data in an easier-to-parse format. Thanks to someone I know only as “Rose” from OPG for providing this file, which is much less likely to change than the scraped HTML.
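
Polling the data from Perl then looks something like the sketch below; the method names (poll, power, last_updated) reflect the WWW::OPG 1.x interface, so double-check the CPAN documentation before copying this verbatim:

use strict;
use warnings;
use WWW::OPG;

# Poll the OPG data source and print the current generation figure.
# poll() fetches the latest data; power() and last_updated() are accessors.
my $opg = WWW::OPG->new();

if ($opg->poll()) {
    printf "Ontario is generating %d MW as of %s\n",
        $opg->power(), $opg->last_updated();
}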

As OPG supplies roughly 70% of Ontario’s electric power demand, its generation statistics provide a relatively good reflection of our consumption patterns over time. During the course of this project, I learned how to work with Round Robin Databases (and wrote an article about them) and observed some interesting trends even in the first week of operation:

Power generation for week of 2009-12-25

The graph begins Saturday, December 26th, 2009 (Boxing Day) and continues through the week approaching the new year 2010. These particular trends are interesting because, while two observable peaks occur each day, the overall power consumption (including 95th percentile consumption) seems much lower than usual.

By comparison, consider this graph of a week ended 14 January 2010 (there were some rather long-lasting outages in the data collection which I’m trying to track down, but it still gives a sense of the general trends):

Power generation for week of 2010-01-07

In this case, the 95th percentile consumption is much higher, at about 14 GW rather than 10 GW. Note that the 95th percentile gives a rather good approximation of an infrastructure’s utilization, since it indicates the peak after discarding the highest 5% of data points. This means that 95% of the time, power consumption was at or below the given line.

The 95th percentile is more meaningful than an average because it indicates the minimum infrastructure required to satisfy demand most of the time (95% of it), giving us a simple way to determine whether more capacity is needed.
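
For the curious, here is a minimal sketch of how such a percentile can be computed; the sample data below is randomly generated filler, since a real script would read the values back out of the RRD:

use strict;
use warnings;

# 95th percentile: sort ascending, discard the top 5% of points,
# and report the largest remaining value.
sub percentile_95 {
    my @sorted = sort { $a <=> $b } @_;
    my $index  = int(0.95 * $#sorted);
    return $sorted[$index];
}

# A week of 5-minute samples is 2016 points; these are placeholders.
my @samples = map { 10_000 + rand(5_000) } 1 .. 2016;
printf "95th percentile: %.0f MW\n", percentile_95(@samples);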

In the specific case of electric power utilities, and because electricity is so important for both industrial and commercial use, legal requirements stipulate that demand must always be supplied, barring exceptional circumstances such as failures of distribution transformers. For infrastructure planning, then, maximum power consumption is the more useful measure.

A specialized storage system known as a Round Robin Database allows one to store large amounts of time series information such as temperatures, network bandwidth and stock prices with a constant disk footprint. It does this by taking advantage of changing needs for precision. As we will see later, the “round robin” part comes from the basic data structure used to store data points: circular lists.

In the short term, each data point is significant: we want an accurate picture of every event that has occurred in the last 24 hours, which might include small transient spikes in disk usage or network bandwidth (which could indicate an attack). However, in the long term, only general trends are necessary.

For example, if we sample a signal at 5-minute intervals, a 24-hour period will contain 288 data points (24 hours × 60 minutes/hour, divided by 5 minutes per sample). Considering each data point is probably¹ only 4 (float), 8 (double) or 16 (quad) bytes, it’s not problematic to store roughly three hundred data points. However, if we continue to store every sample, a year would require about 105,120 (365 × 288) data points; multiplied over many different signals, this can become quite significant.

To save space, we can compact older data using a Consolidation Function (CF), which performs some computation on many data points to combine them into a single point covering a longer period. Imagine that we take an average of those 288 samples at the conclusion of every 24-hour period; in that case, we would only need 365 data points to store an entire year. Though we have irrecoverably lost precision (we no longer know what happened at exactly 5:05pm on the first Tuesday three months ago), the data is still tremendously useful for demonstrating general trends over time.
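
In code, an AVERAGE consolidation function is nothing more exotic than the following sketch (the sample values are placeholders):

use strict;
use warnings;

# AVERAGE consolidation: collapse one day of 5-minute samples (288 points)
# into a single consolidated data point.
sub consolidate_average {
    my @points = @_;
    my $sum = 0;
    $sum += $_ for @points;
    return $sum / scalar @points;
}

my @day_of_samples = map { 12_000 + rand(3_000) } 1 .. 288;   # placeholder data
my $daily_point    = consolidate_average(@day_of_samples);    # 288 points become 1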

Though perhaps not the easiest to learn, RRDtool seems to have the majority of market share (without having done any research, I’d estimate somewhere between 90% and 98%, to account for those who create their own solutions in-house), and for good reason: it gets the job done quickly, provides appealing and highly customizable charts and is free and open source software (licensed under the GNU General Public License).

In a recent project, I learned to use RRDTool::OO to maintain a database and produce some interesting graphs. Since I was sampling my signal once every five minutes, I decided to replicate the archiving parameters used by MRTG, notably:

  • 600 samples store 2 days and 2 hours of data (at full resolution)
  • 700 samples store 14 days and 12 hours of data (where six samples become a 30-minute average)
  • 775 samples store 64 days and 12 hours of data (2-hour average)
  • 797 samples store 797 days of data (24-hour average)

For those interested, the following snippet (which may be rather easily adapted for languages other than Perl) constructs the appropriate database; the archive parameters are the interesting part, while the data-source name and GAUGE type are just illustrative choices I’ve filled in so the example runs as-is:

use strict;
use warnings;
use RRDTool::OO;

# The 'power' data source (a GAUGE sampled every 300 seconds) is an
# illustrative choice; the four archives are the MRTG-like parameters.
my $rrd = RRDTool::OO->new(file => 'opg.rrd');

$rrd->create(
 step        => 300,        # one sample every five minutes
 data_source => {
  name => 'power',
  type => 'GAUGE',
 },
 archive => {               # full resolution
  rows    => 600,
  cpoints => 1,
  cfunc   => 'AVERAGE',
 },
 archive => {               # 30-minute averages
  rows    => 700,
  cpoints => 6,
  cfunc   => 'AVERAGE',
 },
 archive => {               # 2-hour averages
  rows    => 775,
  cpoints => 24,
  cfunc   => 'AVERAGE',
 },
 archive => {               # 24-hour averages
  rows    => 797,
  cpoints => 288,
  cfunc   => 'AVERAGE',
 },
);
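
Once the database exists, feeding it samples and rendering a weekly graph is similarly terse. The following is a sketch rather than my exact production code: the sample value, file names, colours and labels are placeholders, and the graph() options shown are just the common ones from the RRDTool::OO documentation:

use strict;
use warnings;
use RRDTool::OO;

# Record one new sample (the value is a placeholder, in MW) and render the
# last seven days of data as a PNG.
my $rrd = RRDTool::OO->new(file => 'opg.rrd');

$rrd->update(13_500);       # update() timestamps the sample with the current time

$rrd->graph(
 image          => 'opg-week.png',
 vertical_label => 'Power (MW)',
 start          => time() - 7 * 24 * 3600,
 draw           => {
  type   => 'line',
  color  => '0000FF',
  legend => 'Generation',
 },
);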

There are also plenty of other examples of this technique in action, mainly related to computing. However, there are also some interesting applications such as monitoring voltage (for an uninterruptible power supply) or indoor/outdoor temperature (using an IP-enabled thermostat).

Footnotes

1. This may, of course, vary depending on the particular architecture.

Catalyst on Debian

Earlier in the year, I wrote a similar article discussing the Catalyst Web Framework and the MojoMojo Wiki software. At the beginning of December 2009, I wrote an article which was published in the Catalyst Advent Calendar. I’m re-posting it here for posterity, and because it is still relevant to others today.

Introduction

Because Catalyst is a rapidly evolving project, packages supplied by operating system vendors like Debian, Fedora, Ubuntu and many others have historically lagged behind the stable upstream releases. In effect, this limited users of Debian’s package management system to outdated versions of the software.

In 2009, thanks to the efforts of Matt S Trout and many others, Debian’s Catalyst packages have been improving. The notion that Debian’s Perl packages are outdated is itself becoming obsolete, and there are many situations where system-wide Debian packages (and, similarly, Ubuntu packages) can be preferable to installing software manually via CPAN.

Advantages

Here are some reasons why packages managed by Debian are preferable to installing packages manually:

  • Unattended installation: the majority of our packages require absolutely no user interaction during installation, in contrast to installs via CPAN.
  • Quicker installs for binary packages: since binary packages are pre-built, installing the package is as simple as unpacking the package and installing the files to the appropriate locations. When many modules need to be built (as with Catalyst and MojoMojo), this can result in a significant time savings, especially when one considers rebuilding due to upgrades.
  • No unnecessary updates: if an update only affects the Win32 platform, for example, it does not make sense to waste bandwidth downloading and installing it. Our process separates packages with bugfixes and feature additions from those that have no functional difference to users, saving time, bandwidth, and administrative overhead.
  • Only packages offered by Debian are supported by Debian: if there are bugs in your Debian software, it is our responsibility to help identify and correct them. Often this means coordinating with the upstream software developers (i.e. the Catalyst community) and working toward a solution together – but our team takes care of this on your behalf.
  • Updates occur with the rest of your system: while upgrading your system using aptitude, synaptic, or another package management tool, your Perl packages will be updated as well. This prevents issues where a system administrator forgets to update CPAN packages periodically, leaving your systems vulnerable to potential security issues.
  • Important changes are always indicated during package upgrades: if there are changes to the API of a library which can potentially break applications, a supplied Debian.NEWS file will display a notice (either in a dialog box or on the command line) indicating these changes. You will need to install the “apt-listchanges” utility to see these.

This year has seen greatly improved interaction between the Debian Perl Group and the Catalyst community, which is a trend we’d like to see continue for many years to come. As with any open source project, communicating the needs of both communities and continuing to work together as partners will ultimately yield the greatest benefit for everyone.

Disadvantages

As with all good things, there are naturally some situations where using Debian Perl packages (or, indeed, most operating-system managed packages) is either impossible, impractical, or undesirable.

  • Inadequate granularity: due to some restrictions on the size of packages being uploaded into Debian, there are plenty of module “bundles”, including the main Catalyst module bundle (libcatalyst-modules-perl). Unfortunately, this means you may have more things installed than you need.
  • Not installable as non-root: if you don’t have root on the system, or a friendly system administrator, you simply cannot install Debian packages, let alone our Perl packages. This can add to complexity for shared hosting scenarios where using our packages would require some virtualization.
  • Multiple versions: with a solution like local::lib, it’s possible to install multiple versions of the same package in different locations. This can be important for a number of reasons, including ease of testing and to support your legacy applications. With operating-system based packages, you will always have the most recent version available (and if you are using the stable release, you will always have the most recent serious bug/security fixes installed).
  • Less useful in a non-homogeneous environment: if you use different operating systems, it can be easier to maintain a single internal CPAN mirror (especially a mini-CPAN installation) than a Debian repository, Ubuntu repository, Fedora/RedHat repository, etc.

For my purposes, I use Debian packages for everything because the benefits outweigh the perceived costs. However, this is not the case for everyone in all situations, so it is important to understand that Debian Perl packages are not a panacea.

Quality Assurance

The Debian Perl Group uses several tools to provide quality assurance for our users. Chief among them is the Package Entropy Tracker (PET), a dashboard that shows information like the newest upstream versions of modules. Our bug reports are available in Debian’s open bug reporting system.

If you have any requests for Catalyst-related modules (or other Perl modules) that you’d like packaged for Debian, please either contact me directly (via IRC or email) or file a “Request For Package” (RFP) bug. If you have general questions or would like to chat with us, you’re welcome to visit us at any time – we hang around on irc.debian.org, #debian-perl.

See Also

  • Our IRC channel, irc.debian.org (OFTC), channel #debian-perl
  • Package Entropy Tracker is a dashboard where we can see what needs to be updated. It allows us (and others, if interested!) to easily monitor our workflow, and also contains links to our repository.
  • Our welcome page talks about what we do and how you (yes, you!) can join. You don’t need to be a Debian Developer to join the group (I’m not a DD yet, and I maintain 300+ packages through the group).
  • This guide explains how to file a Request For Package (RFP) bug, so that the modules you use can be added to the Debian archive. Note that Debian is subject to many restrictions, so issues like inadequate copyright information may prevent the package from entering the archive.

Statistics

Here are some statistics of note:

  • We maintain over 1,400 packages as of today. For details, see our Quality Assurance report.
  • We have quite a few active members; probably around 10 or 20.

Acknowledgments

Thanks to Matt S Trout (mst) for working so closely with the group to help both communities achieve our goal of increasing Catalyst’s profile. Also thanks to Bogdan Lucaciu (zamolxes) for inviting us to contribute this article, and Florian Ragwitz (rafl) for his review and feedback.

Everything that is good in nature comes from cooperation. Neither Catalyst, nor Perl, nor Debian Perl packages could exist without the contributions of countless others. We all stand on the shoulders of giants.

My second “lab” for ECE4464: Power Systems II studied the effects of distributed generation (in particular, a large-scale wind turbine generation project) on the power system. Like ECE3333 (Power Systems I), this course is being taught by Prof. Rajiv Varma, Ph.D.

Using PowerWorld’s Simulator software, we connected a large four-reactor nuclear generation plant (750 MW per reactor, constant output) to a large city (modeled as an infinite bus) via two parallel transmission lines. At the midline bus between the nuclear plant and the large city, a transformer supplies a different voltage to a nearby town. This town is near the proposed connection point of the Wind Turbine Generation (WTG) project, which has a peak output of 85 MW.

From the objectives:

In this lab, our objective is to study the potential impact of integration of distributed generation (in particular, a wind farm) using the PowerWorld Simulator software.  We simulate a very large capacity plant (a nuclear plant consisting of four reactors producing 750 MW each).  Because these reactors are large, they will provide some voltage regulation by supplying or consuming MVARs.

The nuclear plant serves a large city via two parallel transmission circuits, as well as the smaller city at the mid-line of one of these lines.  To simplify this lab, we model the large city as an infinite bus, which consumes any excess power generated by the plant.

In this manner, we can explore various phenomena resulting from distributed generation systems like wind farms, including the effects on power transfer and power system stability.  This lab provides insight on two very important issues in power systems, notably, the addition of distributed generation and the challenges involved with electrifying remote communities.

For my full report, see: Power Systems 4464 Lab 2 (PDF). Note that the small town bus has a constant 20 Mvar reactive power demand, with a 40 Mvar (nominal) shunt capacitor installed initially. Some modifications are made to the compensation scheme as part of the study and report.