
Posts Tagged ‘Power Engineering’

Last week, my mentor mentioned that high-voltage power transmission circuits could sometimes be used to provide reactive power support when on potential but off load, particularly for parallel lines.  Anecdotally, based on my limited understanding of the Ferranti Effect, this seems perfectly reasonable: under light loading, the line's charging capacitance dominates its series inductance, and the surplus reactive power is injected into the system at the point of connection.

Ferranti Effect

To understand these results, we first have to understand the cause of the Ferranti Effect.

All transmission lines (even the kind discussed in Radiation and Propagation courses) behave the same way once the line length approaches a tenth of the signal wavelength.  Due to the relatively low frequency of utility power (60Hz in North America and 50Hz in many other parts of the world), the wavelength is pretty long, so these effects only begin to appear in significantly long (greater than 300km) transmission lines.
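For a sense of scale, the free-space wavelength of utility power works out as follows (waves on an actual line propagate somewhat below the speed of light, so real wavelengths are slightly shorter):

```python
# Free-space wavelength of utility power: lambda = c / f
c = 299_792_458                                   # speed of light, m/s
wavelengths_km = {f: c / f / 1000 for f in (60.0, 50.0)}
for f, lam in wavelengths_km.items():
    print(f"{f:.0f}Hz -> wavelength ~{lam:.0f} km, one tenth ~{lam / 10:.0f} km")
```

At 60Hz the wavelength is roughly 5000km, so the distributed effects begin to matter on lines of a few hundred kilometres.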

The classical model is:

(Figure: classical transmission line model.  Source: Wikipedia)

Lines of a moderate length (greater than 300km) can be modelled simply as a series resistance, series inductance and shunt capacitance – in Power Systems, we often call this model a “Pi section” (the moniker makes more sense if you split the shunt capacitance in half, placing one half at the sending end and the other at the receiving end of the line).  Longer lines (those exceeding 500km) are then an extension of an already-solved problem: they can simply be modelled as a cascade of moderate-length segments.

Keen readers will notice that this is, quite simply, a two-port network model: we can consider each Pi section a black box, with sending-end voltage/current and receiving-end voltage/current.  Many of us rely on an approximation of how wires behave: in most applications, their impedance is negligible and may safely be ignored in calculations.  However, in power transmission, the voltages and currents are much higher than those encountered elsewhere, and line impedance can have quite a profound impact on system operation.
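To make the two-port view concrete, here is a small Python sketch (using assumed per-kilometre line constants, not values from this article) that cascades nominal-Pi sections as ABCD matrices and computes the open-circuit voltage rise on a 600km, 60Hz line:

```python
import math

# Assumed typical overhead-line constants (illustrative only, not article data)
f = 60.0                 # system frequency, Hz
w = 2 * math.pi * f
r = 0.05                 # series resistance, ohm/km
l = 1.3e-3               # series inductance, H/km
c = 8.8e-9               # shunt capacitance, F/km

def pi_section_abcd(length_km):
    """ABCD matrix of one nominal-Pi section of the given length."""
    z = complex(r, w * l) * length_km          # total series impedance
    y = complex(0, w * c) * length_km          # total shunt admittance
    a = 1 + z * y / 2
    return (a, z, y * (1 + z * y / 4), a)

def cascade(m1, m2):
    """Chain two two-port networks by multiplying their ABCD matrices."""
    a1, b1, c1, d1 = m1
    a2, b2, c2, d2 = m2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)

# Model a 600km line as twelve 50km Pi sections
section = pi_section_abcd(50.0)
total = section
for _ in range(11):
    total = cascade(total, section)

# With the receiving end open, Vs = A * Vr, so Vr/Vs = 1/|A| > 1 (Ferranti rise)
A = total[0]
print(f"|Vr/Vs| with receiving end open: {abs(1 / A):.3f}")
```

The ratio comes out well above unity – the same Ferranti voltage rise explored in the simulation later in this article.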

I hope that this brief discussion provided a reasonable introduction or review.  If not, Kathmandu University also has a very good handout on the subject.

Surge Impedance Loading

Based on the telegrapher’s equations and the above model, we can determine the characteristic impedance (also called surge impedance) of the transmission line as Z0 = √((R + jωL)/(G + jωC)), which reduces to √(L/C) for a lossless line.

In all transmission lines, for power or signals alike, optimal power transfer occurs when the load impedance matches the characteristic impedance.  In Power Systems, we like to express these quantities as power (real, reactive and apparent) because power values can always be directly compared regardless of phase angles, power factors, harmonic distortion levels or voltage levels.

The Surge Impedance Loading (SIL) converts the characteristic impedance (ohms) into a power (watts) value: SIL = V²/Z0, where V is the line-to-line voltage.

If the amount of power being transmitted equals the SIL, the line’s series inductance and shunt capacitance cancel each other out, resulting in the line operating at unity power factor.  When the amount of power transferred is below the SIL, the power factor is leading (capacitive), and when the amount of power transferred is above the SIL, the power factor is lagging (inductive).
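As a rough numerical sketch with assumed (not article-supplied) per-kilometre constants for a 230kV line:

```python
import math

# Assumed typical 230kV line constants (illustrative only)
l = 1.3e-3      # series inductance, H/km
c = 8.8e-9      # shunt capacitance, F/km
v_ll = 230e3    # line-to-line voltage, V

z0 = math.sqrt(l / c)        # lossless surge impedance, ohms (per-km units cancel)
sil = v_ll ** 2 / z0         # surge impedance loading, watts

print(f"Z0  = {z0:.0f} ohms")
print(f"SIL = {sil / 1e6:.0f} MW")
```

With these assumed constants the result lands near the widely quoted ballpark for 230kV lines (a few hundred ohms of surge impedance, and a SIL on the order of 130-140 MW).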

An intuitive model

Intuitively, I understand this behaviour by thinking about the cause of these impedances, though I am not a physicist, so this intuition is best understood as a useful analogy, not as fact.  I imagine lines have some slight twist when installed, giving rise to the series inductance.  Likewise, lines are conductors of different potential separated by a dielectric (air), which results in some capacitive coupling between lines.

Recall that power loss due to the resistance of a power line can be calculated using Joule’s law: P = I²R.

Similarly, the reactive power absorbed by (or injected from, if Q is negative) a power line can be calculated as Q = I²X, where X is defined as negative for capacitors and positive for inductors.

The inductance is fixed, but the amount of reactive power absorbed by the series inductance is proportional to the square of the current flowing through the line.  On a lightly loaded line, or where the receiving end is an open circuit, the current is very small, so the inductive behaviour of the line is minimized and the capacitive behaviour dominates.  Thus, the line is below the SIL and operates with a leading (capacitive) power factor.
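This balance can be sketched numerically.  The lumped totals below are illustrative assumptions for a 600km, 230kV, 60Hz line (not values from the article); note how the sign of the net reactive power flips near the SIL:

```python
import math

# Assumed lumped totals for an illustrative 600km, 230kV, 60Hz line
f = 60.0
w = 2 * math.pi * f
x_l = w * (1.3e-3 * 600)        # total series reactance, ohms
x_c = 1 / (w * (8.8e-9 * 600))  # total shunt reactance, ohms
v_ph = 230e3 / math.sqrt(3)     # phase voltage, V

def net_q_3ph(p_mw):
    """Net three-phase reactive power absorbed by the line at a given transfer.
    Positive = inductive (absorbing vars); negative = capacitive (injecting)."""
    i = p_mw * 1e6 / (math.sqrt(3) * 230e3)   # line current at unity power factor
    q_series = 3 * i ** 2 * x_l               # absorbed by series inductance
    q_shunt = 3 * v_ph ** 2 / x_c             # injected by shunt capacitance
    return (q_series - q_shunt) / 1e6         # Mvar

for p in (0, 70, 140, 280):
    print(f"P = {p:3d} MW -> net Q = {net_q_3ph(p):+7.1f} Mvar")
```

At zero transfer the line is a large var source; near the SIL (about 140 MW with these assumed constants) the two effects cancel; above it the line becomes a net var sink.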

Power transmission lines as capacitors

Finally, we arrive at the real point of this article.  Given the above background, it follows that lightly loaded or open-ended lines will inject reactive power.  With lightly loaded parallel redundant lines, it is therefore possible to open one line and use it to provide reactive power (var) support for the system.

For my simulation, I used two parallel 230kV lines, each 600km long, with three ideally-transposed phases on each right-of-way (in delta configuration with four bundled sub-conductors). These lines were supplied by an infinite bus (voltage source at 22kVrms line-to-line) with a 22/230kV Wye-Delta transformer. At the receiving end, a 300MW+5Mvar load was installed.

Here are the two circuits in steady state (note that BRK2 is open):

Dual 230kV Circuits, One Line Open

 

Note that TLine1 has a depressed receiving end voltage due to the current flowing across that line (the line inductance cancels out the Ferranti Effect), but TLine2 has an elevated receiving end voltage due to the Ferranti Effect.  Also note that the reactive power flow at the sending end for TLine2 is negative, indicating that reactive power is flowing “backwards” to the sending end.

Let’s take a closer look at what’s happening at the receiving end:

Mvar flow at receiving end of 230kV circuits

 

The breakers were configured to begin open and close in at 250ms to energize the circuit. Afterward, BRK1 remained closed and BRK2 was reopened at 500ms. We can see that reactive power initially flows across both lines, but when the receiving end circuit breaker is opened, reactive power ceases to flow. Note that the x-axis shows elapsed time of the simulation (in seconds).

The sending end, by contrast, is much more interesting:

Mvar flow at sending end of 230kV circuits

 

When both breakers are open (until 250ms), the lines’ charging capacitance injects a significant amount of reactive power into the system.  After the breakers close, this injection drops significantly (though the flow remains slightly capacitive because both lines are lightly loaded).  Once TLine2 is opened at the receiving end at 500ms, something interesting happens: the reactive power injected by that line into the system returns to its line-end-open level, while TLine1 increases its reactive power consumption in unison.

In conclusion, it is entirely possible to use a transmission line as a shunt capacitor.


This is the second part of a two-part series (the first part provides an introduction) discussing the role of smart grids in electric power distribution systems. We will explore some past and current installations of smart grids, discussing their motivating factors, planning, implementation and results. Essentially, this article is a discussion where we learn both from our successes and our failures in the power industry, to inform our future decisions.

Netherlands

Smart meters are some of the earliest intelligent devices installed in distribution networks, and they are critical to enabling the smart grid of the future.  One of the biggest issues that every smart grid initiative encounters when attempting to incorporate the technology into its system is the public perception that smart meters violate the right to privacy.  Consequently, if the utility does not handle the situation tactfully, reduced consumer participation can diminish the practical gain from smart grid installations.

As mentioned previously, smart meters are capable of communicating wirelessly with the utility, reporting consumer usage data and potentially controlling OpenHAN-compliant appliances remotely.  In the face of intelligent adversaries with increasingly powerful computing systems, it is important to provide a significant degree of security and future-proofing.

In 2005, the Netherlands electricity distribution company Oxxio began widespread introduction of a smart meter for both gas and electricity.  When the European Parliament issued a directive to member states to begin installation of smart metering equipment, the public was neither educated nor reassured about the new technology.  Economy minister Maria van der Hoeven decided to push for compulsory installation of smart meters, with refusal punishable by a fine of up to €17,000 or six months in prison.  Amidst privacy concerns, consumer protection organizations fought rigorously against the law and won; smart meters can now only be installed on a voluntary basis as requested by consumers [1].

We must learn from this stark lesson and avoid a similar outcome in future installations by ensuring adequate education for the public in order to assuage their fears and uncertainty, ultimately to ensure vital consumer participation.

Ontario

While the amount and timing of data provided by smart meters from the field does not pose serious privacy risks from internal misuse, there are many security concerns surrounding external adversaries.  In particular, there is the potential for malicious users to modify usage data in order to influence consumer billing, either by reducing their own apparent consumption or as a financial attack against someone else.  Since utilities make design decisions based on the recorded trends, outside manipulation of the data could have catastrophic effects on equipment: if actual power consumption is underrepresented, equipment may not be upgraded when needed.

In Ontario, the current smart grid deployment initiative involves the government, Hydro One’s distribution business, as well as other local utilities.  It demonstrates the need for very close cooperation between utilities and their regulatory bodies, especially since much of their current success can be attributed to their work communicating with users.  Learning from errors in past smart grid implementations, the Ontario government established several websites acting as a central point of reference describing smart meters, their function and their overall objectives.

To support the technical aspects of the deployment, Hydro One has partnered with Trilliant Technologies, a company that “provides intelligent network solutions and software to utilities for advanced metering, and Smart Grid management” [2].  Trilliant’s expertise and extensive smart metering technology portfolio reduces Hydro One’s risk and guarantees a higher degree of flexibility than with other vendors.  The smart meters operate in the unlicensed 2.4GHz band commonly used for ZigBee, Wireless LAN (IEEE 802.11) and Bluetooth, with Trilliant providing both the metering and the related communication infrastructure.  Trilliant also designed the 1.3 million smart meters currently being deployed by Hydro One’s distribution arm.

Thus far, current efforts to ensure network security, and likewise to assure and encourage consumer participation, have been a success in Ontario, and many similar efforts are taking place in other countries at this time.  Because smart meters are extremely complex devices performing measurement for billing purposes, they must be completely free of defects, especially in light of Canadian requirements like the Weights & Measures Act.

Australia

As climate change raises the average global temperature, Australia’s climate is among the hardest hit, becoming hotter and drier than ever before.  Australia continues to consume a considerable amount of electricity; in fact, 261.8 TWh of electricity was produced in Australia during 2006, and that figure is projected to reach 413 TWh by 2030 [3].

With electricity demand continuing to rise, the utility may soon need to consider construction of new generation, transmission and distribution infrastructure.  However, maintenance of an aging system is itself extremely costly, and simultaneously investing in new infrastructure is simply not feasible.  As a result, Australia decided to implement dynamic rating of equipment in both their transmission and distribution systems, allowing them to better utilize existing infrastructure.  For an example comparing static equipment ratings with those dynamically generated by Australia’s control system, see Dynamic Equipment Rating.

[1] Wilmer Heck. (2009, April) Smart energy meter will not be compulsory. [Online].   http://www.nrc.nl/international/article2207260.ece/Smart_energy_meter_will_not_be_compulsory
[2] Trilliant, Inc. (2010, March) Trilliant, Inc. – Communications for the Smart Grid. [Online].   http://www.trilliantinc.com/
[3] Cagil Ozansoy, “Turning Down the Heat: Australia’s Fast-Growing Electricity Sector Ramps Up Its Global Warming Initiatives,” IEEE Power and Energy Magazine, vol. 8, no. 1, pp. 29-36, January-February 2010.

I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario.


Phasor Measurement Units (PMU)

Real time system monitoring is a relatively new tool available to power system operators, allowing them to analyze all aspects of a large power system continuously.  Phasor measurement units are the leading technology behind the newfound ability to provide instant analysis for problems in a geographically enormous power system.  Using a synchronous clock based on GPS timing signals, phasor measurement units (PMU) can very accurately measure current and voltage phasors with almost no time difference between meters [1].  This allows for real time information regarding power angles and power flow, system status, and possible problems.  With phasor measurement refresh rates as high as 60Hz, a synchronized clock across all PMUs is an essential requirement for providing accurate information to data centers, where a timing error of even several microseconds could corrupt the results.  Because of these timing constraints, PMUs now abide by the IEEE standard C37.118, which defines standardized measurement methods, timing tolerances and communication channels.
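As a toy illustration of what a PMU computes (a single-bin DFT sketch over one fundamental cycle, not the estimator mandated by C37.118):

```python
import cmath
import math

def estimate_phasor(samples):
    """Single-bin DFT over exactly one fundamental cycle of N samples.
    Returns (rms_magnitude, phase_radians)."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k / n) for k, x in enumerate(samples))
    phasor = acc * 2 / n                        # peak-amplitude complex phasor
    return abs(phasor) / math.sqrt(2), cmath.phase(phasor)

# Synthesize one 60Hz cycle sampled 32 times: 100V rms at +30 degrees
n = 32
samples = [100 * math.sqrt(2) * math.cos(2 * math.pi * k / n + math.radians(30))
           for k in range(n)]

mag, ang = estimate_phasor(samples)
print(f"{mag:.2f} V rms at {math.degrees(ang):+.2f} degrees")
```

A real PMU must additionally time-tag the phase against the GPS-disciplined clock, which is what makes phasors from geographically distant substations directly comparable.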

Wide Area Monitoring Systems (WAMS)

The widespread implementation of PMUs has led to wide area monitoring systems (WAMS) allowing for situation reports for large parts of the transmission system.  PMUs transmit this system information continuously to data centers where computers can record and monitor the state of the entire power system, performing actions to maximize power flow while maintaining system stability [2].

Fault detection and proper relay operation are among the most important tasks in transmission systems; relay problems are a factor in approximately 70% of all major disturbances [3].  By using PMUs and having snapshots of the entire system updated up to 60 times per second, faults can be detected very quickly.  With all of this data available at a centralized location, the coordination of relays during faults can be optimized for the situation, resulting in the best fault clearing schemes.

2003 North-eastern Blackout

Wide area monitoring systems have been integrated into many transmission systems globally, allowing transmission operators to have continuous real time information about the state of the transmission system.  WAMS technology plays an important role in generation and protection, allowing generating facilities to observe system conditions continuously and maximize their output for different loading scenarios.  The eastern North American WAMS recorded all major transmission system information during the 2003 blackout, and provided critical data for reconstructing the sequence of events leading up to the blackout [4].  However, the lack of intelligent control schemes left the system incapable of reacting quickly enough to maintain stability, resulting in the loss of power to millions of people.

The 2003 North American blackout showed the world and the grid operators that the conventional power system ideas put in place over 100 years ago are not sufficient for the complex and continually growing power system of modern times.  The blackout left over 50 million people without power, and was caused by the incorrect tripping of transmission lines and generation facilities [5].  Investigations conducted revealed that the cause of the cascading blackout was due to distance relays operating within Zone 2 and Zone 3, with preset calculations.

The problem originated when the Midwest Independent System Operator (MISO) had a problem with its state estimator [6], and with the information gathered from the eastern WAMS, due to offsets in PMU sampling times [4].  The state estimator is responsible for indicating potential problems with system parameters and operations; without this tool, operators were unaware of the initial problems leading up to the blackout.  After initial transmission trips of which operators were unaware, overloading caused the remaining lines to reach thermal limits, sag into trees and fault to ground.  After losing several large transmission lines, distance relays began operating incorrectly in Zone 3, seeing low apparent impedance due to high load current and low voltage from tripped generation capacity.  Had the WAMS been operating correctly, the initial problematic events leading to the eventual blackout would have been identified, and problems could have been corrected before the situation escalated to such severe levels.

Open Phasor Data Concentrator (OpenPDC)

Phasor Data Concentrators (PDC) are devices distributed throughout the transmission system designed to collect data from the many phasor measurement units.  Due to the high volume of data collected, each node typically collects data from only five or six individual PMUs and forwards the data upstream to higher-level concentrators.

In October 2009, the Tennessee Valley Authority (TVA) released data collection software for industry use called SuperPDC (Super Phasor Data Concentrator) [7], which is responsible for aggregating measurements from multiple PDCs and archiving measurements for subsequent event analysis.  It is now available under an open source license under the name openPDC.

This software allows the TVA to collect data from its 120 online PMUs (see Phasor Measurement Unit Map) that together measure almost two thousand parameters several times per second.  In all, the TVA archives 150 million measurements per hour (requiring 36 GB of storage space per day) [8].
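A quick check of those figures (treating GB as decimal gigabytes, which is my assumption) suggests an archive cost of roughly ten bytes per measurement:

```python
# Back-of-envelope check on the cited openPDC archive throughput
measurements_per_hour = 150e6
gb_per_day = 36

per_day = measurements_per_hour * 24
bytes_per_measurement = gb_per_day * 1e9 / per_day
print(f"{per_day:.1e} measurements/day, ~{bytes_per_measurement:.0f} bytes each")
```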

Tennessee Valley Authority

In conjunction with graduate students from Washington State University (WSU), the Tennessee Valley Authority collected data from its PMUs to observe local-area oscillations within the system during a major switching event.  During a planned switching of 500kV transmission lines at the Cumberland Fossil Plant (CUF), the system experienced a dangerous undamped local-mode power oscillation at 1.2Hz, which continued until operators detected the problem and reversed the switching three minutes later.  At its peak, the oscillation reached a 700MW variation in transmitted power (see Cumberland Fossil Plant Oscillation Event).

Without the phasor measurement units in place, detection of this nearly catastrophic event would not have been possible and the system could have suffered a total collapse.  It remains unknown whether the power system stabilizer (PSS) equipment was not yet installed or was otherwise out of service during the event [9].  Fortunately, both local- and inter-area oscillations can be detected using this method, and the software is available for immediate use by any utility [10].

Electric Reliability Council of Texas

The amount of power transferred over transmission lines is limited by the thermal limit of the line, putting constraints on profits and maximum generation capacity.  Line ratings are typically set to conservative constant values for the sake of safety and reliability, but newer technologies are enabling utilities to vary equipment ratings based on environmental factors including humidity and ambient temperature.

When the Electric Reliability Council of Texas (ERCOT) implemented dynamic rating of its transmission lines, it was able to maximize utilization of existing infrastructure, which had a direct financial benefit for bulk generation facilities exporting power.  A control system uses data including current atmospheric conditions, forecasted temperatures and system loading to determine the maximum power transfer limits, which usually exceed the constant ratings given by the manufacturer [11].
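As a toy sketch of the idea (a grossly simplified steady-state heat balance with made-up coefficients, not ERCOT's actual standards-based calculation):

```python
import math

def ampacity(t_ambient_c, wind_mps,
             t_conductor_max_c=75.0, r_ohm_per_m=7e-5):
    """Max current such that I^2*R heating balances assumed convective cooling.
    All coefficients below are illustrative assumptions, not real conductor data."""
    h = 3.0 + 6.0 * math.sqrt(wind_mps)               # assumed cooling coefficient
    cooling = h * (t_conductor_max_c - t_ambient_c)   # W per metre of conductor
    return math.sqrt(cooling / r_ohm_per_m)           # amperes

print(f"Hot, still day : {ampacity(40, 0.5):.0f} A")
print(f"Cool, windy day: {ampacity(10, 5.0):.0f} A")
```

Even this crude model shows why a cool, windy day supports far more current than a hot, still one – the headroom a dynamic rating system exploits instead of a single conservative constant.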

[1] Yilu Liu, Lamine Mili, Jaime De La Ree, and Reynaldo Francisco Nuqui, “State Estimation and Voltage Security Monitoring Using Synchronized Phasor Measurement,” Virginia Polytechnic Institute and State University, Blacksburg, VA, PhD Dissertation 2001.
[2] Charles Proteus Steinmetz, “Complex Quantities and Their Use in Electrical Engineering,” in Proceedings of the International Electrical Congress, Chicago, Illinois, 1893, pp. 33-74.
[3] Yi Zhang, M. Prica, M.D. Ilic, and O.K. Tonguz, “Toward Smarter Current Relays for Power Grids,” in IEEE Power Engineering Society General Meeting, Montreal, QC, 2006, p. 8.
[4] J.F. Hauer, N.B. Bhatt, K. Shah, and S. Kolluri, “Performance of “WAMS East” in Providing Dynamic Information for the North East Blackout of August 14, 2003,” in IEEE Power Engineering Society General Meeting, Denver, CO, 2004, pp. 1685-1690.
[5] A.P. Apostolov, “Distance Relays Operation During the August 2003 North American Blackout and Methods for Improvement,” in IEEE Russia Power Technology, St. Petersburg, Russia, 2005, pp. 1-6.
[6] Pacific Northwest National Laboratory, Electricity Infrastructure Operations Center (EIOC). (2010, March) Looking back at the August 2003 blackout. [Online].   http://eioc.pnl.gov/research/2003blackout.stm
[7] Tennessee Valley Authority. (2009, October) TVA Opens Data Collection Software for Industry Use. [Online].   http://www.tva.gov/news/releases/octdec09/data_collection_software.htm
[8] Tennessee Valley Authority. (2010) openPDC Introduction. [Online].   http://openpdc.codeplex.com/
[9] Gary Kobet, Ritchie Carroll, Ryan Zuo, and Mani V. Venkatasubramanian. (2009, October) Oscillation Monitoring System at TVA. [Online].  http://www.naspi.org/meetings/workgroup/2009_october/presentations/kobet_tva_oscillation_monitoring_tools_20091008.pdf
[10] openPDC Extensions. (2010, March) Extensions to the openPDC software, including WSU’s Oscillation Monitoring System (OMS). [Online].   http://openpdc.codeplex.com/wikipage?title=Extensions&referringTitle=Home
[11] Kyeon Hur et al., “High Wire Act: ERCOT Balances Transmission Flows for Texas-Size Savings Using Its Dynamic Thermal Ratings Application,” IEEE Power and Energy Magazine, vol. 8, no. 1, pp. 37-45, January-February 2010.

One of my partners wrote the majority of this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario. It is included here for completeness with the rest of the articles. I edited the article and wrote the sections entitled: Open Phasor Data Concentrator (OpenPDC) and Tennessee Valley Authority.


Over a century ago, power engineers designed the majority of what we see in today’s power system infrastructure, based upon significant research during the infancy of wide-scale electric power generation, transmission and distribution.  At the time, utilities built centralized electrical generation under the assumption of unidirectional power flow from the plant to the customer.  These concepts were appropriate for the demand and complexity of the power system of that era; however, with the growing electrical demand of modern society, we must take a closer look at these assumptions.  Increasing fuel costs for centralized generation, as well as changing social attitudes, are leading to increased distributed generation from renewable resources including solar and wind.

Distributed Generation

Distributed generation has changed the way that the power system operates, allowing many small generation facilities to contribute power in order to meet current electricity demand collectively.  Consequently, utilities anticipate that distributed generation systems will introduce new problems, since they violate the previous assumption of unidirectional power flow.  Distributed generation introduces bidirectional power flow, with adverse effects on conventional protection and voltage regulation equipment in the existing power system.

Indeed, many American states have adopted renewable portfolio standards, which require a pre-determined amount of electricity to come from renewable sources by as early as 2013 (for details, see Appendix A: Renewable Portfolio Standards).

Plug-in Electric Vehicles

With dwindling supplies of fossil fuels and increasing prices for crude oil and petroleum products, electric vehicles are steadily gaining momentum.  Although electric vehicles are not yet mainstream, they are expected to have a significant impact on the method and amount of power distribution in the near future as drivers begin switching from gasoline-fuelled vehicles to their electric equivalents en masse.  The increasing popularity of plug-in electric and hybrid vehicles introduces issues for the power system, since charging can effectively double or triple power consumption in already strained residential areas.

The current configuration of the power system poses several problems here.  The issue lies not with the method by which an electric car is charged, but rather with the number of electric cars being charged, as well as the total amount of energy required to charge each car on a daily basis (see Appendix B: PHEV Demand Increase Example).  This large increase in electrical demand will require additional generation facilities, as well as new equipment to deal with the increased consumer load.  This paradigm shift will severely affect distribution utilities, since the current generation of residential transformers is not rated for such high peak demands.

By implementing smart grids, local distribution utilities will be able to mitigate the problem by staggering the charging sequence of each electric vehicle.  Furthermore, utilities can explore the use of hybrid vehicles as a distributed storage technology or as a power factor controller.  Indeed, the smart grid has the potential to reduce loading on residential substations and small distribution transformers, eliminating the need for expensive high-capacity equipment.
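As a toy sketch of staggered charging (all numbers below are my own illustrative assumptions, not utility data):

```python
# Toy scheduler: stagger EV charging so neighbourhood demand stays under a cap.

def schedule(chargers_kw, cap_kw, hours):
    """Greedy plan: each hour, admit waiting cars while headroom remains.
    Assumes (simplistically) that one hour fully charges a car."""
    pending = list(range(len(chargers_kw)))
    plan = {h: [] for h in range(hours)}
    for h in range(hours):
        load = 0.0
        still_waiting = []
        for car in pending:
            if load + chargers_kw[car] <= cap_kw:
                plan[h].append(car)
                load += chargers_kw[car]
            else:
                still_waiting.append(car)
        pending = still_waiting
    return plan, pending

# Ten 6.6kW chargers against an assumed 25kW of spare transformer capacity
plan, unserved = schedule([6.6] * 10, cap_kw=25.0, hours=4)
for hour, cars in plan.items():
    print(f"hour {hour}: cars {cars} -> {6.6 * len(cars):.1f} kW")
print(f"unserved after 4 hours: {unserved}")
```

The point of the sketch: without staggering, ten simultaneous 6.6kW chargers would draw 66kW; spread over four hours, the same energy fits under a 25kW cap.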

Appendix A: Renewable Portfolio Standards

State Amount Deadline Program Administrator
Arizona 15% 2025 Arizona Corporation Commission
California 20% 2010 California Energy Commission
Colorado 20% 2020 Colorado Public Utilities Commission
Connecticut 23% 2020 Department of Public Utility Control
District of Columbia 11% 2022 DC Public Service Commission
Delaware 20% 2019 Delaware Energy Office
Hawaii 20% 2020 Hawaii Strategic Industries Division
Iowa 105MW Iowa Utilities Board
Illinois 25% 2025 Illinois Department of Commerce
Massachusetts 4% 2009 Massachusetts Division of Energy Resources
Maryland 9.5% 2022 Maryland Public Service Commission
Maine 10% 2017 Maine Public Utilities Commission
Minnesota 25% 2025 Minnesota Department of Commerce
Missouri 11% 2020 Missouri Public Service Commission
Montana 15% 2015 Montana Public Service Commission
New Hampshire 16% 2025 New Hampshire Office of Energy and Planning
New Jersey 22.5% 2021 New Jersey Board of Public Utilities
New Mexico 20% 2020 New Mexico Public Regulation Commission
Nevada 20% 2015 Public Utilities Commission of Nevada
New York 24% 2013 New York Public Service Commission
North Carolina 12.5% 2021 North Carolina Utilities Commission
Oregon 25% 2025 Oregon Energy Office
Pennsylvania 18% 2020 Pennsylvania Public Utility Commission
Rhode Island 15% 2020 Rhode Island Public Utilities Commission
Texas 5880 MW 2015 Public Utility Commission of Texas
Utah 20% 2025 Utah Department of Environmental Quality
Vermont 10% 2013 Vermont Department of Public Service
Virginia 12% 2022 Virginia Department of Mines, Minerals and Energy
Washington 15% 2020 Washington Secretary of State
Wisconsin 10% 2015 Public Service Commission of Wisconsin

Source: The Smart Grid: An Introduction – For Utilities.  Published by the Office of Electricity Delivery and Energy Reliability, United States Department of Energy.  Page 19.  Retrieved on March 20, 2010 from http://www.smartgrid.gov

Appendix B: PHEV Demand Increase Example

Gasoline car energy

Energy per tank = 32MJ/L (energy density of gasoline) * 50L/tank = 1600MJ/tank

Gasoline energy per month = 1600MJ/tank * 4tank/month = 6400MJ/month

Note that 50L/week = 200L/month would result in a monthly cost of: 200L/month @ $1.00/L = $200/month

Electric car energy

1kWh = 3.6MJ

6400MJ/month ÷ 3.6MJ/kWh = 1778kWh/month

1778kWh/month @ $0.058/kWh = $103/month
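The arithmetic above can be reproduced directly (using the same assumed tank size, fuel prices and electricity rate):

```python
# Reproducing the appendix arithmetic for one gasoline car converted to electric
tank_mj = 32 * 50                 # 32 MJ/L energy density * 50 L/tank
monthly_mj = tank_mj * 4          # four tanks per month
monthly_kwh = monthly_mj / 3.6    # 1 kWh = 3.6 MJ
electric_cost = monthly_kwh * 0.058   # at $0.058/kWh
gasoline_cost = 50 * 4 * 1.00         # 200 L/month at $1.00/L

print(f"{monthly_mj} MJ/month = {monthly_kwh:.0f} kWh/month")
print(f"electricity ${electric_cost:.0f}/month vs gasoline ${gasoline_cost:.0f}/month")
```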

Not only does the family’s total electrical energy usage almost triple each month, but night-time charging peaks would exceed the daytime peaks and also prevent transformers from cooling down at night (a problem if they are run above rated conditions during the daytime).

A partner and I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario.


The smart grid is born of modern necessity; this article discusses a brief history and establishes practical relevance for a smarter grid.

History

The term smart grid has been in use since at least 2005, when the article “Toward a Smart Grid,” written by S. Massoud Amin and Bruce F. Wollenberg, appeared in the September-October issue of Power and Energy Magazine.  For decades, engineers have envisioned an intelligent power grid with many of the capabilities mentioned in formal definitions of today’s smart grids.  Indeed, while the development of modern microprocessor technologies has only recently made it economical for utilities to deploy smart measurement devices at a large scale, the smart grid’s humble beginnings can be traced as far back as the late 1970s, when Theodore Paraskevakos patented the first remote meter reading and load management system [1].

Relevance

For the next several decades, our global energy strategy will inevitably involve upgrading to a more intelligent grid system.  Three fundamental motivators are driving this change: current bulk generation facilities are reaching their limit; utilities must maximize operational efficiency today in order to postpone the costly addition of new transmission and distribution infrastructure; and they must do all of this without compromising reliability of the power system.  In fact, many governments and regulators, including the Essential Services Commission (ESC) of Victoria, Australia [2], are adopting legislation to make crucial components of a smarter grid system mandatory.  In Canada, Hydro One’s distribution system has millions of smart meters already installed [3] in preparation for time-of-use rates slated to become mandatory by 2011 [4].

Capacity

Over the next several decades, consumer advocacy groups and environmental concerns from the public will hinder the construction of new centralized generation plants as a means to meet rapidly growing demand for electric power.  Moreover, global electricity demand will require the addition of 1000 MW of generation capacity, as well as all related infrastructure, every week for the foreseeable future [5].  Traditional bulk generation plants are now prohibitively expensive to construct due to cap-and-trade legislation, which places severe financial penalties on processes that continue to emit carbon dioxide and other harmful greenhouse gases.  In conjunction with the higher economic cost, there are also social pressures and widespread concerns about long-term sustainability.

Reliability

With the exception of hydroelectric and geothermal power, renewable energy sources such as wind and solar present unique challenges: they are intermittent by nature, and their power output may vary significantly with rapidly changing external conditions.  Consequently, we must retrofit the existing power grid to ensure that it can maintain system stability despite these fluctuations in output.  Furthermore, utilities must be able to monitor key indicators of system reliability on a continual basis, particularly as we approach the grid’s maximum theoretical capacity.

Efficiency

A smarter grid can also improve operational efficiency by intelligently routing energy from different sources.  Because we currently send electricity from distant generation facilities to customers across hundreds of kilometres of transmission lines, approximately eight percent of the total generated electric power is lost as waste heat [6].  Moreover, we can make better use of the existing generation infrastructure by reducing peak demand; in fact, the International Energy Agency found that a 5% demand response capability can reduce wholesale electricity prices by up to 50% [7].

[1] Theodoros G. Paraskevakos and W. Thomas Bushman, “Apparatus and method for remote sensor monitoring, metering and control,” U.S. Patent 4,241,237, December 30, 1980.
[2] Essential Services Commission, “Mandatory Rollout of Interval Meters for Electricity Customers,” Essential Services Commission, Melbourne, Victoria, Draft Decision.
[3] Hydro One. (2009, June) One Million Smart Meters Installed – Hydro One Networks and Hydro One Brampton Reach Important Milestone. [Online]. http://www.hydroone.com/OurCompany/MediaCentre/Documents/NewsReleases2009/06_22_2009_smart_meter.pdf
[4] Ontario Energy Board. (2010, February) Monitoring Report: Smart Meter Deployment and TOU Pricing – 2009 Fourth Quarter. [Online]. http://www.oeb.gov.on.ca/OEB/Industry/Regulatory+Proceedings/Policy+Initiatives+and+Consultations/Smart+Metering+Initiative+%28SMI%29/Smart+Meter+Deployment+Reporting
[5] The ABB Group. (2010, March) Performance of future [power] systems. [Online]. http://www.abb.com/cawp/db0003db002698/c663527625d66b1dc1257670004fb09f.aspx
[6] Hassan Farhangi, “The Path of the Smart Grid,” IEEE Power and Energy Magazine, vol. 8, no. 1, pp. 18-28, January-February 2010.
[7] International Energy Agency, “The Power to Choose: Demand Response in Liberalised Electricity Markets,” International Energy Agency, Paris, France, 2003.

I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani
at the University of Western Ontario.

Read Full Post »

Historical events provide the greatest indication of our need for a more flexible, more intelligent and more reliable power system.  In the Western world, the Tennessee Valley Authority’s bulk transmission system achieved five nines (99.999%) of availability over the ten years ended 2009 [1], which corresponds to under 5.26 minutes of outage annually.  However, while the grid is generally robust to disturbances, catastrophic events like the Northeast Blackout of 2003 serve as a solemn reminder of the fragility of our system, which remains susceptible to cascading outages originating from a handful of preventable failures in key parts of the network.  More concerning is the increasing incidence of widespread outages: in the US, 58 outages each affecting over 50,000 customers occurred from 1996 to 2000 (an average of 409,854 customers per incident), compared with 41 such occurrences between 1991 and 1995 [2].

The essence of smart grid technology is the provision of sensors and computational intelligence to power systems, enabling monitoring and control well beyond our current capabilities.  A vital component of our smart grid future is the wherewithal to detect a precarious situation and avert crisis, either by performing preventative maintenance or by reducing the time needed to locate failing equipment.  Moreover, remotely monitoring the infrastructure provides the possibility of improvements to the operational efficiency of the power system, perhaps through better routing of electric power or by dynamically determining equipment ratings based on external conditions such as ambient temperature or weather.

In the face of changing requirements driven by environmental concerns as well as external threats, it is becoming extraordinarily difficult for utilities to maintain the status quo.  As the adoption of plug-in [hybrid] electric vehicles intensifies, utilities must be prepared for a corresponding increase in power consumption.  The transition to a more intelligent grid is an inevitable consequence of our ever-increasing appetite for electricity and our continued commitment to environmental sustainability.

The deregulation of the electric power system also presents new and unique challenges, since an unprecedented number of participants must coordinate grid operations using more information than ever before.  If we are to maintain the level of reliability that customers have come to expect, we must be able to predict problems effectively, rather than simply react to them as they occur.

As the grid expands to serve growing customer demands as well as a changing society, we must proceed cautiously to ensure the system preserves its reputation of reliability.  It is incumbent upon us to carefully analyze past events and implement appropriate protection and control schemes using modern technologies.  It is clear that the power system of tomorrow will depend upon the design and preparation we conduct today.

[1] Tennessee Valley Authority. (2010, March) TVA Transmission System. [Online]. http://www.tva.gov/power/xmission.htm
[2] M. Amin, “North America’s electricity infrastructure: are we ready for more perfect storms?,” IEEE Security & Privacy, vol. 1, no. 5, pp. 19-25, September-October 2003.

I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani
at the University of Western Ontario.

Read Full Post »

While looking at Ontario Power Generation’s official web site, I noticed this number in the bottom right corner of the page:

It contains the amount of power being generated as well as the date/time of the last update. I refreshed a few times and realized that updates occur every five minutes. Curious, I thought I’d whip up a quick module to scrape this information from the web site and produce some nice graphs with RRDTool. I used the open source RRDTool::OO module to do this, which is freely available on the CPAN.

Recognizing that web scraping is not the most reliable means of getting data from a web site, I contacted OPG via e-mail and requested an API for this data. In the latest iteration of WWW::OPG (version 1.004, already on CPAN), a smaller machine-readable text file provides the same data in an easier-to-parse format. Thanks to someone I know only as “Rose” from OPG for providing this file, which is both easier to parse and less likely to change than scraped HTML.
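For illustration, the fetch-and-parse cycle can be sketched in a few lines. This example is in Python rather than the Perl that WWW::OPG actually uses, and the URL and single-integer payload format are placeholders, not OPG's real endpoint:

```python
import time
import urllib.request

# Placeholder endpoint; the real OPG file lives elsewhere, and WWW::OPG
# knows its actual layout.
DATA_URL = "http://example.com/opg/current_power.txt"

def parse_power(text):
    """Parse a single-line payload into an integer number of megawatts."""
    return int(text.strip())

def fetch_power():
    """Fetch the current generation figure from the (placeholder) endpoint."""
    with urllib.request.urlopen(DATA_URL, timeout=10) as response:
        return parse_power(response.read().decode("ascii"))

def poll(callback, interval=300):
    """Call `callback` with a fresh reading every five minutes,
    matching OPG's update cadence."""
    while True:
        try:
            callback(fetch_power())
        except (OSError, ValueError):
            pass  # transient failure: skip this sample rather than crash
        time.sleep(interval)
```

In the real module, each reading is then handed to RRDTool, which takes care of consolidating the five-minute samples into hourly and daily averages.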

As OPG supplies roughly 70% of Ontario’s electric power demand, these statistics give a relatively good reflection of our consumption patterns over time. During the course of this project, I learned how to work with round-robin databases (and wrote an article about it) and observed some interesting trends even in the first week of operation:

Power generation for week of 2009-12-25

The graph begins Saturday, December 26th, 2009 (Boxing Day) and continues through the week approaching the new year 2010. These particular trends are interesting because, while two observable peaks occur each day, the overall power consumption (including 95th percentile consumption) seems much lower than usual.

By comparison, consider this graph of the week ended 14 January 2010 (there were some rather long-lasting outages in the data collection, which I’m still trying to track down, but it still gives a sense of the general trends):

Power generation for week of 2010-01-07

In this case, the 95th percentile consumption is much higher, at about 14 GW rather than 10 GW. Note that the 95th percentile gives a rather good approximation of an infrastructure’s utilization rate, since it indicates peak power after discarding the highest 5% of data points. This means that 95% of the time, power consumption was at or below the given line.

The 95th percentile matters more than the average because it indicates the minimum infrastructure needed to satisfy demand most of the time (95% of it), giving us a simple way to determine whether more infrastructure is required.
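To make the calculation concrete, here is a minimal sketch of the sorted-sample method for computing the 95th percentile (this illustrates the idea; it is not necessarily the exact algorithm RRDTool uses internally):

```python
def percentile_95(samples):
    """Return the largest value remaining after the top 5% of samples
    are discarded."""
    ordered = sorted(samples)
    # Keep the lowest 95% of data points; the largest survivor is the
    # 95th percentile.
    cutoff = max(1, int(len(ordered) * 0.95))
    return ordered[cutoff - 1]

# A week of five-minute samples contains 7 * 24 * 12 = 2016 data points,
# so the discarded top 5% corresponds to roughly the 100 highest readings.
```

Applied to a week of power readings, this reports the capacity that would have covered demand during all but the most extreme 5% of intervals.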

In the specific case of electric power utilities, because electricity is so critical to industrial and commercial users alike, legal requirements stipulate that demand must always be supplied, barring exceptional circumstances such as distribution transformer failures. For utilities, then, maximum power consumption is the more useful measure for infrastructure planning.

Read Full Post »