ZigBee

The ZigBee protocol enables communication using multiple network topologies, including star, tree and mesh [1].  Ensuring the reliability of the communication channel is particularly challenging for smart meter designs, especially those using wireless backhaul channels, and ZigBee’s mesh topology makes it well suited to this application.  If a meter is out of range of a central tower, or obstructed by buildings or other objects, ZigBee-based meters can communicate with one another and relay information back to the data collection point [2].  Furthermore, since ZigBee devices use the unlicensed 2.4 GHz spectrum, they have a very low cost of deployment and allow many devices to be networked seamlessly.  Despite these benefits, ZigBee devices are designed primarily for short-range communication and low power consumption [3], so a separate wireless protocol is required for long-range transmission.  ZigBee is a key technology enabling the OpenHAN networking standard discussed later in this paper.

OpenADR (Automated Demand Response)

In the current power system, electricity is charged at a constant price regardless of time of use, so consumers have little incentive to change their usage patterns.  Introducing smart grids will allow for dynamic billing based on market pricing at the time of use, giving customers more incentive to plan their energy usage [4].  With the proposed automated demand response, individual smart meters will be capable of monitoring system-wide conditions, determining when the system is stressed, and allocating power to different appliances appropriately.  Automated demand response aims to reduce high loading during peak times, removing excess stress from the power system [5].

Open Automated Demand Response is a standard currently under development [6] that aims to ensure interoperability between the various devices in a smart meter infrastructure.  It will provide a way for users to program appliances to operate according to current electricity prices, for example running the laundry when power is cheapest.

OpenHAN (Home Area Network)

Open Home Area Network is a proposed standard for interfacing the smart meter in a residence with the appliances in the home.  OpenHAN allows for utility control of appliances, customer coordination and timing of appliance activation, and operational states of appliances based on set-points such as price.  Once this standard is complete and implemented, residents will be able to have appliances run automatically when electricity is cheapest, and utilities will be able to cease operation of appliances during peak loading times.  OpenHAN provides the fundamental link behind automated demand response: the connection between the customer’s smart meter and the customer’s appliances [7].

Worldwide Interoperability for Microwave Access (WiMAX)

WiMAX is an industrial wireless interoperability standard related to the existing Global System for Mobile Communications (GSM) technology [8].  It is typically used by land-based wireless Internet service providers, particularly those serving rural communities; however, it is finding applications within power systems as a backhaul for smart meter telemetry data [9].

Broadband over Power Lines

Several startup companies have explored the use of Broadband over Power Lines (BPL) for Internet service delivery or as a backhaul for telemetry from smart meters [10].  While it is no longer a serious contender for delivering Internet access to remote communities, the technology still has niche applications, particularly within the realm of power systems.  Some vendors continue to sell smart metering equipment that transmits telemetry over the power lines themselves [11] rather than over a dedicated radio frequency, which would require the purchase of costly spectrum.

Furthermore, the BPL couplers traditionally used for sending and receiving data across power lines can also be used to listen for the types of noise characteristic of certain equipment failures; for example, a cracked insulator beginning to fail induces a specific signature pattern that can be detected using BPL couplers [12].

[1] Peng Ran, Mao-heng Sun, and You-min Zou, “ZigBee Routing Selection Strategy Based on Data Services and Energy-Balanced ZigBee Routing,” in IEEE Asia-Pacific Conference on Services Computing, Xi’an, China, 2006, pp. 400-404.
[2] Hoi Yan Tung, Kim Fung Tsang, and Ka Lun Lam, “ZigBee Sensor Network for Advanced Metering Infrastructure,” in Power Electronics and Drive Systems, Taipei, Taiwan, 2009, pp. 95-96.
[3] ZigBee Alliance Inc. (2007, October) ZigBee Specification. [Online]. http://zigbee.org/ZigBeeSpecificationDownloadRequest/tabid/311/Default.aspx
[4] David Andrew, “National Grid’s use of Emergency Diesel Standby Generators in Dealing with Grid Intermittency and Variability,” in Open University Conference on Intermittency, Milton Keynes, UK, 2006.
[5] Dan Yang and Yanni Chen, “Demand Response and Market Performance in Power Economics,” in Power and Energy Society General Meeting, Calgary, AB, 2009, pp. 1-6.
[6] Ivin Rhyne et al., “Open Automated Demand Response Communications Specification,” Public Interest Energy Research Program (PIER), California Energy Commission, Berkeley, CA, PIER Final Project Report 2009.
[7] UtilityAMI OpenHAN Task Force. (2007, December) Requirements Working Group Specification Briefing. [Online].  http://osgug.ucaiug.org/sgsystems/openhan/HAN%20Requirements/OpenHAN%20Specification%20Dec.ppt
[8] Zheng Ruiming, Zhang Xin, Pan Qun, Yang Dacheng, and Li Xi, “Research on coexistence of WiMAX and WCDMA systems,” in IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications, Cannes, France, 2008, pp. 1-6.
[9] G.N. Srinivasa Prasanna et al., “Data communication over the smart grid,” in IEEE International Symposium on Power Line Communications and Its Applications, Dresden, Germany, 2009, pp. 273-279.
[10] X. Qiu, “Powerful talk,” IET Power Engineer, vol. 21, no. 1, pp. 38-43, February-March 2007.
[11] Echelon Corporation. (2010, March) Energy Management Control Networks. [Online].   http://www.echelon.com/products/energyproducts.htm
[12] Larry Silverman, “BPL shouldn’t mimic DSL/cable models,” BPL Today, pp. 1-7, July 2005.

One of my partners wrote the majority of this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario. It is included here for completeness with the rest of the articles. I edited the article and wrote the sections entitled: Worldwide Interoperability for Microwave Access (WiMAX) and Broadband over Power Lines.


Okay, so this is a long-awaited follow-up to my first post on the topic of Debian Perl packaging. Some of you might note that I was pretty extreme in the first post, which is partially because people only really respond to extremes when they’re new to things. When you first begin programming, the advice you get is “hey, never use goto statements”, but as you progress in your ability and your understanding of how it works and what it’s actually doing in the compiler, it might not seem so bad after all. In fact, I hear the Linux kernel uses it extensively for exception-style error handling in C. The Wikipedia page on exception handling in various languages shows how to implement exceptions in C using setjmp/longjmp (which is essentially a glorified goto). But I digress.

Back to the main point of this writeup. Previously, I couldn’t really think of cases where packaging your own modules is all that useful, especially when packaging them for Debian proper means that you benefit many communities: Debian, Ubuntu, and all of the distributions based on them.

That changed during a discussion with Hans Dieter Pearcey, following his article providing a nice comparison between dh-make-perl and cpan2dist. (Aside: I feel he was slightly biased toward cpan2dist in his writeup, but I’m biased toward dh-make-perl myself, so he might be right, even though I won’t admit it.)

I’m really glad for that article and the ensuing dialogue, because it got people talking about what they use Debian Perl packages for, and where it is useful to make your own.

Firstly, if you’ve got an application that depends on some Perl module that isn’t packaged by Debian, but you need it yesterday, then you can either install that module via CPAN or roll your own Debian package. The idea here is to make and install the package so you can use it right away, but also to file a Request For Package bug at the same time (see the reportbug command in Debian, or use Launchpad if you’re on Ubuntu). That way, when the package is officially released and supported, you can move to it instead, and get the benefit of automatic upgrades of those packages.
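As a rough sketch of that workflow (assuming the dh-make-perl package is installed; Foo::Bar and the resulting .deb name are placeholders for whatever module you actually need):

    # Build a Debian package directly from the module's CPAN release
    dh-make-perl --build --cpan Foo::Bar

    # Install the freshly built package so the application can use it right away
    sudo dpkg -i libfoo-bar-perl_*.deb

    # Meanwhile, file a Request For Package bug so Debian eventually ships it officially
    reportbug wnpp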

Secondly, if you’ve got an application that depends on internally developed modules, those modules probably won’t exist on CPAN (some call this Perl code part of the DarkPAN), except in the rare case that a company open-sources its work. Corporations will never open source all of their work, even if they consider contributing some of it to the open source community, so at some point or another you’ll need to deal with internal packages. Previously, the best way to handle this was to set up your own local CPAN mirror and have other machines install and upgrade from it, so that your internal code is distributed via the usual mechanism.
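A minimal sketch of that setup, assuming the CPAN::Mini module is installed, /srv/minicpan is a placeholder path, and cpan.example.internal is a placeholder hostname serving the mirror (internal distributions can then be added to it with a tool such as CPAN::Mini::Inject):

    # On the mirror host: create or refresh a local CPAN mirror
    minicpan -r http://www.cpan.org/ -l /srv/minicpan

    # On each client: point the CPAN shell at the local mirror instead of the public one
    cpan
    cpan> o conf urllist push http://cpan.example.internal/
    cpan> o conf commit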

One of the advantages of using CPAN to distribute things is that it’s available on most platforms, and it builds and tests modules automatically on whatever machine installs them. CPANPLUS will even let you remove packages; the ability to cleanly remove things was one of the main reasons I became so pro-Debian-packages in the first place. However, it does mean you’ll need to rebuild the package on every system, which is prone to failures that cost time and money to track down and fix. CPAN and CPANPLUS are the traditional Perl way of distributing packages.

If you are working in an environment made up mostly of Debian systems, however, you may benefit from using a local Debian repository. This way, you only need to upgrade packages in your repository, and they’ll be upgraded automatically along with the rest of your operating system (you do run update and upgrade periodically, right?). There is even the fantastic cron-apt program to automate this, so there’s really no excuse not to update automatically.
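Here is a minimal sketch of a trivial, unsigned local repository (it assumes the dpkg-dev package for dpkg-scanpackages; the paths and hostname are placeholders, and a production setup would use a signed repository built with a tool such as reprepro):

    # On the repository host: drop your .deb files in one directory and index them
    cd /srv/debian-local
    dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

    # On each client: add the repository, then update and upgrade as usual
    echo "deb http://repo.example.internal/debian-local ./" | sudo tee /etc/apt/sources.list.d/local.list
    sudo apt-get update && sudo apt-get upgrade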

In either case, creating a local package means you will be able to easily remove anything you no longer need via the normal package management tools. You can also distribute the binary packages between machines, though for modules that incorporate C or other platform-specific code, the package needs to be rebuilt for each platform. Most Perl modules, however, are pure Perl, so you can build and test once, on one machine, and distribute the result simply by installing the .deb on the other machines. You can copy packages to machines and use dpkg to install them (as sketched below), or better yet, put them in a local Debian repository so it is done automatically and via the usual mechanism (aptitude, etc.).
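For instance (the hostname and package name here are placeholders):

    # Copy a pure-Perl package to another machine and install it there
    scp libfoo-bar-perl_0.01-1_all.deb otherhost:/tmp/
    ssh otherhost sudo dpkg -i /tmp/libfoo-bar-perl_0.01-1_all.deb

    # Because it is a normal Debian package, removing it later is just as clean
    ssh otherhost sudo apt-get remove libfoo-bar-perl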

In conclusion: if you’re going to make your own Debian packages, do so with caution, and be aware of all the consequences (positive and negative) of what you’re doing. As always, a real understanding of the tools and the process is necessary.


One thing that makes Perl different from many other languages is that it has a rather small collection of core functions. There are only a few hundred built-in functions in Perl itself, so the rest of its functionality comes from its rich collection of modules, many of which are distributed via the Comprehensive Perl Archive Network (CPAN).

When CPAN first came on the scene, it preceded many modern package management systems, including Debian’s Advanced Packaging Tool (APT) and Ruby’s gem system, among others. As a consequence of this long history, the CPAN Shell is relatively simplistic by today’s standards, yet it still continues to get the job done quite well.

Unfortunately, there are two issues with CPAN:

  1. Packages are distributed as source code, which is built on each individual machine when packages are installed or upgraded.
    • Since packages must be re-built on every machine that installs them, the system is prone to breakage and wastes CPU time and other resources. (The CPAN Testers system is a great way for module authors to mitigate this risk, though.)
    • Due to the wide variation among packages, many cause problems with the host operating system in terms of where they install files, or where they expect files to be installed. This is because CPAN does not (and cannot) know about every environment its packages will be installed on.
  2. It does not integrate nicely with system package managers.
    • The standard CPAN Shell is not designed to remove modules, only to install them. Removals must be done manually, which is prone to human error such as forgetting to clean up certain files, or breaking other installs in the process.
    • It cannot possibly know the policies that govern the various Linux flavours or Unices. This means that packages might be installed where users do not expect them, which violates the Principle of Least Surprise.
    • It is a separate ecosystem to maintain. When packages are updated via the normal means (e.g. APT), packages installed via CPAN will be left alone (i.e. not upgraded).

Here is the real problem: packages installed via CPAN are left alone. This means that when new releases come out, your system will retain an old copy of a package until you open the CPAN Shell and upgrade it manually. If you’re administering your own system, this isn’t a big problem, but it has significant implications for collections of production systems. If you are managing thousands of servers, you will need to run the upgrade on each server and hope that the build doesn’t break (thus requiring your, or somebody else’s, intervention).
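That manual dance, repeated host by host, looks roughly like this (the upgrade command requires a reasonably recent CPAN.pm):

    cpan                # open the CPAN Shell on this particular host
    cpan> r             # report installed modules that have newer releases on CPAN
    cpan> upgrade       # rebuild and reinstall the outdated ones, hoping nothing breaks
    cpan> quit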

One of the biggest reasons to select Debian is one of its primary design goals: to be a Universal Operating System. What this means is that the operating system should run on as many different platforms and architectures as possible, while providing the same rich environment on each of them to the greatest extent possible. So whether I’m using Debian GNU/Linux on x86 or Debian GNU/kFreeBSD on x64, I have access to the same applications, including the same Perl packages. Debian has automated tools to build and test packages on every architecture we support.

The first thing I’m going to say is: if you are a Debian user, or a user of its derivatives, there is absolutely no need for you to create your own packages. None. Just don’t do it; it’s bad. Avoid it like the goto statement, mmkay?

If you come across a great CPAN package that you’d really like to see packaged for Debian, then contact the Debian Perl Packagers (pkg-perl) team, and let us know that you’d like a package. We currently maintain well over a thousand Perl packages for Debian, though we are by no means the only maintainers of Perl packages in Debian. You can do this easily by filing a Request For Package (RFP) bug using the command: reportbug wnpp.

On-screen prompting will walk you through the rest, and we’ll try to package the module as quickly as possible. When we’re done, you’ll receive a nice e-mail letting you know that your package has been created, thus closing the bug. It may take a few days of waiting, but you will have a package in perfect working condition as soon as we can create it for you. Moreover, you’re helping the next person who needs such a module, since it will already be available in Debian (and in due time it will propagate to its derivatives, like Ubuntu).

All 25,000+ Debian packages meet the rigorous requirements of Debian Policy. The majority of them meet the Debian Free Software Guidelines (DFSG) too; the ones that are not considered DFSG-free are placed in their own repository, separate from the rest of the packages. A current work in progress is machine-parseable copyright control files, which will hopefully give administrators a way to quickly review the licensing terms of all the software they install. This is especially important for small and medium-sized businesses without their own intellectual property legal departments to review open source software, a gap that continues to drive many businesses away from using open source.

For the impatient, note this well: packages that are not maintained by Debian are not supported by Debian. This means that if you install something using a packaging tool (we’ll discuss these later) or via CPAN, then that package is entirely your own responsibility. In the unlikely event that you totally break your system installing a custom package, it’s totally your fault, and it may mean you will have to restore an earlier backup or re-install your system completely. Be very careful if you decide to go this route. The couple of days spent waiting for a package to be pushed through the normal channels is worth it, because it ensures that your package will work on every platform you’re likely to encounter.

The Debian Perl Packaging group offers its services freely to the public for the benefit of our users. It is much better to ask the volunteers (preferably politely) to get your package into Debian, so that it passes through the normal testing channels. You really should avoid making your own packages in a vacuum; the group is always open to new members, and joining means your package will be reviewed (and hopefully uploaded into Debian) by our sponsors.

But the thing about all rules is that there are always exceptions. There are, in fact, some cases where you might want to produce your own packages. I was discussing this with Hans Dieter Pearcey the other day, and he has written a great follow-up blog post about the primary differences between dh-make-perl and cpan2dist, two packaging tools with a similar purpose but very different design goals. Another article will follow this one, in which I will discuss the differences between the two.


This article was originally published in Project Magazine, a Canadian periodical written by engineering students, for engineering students. The original publication date is unknown, but it was some time in 2008. I am publishing it here because it is still a relevant read, especially in light of our growing use of social networking tools.

Like many now-ubiquitous inventions, what we know today as the World Wide Web began its life as a simple research project. In the 1980s, Tim Berners-Lee, often credited with the creation of the Web, sought to provide the academic community with a system for distributing, sharing and publishing information.

Independent of Berners-Lee’s work, the University of Minnesota developed the Gopher protocol as a universal document retrieval system, marking a revolutionary shift in thinking; it was an attempt to model the intricate relationships between documents in a way that computers could understand. The links between these resources, or hypertext, would pave the way for the Web to evolve over the next three decades.

Building on Gopher’s hypertext linking capabilities and the Generalized Markup Language developed at IBM, the HyperText Markup Language (HTML) enabled the Web to incorporate formatted text and more advanced features such as embedded media (images at first, and later sounds and video). Interest in the World Wide Web as the next communication medium became apparent, largely as a result of the ease of publishing information.

Over the fifteen years that followed, many companies, including Netscape and Microsoft, engaged in an arms race to develop new features catering to an exponentially growing market. During this time, browsers added countless extensions to HTML, some of which became de facto standards, albeit ones that departed from Berners-Lee’s vision for the Web. By the release of HTML 3.2, browser support for tables and other complex formatting became widespread, enabling the publication of an ever-increasing array of scientific and literary works.

By the early 21st century, software such as blogs, social networking, wikis and podcasts marked the birth of a second generation of the World Wide Web. The idea of Web 2.0 captured the transition of many websites from isolated systems to an interlinked, global computing platform. Ultimately, Web 2.0 is about increasing the socialization of the Web, enriching collaboration and utility for users. This had significant implications for both individuals and businesses, because it provided a means to make sense of the growing amount of available information.

Progress in the field of web standards has been relentless, yet gradual. Under the guidance of the World Wide Web Consortium (W3C), a multinational non-profit organization founded by Tim Berners-Lee, standards are developed through several stages of peer review and then officially published to the community at large. This ensures that updates are logical and consistent with the W3C’s goals of interoperability, flexibility and extensibility.

The largest step forward so far has been the separation of a document’s structure from its presentation. Cascading Style Sheets (CSS) enable this by providing a separate language to describe how content should be rendered on various devices. This is particularly important for accessibility: information such as a font being red or bold has no meaning for alternative display systems such as screen readers (text-to-speech) and Braille outputs. In this way, multiple style sheets can be created for each document, allowing it to behave differently depending on the output medium.

So what is the future direction of the Internet and the World Wide Web? As we gather increasing amounts of information from our environment, we need a way of organizing it that is interoperable. Tim Berners-Lee envisions a Web connected not merely by the data itself, but by computers that understand the meaning of the data. While another browser war would likely yield some notable results, it would inevitably prevent this dream from coming to fruition. This is why initiatives proposed by institutions such as the W3C must be adhered to by industry.

In everyday use, the Semantic Web will provide the ability to interpret information in unprecedented ways: for example, the transactions on your bank statements could be overlaid onto a calendar, or plotted on graphs based on arbitrary criteria. Indeed, the possibilities are endless, and the technology to make this happen already exists; all we need is one last push to implement it. We are looking toward a future where computers can do an increasing amount of the work and provide detailed analysis through the implementation of Web standards.
