Posts Tagged ‘User Experience’

ZigBee

The ZigBee protocol enables communication using multiple network topologies, including star, tree, and mesh [1].  Ensuring a reliable communication channel is particularly challenging for smart meter designs, especially those that rely on wireless backhaul, and ZigBee's mesh topology makes it well suited to this application.  When a meter is out of range of a central tower or obstructed by buildings or other objects, ZigBee-based meters can communicate with one another and relay information back to the data collection point [2].  Furthermore, since ZigBee devices use the unlicensed 2.4 GHz spectrum, they have a very low cost of deployment and allow seamless integration and networking of many devices.  Despite these benefits, ZigBee is designed primarily for short-range communication and low device power consumption [3], so a separate wireless protocol is required for long-range transmission.  ZigBee is a key technology enabling the OpenHAN networking standard discussed later in this paper.

OpenADR (Open Automated Demand Response)

Because electricity is charged at a constant price in the current power system, regardless of time of use, consumers have little incentive to change their usage patterns.  Introducing smart grids will allow for dynamic billing based on the market price at the time of use, giving customers more incentive to plan their energy usage [4].  With the proposed automated demand response, individual smart meters will be capable of monitoring system-wide conditions, determining when the system is stressed, and allocating power appropriately among appliances.  Automated demand response aims to reduce loading during peak times, removing excess stress from the power system [5].

Open Automated Demand Response is a standard currently under development [6] that aims to ensure interoperability between various smart meter infrastructure devices.  It will provide a way for users to program appliances to operate according to current electricity prices, for example running the laundry when power is cheapest.

OpenHAN (Home Area Network)

Open Home Area Network is a proposed standard for interfacing a residence's smart meter with the appliances in the home.  OpenHAN can allow for utility control of appliances, customer coordination and scheduling of appliance activation, and operational states of appliances based on set-points such as price.  Once the standard is complete and implemented, residents will be able to have appliances run automatically when electricity is cheapest, and utilities will be able to cease operation of appliances during peak loading.  OpenHAN embodies the fundamental idea behind automated demand response: a link between the customer's smart meter and the customer's appliances [7].

Worldwide Interoperability for Microwave Access (WiMAX)

WiMAX is an industry wireless interoperability standard, based on the IEEE 802.16 family, comparable to cellular technologies such as the Global System for Mobile Communications (GSM) [8].  It is typically used by land-based wireless Internet service providers, particularly those serving rural communities; however, it is also finding applications within power systems as a backhaul for smart meter telemetry data [9].

Broadband over Power Lines

Several startup companies have explored the use of Broadband over Power Lines (BPL) for Internet service delivery or as a backhaul for telemetry from smart meters [10].  While it is no longer a serious contender for delivering Internet access to remote communities, the technology still has its niche applications, particularly within the realm of power systems.  Some smart meter vendors continue to sell metering equipment that transmits telemetry over the power lines themselves [11] rather than over dedicated radio frequencies, which would require the purchase of costly spectrum.

Furthermore, the BPL couplers traditionally used for sending and receiving data across power lines can also be used to listen for the kinds of noise characteristic of certain equipment failures; for example, a cracked insulator beginning to fail induces a specific signature pattern that can be detected using BPL couplers [12].

[1] Peng Ran, Mao-heng Sun, and You-min Zou, “ZigBee Routing Selection Strategy Based on Data Services and Energy-Balanced ZigBee Routing,” in IEEE Asia-Pacific Conference on Services Computing, Xi’an, China, 2006, pp. 400-404.
[2] Hoi Yan Tung, Kim Fung Tsang, and Ka Lun Lam, “ZigBee Sensor Network for Advanced Metering Infrastructure,” in Power Electronics and Drive Systems, Taipei, Taiwan, 2009, pp. 95-96.
[3] ZigBee Alliance Inc. (2007, October) ZigBee Specification. [Online]. http://zigbee.org/ZigBeeSpecificationDownloadRequest/tabid/311/Default.aspx
[4] David Andrew, “National Grid’s use of Emergency Diesel Standby Generator’s in Dealing with Grid Intermittency and Variability,” in Open University Conference on Intermittency, Milton Keynes, UK, 2006.
[5] Dan Yang and Yanni Chen, “Demand Response and Market Performance in Power Economics,” in Power and Energy Society General Meeting, Calgary, AB, 2009, pp. 1-6.
[6] Ivin Rhyne et al., “Open Automated Demand Response Communications Specification,” Public Interest Energy Research Program (PIER), California Energy Commission, Berkeley, CA, PIER Final Project Report 2009.
[7] UtilityAMI OpenHAN Task Force. (2007, December) Requirements Working Group Specification Briefing. [Online].  http://osgug.ucaiug.org/sgsystems/openhan/HAN%20Requirements/OpenHAN%20Specification%20Dec.ppt
[8] Zheng Ruiming, Zhang Xin, Pan Qun, Yang Dacheng, and Li Xi, “Research on coexistence of WiMAX and WCDMA systems,” in IEEE 19th International Symposium on Personal, Indoor and Mobile Radio Communications, Cannes, France, 2008, pp. 1-6.
[9] G.N. Srinivasa Prasanna et al., “Data communication over the smart grid,” in IEEE International Symposium on Power Line Communications and Its Applications, Dresden, Germany, 2009, pp. 273-279.
[10] X. Qiu, “Powerful talk,” IET Power Engineer, vol. 21, no. 1, pp. 38-43, February-March 2007.
[11] Echelon Corporation. (2010, March) Energy Management Control Networks. [Online].   http://www.echelon.com/products/energyproducts.htm
[12] Larry Silverman, “BPL shouldn’t mimic DSL/cable models,” BPL Today, pp. 1-7, July 2005.

One of my partners wrote the majority of this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario. It is included here for completeness with the rest of the articles. I edited the article and wrote the sections entitled: Worldwide Interoperability for Microwave Access (WiMAX) and Broadband over Power Lines.


The smart grid is born of modern necessity; this article discusses a brief history and establishes practical relevance for a smarter grid.

History

The term smart grid has been in use since at least 2005, when the article “Toward a Smart Grid,” written by S. Massoud Amin and Bruce F. Wollenberg, appeared in the September-October issue of Power and Energy Magazine.  For decades, engineers have envisioned an intelligent power grid with many of the capabilities found in formal definitions of today’s smart grids.  Indeed, while the development of modern microprocessor technologies has only recently made it economical for utilities to deploy smart measurement devices at large scale, the smart grid’s humble beginnings can be traced as far back as the late 1970s, when Theodore Paraskevakos patented the first remote meter reading and load management system [1].

Relevance

For the next several decades, our global energy strategy will inevitably involve upgrading to a more intelligent grid system.  Three fundamental motivators are driving this change: current bulk generation facilities are reaching their limits; utilities must maximize operational efficiency today in order to postpone the costly addition of new transmission and distribution infrastructure; and they must do all of this without compromising the reliability of the power system.  In fact, many governments and regulators, including the Essential Services Commission (ESC) of Victoria, Australia [2], are adopting legislation to make crucial components of a smarter grid mandatory.  In Canada, Hydro One’s distribution system already has over a million smart meters installed [3] in preparation for time-of-use rates slated to become mandatory by 2011 [4].

Capacity

Over the next several decades, consumer advocacy groups and public environmental concerns will make it difficult to construct new centralized generation plants to meet quickly growing demand for electric power.  Moreover, global electricity demand will require the addition of 1000 MW of generation capacity, along with all related infrastructure, every week for the foreseeable future [5].  Traditional bulk generation plants are also becoming prohibitively expensive to construct under cap-and-trade legislation, which places severe financial penalties on processes that continue to emit carbon dioxide and other harmful greenhouse gases.  In conjunction with the higher economic cost, there are social pressures and widespread concerns about long-term sustainability.

Reliability

With the exception of hydroelectric and geothermal power, renewable energy sources such as wind and solar present unique challenges: they are intermittent by nature and may vary significantly in power output due to rapidly changing external factors.  Consequently, we must retrofit the existing power grid to ensure that it can maintain system stability despite these fluctuations in power output.  Furthermore, utilities must be able to monitor key indicators of system reliability on a continual basis, particularly as we approach the grid’s maximum theoretical capacity.

Efficiency

A smarter grid can also improve operational efficiencies by intelligently routing different sources of energy.  Because we currently send electricity from distant power generation facilities to serve customers across hundreds of kilometres of transmission lines, approximately eight percent of the total generated electric power is lost as waste heat [6].  Moreover, we can make better use of the existing power generation infrastructure by reducing peak demand; in fact, the International Energy Agency found that a 5% demand response capability can reduce wholesale electricity prices by up to 50% [7].

[1] Theodoros G. Paraskevakos and W. Thomas Bushman, “Apparatus and method for remote sensor monitoring, metering and control,” USPTO#4241237, December 30, 1980.
[2] Essential Services Commission, “Mandatory Rollout of Interval Meters for Electricity Customers,” Essential Services Commission, Melbourne, Victoria, Draft Decision.
[3] Hydro One. (2009, June) One Million Smart Meters Installed – Hydro One Networks and Hydro One Brampton Reach Important Milestone. [Online].  http://www.hydroone.com/OurCompany/MediaCentre/Documents/NewsReleases2009/06_22_2009_smart_meter.pdf
[4] Ontario Energy Board. (2010, February) Monitoring Report: Smart Meter Deployment and TOU Pricing – 2009 Fourth Quarter. [Online].   http://www.oeb.gov.on.ca/OEB/Industry/Regulatory+Proceedings/Policy+Initiatives+and+Consultations/Smart+Metering+Initiative+%28SMI%29/Smart+Meter+Deployment+Reporting
[5] The ABB Group. (2010, March) Performance of future [power] systems. [Online].      http://www.abb.com/cawp/db0003db002698/c663527625d66b1dc1257670004fb09f.aspx
[6] Hassan Farhangi, “The Path of the Smart Grid,” Power and Energy Magazine, vol. 8, no. 1, pp. 18-28, January-February 2010.
[7] International Energy Agency, “The Power to Choose: Demand Response in Liberalised Electricity Markets,” International Energy Agency, Paris, France, 2003.

I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario.


Historical events provide the greatest indication of our need for a more flexible, more intelligent, and more reliable power system.  In the Western world, the Tennessee Valley Authority’s bulk transmission system achieved five nines of availability over the ten years ended 2009 [1], which corresponds to under 5.26 minutes of outage annually.  However, while the grid is generally robust to disturbances, catastrophic events like the 2003 Northeast Blackout serve as a solemn reminder of the fragility of the system, which remains susceptible to cascading outages originating from a handful of preventable failures in key parts of the network.  More concerning is the increasing incidence of widespread outages: in the US, 58 outages each affecting more than 50,000 customers occurred from 1996 to 2000 (an average of 409,854 customers per incident), compared with 41 such occurrences between 1991 and 1995 [2].

The essence of smart grid technology is the provision of sensors and computational intelligence to power systems, enabling monitoring and control well beyond our current capabilities.  A vital component of our smart grid future is the wherewithal to detect a precarious situation and avert crisis, either by performing preventative maintenance or by reducing the time needed to locate failing equipment.  Moreover, remotely monitoring the infrastructure provides the possibility of improvements to the operational efficiency of the power system, perhaps through better routing of electric power or by dynamically determining equipment ratings based on external conditions such as ambient temperature or weather.

In the face of changing requirements due to environmental concerns as well as external threats, it is becoming extraordinarily difficult for the utility to continue to maintain the status quo.  As the adoption of plug-in [hybrid] electric vehicles intensifies, the utility must be prepared for a corresponding increase in power consumption.  The transition to a more intelligent grid is an inevitable consequence of our ever-increasing appetite for electricity as well as our continued commitment to encouraging environmental sustainability.

The deregulation of the electric power system also presents new and unique challenges, since an unprecedented number of participants need to coordinate grid operations using more information than ever before.  If we are to maintain the level of reliability that customers have come to expect from the power system, we must be able to predict problems effectively, rather than simply react to them as an eventuality.

As the grid expands to serve growing customer demands as well as a changing society, we must proceed cautiously to ensure the system preserves its reputation of reliability.  It is incumbent upon us to carefully analyze past events and implement appropriate protection and control schemes using modern technologies.  It is clear that the power system of tomorrow will depend upon the design and preparation we conduct today.

[1] Tennessee Valley Authority. (2010, March) TVA Transmission System. [Online].   http://www.tva.gov/power/xmission.htm
[2] M. Amin, “North America’s electricity infrastructure: are we ready for more perfect storms?,” IEEE Security and Privacy, vol. 1, no. 5, pp. 19-25, September-October 2003.

I originally wrote this article for a report submitted to ECE4439: Conventional, Renewable and Nuclear Energy, taught by Professor Amirnaser Yazdani at the University of Western Ontario.


Okay, so this is a long-awaited follow-up to my first post on the topic of Debian Perl packaging. Some of you might note I was pretty extreme in the first post, which is partially because people only really respond to extremes when they’re new to things. When you first begin programming, the advice you get is “hey, never use goto statements” — but as you progress in your ability and your understanding of how it works and what it’s actually doing in the compiler, it might not seem so bad after all. In fact, I hear the Linux kernel uses it extensively to provide exception-style error handling in C. The Wikipedia page on exception handling in various languages shows how to implement exceptions in C using setjmp/longjmp (which is essentially a non-local goto). But I digress.

Back to the main point of this writeup. Previously, I couldn’t really think of cases where packaging your own modules privately is all that useful, especially when packaging them for Debian proper means that you benefit many communities — Debian, Ubuntu, and all of the distributions based on them.

This post grew out of a discussion with Hans Dieter Pearcey after his article providing a nice comparison between dh-make-perl and cpan2dist. (Aside: I feel like he was slightly biased toward cpan2dist in his writeup, but I’m biased toward dh-make-perl myself, so he might be right, even though I won’t admit it.)

I’m really glad for that article and the ensuing dialog, because it really got people talking about what they use Debian Perl packages for, and where it is useful to make your own.

Firstly, if you’ve got an application that depends on some Perl module that isn’t packaged for Debian, but you need it yesterday, then you can either install that module via CPAN or roll your own Debian package. The idea here is to make and install the package so you can use it, but also to file a Request For Package bug at the same time — see the reportbug command in Debian, or use Launchpad if you’re on Ubuntu. This way, when the package is officially released and supported, you can move to that instead, and get the benefits of automatic upgrades.
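Concretely, the stop-gap workflow looks something like this sketch (the module name Foo::Bar and the resulting .deb filename are made up for illustration):

    # Option 1: install straight from CPAN (quick, but unmanaged by dpkg/apt)
    cpan Foo::Bar

    # Option 2: build a quick local Debian package from CPAN and install it
    dh-make-perl --build --cpan Foo::Bar
    dpkg -i libfoo-bar-perl_0.01-1_all.deb    # hypothetical filename/version

    # Either way, file a Request For Package bug so it gets packaged properly
    reportbug wnpp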

Secondly, if you’ve got an application that depends on some internally developed modules, they probably won’t exist on CPAN (some call this Perl code part of the DarkPAN), except in the rare case that a company open-sources its work. But corporations will never open-source all of their work, even if they consider contributing some of it to the open source community, so at some point or another you’ll need to deal with internal packages. Previously, the best way to handle this was to construct your own local CPAN mirror and have other machines install and upgrade from it — that way your internal code is distributed via the usual mechanism.
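A minimal sketch of that mirror approach, assuming CPAN::Mini’s minicpan tool and a made-up mirror path (private distributions can then be added to the mirror with a tool such as CPAN::Mini::Inject):

    # Mirror the latest versions of public CPAN into a local directory
    minicpan -l /srv/minicpan -r http://www.cpan.org/

    # Point the CPAN shell on client machines at the local mirror
    cpan
    # cpan> o conf urllist unshift file:///srv/minicpan
    # cpan> o conf commit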

One of the advantages of using CPAN to distribute things is that it’s available on most platforms, and it builds and tests modules automatically on each machine. CPANPLUS will even let you uninstall modules, the lack of which was one of the main reasons I am so pro-Debian packages in the first place. However, it does mean you’ll need to rebuild the package on every system, which is prone to failures that cost time and money to track down and fix. CPAN and CPANPLUS are the traditional Perl way of distributing packages.
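To illustrate the uninstall point, a hedged sketch using the CPANPLUS shell (the module name is hypothetical):

    cpanp                          # start the CPANPLUS shell
    # CPAN Terminal> u Foo::Bar    -- the "u" command uninstalls an installed module,
    #                                 something the classic CPAN Shell does not offer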

If you are working in an environment consisting mostly of Debian systems, however, you may benefit from a local Debian repository instead. This way, you only need to update packages in your repository, and they’ll be upgraded automatically along with the rest of the operating system (you do run update and upgrade periodically, right?). There is even the fantastic cron-apt program to automate this, so there’s really no excuse not to update automatically.
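A minimal sketch of a trivial local repository, with made-up paths and package names (a dedicated tool such as reprepro would give you a proper pool layout, signing, and multiple suites):

    # On the repository host: drop the .debs in a directory and index them
    mkdir -p /srv/debs
    cp libfoo-bar-perl_0.01-1_all.deb /srv/debs/
    cd /srv/debs && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

    # On each client (serve the directory over HTTP, or mount it, so clients can reach it;
    # apt will warn about unauthenticated packages unless you sign the repository):
    echo 'deb file:/srv/debs ./' > /etc/apt/sources.list.d/local.list
    apt-get update && apt-get install libfoo-bar-perl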

In either case, creating a local package means you will be able to easily remove anything you no longer need via the normal package management tools. You can also distribute the binary packages between machines — though for modules that incorporate C or other platform-specific code, the package must be rebuilt on each platform. Most Perl modules, however, are pure Perl, so you can build and test them once, on one machine, and distribute them to the others simply by installing the .deb package. You can copy packages around and use dpkg to install them, or better yet, use a local Debian repository so it’s done automatically via the usual mechanism (aptitude, etc.).
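For example, a hedged sketch of the copy-and-install route (the hostname and package filename are made up); pure-Perl modules build an architecture-independent _all.deb, which is what makes this safe:

    # Built once on the build machine; the _all suffix marks it architecture-independent
    scp libfoo-bar-perl_0.01-1_all.deb otherhost:/tmp/
    ssh otherhost dpkg -i /tmp/libfoo-bar-perl_0.01-1_all.deb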

In conclusion: if you’re going to make your own Debian packages, do so with caution, and be aware of all the consequences (positive and negative) of what you’re doing. As always, a real understanding of what the tools are doing underneath is necessary.


It’s been some time since I re-installed Debian over my Kubuntu install, so I thought I’d discuss some reasons why I changed back to Debian, what my experience was like, and some learning opportunities.

One reason I made the switch was that a utility I wanted, Frama-C, had been newly packaged for Debian but was not available in Kubuntu at the time. I was also having various frustrations with the Kubuntu installation, not the least of which was an unreliable and quite crashy KDE Plasma.

When I reinstalled this time, I picked the normal install but told it to set up a graphical environment, which gave me a GNOME desktop. I actually rather like it: the setup didn’t ask too many questions and everything was configured properly. There was some minor tweaking, but it was all done through the easily accessible System menu and the applets therein.

Now, I wanted to be able to use the installation both as a virtual machine and as a physical dual-boot. This wasn’t working properly with GRUB 2, so I had to stay with version 1.96, which works rather well. I even spent some time making a pretty splash image for it, which looks rather nice, even if I don’t see it all that often.

If I boot into the Virtual Machine, all the hardware is detected properly, and there aren’t even complaints about the fact that a bunch of hardware disappeared — certainly very good news if you decide to do something like move your hard drive to a different machine. Likewise, if I boot into the desktop, everything works well there too.

One issue I came across during the installation was having to teach Network-Manager how to configure my network interfaces. In my VMware NAT setup, there is no DHCP server, so the IP address, subnet and gateway information needs to be statically defined. Luckily, Network-Manager was able to do this based on the MAC address of the adapter — inside my virtual machine, it had a VMware-assigned static one. Through this, Network-Manager had an easy way to determine how to configure my network, and it works beautifully for Ethernet and Wireless (when Debian is running as the main operating system) and also for VMware NAT (when inside the virtual machine container).
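For reference, a hedged sketch of the equivalent static definition in /etc/network/interfaces, with made-up addresses for a VMware NAT network; note that NetworkManager normally leaves interfaces defined this way alone, so this is an alternative to the GUI approach described above rather than part of it:

    # Add a static stanza to /etc/network/interfaces
    # (interface name and addresses are illustrative only):
    auto eth0
    iface eth0 inet static
        address 192.168.110.10
        netmask 255.255.255.0
        gateway 192.168.110.2

    # Then bring the interface up with the new configuration:
    ifup eth0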

Anyway, I have now been developing quite happily inside a Debian + GNOME desktop environment. The system runs fine even within a constrained environment, though I miss KDE’s setup with sudo; with GNOME, the only option seems to be entering the root password every time privilege escalation is necessary. I don’t like using a root password — on my server I don’t use the root password at all, and do everything I need via sudo. That works for me because I log into the server with a private key and have disabled SSH password authentication for security reasons.
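A minimal sketch of that sudo-plus-keys setup on Debian (the username is hypothetical; the sshd_config directives shown are the standard ones):

    # Let the user escalate via sudo: on typical Debian setups, members of the
    # "sudo" group may run commands as root (check /etc/sudoers)
    adduser myuser sudo

    # In /etc/ssh/sshd_config, allow only key-based logins:
    #   PasswordAuthentication no
    #   PermitRootLogin no
    /etc/init.d/ssh restart    # reload sshd with the new settings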

One thing that is still weird for me is that my system currently shows a time of 01:53 even though it is 23:57 in my timezone. Presumably the few minutes of difference is because the virtual machine clock and my system hardware clock aren’t synchronized perfectly, but more than that, I think it’s an issue with the Date applet somehow. I haven’t looked into this because the thing is running inside a virtual machine, so it doesn’t bother me much.

I have looked high and low to see where to change the time zone, and to my knowledge the system knows that it’s in the America/Toronto time zone. The house picture next to Timmins (the city I am in right now, though it doesn’t matter since the timezone is the same) seems to indicate to me that it’s set to the appropriate time zone.

I think it’s due to VMware synchronizing the virtual machine clock with my host machine clock. Windows (my host operating system) keeps the hardware clock in local time, which I believe Linux interprets as UTC. Still, that doesn’t fully explain the weird display it’s got going.
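If I ever care enough to fix it, the usual remedy is to tell the guest that its hardware clock is kept in local time rather than UTC; a hedged sketch for Debian of that era, where the setting lives in /etc/default/rcS:

    # Tell Debian the hardware clock is in local time, the way Windows hosts keep it
    sed -i 's/^UTC=.*/UTC=no/' /etc/default/rcS

    # Write the current system time back to the hardware clock as local time
    hwclock --systohc --localtime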

Someone noted last time that I didn’t directly mention which programs are only offered on Windows and not on Linux et al., and which have no reasonable replacements on those systems. Kapil Hari Paranjape noted that I was sounding somewhat like a troll by simply saying that I don’t think Linux is yet ready to replace my environment. Here was my reply:

Far from being a troll, I’d really like Debian and Ubuntu, and Linux in general, to keep improving at the pace they have been. Linux has made great progress since the last time I tried it out on my desktop, but I have to acknowledge that there are lots of rough edges right now that should be worked out.

One of the advantages of huge proprietary development organizations like Microsoft is that they have tons of developers and can implement new features at a relatively quick pace, even if they’re half-assed. Developers’ pride in the FOSS community prevents this overly quick pace of development in favour of more secure, more stable platforms. Which is a good thing, I think. But nonetheless it results in a “slower” development pace.

The applications I’m complaining about are things like:
– SolidWorks (a CAD tool for designing parts and assemblies, used in manufacturing and mechanical engineering)
– Micro-Cap from Spectrum Software (a circuit simulator similar to PSpice, used by my school)
– AutoCAD (another CAD tool)

Luckily this is changing, but only for the largest and most popular distributions:
– MathWorks MATLAB (runs on Linux and Solaris, among others)
– Wolfram Mathematica (which has versions for Linux and Mac OS X)
– FEKO (runs on Linux and Solaris, among others)

Anyway, I still consider SolidWorks to be a rather big program not supported on Linux, which is a big issue for those working on Civil Engineering programs. There are most probably others which are very domain-specific that I don’t even know about.

There is a nice matrix comparing cross-platform capabilities of CAD software: http://en.wikipedia.org/wiki/Comparison_of_CAD_software

Oh, one final thought: perhaps that KDE Recommends: should be moved to a Suggests: instead, on account of its heavy dependencies, which require mysql-server to be installed on desktop machines... WTF!

Oh, and on another note, I re-installed Debian using the non-expert auto install and it installed GNOME rather flawlessly, much like installing Ubuntu, which was pretty nice. So kudos to those who have been working on the main installer; it seems as though the advanced ones really give you some rope to hang yourself with, though :-)

Oh, and k3ninho told me that there is an initiative from the Ubuntu community called “100 Paper Cuts” to help fix small bugs like those I was complaining about. I hope this leads to an improved user experience, and I’d really like to see some of those changes propagated both upstream to Debian and upstream to the KDE folk.

During my install of Kubuntu + KDE, I felt that Plasma was crashing more than Windows Explorer — it felt like the days when I was running Windows ME, when the shell would crash and the system would restart it. Repeatedly. That’s exactly what seemed to happen with Plasma. I’m not sure if it was something I screwed up during configuration (presumably so), but KDE was far too complicated for me to try to debug. It might also have been a result of running my system within a fairly constrained virtual machine environment – the system only gets 768 MB of RAM and no access to the actual graphics processing unit (since it’s virtualized).


For my Google Summer of Code project, I have been working with the PerlQt4 bindings, which requires that I have Qt4 installed. While this is technically possible under a Win32 environment, I decided to set up a proper Linux system for the work. Lots of people in the free software community vehemently oppose Windows, but while it has its flaws, I think its hardware support is still much better overall than Linux’s. I’m still using Windows XP Professional and am quite happy with it, stability-wise and feature-wise.

As an engineer, I find that many applications we use on a regular basis are simply not available on Linux. They’re simply not replaceable with the current state of open source software, though there is some great stuff out there. Nonetheless, we’re still far from a point where engineers in general can switch to Linux — application support is as important to an operating system as the kernel. Linux would be nothing without GNU’s binutils, for example.

I tried to install Debian first, as this is an environment I’m very familiar with. I use Debian on my development server, and it has worked wonders there. But everything I do on that server is command-line stuff. When trying to install a desktop environment, I followed the KDE Configuration Wizard, which isn’t too bad, but it expects an Internet connection throughout the process. The problem was that I didn’t have enough Ethernet cables to have both the desktop computer and my laptop plugged in at the same time, even though I had a wireless router set up, which meant I had to unplug the computer while updating packages, etc. Some of the updates took quite a bit of time, which was inconvenient for everyone else.

I eventually got the system installed and told tasksel to set up a desktop environment. While it was installing things, I also typed ‘apt-get install kde’ and assumed everything would Just Work. It installed a whole bunch of stuff (including a local install of mysqld, on a desktop machine?! — turns out it was pulled in by one of KDE’s recommended packages; it starts with an A, but I forget which). Anyway, the environment didn’t “just work” as I had expected: upon booting up, the system just dropped me at a command-line prompt. Fine, I thought, I’ll just use startx. But that was broken too. So after another few hours of fiddling I just gave up altogether.

To give Ubuntu another try (the last time I had done so was probably around version 7), I downloaded a recent image of Kubuntu 9.04, the Ubuntu flavour that uses KDE as its default desktop environment. It’s surprising how much progress there has been in Ubuntu and in Linux in general. I found that driver support is much better than it used to be: it now detects my network card – a Broadcom 43xx chip – and does everything it needs to do. For the most part, the operating system “Just Works.” Great. This looks like something I might be able to slowly transition toward, eventually keeping Windows only inside WINE or a virtual machine container.

Have Debian and Ubuntu made lots of progress? Sure. I can definitely see that Ubuntu is geared a lot more toward the average user, while Debian provides bleeding-edge features to the power user. Unfortunately, despite being involved in packaging Perl modules for Debian, I fall into the former category. I’d really just like my desktop system to just work. Oh, and dual-monitor support out of the box would be nice too — I hear the new KDE and GNOME support this.

One thing Windows handles rather well is changing hardware profiles – when my computer is connected to its docking station, a ton of peripherals are attached, and when I undock, they’re gone. Windows handles this gracefully. In Kubuntu, I got lots of notification boxes repeatedly telling me that eth2 was disconnected, and so on. This sort of thing is undecipherable for the average user, so I’d really like these operating systems to become more human-friendly before they are ready for prime time on the desktop.


One thing that makes Perl different from many other languages is that it has a rather small set of core built-in functions. There are only a few hundred built-ins in Perl itself, so the rest of its functionality comes from its rich collection of modules, many of which are distributed via the Comprehensive Perl Archive Network (CPAN).

When CPAN first came on the scene, it preceded many modern package management systems, including Debian’s Advanced Package Tool (APT) and Ruby’s gem system, among others. As a consequence of its long history, the CPAN Shell is relatively spartan by today’s standards, yet it still continues to get the job done quite well.

Unfortunately, there are two issues with CPAN:

  1. Packages are distributed as source code, which is built on each individual machine when installing or upgrading.
    • Since packages must be re-built on every machine that installs them, the system is prone to breaking and wastes CPU time and other resources. (The CPAN Testers system is a great way for module authors to mitigate this risk, though.)
    • Due to the wide variation among packages, many cause problems with the host operating system in terms of where they install files, or where they expect files to be installed. This is because CPAN does not (and cannot) know every environment its packages will be installed on.
  2. It does not integrate nicely with package managers.
    • The standard CPAN Shell is not designed to remove modules, only to install them. Removals need to be done manually, which is prone to human error such as forgetting to clean up certain files, or breaking other installs in the process.
    • It cannot possibly know the policies that govern the various Linux flavours or Unices. This means that packages might be installed where users do not expect them, which violates the Principle of Least Surprise.
    • It is a separate ecosystem to maintain. When packages are updated via the normal means (e.g., APT), packages installed via CPAN will be left alone (i.e., not upgraded).

Here is the real problem: packages installed via CPAN will be left alone. This means that when new releases come out, your system will retain old copies of those packages until you get into the CPAN Shell and upgrade them manually. If you’re administering your own system, this isn’t a big problem — but it has significant implications for collections of production systems. If you are managing thousands of servers, then you will need to run the upgrade on each one, and hope that the build doesn’t break (thus requiring your, or somebody else’s, intervention).
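A quick illustration of the difference in day-to-day maintenance, using the standard cpan and apt-get commands:

    # CPAN-installed modules must be upgraded separately on every machine,
    # rebuilding each distribution from source as it goes:
    cpan -u

    # Debian-packaged modules simply ride along with normal system upgrades:
    apt-get update && apt-get upgrade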

One of the biggest reasons to select Debian is one of its primary design goals: to be a universal operating system. What this means is that the operating system should run on as many different platforms and architectures as possible, while providing the same rich environment on each of them to the greatest extent possible. So, whether I’m using Debian GNU/Linux on x86 or Debian GNU/kFreeBSD on x64, I have access to the same applications, including the same Perl packages. Debian has automated tools to build and test packages on every architecture we support.

The first thing I’m going to say is: if you are a Debian user, or a user of its derivatives, there is absolutely no need for you to create your own packages. None. Just don’t do it; it’s bad. Avoid it like the goto statement, mmkay?

If you come across a great CPAN package that you’d really like to see packaged for Debian, then contact the Debian Perl Packagers (pkg-perl) team, and let us know that you’d like a package. We currently maintain well over a thousand Perl packages for Debian, though we are by no means the only maintainers of Perl packages in Debian. You can do this easily by filing a Request For Package (RFP) bug using the command: reportbug wnpp.

On-screen prompting will walk you through the rest, and we’ll try to package the module as quickly as possible. When we’re done, you’ll receive a nice e-mail letting you know that your package has been created, thus closing the bug. It may take a few days of waiting, but you will have a package in perfect working condition as soon as we can create it for you. Moreover, you’re helping the next person who needs the module, since it will already be available in Debian (and in due time it will propagate to its derivatives, like Ubuntu).
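For reference, reportbug prompts for everything interactively; the generated report is roughly of the following shape (the module and package names are hypothetical):

    reportbug wnpp
    # The resulting bug looks roughly like:
    #
    #   Package: wnpp
    #   Severity: wishlist
    #
    #   Subject: RFP: libfoo-bar-perl -- one-line description of Foo::Bar
    #   * URL: http://search.cpan.org/dist/Foo-Bar/
    #   * License: GPL-1+ or Artistic (the usual "same terms as Perl itself")
    #   Description: a short paragraph explaining what the module does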

All 25,000+ Debian packages meet the rigorous requirements of Debian Policy. The majority of them meet the Debian Free Software Guidelines (DFSG), too; the ones that are not considered DFSG-free are placed in their own repository, separate from the rest of the packages. A current work in progress is machine-parseable copyright control files, which will hopefully give administrators a way to quickly review the licensing terms of all the software they install. This is especially important for small- and medium-sized businesses that lack their own intellectual property legal departments to review open source software, a gap that continues to drive many businesses away from using open source.

For the impatient, note this well: packages which are not maintained by Debian are not supported by Debian. This means that if you install something using a packaging tool (we’ll discuss these later) or via CPAN, then the package is necessarily your own responsibility. In the unlikely event that you totally break your system installing a custom package, it’s your fault, and it may mean you will have to restore an earlier backup or re-install your system completely. Be very careful if you decide to go this route. The assurance that your package will work on every platform you’re likely to encounter is worth the couple of days of waiting for a package to be pushed through the normal channels.

The Debian Perl Packaging group offers its services freely to the public for the benefit of our users. It is much better to ask the volunteers (preferably politely) to get your package into Debian, so that it passes through the normal testing channels. You really should avoid making your own packages in a vacuum; the group is always open to new members, and joining means your package will be reviewed (and hopefully uploaded into Debian) by our sponsors.

But the thing about rules is that there are always exceptions. There are, in fact, some cases where you might want to produce your own packages. I was discussing this with Hans Dieter Pearcey the other day, and he has written a great follow-up blog post about the primary differences between dh-make-perl and cpan2dist, two packaging tools with a similar purpose but very different design goals. Another article will follow this one, in which I will discuss the differences between the two.

