
Posts Tagged ‘Support’

Last year, I had a great time participating in the Google Summer of Code with the Debian project. I had a neat project with some rather interesting implications for helping developers package and maintain their work. It’s still a work in progress, of course, as many open source projects are, but I accomplished quite a bit and am proud of my work. I learned a great deal about coding in C and working with Debian, and met some very intelligent people.

My student peers were also very intelligent and great to learn from. I enjoyed meeting them virtually and discussing our various projects on the IRC channel as the summer progressed and the Summer of Code kicked into full swing. The Debian project also helps arrange travel grants for students to attend the Debian Conference (this year, DebConf10 is being held in New York City!). DebConf provides a great venue for learning from other developers, in official talks as well as unofficial hacking sessions. As the social aspect is particularly important to Debian, DebConf helps people meet those with whom they work the most, creating lifelong friendships and making open source fun.

I have had several interviews for internships, and the part of my work experience asked about most is my time in the Google Summer of Code. I really enjoyed seeing a project go from the proposal stage through setting a reasonable timeline with my mentor, exploring the state of the art, and, most importantly, developing the software. I think this is the sort of indispensable industry-type experience we often lack in our undergraduate education. We might have an honours thesis or presentation, but much of the work in the Google Summer of Code actually gets used “in the field.”

Developing software for people rather than for marks is significant in a number of ways, but most importantly it means there are real stakeholders who must be considered at all stages. Proposing brilliant new ideas is important; however, without highlighting the benefits they can have for various users, they simply will not gain traction. Learning to write proposals effectively is an important skill, and working with my prospective mentor (at the time – he later mentored my project once it was accepted) to develop mine was tremendously useful for my future endeavours.

The way I see it, the Google Summer of Code is in many ways similar to an academic grant (and the stipend is about the same, too). It provides a modest salary (this year it’s US$5000) but, more importantly, personal contact with a mentor. Mentors are typically veterans of software development or the Debian project and act in the same role as supervisors for post-graduate work: they monitor your progress and propose new ideas to keep you on track.

The Debian Project is looking for more students and proposals. We have a list of ideas as well as application instructions available on our wiki. As I will be starting an internship in May, I have offered to be a mentor this year. I look forward to seeing your submissions (some really interesting ones have already begun to filter in as the deadline approaches).


Okay, so this is a long-awaited follow-up to my first post on the topic of Debian Perl packaging. Some of you might note I was pretty extreme in the first post, partly because people only really respond to extremes when they’re new to things. When you first begin programming, the advice you get is “hey, never use goto statements” — but as you progress in your ability and your understanding of how it works and what it’s actually doing in the compiler, it might not be so bad after all. In fact, I hear the Linux kernel uses it extensively to provide exceptions in C. The Wikipedia page on exception handling in various languages shows how to implement exceptions in C using setjmp/longjmp (which is essentially a non-local goto). But I digress.
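To make the digression concrete, here is a minimal sketch of the setjmp/longjmp pattern (written from scratch for illustration; the function names are mine, not taken from the kernel or the Wikipedia page):

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf env;  /* saved execution context to jump back to */

    /* "Throws" by jumping back to wherever setjmp() saved the context. */
    static void risky_operation(int input)
    {
        if (input < 0)
            longjmp(env, 1);  /* a non-local goto back to setjmp */
        printf("processed %d\n", input);
    }

    int main(void)
    {
        /* setjmp returns 0 on the initial call, and the value passed
         * to longjmp when control "returns" here a second time. */
        if (setjmp(env) == 0) {
            risky_operation(42);   /* fine */
            risky_operation(-1);   /* "throws" */
            printf("never reached\n");
        } else {
            printf("caught an exception\n");  /* the "catch" block */
        }
        return 0;
    }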

Back to the main point of this write-up. Previously, I couldn’t really think of cases where packaging your own modules is all that useful, especially when packaging them for Debian proper means you benefit many communities — Debian, Ubuntu, and all of the distributions based on them.

This came up in a discussion with Hans Dieter Pearcey after his article providing a nice comparison of dh-make-perl and cpan2dist. (Aside: I feel he was slightly biased toward cpan2dist in his write-up, but I’m biased toward dh-make-perl myself, so he might be right, even though I won’t admit it.)

I’m really glad for that article and the ensuing dialogue, because it got people talking about what they use Debian Perl packages for, and where it is useful to make your own.

Firstly, if you’ve got an application that depends on some Perl module that isn’t packaged in Debian, but you need it yesterday, then you can either install that module via CPAN or roll your own Debian package. The idea here is to build and install the package so you can use it right away, but also file a Request For Package bug at the same time — see the reportbug command in Debian, or use Launchpad if you’re on Ubuntu. This way, when the package is officially released and supported, you can move to that instead, and get the benefit of automatic upgrades.
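For what it’s worth, here is roughly what the roll-your-own route looks like with dh-make-perl (the module name below is a placeholder):

    # Fetch a module from CPAN and build a .deb from it
    $ dh-make-perl --build --cpan Some::Module

    # Install the result (Debian names Perl packages libfoo-bar-perl)
    $ sudo dpkg -i libsome-module-perl_*.deb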

Secondly, if you’ve got an application that depends on internally-developed modules, then they probably won’t exist on CPAN (some call this Perl code part of the DarkPAN), except in the rare case that a company open sources its work. But corporations will never open source all of their work, even those willing to provide some of it to the open source community, so at some point or another you’ll need to deal with internal packages. Previously, the best way to handle this was to run your own local CPAN mirror and have other machines install and upgrade from it — thus your internal code is distributed via the usual mechanism.
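If you go the mirror route, CPAN::Mini handles the mirroring half; the path and URL below are placeholders, and injecting your private modules into the mirror takes an additional tool such as CPAN::Mini::Inject:

    # Pull down a minimal mirror (latest version of each distribution)
    $ minicpan -l /srv/minicpan -r http://www.cpan.org/

    # Then point the CPAN shell on each machine at the local mirror
    cpan> o conf urllist unshift file:///srv/minicpan
    cpan> o conf commit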

One of the advantages of using CPAN to distribute things is that it’s available on most platforms, and it builds and tests modules automatically on each of them. CPANPLUS will even let you uninstall modules, and the lack of that ability in the plain CPAN shell was one of my main reasons for preferring Debian packages in the first place. However, it does mean you’ll need to rebuild the package on every system, which is prone to failures that cost time and money to track down and fix. Still, CPAN and CPANPLUS are the traditional Perl way of distributing packages.

If you are in an environment of mostly Debian systems, however, you may benefit from a local Debian repository. This way, you only need to upgrade packages in your repository, and they’ll be upgraded automatically along with the rest of your operating system (you do run update and upgrade periodically, right?). There is even the fantastic cron-apt program to automate this, so there’s really no excuse not to stay up to date.
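A bare-bones repository takes only a couple of commands to set up (the directory and URL here are placeholders):

    # On the repository host: index a directory full of .deb files
    $ cd /srv/debs
    $ dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

    # On each client: add the repository to /etc/apt/sources.list,
    #   deb http://repo.example.com/debs ./
    # then refresh the package lists
    $ sudo apt-get update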

In either case, creating a local package means you can easily remove anything you no longer need via the normal package management tools. You can also distribute the binary packages between machines, though this sometimes depends on the platform (for modules that incorporate C or other platform-specific code that needs to be rebuilt). Most Perl modules are pure Perl, though, so you can build and test them once, on one machine, and distribute them simply by installing the .deb on the others. You can copy packages around and use dpkg to install them, or better yet, use that local Debian repository so it happens automatically and via the usual mechanism (aptitude, etc.).

In conclusion: if you’re going to make your own Debian packages, do so with caution, and be aware of all the consequences (positive and negative) of what you’re doing. As always, there is no substitute for understanding what your tools are doing under the hood.


For my Google Summer of Code project, I have been working with the PerlQt4 bindings, which require that I have Qt4 installed. While this is technically possible under a Win32 environment, it seemed like a good excuse to finally set up a Linux desktop. Lots of people in the free software community vehemently oppose Windows, and while it has its flaws, I think its hardware support is still much better than Linux’s overall. True, this is because of Microsoft’s shady business practices, and because many companies keep their driver source code closed. I’m still using Windows XP Professional and am quite happy with it, stability-wise and feature-wise.

Many of the applications we use on a regular basis as engineers are simply not available on Linux, and they are not yet replaceable with the current state of open source software, though there is some great stuff out there. We’re still far from a point where engineers in general can switch to Linux — the application support is as important to an operating system as the kernel. Linux would be nothing without GNU’s binutils, for example.

I tried to install Debian first, as this is the environment I’m most familiar with. I use Debian on my development server, and it has worked wonders there, but everything I do on that server is command-line stuff. When trying to install a desktop environment, I followed the KDE configuration wizard, which isn’t too bad, but it expects an Internet connection throughout the process. The problem was that I didn’t have enough Ethernet cables to have both the desktop computer and my laptop plugged in at the same time, even with a wireless router set up, so I had to unplug my laptop whenever the desktop was updating packages. Some of the updates took quite a bit of time, which was inconvenient for everyone else.

I eventually got the system to install, and told tasksel to set up a desktop environment. While it was installing things, I typed ‘apt-get install kde’ and assumed everything would Just Work. It pulled in a whole bunch of stuff (which included a local install of mysqld, on a desktop machine?! — turns out it was due to one of KDE’s recommended packages; it starts with an A, I forget which). But the environment didn’t “just work” as I had expected. Upon booting up my system, I was dropped to a command line prompt. Fine, I thought, I’ll just use startx. But that was broken too. After another few hours of fiddling, I gave up altogether.

Next I tried Ubuntu (the last time I had done so was probably around version 7). I downloaded a recent image of Kubuntu 9.04, the Ubuntu flavour that uses KDE as its default desktop environment. There has been a surprising amount of progress in Ubuntu, and in Linux in general. Driver support is much better than it used to be: the installer now detects my network card – a Broadcom 43xx chip – and does everything it needs to do. For the most part, my operating system “Just Works.” Great. This looks like something I might be able to slowly transition toward, eventually replacing Windows entirely except inside WINE or a virtual machine.

Have Debian and Ubuntu made lots of progress? Sure. I can definitely see that Ubuntu is geared much more toward the average user, while Debian provides bleeding-edge features for the power user. Unfortunately, despite being involved in packaging Perl modules for Debian, I fall into the former category: I’d really just like my desktop system to work. Oh, and dual monitor support out of the box would be nice too — I hear the new KDE and GNOME support this.

One thing Windows handles rather well is changing hardware profiles: when my computer is connected to its docking station, a ton of peripherals are attached; when I undock, they’re gone. Windows handles this rather gracefully. In Kubuntu, I got lots of notification boxes repeatedly telling me that eth2 was disconnected, and so on. This sort of thing is indecipherable to the average user, so I’d really like these operating systems to become more human-friendly before they are ready for prime time on the desktop.


One thing that makes Perl different from many other languages is that it has a rather small core: there are only a few hundred built-in functions in Perl itself, so the rest of its functionality comes from its rich collection of modules, many of which are distributed via the Comprehensive Perl Archive Network (CPAN).

When CPAN first came on the scene, it preceded many modern package management systems, including Debian’s Advanced Packaging Tool (APT) and Ruby’s gem system, among others. As a consequence of its age, the CPAN shell is relatively simplistic by today’s standards, yet it still gets the job done quite well.

Unfortunately, there are two issues with CPAN:

  1. Packages are distributed as source code, which is built on each individual machine when installing or upgrading.
    • Since packages must be re-built on every machine that installs them, the system is prone to breakage and wastes CPU time and other resources. (The CPAN Testers system is a great way for module authors to mitigate this risk, though.)
    • Because packages vary so widely, many cause problems for the host operating system in terms of where they install files, or where they expect files to be installed. CPAN does not (and cannot) know every environment its packages will be installed on.
  2. It does not integrate nicely with package managers.
    • The standard CPAN shell is not designed to remove modules, only install them. Removals must be done manually, which is prone to human error, such as forgetting to clean up certain files or breaking other installs in the process.
    • It cannot possibly know the policies that govern the various Linux flavours and Unices. This means packages might be installed where users do not expect, which violates the Principle of Least Surprise.
    • It is a separate ecosystem to maintain. When packages are updated via the normal means (e.g., APT), packages installed via CPAN are left alone (i.e., not upgraded).

Here is the real problem: packages installed via CPAN are left alone. This means that when new releases come out, your system will retain old copies until you get into the CPAN shell and upgrade them manually. If you’re administering your own system, this isn’t a big problem — but it has significant implications for collections of production systems. If you are managing thousands of servers, then you will need to run the upgrade on each server, and hope that the build doesn’t break (thus requiring your, or somebody else’s, intervention).

One of the biggest reasons to select Debian is one of its primary design goals: to be a Universal Operating System. What this means is that the operating system should run on as many different platforms and architectures as possible, while providing the same rich environment on each of them to the greatest extent possible. So, whether I’m using Debian GNU/Linux on x86 or Debian GNU/kFreeBSD on x64, I have access to the same applications, including the same Perl packages. Debian has automated tools to build and test packages on every architecture we support.

The first thing I’m going to say is: if you are a Debian user, or a user of its derivatives, there is absolutely no need for you to create your own packages. None. Just don’t do it; it’s bad. Avoid it like the goto statement, mmkay?

If you come across a great CPAN module that you’d really like to see packaged for Debian, then contact the Debian Perl Packagers (pkg-perl) team and let us know. We currently maintain well over a thousand Perl packages for Debian, though we are by no means the only maintainers of Perl packages in Debian. You can reach us easily by filing a Request For Package (RFP) bug using the command reportbug wnpp.

On-screen prompting will walk you through the rest, and we’ll try to package the module as quickly as possible. When we’re done, you’ll receive a nice e-mail letting you know that your package has been created, thus closing the bug. It may mean a few days of waiting, but you’ll have a package in perfect working condition as soon as we can create it for you. Moreover, you’re helping the next person who needs the module, since it will already be available in Debian (and in due time it will propagate to derivatives like Ubuntu).
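In practice, the session looks something like this (the prompts are paraphrased, not quoted verbatim):

    $ reportbug wnpp
    # Choose “RFP” (Request For Package) as the report type, then give
    # the module’s name, a one-line description, its upstream URL and
    # its licence; reportbug files the bug against the wnpp
    # pseudo-package for you.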

All 25,000+ Debian packages meet the rigorous requirements of Debian Policy. The majority of them meet the Debian Free Software Guidelines (DFSG), too; the ones which are not considered DFSG-free are placed in their own repository, separate from the rest of the packages. A current work in progress is machine-parseable copyright control files, which will hopefully give administrators a way to quickly review the licensing terms of all the software they install. This matters especially for small and medium-sized businesses without their own legal departments to review open source licences, a burden that continues to drive many businesses away from using open source.

For the impatient, note this well: packages which are not maintained by Debian are not supported by Debian. This means that if you install something using a packaging tool (we’ll discuss these later) or via CPAN, then that package is your own responsibility. In the unlikely event that you totally break your system installing a custom package, it’s totally your fault, and it may mean restoring an earlier backup or re-installing your system completely. Be very careful if you decide to go this route. A few days of waiting for a package to be pushed through the normal channels is a small price for the assurance that it will work on every platform you’re likely to encounter.

The Debian Perl Packagers group offers its services freely to the public for the benefit of our users. It is much better to ask the volunteers (preferably politely) to get your package into Debian, so that it passes through the normal testing channels, than to make your own packages in a vacuum. The group is always open to new members, too, and joining means your package will be reviewed (and hopefully uploaded into Debian) by our sponsors.

But the thing about rules is that there are always exceptions, and there are, in fact, some cases where you might want to produce your own packages. I was discussing this with Hans Dieter Pearcey the other day, and he has written a great follow-up blog post about the primary differences between dh-make-perl and cpan2dist, two packaging tools with a similar purpose but very different design goals. Another article will follow this one, in which I will discuss the differences between the two.


The long and short answer? Both.

Recently an article crawled up Proggit asking whether we should be using content management systems or, rather, more generic frameworks. The author’s main contention seems to be that generic frameworks are more useful, because programmers end up spending a lot of time undoing “features” offered by CMS packages.

Coming from a Perl philosophy, I think TIMTOWTDI (pronounced Tim-Toady) – There Is More Than One Way To Do It – applies here. Don’t use a nail when a thumbtack will suffice. Don’t use a framework when a ready-made, open-source or cheap CMS will suffice.

The reason that solutions like SharePoint and WordPress and numerous others exist, and why they are popular, is that they solve a particular set of problems. WordPress lets you get a blog up pretty quickly, but it was never designed for creating full-fledged web sites like Recovery.gov or what-have-you.

Mostly, I think these solutions appeal to smaller shops or individuals that need a solution that is “close enough” to what they really want. Certainly, it’s cheaper to get up and running with a few minutes spent downloading and installing WordPress (or better yet, using the hosted service at WordPress.com).

Many people and groups cannot afford, or do not wish to incur, the expense of hiring a web programmer to do the job for them. Content management systems still serve an important role, especially since they are general enough that every user benefits from the same stream of updates. So when WordPress releases a new version of its flagship product, you get new features that are (pick one):

  • Most requested by its users
  • Required internally by WordPress for its own uses
  • Added by its contributors, or backported from forks of the software
  • Useful for increasing the security of the product

You lose a lot of these benefits with Web Frameworks, but you gain control and flexibility over your software.

My point? Reuse things when they are available; do what is necessary to get the job done, but don’t lose sight of the problem in favour of some perceived “elegant” solution. Sometimes paying $40,000 for software up front is cheaper than paying a programmer $60,000 a year for several years to maintain a custom product. In the same spirit as open source, consider contacting the authors of the software to see if they would be willing to write additional features for your company on a contractual basis. Many authors offer this, and many more would probably be receptive to the idea. And since most changes are likely to be pretty minor, it doesn’t have to cost much, either.


One of the things that many students don’t realize is that we are essentially customers. We give the university money (tuition) in exchange for knowledge and a degree. We are often forced to put up with a terrible customer experience we would not accept anywhere else; yet, we do.

How do organizations get away with providing awful support for their products?

Well, put simply, support is not a criterion we often use to select a company we wish to deal with. We look at cost effectiveness, we look at the short-term gains we expect to achieve, we look at whether there are other solutions available, we look at whether we need the solution in the first place. But as everyday consumers, we don’t often require that an appropriate level of support be in place.

Why? Because we don’t think about the inevitable; we always like to pretend that we will never need our car insurance, that we will never need to use the limited manufacturer warranties that come with our products. We often don’t even bother to stop and read the fine print, instead preferring to believe whatever promises salespeople leave us with.

As a student of both Electrical Engineering and Computer Science, I am actually part of two faculties: a full-time student in the Faculty of Engineering and a part-time student in the Faculty of Science. This arrangement means I can enroll in courses from both faculties, but it also means that, in order to do so, I must deal with whichever faculty has control over the particular course.

So when I needed to have something done with regard to a Computer Science course, I had to go to the Faculty of Science Dean’s Office. Upon arrival at about 08:30, I received a ticket and waited in the reception area. When my number was called, I was sent to a triage-type area, where a counselling assistant determined whether or not I needed to make an appointment to see an Academic Counsellor.

When I finally got to the Academic Counsellor who could actually do what I needed, it was 10:30 – two hours later and shortly before my next class. The transaction itself took a few minutes, which left me wondering why my situation wasn’t dealt with in a more timely fashion.

I understand that it’s not always the fault of the staff. After all, there are lots of students in the Faculty of Science and only four counsellors, just one of whom was accepting drop-in appointments that day. So perhaps this is an intrinsic problem with the way we allocate people.

This graph illustrates the number of students each academic counsellor is responsible for, assuming equal distribution of students per counsellor.

This data was compiled from figures published by the University of Western Ontario as part of the CUDO – Common University Data Ontario – initiative. The 2008 figures were used, with counsellor counts coming from the respective faculty web sites.

As we can see, the number of students each counsellor must handle is large, and it has little to do with the total number of students in each faculty. While there are 11,091 students in the Faculty of Science, only 4 academic counsellors are capable of special review tasks (roughly 2,770 students per counsellor). In the Faculty of Engineering, there are 1,788 students and 3 counsellors, or about 600 students each. Does this make sense? I think not.

So, support should always be part of the equation. In business, support is an important metric for making the next buying decision. As a result, the business division of Dell provides excellent and prompt responses with minimal waiting, offering services like quick advice from highly trained personnel backed by Next Business Day service.

This is important in industry, so why shouldn’t it be important to us consumers? Why shouldn’t we demand more support personnel, or faster ways to apply for these types of special consideration? Perhaps some sort of online queuing system could be part of the solution; at the very least, the current arrangement needs review.
