Archive for the ‘Computer Science’ Category

The CPAN ecosystem is one of the most compelling reasons for the continued growth of the Perl programming language. It has been discussed at length by numerous people, and there have been several attempts to imitate this aspect of the Perl community through projects such as CRAN, CCAN, JSAN and others.

Unfortunately, in equal parts due to its age and design philosophy, the PAUSE system powering CPAN makes it difficult for distributions to be maintained by a group, rather than an individual. The inspiration for this post comes from a discussion I had recently with Florian Ragwitz, who contributes to several key Perl projects, including Catalyst, Moose, DBIx::Class and many more.

Permissions

First, a bit about how permissions on CPAN work.

In order to make a package installable using the CPAN Shell, there must be some mechanism to disambiguate a module name. Consider this simple example:

  1. I upload Acme::Package to CPAN.
  2. Some time passes and, unbeknownst to me, another author uploads a different package, also called Acme::Package, to CPAN.

In the absence of any permission checking, if I then instructed users to install Acme::Package using the CPAN Shell, they would inadvertently install the wrong distribution! This has some rather serious implications: the other Acme::Package is probably quite different from mine, and a malicious author could have taken my software and added a backdoor vulnerability.

CPAN solves this issue by tracking each module namespace separately using the PAUSE Indexer, which assigns upload permissions to users through two mechanisms:

  1. The module namespace registration list.
  2. First-come status (the first uploader of a given package namespace “owns” that namespace).

Going back to the example given, the second uploader of Acme::Package would not have permission to use the namespace. The package will be accepted into the archive, but will not be indexed, meaning that users installing Acme::Package will still get my distribution.

If users want to install the other author’s package (which is marked as an UNAUTHORIZED upload in big red letters on CPAN Search), they would need to explicitly specify AUTHOR/Acme-Package-1.00.tar.gz.
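
For instance, from the CPAN shell, the explicit form would look like this (AUTHOR and the version number are the placeholders from the example above):

  cpan> install AUTHOR/Acme-Package-1.00.tar.gz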

For packages maintained by several people, it is also possible to assign co-maintainer status to others, so that they may also upload a package and have it correctly indexed. This way, two or more people can work on the same package together, and upload it under their own accounts (without causing the upload to be marked unauthorized). Thus, PAUSE credentials do not need to be shared.

This provides a nice solution to the malicious upload problem, but also has implications for team-maintained packages. In particular, consider the case where there are two authors working on Acme::Library.

  1. Alice uploads the first version to CPAN, containing modules: Acme::Library and Acme::Library::Main.
  2. The PAUSE Indexer grants Alice first-come permissions to both Acme::Library and Acme::Library::Main.
  3. Alice grants Bob co-maintainer status on both Acme::Library and Acme::Library::Main.
  4. Bob creates a new Acme::Library::Other module and adds it to the package.
  5. The PAUSE Indexer grants Bob first-come permissions to Acme::Library::Other.
  6. Subsequent uploads by Alice will cause her copy of Acme::Library::Other to be marked UNAUTHORIZED, since only Bob holds permissions on that namespace.

Solutions

Clever Perl authors have attempted to solve this problem in many different ways over the years, but none of them have been widely successful because they all rely on some degree of human interaction.

Shared PAUSE Accounts

Some notable projects have attempted to solve the issue by creating a shared PAUSE user to hold the requisite first-come or module list upload permissions, which may then be granted to all other team members through the existing co-maintainer facility.

Alternatively, since it is easier for smaller projects, many modules simply assign first-come permissions to a single person, who is then in charge of providing co-maintainer permissions to others who would like to work on it.

Both of these approaches have the same limitation: anyone uploading new modules must remember to assign first-come permissions to the group or user in question. In our case, Bob should have assigned first-come permissions for Acme::Library::Other to Alice, who then must pass co-maintainer permissions back to Bob. Unfortunately, this almost never happens, and Alice must chase down Bob (who happens to be on vacation in Antarctica) or, alternatively, the already overworked PAUSE administrators.

Single Uploader

Some projects deal with this issue by sharing a version control system and having all the uploads go through a single person, in our case, Alice. This fixes the permission problem, since first-come permissions are always granted to Alice, but it results in a single point of failure. If there are some serious security issues requiring an immediate release, Alice must be available (and, as luck would have it, she is vacationing in Antarctica at the time).

Enter x_authority

One proposed solution, which is used in projects including Moose and Catalyst, is to use a special field in the CPAN Metadata file (META.yml or META.json) that defines someone as the “authority” for first-come namespaces in a distribution.

This is how it would work for Alice’s Acme::Library distribution:

  1. Alice uploads a package to CPAN, containing modules: Acme::Library and Acme::Library::Main.
  2. Alice specifies, in META.yml:
    x_authority: cpan:ALICE

    This refers to Alice’s PAUSE ID, and names the person to whom permissions for new modules uploaded in this distribution are assigned (a sketch of how to set this key follows the list below).

  3. Alice grants Bob co-maintainer status on both Acme::Library and Acme::Library::Main.
  4. Bob creates a new Acme::Library::Other module and adds it to the package.
  5. The PAUSE indexer, seeing the x_authority defined in META.yml, grants Alice (not Bob!) first-come permissions to Acme::Library::Other. At this time, Bob also automatically gets co-maintainer permissions to Acme::Library::Other.
  6. Subsequent uploads by Alice will be indexed properly.
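
For those using ExtUtils::MakeMaker, a minimal Makefile.PL sketch follows; it assumes ExtUtils::MakeMaker 6.46 or later, which added the META_MERGE option for injecting extra keys into the generated metadata:

use ExtUtils::MakeMaker;

WriteMakefile(
    NAME       => 'Acme::Library',
    VERSION    => '1.00',
    # META_MERGE is folded into the generated META.yml/META.json;
    # here it carries the authority key discussed above.
    META_MERGE => {
        x_authority => 'cpan:ALICE',
    },
);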

Problems

There are still some outstanding issues that need to be resolved, but the x_authority proposal represents a giant leap forward for team-maintained software.

The name: any keys not part of the CPAN Metadata Specification must be prefixed with “x_”. Eventually, once the field is used by more people and accepted into the specification, its name will become, simply, “authority”.

Other co-maintainers: if Charlie joined the project prior to Bob’s upload of Acme::Library::Other, then Alice still needs to grant co-maintainer permissions to Charlie by hand. Unfortunately, the PAUSE Indexer cannot grant those permissions automatically, since it has no notion of a “distribution,” only module namespaces.

Malicious uploaders: in the worst case, if Eve joins the project and maliciously (or unintentionally!) changes the x_authority, she will automatically get first-come permissions on the namespace of any modules she adds. However, this is the same behaviour that we had in the absence of x_authority.

Conclusions

Ultimately, the benefits of this feature (making group maintenance easier) drastically outweigh the cost (only a few small changes need to be made to the PAUSE Indexer). These changes are unlikely to cause any problems in practice, and the worst-case behaviour is the same as if we did not have x_authority at all.

It isn’t perfect, but it is a solution that requires minimal effort and minimal changes to PAUSE. Eventually, the goal is to create a more sophisticated system that will handle the issues outlined above, as well as more complex ones, such as renaming distributions or moving modules between distributions.

Thanks to Florian Ragwitz for spending some time discussing x_authority at length with me. He and Leon Timmermans proofread this article prior to publication.


Last year, I had a great time participating in the Google Summer of Code with the Debian project. I had a neat project with some rather interesting implications for helping developers package and maintain their work. It’s still a work-in-progress, of course, as many projects in open source are, but I was able to accomplish quite a bit and am proud of my work. I learned a great deal about coding in C and working with Debian, and met some very intelligent people.

My student peers were also very intelligent and great to learn from. I enjoyed meeting them virtually and discussing our various projects on the IRC channel as the summer progressed and the Summer of Code kicked into full swing. The Debian project in particular also helps arrange travel grants for students to attend the Debian Conference (this year, DebConf10 is being held in New York City!). DebConf provides a great venue to learn from other developers (both in official talks and in unofficial hacking sessions). As the social aspect is particularly important to Debian, DebConf helps people meet those with whom they work the most, thereby creating lifelong friendships and making open source fun.

I have had several interviews for internships, and the part of my work experience most asked about is my time doing the Google Summer of Code. I really enjoyed seeing a project go from the proposal stage to setting a reasonable timeline with my mentor, exploring the state of the art and, most importantly, developing the software. I think this is the sort of indispensable industry-type experience we often lack in our undergrad education. We might have an honours thesis or presentation, but much of the work in the Google Summer of Code actually gets used “in the field.”

Developing software for people rather than for marks is significant in a number of ways, but most importantly it means there are real stakeholders who must be considered at all stages. Proposing brilliant new ideas is important; however, without highlighting the benefits they can have for various users, the reality is that they simply will not gain traction. Learning how to write proposals effectively is an important skill, and working with my prospective mentor (at the time – he later mentored my project once it was accepted) to develop mine was tremendously useful for my future endeavours.

The way I see it, the Google Summer of Code is, in many ways, similar to an academic grant (and the stipend is about the same as well). It provides a modest salary (this year it’s US$5000) but, more importantly, personal contact with a mentor. Mentors are typically veterans of software development or the Debian project and act in the same role as supervisors for post-graduate work: they help monitor your progress and propose new ideas to keep you on track.

The Debian Project is looking for more students and proposals. We have a list of ideas as well as application instructions available on our Wiki. As I will be going on internship starting in May, I have offered to be a mentor this year. I look forward to seeing your submissions (some really interesting ones have already begun to filter in as the deadline approaches).


A specialized storage system known as a Round Robin Database allows one to store large amounts of time series information such as temperatures, network bandwidth and stock prices with a constant disk footprint. It does this by taking advantage of changing needs for precision. As we will see later, the “round robin” part comes from the basic data structure used to store data points: circular lists.

In the short term, each data point is significant: we want an accurate picture of every event that has occurred in the last 24 hours, which might include small transient spikes in disk usage or network bandwidth (which could indicate an attack). However, in the long term, only general trends are necessary.

For example, if we sample a signal at 5-minute intervals, then a 24-hour period will have 288 data points (24 hours × 60 minutes per hour, divided by 5 minutes per sample). Considering each data point is probably[1] only 4 (float), 8 (double) or 16 (quad) bytes, it’s not problematic to store roughly three hundred data points. However, if we continue to store each sample, a year would require about 105,120 (365 × 288) data points; multiplied over many different signals, this can become quite significant.

To save space, we can compact the older data using a Consolidation Function (CF), which performs some computation on many data points to combine them into a single point covering a longer period. Imagine that we take an average of those 288 samples at the conclusion of every 24-hour period; in that case, we would only need 365 data points to store data for an entire year, albeit at an irrecoverable loss of precision. Though we no longer know what happened at exactly 5:05pm on the first Tuesday three months ago, the data is still tremendously useful for demonstrating general trends over time.
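
As a concrete (if simplified) sketch, an AVERAGE consolidation function amounts to nothing more than this; the function and variable names below are mine, not RRDtool’s:

# Collapse a day's worth of 5-minute samples (288 points) into a
# single consolidated data point.
sub consolidate_average {
    my @samples = @_;
    my $sum = 0;
    $sum += $_ for @samples;
    return $sum / @samples;
}

# @last_288_samples is a stand-in for one day of raw readings.
my $daily_point = consolidate_average(@last_288_samples);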

Though perhaps not the easiest tool to learn, RRDtool seems to have the majority of the market share (without having done any research, I’d estimate somewhere between 90% and 98%, to account for those who create their own solutions in-house), and for good reason: it gets the job done quickly, produces appealing and highly customizable charts, and is free and open source software (licensed under the GNU General Public License).

In a recent project, I learned to use RRDTool::OO to maintain a database and produce some interesting graphs. Since I was sampling my signal once every five minutes, I decided to replicate the archiving parameters used by MRTG, notably:

  • 600 samples store 2 days and 2 hours of data (at full resolution)
  • 700 samples store 14 days and 12 hours of data (where six samples become a 30-minute average)
  • 775 samples store 64 days and 12 hours of data (2-hour average)
  • 797 samples store 797 days of data (24-hour average)

For those interested, the following code snippet (which may be rather easily adapted for languages other than Perl) constructs the appropriate database; the file and data-source names here are only illustrative:

use RRDTool::OO;

# File and data-source names are illustrative.
my $rrd = RRDTool::OO->new(file => 'signal.rrd');

$rrd->create(
    step        => 300,    # one sample every 5 minutes
    data_source => {
        name => 'signal',
        type => 'GAUGE',
    },
    archive => {
        rows    => 600,    # full resolution
        cpoints => 1,
        cfunc   => 'AVERAGE',
    },
    archive => {
        rows    => 700,    # 30-minute averages
        cpoints => 6,
        cfunc   => 'AVERAGE',
    },
    archive => {
        rows    => 775,    # 2-hour averages
        cpoints => 24,
        cfunc   => 'AVERAGE',
    },
    archive => {
        rows    => 797,    # 24-hour averages
        cpoints => 288,
        cfunc   => 'AVERAGE',
    },
);
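
Once the database exists, feeding it is a single call per sample. A minimal sketch (read_signal() is a stand-in for whatever produces your data):

# Record the current sample; RRDTool::OO timestamps it with the
# current time unless an explicit time => $epoch is passed.
$rrd->update(value => read_signal());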

There are plenty of other examples of this technique in action, mainly related to computing. However, there are also some interesting applications outside that space, such as monitoring voltage (for an uninterruptible power supply) or indoor/outdoor temperature (using an IP-enabled thermostat).

Footnotes

[1] This may, of course, vary depending on the particular architecture.


I’ve recently been pushing for greater support for Catalyst and MojoMojo on Debian. For the uninitiated, Catalyst is a Model-View-Controller framework designed for writing web applications. MojoMojo is a wiki application based on Catalyst that provides a lot of neat features; while it seems less popular than Wikimedia’s MediaWiki software, it still has plenty of features other wikis don’t.

Here’s a blurb about it from their homepage:

We also have a bunch of features you won’t find in every wiki, like an attachment system that automatically makes a web gallery of your photos, live AJAX previews as you are editing your text, and a proper full text search engine built straight into the software.

Unfortunately, such a rich feature set comes at a price — this shiny piece of software has a rather large dependency chain. As a result, building the module (after building its prerequisites) from CPAN is both slow and prone to failure, since each module must be individually retrieved, extracted, built, tested and then installed.

To make matters worse, any failure anywhere in the chain (perhaps a new version of a module breaks things) will cause a complete failure to build the module — either Catalyst or MojoMojo — which has some serious implications for production applications.

In Debian, we mitigate this risk by having separate unstable and testing distributions, so if a newer version breaks things in unstable, we will catch it and have a chance to fix it before the package makes it into testing. By packaging these modules for Debian, we get the advantages of a faster installation process (since we’re installing pre-built binaries) combined with better Quality Assurance.

One of the big issues blocking both of these has been missing copyright information for a lot of modules. I’ve worked a lot with Matt S. Trout, one of the primary people coordinating the efforts of the Catalyst project, and gathered the necessary information for an upgrade and upload into Debian.

Recently, libcatalyst-modules-perl (version 35) and libcatalyst-modules-extra-perl (version 4) were uploaded to Debian, containing many necessary updates and fixes to improve the Catalyst experience on Debian. The next big push is to get MojoMojo’s dependencies packaged (currently only String::Diff is blocking it, due to missing copyright information).

A bounty of $150 is being offered by one of the MojoMojo developers to the first person who can re-implement the String::Diff functionality in a free/open source way.


One of the most often overlooked, yet arguably most important, issues in software development is the copyright and licensing of works. In particular, I will discuss how this affects the open source software community, with specific relevance to the Debian project.

As with any artistic or creative work, software is protected by copyright, and its use is often governed by some sort of license. Please note that I am not a lawyer and am not qualified to give legal advice, so take my suggestions with a grain of salt, and please do leave a comment if you know something that I don’t.

A license is a legal contract that grants end users the right to use software under agreed-upon guidelines. In the open source community, licenses protect the integrity of free software by ensuring that it continues to remain freely available. For example, the GNU General Public License (GPL) stipulates that any derivative works of GPL-licensed code must distribute their source code back to the community, which enables a two-way sharing of information between the originating software developers and the others who benefit from their work. Other licenses, such as the BSD License, are more liberal and do not have this restriction, but do include a disclaimer of warranties, which shields authors from unintended legal consequences of their work.

Though the license is probably the most important document detailing the relationship between the supplier (a software developer or team) and other users, it cannot mean anything without copyright. In general, it is most useful to provide a copyright statement somewhere in the resulting packages. A copyright statement is what allows authors to assert a particular license in the first place.

Moreover, license terms can only be changed when all copyright holders agree to the change. Unless you are explicit about your copyright conditions from the beginning, this can lock your project into an undesirable license.

To make matters even more complicated, the Berne Convention for the Protection of Literary and Artistic Works (or simply the Berne Convention, as it is most often called) describes a mechanism by which copyright is automatically in force upon the creation of a work, even if the author does not explicitly assert it. For software, this effectively means that anyone who contributes any code is automatically the copyright holder of their contribution, which means that things quickly get complicated when there are many authors and contributors involved.

In Debian, we cannot and do not distribute software without knowing its copyright information (including years of copyright, names, e-mail addresses where people can be reached, or a web site in the case of an incorporated entity). This is pursuant to the Debian Free Software Guidelines (DFSG), which require that we distribute only “free” software in our main repository; it’s part of our Social Contract.

In this regard, I would make the following recommendations:

  1. When beginning any project (open source or not), include a copyright statement immediately; a sample header follows this list. It will eventually become a force of habit, which is a very good thing and will pay dividends in the future.
  2. Establish a policy whereby contributors are asked to assign you copyright of their work; make a note of this somewhere in your documentation. Better yet, if you are part of an incorporated entity, assign copyright to that entity.
  3. Be explicit about your licensing terms and make sure to include copies of the license with your software. This helps to resolve ambiguities where there are several derivatives of a license (occasionally, developers license software under the BSD License without specifying which version they mean).
  4. Be wary of the “Public Domain” — this is an even more contentious issue than choosing an appropriate license. It is probably preferable to use a non-restrictive license such as the aforementioned BSD License (and its variants) or the MIT/X11 license, which is even more permissive.
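
As an example of recommendation 1, a header along these lines at the top of each Perl source file is enough to establish both copyright and license (the name, year and address are placeholders):

# Copyright (C) 2010 Alice Example <alice@example.org>
#
# This library is free software; you can redistribute it and/or
# modify it under the same terms as Perl itself.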


Okay, so this is a long-awaited follow-up to my first post on the topic of Debian Perl packaging. Some of you might note I was pretty extreme in the first post, which is partially because people only really respond to extremes when they’re new to things. When you first begin programming, the advice you get is “hey, never use goto statements” — but as you progress in your ability and your understanding of how it works and what it’s actually doing in the compiler, it might not be so bad after all. In fact, I hear the Linux kernel uses it extensively to provide exceptions in C. The Wikipedia page on exception handling in various languages shows how to implement exceptions in C using setjmp/longjmp (which is essentially a goto statement). But I digress.

Back to the main point of this write-up. Previously, I couldn’t really think of cases where packaging your own modules is all that useful, especially when packaging them for Debian means that you benefit many communities — Debian, Ubuntu, and all of the distributions that are based on those.

My change of heart came during a discussion with Hans Dieter Pearcey, following his article providing a nice comparison between dh-make-perl and cpan2dist. (Aside: I feel he was slightly biased toward cpan2dist in his write-up, but I’m biased toward dh-make-perl myself, so he might be right, even though I won’t admit it.)

I’m really glad for that article and the ensuing dialog, because it really got people talking about what they use Debian Perl packages for, and where it is useful to make your own.

Firstly, if you’ve got an application that depends on some Perl module that isn’t packaged for Debian, but you need it yesterday, then you can either install that module via CPAN or roll your own Debian package. The idea here is to make and install the package so you can use it, but also file a Request For Package (RFP) bug at the same time — see the reportbug command in Debian, or use Launchpad if you’re on Ubuntu. This way, when the package is officially released and supported, you can move to that instead, and thus get the benefit of automatic upgrades.
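
As a sketch of that workflow (this assumes dh-make-perl’s --cpan and --build options, with Acme::Package standing in for the module you need):

$ dh-make-perl --build --cpan Acme::Package
$ sudo dpkg -i libacme-package-perl_*.deb
$ reportbug wnpp    # then file an RFP naming the module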

Secondly, if you’ve got an application that depends on some internally-developed modules, they probably won’t exist on CPAN (some call this private Perl code the DarkPAN), except in the rare case that a company open-sources its work. But corporations will never open source all of their work, even if they release some of it to the open source community, so at some point or another you’ll need to deal with internal packages. Previously, the best way to handle this was to construct your own local CPAN mirror and have other machines install and upgrade from it — thus your internal code is easily distributed via the usual mechanism.

One of the advantages of using CPAN to distribute things is that it’s available on most platforms, and it builds and tests everything automatically on each of them. CPANPLUS will even let you remove packages, which was one of the main reasons I am so pro-Debian-packages anyway. However, it does mean you’ll need to rebuild the package on other systems, which is prone to failures that cost time and money to track down and fix. CPAN and CPANPLUS are the traditional Perl way of distributing packages.

If you are working in an environment with mostly Debian systems, however, you may benefit from a local Debian repository. This way, you only need to upgrade packages in your repository, and they’ll be automatically upgraded along with the rest of your operating system (you do run update and upgrade periodically, right?). There is even the fantastic cron-apt program to automate this, so there’s really no excuse not to update automatically.
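
A local repository amounts to one extra line in each client’s /etc/apt/sources.list (the host name below is hypothetical):

deb http://apt.internal.example.com/debian unstable main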

In either case, creating a local package means you will be able to easily remove anything you no longer need via the normal package management tools. You can also distribute the binary packages between machines, though this sometimes depends on the platform (for modules that incorporate C or other platform-specific code that needs to be rebuilt). Generally, though, most Perl modules are Pure Perl, so you can build and test them once, on one machine, and deploy them everywhere else simply by installing the .deb package. You can copy packages to machines and use dpkg to install them or, better yet, create a local Debian mirror so it’s done automatically and via the usual mechanism (aptitude, etc.).

In conclusion: if you’re going to make your own Debian packages, do so with caution, and be aware of all the consequences (positive and negative) of what you’re doing. As always, a real understanding of everything is necessary.


It’s been some time since I re-installed Debian over my Kubuntu install, so I thought I’d discuss some reasons why I changed back to Debian, what my experience was like, and some learning opportunities.

One reason I made the switch was that a utility newly packaged for Debian, Frama-C, was not available in Kubuntu at the time. I was also having various frustrations with the installation, not the least of which was an unreliable and quite crashy KDE Plasma.

When I reinstalled this time, I picked the normal install but told it to install a graphical environment, which gave me a GNOME desktop. I actually rather like it: the setup didn’t ask too many questions, and everything was set up perfectly. There was some minor tweaking, but it was all done through the easily accessible System menu and the applets therein.

Now, I wanted to be able to use the server both as a virtual machine and as a physical dual-boot. This wasn’t working properly with GRUB-2, so I had to stay with version 1.96, which works rather well. I even spent some time making a pretty splashimage for it, which looks rather nice, even if I don’t see it all that often.

If I boot into the Virtual Machine, all the hardware is detected properly, and there aren’t even complaints about the fact that a bunch of hardware disappeared — certainly very good news if you decide to do something like move your hard drive to a different machine. Likewise, if I boot into the desktop, everything works well there too.

One issue I came across during the installation was having to teach Network-Manager how to configure my network interfaces. In my VMware NAT setup, there is no DHCP server, so the IP address, subnet and gateway information needs to be statically defined. Luckily, Network-Manager was able to key this off the MAC address of the adapter — inside my virtual machine, it has a VMware-assigned static one. Through this, Network-Manager had an easy way to determine how to configure my network, and it works beautifully for Ethernet and wireless (when Debian is running as the main operating system) and also for VMware NAT (when inside the virtual machine container).
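
For reference, the equivalent static definition in /etc/network/interfaces terms would look roughly like this (the addresses are made up; VMware NAT conventionally places the gateway at .2):

auto eth0
iface eth0 inet static
    address 192.168.137.10
    netmask 255.255.255.0
    gateway 192.168.137.2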

Anyway, I have now been developing quite happily inside a Debian + GNOME desktop environment. The system runs fine even within a constrained environment, though I miss KDE’s setup with sudo; with GNOME, the only option seems to be entering the root password every time privilege escalation is necessary. I don’t like using a root password — on my server system I don’t use the root password at all, and do everything I need to do via sudo. That works for me because I log into the server with a private key and have disabled SSH password authentication for security reasons.

One thing that is still weird for me is that my system currently shows a time of 01:53 even though it is 23:57 in my time zone. Presumably the few minutes of difference exist because the virtual machine clock and my system hardware clock aren’t synchronized perfectly, but beyond that, I think it’s an issue with the Date applet somehow. I haven’t looked into this because the system is running inside a virtual machine, so it doesn’t bother me much.

I have looked high and low for where to change the time zone, and to my knowledge the system knows that it’s in the America/Toronto time zone. The house picture next to Timmins (the city I am in right now, though it doesn’t matter, since the time zone is the same) seems to indicate that it’s set appropriately.

I think it’s due to VMware synchronizing the virtual machine clock with my host machine clock. Windows (my host operating system) stores the hardware clock in local time, which I believe Linux interprets as UTC. Still, that doesn’t explain the weird display it’s got going.
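
If that is indeed the cause (an assumption, not a confirmed diagnosis), the usual fix on Debian systems of this vintage is to declare that the hardware clock is kept in local time:

# /etc/default/rcS
UTC=no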

Someone noted last time that I didn’t make direct mention of which programs are offered only on Windows (and not on Linux, etc.) and have no reasonable replacements on those systems. Kapil Hari Paranjape noted that I sounded somewhat like a troll by simply saying that I don’t think Linux is yet ready to replace my environment. Here was my reply:

Far from a troll, I’d really like Debian and Ubuntu, and Linux in general, to keep improving at the pace it has been. It has made great progress since the last time I tried it out on my desktop, but I have to acknowledge that there are lots of rough edges right now that need to be worked out.

One of the advantages of huge proprietary development organizations like Microsoft is that they have tons of developers and can implement new features at a relatively quick pace, even if they’re half-assed. Developers’ pride in the FOSS community prevents this overly quick pace of development in favour of more secure, more stable platforms. Which is a good thing, I think. But nonetheless it results in a “slower” development pace.

The applications I’m complaining about are things like:
– SolidWorks (a CAD tool for designing parts and assemblies, used in manufacturing and mechanical engineering)
- SpectrumSoft Micro-Cap (circuit-simulation software similar to PSpice, used by my school)
– AutoCAD (another CAD tool)

Luckily this is changing, but only for the largest and most popular distributions:
– MathWorks MATLAB (runs on Linux and Solaris, etc.)
– Wolfram Mathematica (which has versions for Linux and MacOS X)
– FEKO (runs on Linux and Solaris among others)

Anyway, I still consider SolidWorks to be a rather big program not supported on Linux, which is a big issue for those working on Civil Engineering programs. There are most probably others which are very domain-specific that I don’t even know about.

There is a nice matrix comparing cross-platform capabilities of CAD software: http://en.wikipedia.org/wiki/Comparison_of_CAD_software

Oh, one final thought: perhaps that KDE Recommends: should be moved to a Suggests: instead, on account of its heavy dependencies: it requires mysql-server to be installed on desktop machines. WTF!

Oh, and on another note, I re-installed Debian using the non-expert Auto Install, and it installed GNOME rather flawlessly, much like installing Ubuntu, which was pretty nice. So kudos to those who have been working on the main installer; it seems as though the advanced ones really give you some rope to hang yourself with, though :-)

Oh, and k3ninho told me that there is an initiative from the Ubuntu community called “100 Paper Cuts” to help fix small bugs like those I was complaining about. I hope this leads to an improved user experience, and I’d really like to see some of those changes propagated upstream, both to Debian and to the KDE folks.

During my install of Kubuntu + KDE, I felt that Plasma was crashing more than Windows Explorer — it felt like the days when I was running Windows ME, when the shell would crash and the system would restart it. Repeatedly. That is exactly what seemed to happen with Plasma. I’m not sure if it was something I screwed up during configuration (presumably so), but KDE was far too complicated for me to try to debug. It might also have been a result of me running my system within a fairly constrained virtual machine environment — the system only gets 768MB of RAM and no access to the actual graphics processing unit (since it’s virtualized).

