Archive for January, 2009

Strawberry Perl, by default, does not include many database drivers. While it does a great job of installing most modules, some CPAN authors simply overlooked Win32 as a target platform, so their build and installation scripts get confused. Among these is the DBD::Pg driver (the PostgreSQL database driver), which is really just a thin layer providing access to the C client library, libpq.

In terms of working with PostgreSQL databases under Windows, this effectively leaves people with a few options:

  1. Try to install the Perl Package Manager (ppm) version of DBD-Pg. This didn’t work for me. I suppose that’s because the installer was expecting an ActivePerl-like environment, and I was using Strawberry Perl’s ppm tool.
  2. Compile the DBD::Pg driver from scratch using Microsoft Visual Studio. This wasn’t an option for me because I didn’t want to purchase Visual Studio. My school provides licenses via the MSDN Academic Alliance, but I wanted to use something more open source if possible. The Visual Studio suite is also quite large, takes a significant amount of time to install, and clutters your machine with an SQL database, among other things.
  3. Install DBD::PgPP, a pure-Perl implementation of the PostgreSQL client protocol. The problem is that it has many outstanding bugs and, so far, does not behave in exactly the same way as DBD::Pg.
  4. Install a specialized Perl package like Camelbox (one of Camelbox’s design goals was to provide DBI and popular DBD support out of the box). I didn’t like the idea of this because I’m so far a pretty big fan of Strawberry Perl and its sister project, Vanilla Perl. Together they seem like the most effective way to solve the Perl-on-Win32 dilemma.

As it turns out, there’s another option: take the packages built for Camelbox and drop them into your Strawberry Perl installation. It works flawlessly, and I’m very grateful to Brian Manning for his work on the project.

Here’s the quick and dirty:

  1. Download the postgresql-bin package from the Camelbox downloads area.
  2. Download the perl-DBD-Pg package from the same place.
  3. Open the lzma files using your favourite archiver program. I love 7-zip and it worked beautifully for extracting those files.
  4. Under the perl-DBD-Pg package, there should be a bunch of subdirectories; these correspond to those under C:\Strawberry\perl (or the perl subdirectory of wherever you installed Strawberry Perl). Extract all the files there.
  5. In the postgresql-bin package, there is a bin directory that contains a single file, libpq.dll. This one is really important for connecting to the database, as it does all of the real work; the Perl code just binds to the library functions. Extract it into C:\Strawberry\perl\site\lib\auto\DBD\Pg (or wherever your Pg.dll is installed).
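Once the files are in place, a short script can confirm that the driver loads and that you can reach a server. This is just a sketch: the database name, host, user and password below are placeholders, so substitute your own.

```perl
#!/usr/bin/perl
# Smoke test for a freshly installed DBD::Pg under Strawberry Perl.
# The connection details (dbname, host, user, password) are
# placeholders -- adjust them for your own server.
use strict;
use warnings;
use DBI;

# If this succeeds, Pg.dll and libpq.dll were both found.
require DBD::Pg;
print "DBD::Pg $DBD::Pg::VERSION loaded OK\n";

my $dbh = DBI->connect(
    'dbi:Pg:dbname=postgres;host=localhost;port=5432',
    'postgres',
    'secret',
    { RaiseError => 1, AutoCommit => 1 },
) or die $DBI::errstr;

# Ask the server to identify itself, proving the round trip works.
my ($version) = $dbh->selectrow_array('SELECT version()');
print "Connected: $version\n";
$dbh->disconnect;
```

If `require DBD::Pg` dies with a complaint about a missing DLL, double-check that libpq.dll ended up in the same directory as Pg.dll.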

Alternatively, it might be less stressful to just install Camelbox instead of Strawberry Perl; but this is entirely up to you.


One of the things that many students don’t realize is that we are essentially customers. We give the university money (tuition) in exchange for knowledge and a degree. We are often forced to put up with a terrible customer experience we would not accept anywhere else; yet, we do.

How do organizations get away with providing awful support for their products?

Well, put simply, support is not a criterion we often use when selecting a company to deal with. We look at cost effectiveness, at the short-term gains we expect to achieve, at whether other solutions are available, and at whether we need the solution in the first place. But as everyday consumers, we rarely require that an appropriate level of support be in place.

Why? Because we don’t think about the inevitable; we always like to pretend that we will never need our car insurance, that we will never need to use the limited manufacturer warranties that come with our products. We often don’t even bother to stop and read the fine print, instead preferring to believe whatever promises salespeople leave us with.

As a student of both Electrical Engineering and Computer Science, I am actually part of two faculties: a full-time student in the Faculty of Engineering and a part-time student in the Faculty of Science. This arrangement means I can enroll in courses from both faculties, but it also means that, in order to do so, I must deal with the faculty that has control over the particular course.

So when I needed to have something done with regard to a Computer Science course, I had to go to the Faculty of Science Dean’s Office. Upon arrival at about 08:30, I received a ticket and waited in the reception area. When my number was called, I was sent to a triage-type area, where a counselling assistant determined whether I needed to make an appointment to see the Academic Counsellor.

When I finally got to the Academic Counsellor who could actually do what I needed, it was 10:30 – two hours later and shortly before my next class. The transaction itself took a few minutes, which left me wondering why my situation wasn’t dealt with in a more timely fashion.

I understand that it’s not always the fault of the staff. After all, there are lots of students in the Faculty of Science and only four counsellors, only one of whom was accepting drop-in appointments that day. So perhaps this is an intrinsic problem with the way we allocate people.

This graph illustrates the number of students each academic counsellor is responsible for, assuming equal distribution of students per counsellor.

The figures come from data published by the University of Western Ontario as part of the CUDO (Common University Data Ontario) initiative. The 2008 data was used to compile the graph, with counsellor counts taken from the respective faculty websites.

As we can see, the number of students each counsellor must handle is large, and has little to do with the total number of students in each faculty. The Faculty of Science has 11,091 students and only 4 academic counsellors capable of special review tasks — roughly 2,770 students per counsellor. The Faculty of Engineering has 1,788 students and 3 counsellors, or about 600 per counsellor. Does this make sense? I think not.

So, support should always be part of the equation. In business, support is an important metric in making the next buying decision. That is why the Business Division of Dell Computer provides prompt service with minimal waiting, offering quick advice from highly trained personnel backed by Next Business Day service.

This is important in industry. Why shouldn’t it be important to us as consumers? Why shouldn’t we demand more support personnel, or faster ways to apply for these types of special considerations? Perhaps some sort of online queuing system could be a solution; in any case, the process deserves review.


This article was originally published in Project Magazine, a Canadian periodical written by engineering students, for engineering students. The original publication date is unknown, but it was some time in 2008. I am publishing it here because it is still a relevant read, especially in light of our growing use of social networking tools.

Like many now-ubiquitous inventions, what we know today as the World Wide Web began its life as a simple research project. In the 1980s, Tim Berners-Lee, often credited with the creation of the Web, sought to provide the academic community with a system for distributing, sharing and publishing information.

Independently of Berners-Lee’s work, the University of Minnesota developed the Gopher protocol as a universal document retrieval system. Both projects marked a shift in thinking: an attempt to model the intricate relationships between documents in a way that computers could understand. The links between these resources, or hypertext, would pave the way for the Web to evolve over the following two decades.

Drawing on hypertext linking and on the Generalized Markup Language developed at IBM, the HyperText Markup Language (HTML) enabled the Web to incorporate richer features such as formatted text and embedded media (images at first, later sounds and video). Interest in the World Wide Web as the next communication medium grew quickly, due largely to the ease of publishing information.

Over the fifteen years that followed, companies including Netscape and Microsoft raced to develop new features for an exponentially growing market. During this time, browsers added countless extensions to HTML, some of which became de facto standards, albeit ones that diverged from Berners-Lee’s vision for the Web. By the time of HTML 3.2, browser support for tables and other complex formatting was widespread, enabling the publication of an ever-increasing array of scientific and literary works.

By the 21st century, technologies such as blogs, social networks, wikis and podcasts marked the birth of a second generation of the World Wide Web. The term Web 2.0 described the transition of many websites from isolated systems to an interlinked, global computing platform. Ultimately, Web 2.0 is about the increasing socialization of the Web, enriching collaboration and utility for users. This had significant implications for individuals and businesses alike, because it provided a means to make sense of the growing amount of available information.

Progress in the field of web standards has been relentless, if gradual. Under the guidance of the World Wide Web Consortium (W3C), a multinational non-profit organization founded by Tim Berners-Lee, standards are developed through several stages of peer review and then officially published to the community at large. This ensures that updates are logical and consistent with the W3C’s goals of interoperability, flexibility and extensibility.

The largest step forward so far has been the separation of a document’s structure from its presentation. Cascading Style Sheets (CSS) enable this by providing a separate language that describes how content should be rendered on various devices. This is particularly important for accessibility: information such as a font being red or bold has no meaning for alternative output systems such as screen readers (text-to-speech) or Braille displays. In this way, multiple style sheets can be attached to the same document, allowing it to behave differently depending on the output medium.
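As a minimal sketch of this separation (the class name and style rules here are invented for illustration), the same markup can be presented differently per medium without touching the document’s structure:

```html
<!-- Structure (HTML): states only what the content *is*. -->
<p class="deadline">Submissions close on Friday.</p>

<!-- Presentation (CSS), ideally kept in a separate .css file: -->
<style>
  @media screen { .deadline { color: red; font-weight: bold; } }
  @media print  { .deadline { font-style: italic; } }
  /* A screen reader ignores these visual rules entirely and
     simply reads the paragraph text aloud. */
</style>
```

Swapping in a different style sheet changes how every device renders the page, while the HTML, and its meaning, stays the same.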

So what is the future direction of the Internet and the World Wide Web? As we gather increasing amounts of information from the world around us, we need a way of organizing that information interoperably. Tim Berners-Lee envisions a Web connected not by the data itself, but by computers that understand the meaning of the data. While another browser war would likely yield some notable results, it would inevitably prevent this dream from coming to fruition. This is why initiatives proposed by institutions such as the W3C must be adhered to by industry.

In everyday use, the Semantic Web will provide the ability to interpret information in unprecedented ways: for example, the transactions on your bank statements can be overlaid onto a calendar, or inserted into graphs based on arbitrary criteria. Indeed, the possibilities are endless and the technology already exists to make this happen—all we need is one last push to implement it. We are looking to a future where computers can do an increasing amount of work and provide detailed analysis through the implementation of Web standards.
