
imap proxy and horde

I’m implementing an intranet using the Horde suite of tools. This is written in PHP, and provides an amazing amount of out-of-the-box, easily configured functionality. The most robust pieces are the mail client (incidentally, used by WestHost for webmail, and very scalable), the calendar, and the address book. There are a ton of other projects using the Horde framework, but most of them are in beta, and haven’t been officially released. Apparently these applications are pretty solid (at least, that’s what I get from reading the mail list), but I wanted to shy away from unreleased code. I am, however, anxiously awaiting the day that the new version is ready; as you can see from the demo site, it’s pretty sharp.

Anyway, I was very happy with the Horde framework. The only issue I had was that the mail application was very slow. I was connecting to a remote imap server, and PHP has no way to cache imap connections. Also, for some reason, the mail application reconnects to the imap server on every request. However, someone on the Horde mailing list suggested using UP IMAP Proxy. This very slick C program was simple to compile and install on a BSD box, and sped up the connections noticeably. For instance, the authentication to the imap server (the only part of the application that I instrumented) went from 10 milliseconds to 1. It apparently caches the user name and password (as an MD5 hash) and only communicates with the real imap server when it doesn’t have the information it needs (for example, when you first visit, or when you’re requesting messages from your inbox). It does have some security concerns (look here and search for P_NEWLOG), but you can handle these at the network level. All in all, I’m very impressed with UP IMAP Proxy.
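
To give a concrete (if minimal) picture of what changes, here’s a sketch of opening a mailbox through the proxy from PHP. This assumes imapproxyd is listening on localhost port 143 and forwarding to the real server; the host, port, and credentials here are illustrative, not from my setup.

<?php
// Minimal sketch: open the INBOX through a local UP IMAP Proxy instance
// instead of the remote imap server directly. Assumes imapproxyd listens on
// localhost:143 and forwards to the real server; all values are illustrative.
$mailbox = '{localhost:143/imap/notls}INBOX';

$conn = imap_open($mailbox, 'someuser', 'somepassword');
if ($conn === false) {
    die('imap connect failed: ' . imap_last_error());
}

printf("%d messages in INBOX\n", imap_num_msg($conn));
imap_close($conn);
?>

Horde itself just needs to be pointed at localhost rather than the remote host; the proxy keeps the upstream connection alive between page views.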

And, for that matter, I’m happy with Horde. I ended up having to write a small horde module, and while the framework doesn’t give you some things that I’m used to in the java world (no database pooling, no MVC pattern) it does give you a lot of other stuff (an object architecture to follow, single sign-on, logging). And I’m not aware of any framework in the java world that comes with so many applications ready to roll out. It’s all LGPL and, as I implied above, the released modules have a very coherent structure that makes it easy to add and subtract needed functionality.

Bravo Horde developers! Bravo imap proxy maintainer!

mod_alias to the rescue

Have you ever wanted to push all the traffic from one host to another? If you’re using apache, it’s easy. I needed to have all traffic for the http://www.foo.com site go to https://secure.foo.com. Originally, I was thinking of having a meta refresh redirect on the index.html page, and creating a custom 404 page that would also do a redirect.

Luckily, some folks corrected me, and showed me an easier way. mod_alias (in Apache 2.0) can do this easily and, as far as I can tell, transparently. I just put this line in the <VirtualHost> section for www.foo.com:

Redirect permanent / https://secure.foo.com/

Now, every request for any file from www.foo.com gets routed to secure.foo.com. And since they share the same docroot, this is exactly what I wanted.

To do this, make sure you have mod_alias available. It should be either compiled in (you can tell with httpd -l) or available as a shared library (on unix, usually called mod_alias.so). If it’s a shared library, you have to make sure to load it; see the LoadModule directive (and AddModule, if you’re still on Apache 1.3) for more information.
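
For reference, here’s roughly what the relevant pieces of httpd.conf look like on Apache 2.0; the module path and hostnames are illustrative.

# Load mod_alias if it was built as a shared library (path may differ on your system).
LoadModule alias_module modules/mod_alias.so

<VirtualHost *:80>
    ServerName www.foo.com
    # Send every request on to the secure host, preserving the requested path.
    Redirect permanent / https://secure.foo.com/
</VirtualHost>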

IM Everywhere

The last two companies I worked at used instant messaging (IM) extensively in their corporate environment. They sent meeting notifications over IM, they used IM to indicate their availability for interactions, and they used it for the quick questions that IM is so good at handling (“hey John, can you bounce the server?”). IM has no spam, and is good at generating immediate responses.

I’m a latecomer to IM. I used talk in college, but in a work environment, email was usually enough. And, I have to confess, when I’m programming, it’s not easy for me to task switch. IM demands that switch, in the same way that a phone call does. How many times have you been deep in a conversation with someone, only to have their phone ring? You know what happens next: they say “can I get that?” and reach for the phone. Whatever flow and connection you had is disrupted.

Now, obviously you can configure IM to be less intrusive than a phone call, and the first thing I did was switch off all sound notifications in my yahoo IM client. However, the entire point of IM is to disrupt what you’re doing–whether it’s by playing a sound or blinking or popping up a window, the attraction of IM is that it is immediate.

I’ve found that ninety percent of people would rather talk to a person than look something up for themselves. (I am one of those ninety percent.) There are a number of reasons. It’s easier to ask unstructured questions. People are more responsive, and can come up with answers that you wouldn’t think to find on your own. And it’s just plain reassuring to find out what someone else thinks–you can have a mini discussion about the issue. This last is especially important if you aren’t even sure what you’re trying to find.

IM is a great help for this kind of ad-hoc discussion. However, it’s another distraction at work. The real question is, do we need more distractions at work? Jakob Nielsen doesn’t think so (see number 6) and I agree.

However, IM is becoming ingrained in these corporations, and I don’t see anything standing in the way of further adoption. The original impetus to write this essay was the astonishment I felt at these two facts:

1. the widespread corporate use of IM

2. the paucity of corporate level control over IM

In all the time I was working at these companies, I saw many, many IMs sent. But I only heard one mention of setting up a corporate IM server (someone mentioned that one of the infrastructure projects, long postponed, was to set up a jabber server). Now, I don’t pretend that any corporate secrets were being exchanged, at least no more than are sent every day via unencrypted email. But every corporation of a decent size has control over its email infrastructure. I was astonished that no similar move had taken place yet for IM. Perhaps because IM is a young technology, perhaps because it is being rolled out from the bottom up, perhaps because it’s not (always) a permanent medium.

For whatever reason, I think that you’re going to see more and more IM servers (even MS has an offering) being deployed as businesses (well, IT departments) realize that IM is being heavily used and is not being monitored at all. Perhaps this is analogous to the explosion of departmental static HTML intranets that happened in the late 1990s, which only came to an end when IT departments realized what was happening and moved to standardize what had become an important business information resource.

Comments on “Mobility is more than J2ME (and the job market for 2004)”, pt I

Michael Yuan had a pretty inflammatory post recently. Here are the first wave of my comments on this interesting topic.

“1. The move to mobility is inevitable in the enterprise. The IT revolution has to reach hundreds of millions of mobile workers in order to realize its promise. There is no other way. However, the real question is how and when this will happen. With the IT over-investment in the last decade, this might take several more years.”

Agreed. Allowing mobile users to access the corporate datacenter is an inevitability. When it does happen, however, it certainly won’t have the sexiness or big bang of the internet revolution. In fact, it’s much more an evolution than a revolution. Folks already have access to corporate databases right now; the mobile revolution simply combines the portability of paper with the real time nature of laptops. However, letting knowledge workers such as salespeople and truckers have real time information on such a cheap, reliable device really will change the nature of the business. But we won’t see sock puppets or dot.com millions, since such changes will favor existing businesses.

“2. When enterprises move to mobility, a key consideration is to preserve existing investment. Fancy flashy J2ME games will not do it. The task is often to develop specialized gateway servers and J2ME integration software to incorporate smart mobile frontends into the system. That requires the developers to have deep understanding of both J2EE and J2ME. I think that the “end-to-end” sector is where the real opportunities are in the next several years. That is also what “Enterprise J2ME” is all about. :)”

Now, don’t be so quick to judge. Gamers pushed the boundaries of the PC in terms of computing power, and I wouldn’t be surprised to see the same thing happen on the MIDP platform. That said, I’m not a gamer. However, I still have issues with the idea of folks paying to play a game on a cell phone. I play Snake, but that’s a simple, free game, and I’m certainly not dedicated to it. Fred Grott claims that MMORPGs are going to drive J2ME game development–I just don’t see folks doing that when you can get a much, much richer experience from the Xbox or GameCube or PC sitting at home.

And I don’t see why Michael ties J2ME and J2EE so tightly. The whole point of web services is to decouple the server and the client. I don’t see any reason why you couldn’t have a J2ME client talk to a .NET server, or a BREW client talk to a J2EE server. To me, the larger issue with the mobile revolution is the architecture of the J2ME applications, since I think that such small, non-networked, memory-constrained applications (with either extremely limited portability or extremely limited user interfaces, take your pick) are going to be a world apart from the standard java developer’s experience (which is HTML generation, not Swing).

I’m going to leave his third point for another post, as the outsourcing issue is…worthy of a separate discussion.

Software as commodity

So, I was perusing the Joel On Software archives last night, and came upon Strategy Letter V in which Joel expounds on the economics of software. In particular, he mentions that commodifying hardware is easier than commodifying software. This is because finding or building substitutes for software is hard.

Substitutes for any item need to have the same attributes and behavior. The new hard drive that I install in my computer might be slower or faster, larger or smaller, but it will definitely save files and be accessible to my operating system. Any substitute has two different types of attributes: required attributes (can the hard drive save files?) and ancillary attributes (how much larger is the hard drive?). A potential substitute can have all the ancillary features in the world, but it isn’t a substitute until it has all the required features. The catch to building a substitute is knowing which features are required and which are ancillary–spending too much time on ancillary features can lead to the perfect being the enemy of the good, but spending too little means that you can’t compete on features (because, by definition, all your viable competitors will have all the required features). (For an interesting discussion of feature selection, check out this article.)

Software substitutes are difficult because people don’t like change (not in applications, not in URLs, not in business). And software is how the user interacts with the computer, so the user determines the required attributes of any substitute. And those attributes differ with every user, since every user uses their software in a different manner.

But, you can create substitutes for software, especially if

  1. The users are technically apt (because such users tend to resent learning new things less).
  2. You take care to mimic existing user interfaces as much as you can, to minimize the new things a user has to learn.
  3. It’s a well understood problem, which means the solutions are probably pretty well understood also (open standards can help with this as well).

Bug tracking software is an example of this. Now, I’m not talking about huge defect tracking systems like Rational’s ClearCase that can, if you believe the marketing, track a bug throughout the software life cycle, up into requirements and out into deployment. I’m talking about tools that allow small teams to write better code by making sure nothing slips through the cracks. I’ve worked with a number of these tools, including Joel’s own FogBUGZ, TestTrack, Mozilla’s Bugzilla, and PHPBT. I have to say that I think the open source solutions (Bugzilla and PHPBT) are going to eat the commercial solutions’ lunch for small projects, because they are a cheaper substitute with all the required attributes (bug states, email notification of changes, users, web access).

I used to like Bugzilla, but recently have become a fan of PHPBT because it’s even simpler to install. All you need is local access to sendmail, a mysql database, and a web server (all of which WestHost provides for $100/year, or which you can get on a $50 Redhat CD and install on an old Intel box). It tracks everything that you’d need to know. It ain’t elegant, but it works.

I think that in general, the web has helped to commodify software, just because it imposes a certain uniformity of user interface. Everyone expects to use forms, select boxes, and the back button. However, as eBay knows and Yahoo! Auctions found out, there are other factors that prevent substitution of web applications.

Amazon Web Services

I remember way back when, in 2000, when EJBs were first hot. Everyone wanted to use EJBs in projects, mostly for the resume value. But there was also a fair bit of justified curiosity in this new technology that was being hyped so much. What did they do? Why were they cool? How did they help you?

I did some reading, and some research, and even implemented one or two toy EJBs. I remember talking to a more experienced colleague, saying “Well, all EJBs provide you is life-cycle assistance–just automatic pooling of objects, a set of services you can access, transaction support, and maybe SQL code generation.” Now, I’m young and inexperienced enough to never have had the joy of doing a CORBA application, but my colleague, who I believe had had the joy of doing one or three of those, must have been rolling her eyes when I said this. ‘Just’ life-cycle assistance, eh?

I just looked at Amazon’s web services, and I’m beginning to understand how she felt. Sure, all web services provide is easy, (relatively) standardized access to the resources and data available in a web application. Sure, I could get the same information by screen-scraping (and many an application has done just that). But, just as EJB containers made life easier by taking care of grimy infrastructure, so do web services make life easier by exposing the information of a web application in a logical format, rather than one dependent on markup.

Using perl and XSLT (and borrowing heavily from the Developer Kit), I built an application using Amazon’s web services (the XML over HTTP API, not the full SOAP API). I was amazed at how easy it was to put together. This was partly due to the toy-like nature of the application, and how much it leveraged what Amazon already provided, but it was also due to the high level of abstraction I was able to use. Basically, Amazon exported their data model to me, and I was able to make small manipulations of that model. It took me the better part of three hours to put together an application that lets you search on a keyword or ISBN and shows all the related books that Amazon has for that title. You know, the ‘Customers who bought this book also bought’ section.
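
My perl isn’t shown here, but the shape of the application is simple enough to sketch. The following is a rough equivalent in PHP (not the code I wrote, and the URL and stylesheet name are placeholders, not Amazon’s actual endpoint): fetch the search results as XML over HTTP, then run them through an XSL stylesheet that pulls out the ‘also bought’ titles.

<?php
// Rough sketch of the XML-over-HTTP plus XSLT approach; the endpoint and
// stylesheet name are placeholders standing in for Amazon's real API.
$keyword = urlencode('design patterns');
$xmlSource = file_get_contents("http://xml.example.com/search?mode=books&keyword={$keyword}");

$xml = new DOMDocument();
$xml->loadXML($xmlSource);

$xsl = new DOMDocument();
$xsl->load('similar-books.xsl'); // placeholder stylesheet that renders the related titles as HTML

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);
echo $proc->transformToXML($xml); // emit the finished page
?>

The point is how thin the application layer is: all the interesting data lives in the XML that the web service hands back.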

I’ve always felt that that was the most useful bit of Amazon, and a key differentiator. This feature, as far as I can tell, leverages software to replace the knowledgeable bookstore employee. It does this by correlating book purchases. This software lends itself to some interesting uses. (I wanted to have a link to an app I found a while ago, where you entered two different artists/authors and it found the linkage between the two. But I can’t find it!)

I like this feature, but it also sucks. The aforementioned bookstore employee is much better than Amazon. Buying a book doesn’t mean that I’ll enjoy it–there are many books I’ve purchased that I wonder why I did so, even one hour after leaving the store–so that linkage isn’t surefire. In addition, purchase is a high barrier, and will probably cause me not to branch out as much as I should–rather than waste my money picking a random book, I’ll pick a book from an area I know. The bookstore employee, if good, can overcome both of these faults, because the process is more interactive, and the suggester has intelligence. But he doesn’t scale nearly as well or as cheaply, nor does he have the breadth of Amazon’s database. (And he hates staying up all night responding to HTTP requests.)

Yahoo! mail problems

One of my clients had a problem earlier today. I helped them choose a new email setup, and they went with Yahoo Business Edition Mail. It’s worked like a charm for them until today. Oh, sure, they’ve had to adjust their business processes a bit, but it was a vast improvement over their previous situation, which only allowed one person at a time to view a given mailbox. And it’s quite low maintenance and fairly easy to learn, since it’s entirely browser based.

But today, the web client for Yahoo! was busted. New layouts, new colors, old functionality gone, intermittent changes in the GUI, the whole bit. I got on the phone with Yahoo! support, and they assured me it was simply a webmail client problem. No mail was being lost. But, as Joel explains, for most folks, the interface is the program. You try explaining the difference between SMTP and POP and mail storage and local clients to an insurance agent who just wants to send her customers an application.

One of the worst bits is that, other than getting on the phone with Yahoo! support, there was absolutely nothing I could do to fix the problem. Alright, this isn’t entirely true–I could have worked on migrating them off the web client, and perhaps off Yahoo! entirely. And, had the outage continued, I probably would have begun that process. But fixing the web client was entirely out of my hands. That’s the joy and the pain of outsourcing, right? The problems aren’t yours to fix (yay!) but you can’t fix the problems yourself (boo!). Also, chances are the outsourcing provider is never going to be more enthusiastic than you about fixing your problems, and might be significantly less.

Put the pieces together: A Linux Small Business Server

I noticed an article on InfoWorld about Microsoft Small Business Server (SBS). This software package brings together important software packages for small business organizations in an easy to configure and install bundle. The primary features of SBS are, according to this article, email and calendaring, file and printer sharing, and file backup. There are additional features that you can plug in, including email clients, a fax server, remote access via the web, and possibly integration with a back end database.

All this software exists for Linux. For email, you have qmail and imap (we aren’t concerned about the client, because they’ll use Outlook, if they have Office). For calendaring, I haven’t found anything quite as slick as Outlook, but Courier promises web based calendaring, and Mozilla Calendar, when paired with a WebDAV enabled web server, like Apache with mod_dav, allows you to share calendars (it does require you to install the Mozilla browser on every client, though). For file and print sharing, there’s Samba and for backups, there’s the super stable Amanda. Remote access can be handled via VNC and fax server solutions can be built, although the author of the InfoWorld article prefers a fax over IP service which should work fine as long as you have MS Office. As for back end databases, you’d probably want PostgreSQL, probably managed via MS Access. Wrap it all up with Webmin for administration. (Full disclosure–I haven’t used all this software, just most of it.)
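
As one example of how little glue is actually involved, here’s a sketch of the Apache side of the shared-calendar piece: a WebDAV-enabled directory that Mozilla Calendar clients can publish .ics files to. The paths, module locations, and auth setup are illustrative, not a tested configuration.

# Enable WebDAV (mod_dav plus its filesystem provider); paths are illustrative.
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so
DAVLockDB /var/lib/apache/davlock

Alias /calendars /srv/calendars
<Directory /srv/calendars>
    DAV On
    # Require a login so only staff can read or publish calendar files.
    AuthType Basic
    AuthName "Shared calendars"
    AuthUserFile /etc/httpd/dav.passwd
    Require valid-user
</Directory>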

So, I set out on the web to see if anyone had gathered all these components together, tested them, and made it easy to install and configure. Basically, an SBS competitor that could compete on features, with the added bonus of Linux’s open nature and stability.

First, I checked out what Redhat and SuSE had to offer. While they had standard servers that were cheaper than the $1500 quoted for SBS, the Linux packages didn’t have all the features either. Then, I did a web search, which didn’t turn up much, except for a LUG project: the Windows Replacement Network. I’m not sure how active this project is, but at least it’s a start. I checked on SourceForge, but didn’t see anything that looked similar.

I really think that there’s an enormous opportunity for the open source community to piggyback off of MS. They’ve already done the market research; they’ve already determined the feature set that will sell to small businesses. And almost all of the software for a Linux version of SBS is already written–all that really needs to happen is some configuration and documentation to make these features work together. Burn it to a CD-ROM and start passing copies out at LUG meetings. This would provide one more option in a consultant’s toolbox and give consumers one more choice.

Why open source?

Updated 2/25/2007: Corrected url.

This page has an eloquent explanation of some of the motivating factors behind the open source movement.

In case you aren’t a computer geek, the term open source (free software is another name) refers to computer programs that you can download and share with your friends. Licenses vary, but a common one (the GPL) specifies that you can do whatever you want with the software that is licensed under it, but if you redistribute any changes, you have to make them available under the same terms that the code was originally made available to you.

Some software is core business software. I was talking to a consultant who dealt with telecommunications companies; their billing and minute-tracking software really is part of their core competency. You can use that software to make the company more efficient in scalable ways. Ditto companies that make pacemakers–the software is entwined with their hardware and is really integral to the product.

But for many businesses, there are huge swathes of software that aren’t integral in the same way. They’re needed, but for supporting functionality, not for the processes they enable. For example, the web server that hosts a company’s web pages is not integral. The office suite is not a fundamental part of your business processes. The macros and files and VB programs that you write on top of an office suite probably are, but the bland office suite itself is not.

When software is written that defines a business process, then it is integral. When it’s a supporting platform for the business process, it’s not. And, as Bruce argues in the article above, when it isn’t integral, there are very good reasons to push the software out into the world and share the cost of maintenance.

Oh, and any discussion of open source software would be remiss if it didn’t link to The Cathedral and the Bazaar.