
What are EJBs good for?

Dion had a good post about what EJBs are good for. I’ve used EJBs only rarely (and peripherally), but it’s my understanding, from reading the literature, that EJBs are appropriately named–that is, good for enterprise situations. In that case, what on earth are these folks thinking? They demonstrate using an EJB in a JSP. What?

Publishing power

You have to give the web credit for making information distribution a lot cheaper. Whether it’s a small business distributing forms via the web or BlockBuster distributing rental coupons via email, it’s just plain simpler to get information distributed over the internet.

A friend just forwarded me the expected US budgets for the next 5 years. And then he forwarded me budgets going back to 1996. An invaluable resource, to be certain. What other countries allow you to look at their budget on the web? The UK, New Zealand, Canada, Australia, India, Fiji….

Wow. And all this was found with half an hour of searching. Wonderful!

An IM application server

I’ve written before about IM in the workplace. It’s becoming more and more prevalent, and other people have noticed this as well. IM is something that’s easy to use, and it gives you the immediate response of the phone without being nearly as intrusive.

Now, in the past, using IRC, it was relatively easy to have a program, or bot, that would listen to conversations, or that you could ask questions of. They were dumb, but they worked. In the world of IM, I wasn’t aware of any easy way to do this. However, browsing freshmeat yesterday I discovered an easy way to write IM applications.

It’s called the SDBA Revolution Instant Messaging Application Server, and building IM applications with this perl framework is fantastically easy. I was able to download it and build a simple application in about 30 minutes, and that includes signing up for the usernames from AOL. It uses a perlish syntax and doesn’t support extremely complicated applications, but it does offer enough to be useful. If you can code a php website, you can build an IM application. The author even provides six or so sample applications, including a database interface (scary!). The only issues I found with the IM app server were:

1. It doesn’t support Yahoo! That’s because the Yahoo! IM perl module has been unmaintained since the last Yahoo! protocol update.

2. I’m not sure of the legality of using a bot on a public service like AIM, MSN, or Yahoo!. Violations of these license agreements happen all the time, but, if you’re a stickler for those darn license agreements, this application server appears to work with Jabber.

Just goes to show you that 30 minutes a week browsing freshmeat or SourceForge will almost never be wasted. A bit of slack to do this will probably pay off in the long run.

PowerPoint and presentations

I went to an ACM meeting last Tuesday at NREL. The topic was “The Role of Computational Science in Energy Efficiency and Renewable Energy Research” by Dr. Steve Hammond. It was an interesting talk–NREL is doing some neat stuff with alternative energy sources (one thing that Dr. Hammond mentioned was an algae that produces hydrogen gas–a possible clean, renewable, easily scalable source of that element).

Now, I definitely don’t want to single out Dr. Hammond. He did a good job explaining the value of computing to energy research, as well as fielding questions that were outside his expertise from nitpicking engineers (is there any other kind?). However, his presentation just drove home to me how easy it is to let PowerPoint drive a presentation, and how doing that really detracts from the speaker’s points. I’m certainly not the first person to mention this. But I just wanted to point out this very good article about speaking during a presentation, rather than just reading from slides.

Hey buddy, I can probably read those slides faster than you can say them, and it’s a lot less boring for me. Instead, explain the slides to me in a way that makes the talk more of a conversation. Don’t let the technology drive the presentation; it may be easier to read the slides, but it makes for a much poorer presentation.

SQL Server JDBC driver troubles

I’m responsible for a small struts application for one of my clients. The application was originally coded on Windows against a SQL Server 2000 database. When I was contracted to roll it to production, a Linux box talking to a SQL Server 7 database, I found I couldn’t use the existing MS JDBC drivers, which only support SQL Server 2000. So, I went looking for SQL Server 7 JDBC drivers. There are a ton of choices out there, but most are commercial. I looked at jTDS, but that didn’t work because, at the time, jTDS did not support CallableStatements, which were used extensively by this application. (Apparently, jTDS does now.)
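
For context, the stored procedure calls went through JDBC’s CallableStatement interface, which is the piece a driver has to support. Here is a minimal sketch of that pattern; the procedure name, parameters, credentials, and connection details are placeholders, not the application’s actual code (the driver class and URL are, as I recall, the documented ones for the MS SQL Server 2000 driver):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class StatusLookup {
        public static void main(String[] args) throws Exception {
            // Register the driver with DriverManager.
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=appdb", "user", "password");
            try {
                // Call a hypothetical stored procedure with one IN and one OUT parameter.
                CallableStatement cs = conn.prepareCall("{call get_customer_status(?, ?)}");
                cs.setInt(1, 42);                          // IN: customer id
                cs.registerOutParameter(2, Types.VARCHAR); // OUT: status text
                cs.execute();
                System.out.println("Status: " + cs.getString(2));
                cs.close();
            } finally {
                conn.close();
            }
        }
    }

jTDS not handling prepareCall at the time was enough, by itself, to rule it out for this application.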

So, I looked at a few commercial drivers and decided that Opta2000 offered the best feature set for the price ($800 for unlimited web application connections). Then the database was upgraded from SQL Server 7 to SQL Server 2000. Luckily, we hadn’t bought the JDBC driver yet, so, hey, let’s use the MS JDBC drivers–they’re free! Fantastic.

The installation went fine (not that it was that complicated–dropping some new jars in the WEB-INF/lib directory and changing some lines in the struts-config.xml), but then Tomcat (version 4.1.24) started behaving badly. With IE (and, to a lesser extent, with Mozilla), pages started loading very slowly after Tomcat had been running for a while. A restart alleviated the symptom, but obviously didn’t solve the underlying problem. Initially, we thought it was the load and some misconfiguration of Tomcat (Tomcat was serving images–not usually considered its strong point, though benchmarks are needed to tell the full tale), but nothing seemed to change the behavior. We tried changing how Tomcat was passed requests (mod_jk, mod_proxy), but nothing seemed to work. A colleague of mine looked at when the instability started, and it correlated with the installation of the MS JDBC drivers. So, we switched back to Opta. The application returned to a stable state, and we haven’t seen the problems since. (We plan to purchase the drivers now, although we may take a look at jTDS.)
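
If you ever hit a similar driver mix-up, one cheap diagnostic is to ask a connection which driver is actually loaded, since a stray jar in WEB-INF/lib or on the server classpath can quietly win. This is just a sketch (placeholder URL and credentials; in the web application the connection would come from the DataSource configured in struts-config.xml rather than DriverManager), but the DatabaseMetaData calls are standard JDBC:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    public class WhichDriver {
        public static void main(String[] args) throws Exception {
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=appdb", "user", "password");
            try {
                // Report which driver and database the JVM is really talking to.
                DatabaseMetaData md = conn.getMetaData();
                System.out.println("Driver:   " + md.getDriverName() + " " + md.getDriverVersion());
                System.out.println("Database: " + md.getDatabaseProductName() + " "
                        + md.getDatabaseProductVersion());
            } finally {
                conn.close();
            }
        }
    }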

Control of core business functions considered vital

I know I’ve commented on offshoring before, but I was talking to a friend last night, and he mentioned that his department, which maintained a suite of products for a very large software vendor, was going to be re-organized. The new boss was the fellow tasked with offshoring. [cue Jaws theme]

On a related note, I read this article by Joel On Software about Not Invented Here syndrome. In the article, he makes very good points about giving up control of vital business functions. Actually, Joel puts it very succinctly: “If it’s a core business function — do it yourself, no matter what.”

In some sense, that’s what you do when you buy software off the shelf. You trade control for cost savings. I know that in several cases at a former company, we built software on top of a vendor’s platform, but what we were building was so focused that we ended up twisting the platform all out of shape. I think it would have been better to focus the energy, money and time that went to learning and reshaping the platform into understanding the business domain better and building more features on a custom platform.

In general, giving up control of vital business functions is a bad idea. So, offshoring (or outsourcing) customer service is a good idea if customer service isn’t a vital business function (right!). And vice versa.

The question then becomes, what’s a core business function? Ask that question of any large business, and depending on what department you’re in, you’ll get different answers. I was talking to a guy a year ago who worked for a pipe construction company (you know, water pipes, beer pipes) in the accounting department. We touched on offshoring, and he mentioned that his company was planning to move all of their accounts payable to India, but, “thank you very much, we’ll keep the accounts receivable close to home.” Getting paid is probably a vital business function for everybody.

So, where does this leave all of the folks in IT? Well, Bob Lewis writes a lot about IT as a force for business change, and that sounds like a vital business function to me. Where does this leave my friend in the maintenance department? I don’t know.

Book Review: Hackers

Hackers, by Steven Levy, should be required reading for anyone who programs computers for a living. It covers the period from the late 1950s, when the first hackers wrote code for the TX-0 and every instruction counted, to the early 1980s, when computers fully entered the consumer mainstream and marketing mattered more than hacking. Levy divides this time into three eras: that of the ‘True Hackers,’ who lived in the AI lab at MIT and spent most of their time on the PDP series; the ‘Hardware Hackers,’ mostly situated in Silicon Valley and responsible for enhancing the Altair and creating the Apple; and the ‘Game Hackers,’ also centered in California, who were expert at getting the most out of computer hardware and the first to make gobs and gobs of money hacking.

The reason everyone who codes should read this book is to gain a sense of history. Because the field changes so quickly, it’s easy to forget that there is a history, and, as Santayana said, “Those who cannot remember the past are condemned to repeat it.” It’s also very humbling, at least for me, to see what kind of shenanigans were undertaken to get the last bit of performance from a piece of hardware that was amazing for its time, but now would be junked without a thought. And a third takeaway was the transformation that the game industry went through in the early 80s: at first you needed technical brilliance, because the hardware was slow and new techniques needed to be discovered. However, at some point, the hard work was all done, and the business types took over. To me, this corresponds to the 1997-2001 time period, with the web rather than games being the focus.

That’s one of my beefs–the version I read was written in 1983, and republished, with a new afterword, in 1993. So, there’s no mention of the new ‘4th generation’ of hackers, who didn’t have the close-knit communities of the Homebrew Computer Club or the AI lab, but did have a far-flung, global fellowship via email and newsgroups. A chapter on that generation would be a fascinating read.

Beyond the dated nature of the book, Levy omits several developments that I think were fundamental to the development of the hacker mindset. There’s only one mention of Unix in the entire book, and no mention of C. In fact, the only languages he mentions are lisp, basic and assembly. No smalltalk, and no C. I also feel that he overemphasizes ‘hacking’ as a way that folks viewed and interacted with the world, without defining it. For instance, he talks about Ken Williams, founder of Sierra Online, ‘hacking’ the company, when it looked to me like it was simple mismanagement.

For all that, it was a fantastic read. The more you identify with the geeky, single males who were in tune with the computer, the easier and more fun a read it will be, but I still think that everyone who uses a computer could benefit from reading Hackers, because of the increased understanding of the folks that we all depend on to create great software.

Book Review: Hear That Lonesome Whistle Blow

If you thought Halliburton abusing the taxpayers was something new and different, think again. Hear That Lonesome Whistle Blow, by Dee Brown, is a history of the building of the transcontinental railroads. It starts in 1854 and proceeds in detail until the 1890s, then hurriedly summarizes up to the 1970s. (The book was written in 1977.) And Brown shows, repeatedly and at length, how the railroad builders screwed the American public time and again.

In fact, reading this book made me very very angry. It’s the same old story: a bunch of rich men want to get richer, and figure out ways to use the public purse to make money. In this case, there were three main ways that wealth was moved from the taxpayer to the wealthy: scams building the railroads, land grants, and high railroad rates. Brown examines all of these in some detail, and sometimes the disgust just made me squirm. He also, towards the end of the book, examines some of the political reaction to the railroads: the Grangers and the Populist Party. And he covers at least some of what the railroads did to the Native Americans.

However, he also intermingles first-person accounts in this story of perfidy. Whether it is stories from the immigrants, the first riders of the transcontinental railroad, the railroad workers, or the Congressmen who authorized the land grants, he quotes extensively from letters and speeches. In fact, he might go overboard in the quoting department; I would have appreciated more analysis of some of the statements.

Brown does include some very choice, prescient statements, though. In chapter 11, talking about Pullman’s improvements, a French traveller said “…unless the Americans invent a style of dwelling that can be moved from one place to another (and they will come to this, no doubt, in time)…”. In chapter 12, a fellow travelling on an immigrant train was happy to be separated into the men’s car because he “escaped that most intolerable nuisance of miscellaneous travelling, crying babies.”

I learned a lot from this book, both about American history and the railroads. In large part, the railroads made the modern West–I-80 follows the path of the Union Pacific, and Colorado Springs was founded because a railroad magnate owned chunks of land around the area. It’s also always illuminating to see that, in politics as in everything else, there’s nothing new under the sun.

Moving a Paradox application to PostgreSQL

I have a client that has an existing Paradox database. This database is used to keep track of various aspects of their customers, and is based on a database system I originally wrote on top of Notebook, so I’m afraid I have to take credit for all of the design flaws present in the application. This system was a single user Paradox database, with the client portion of Paradox installed on every computer and the working directory set to a shared drive location. It wasn’t a large system; the biggest table had about 10k records.

This system had worked for them for years, but recently they’d decided they needed a bit more insight into their customer base. Expanding the role of this database would allow them to do that, but the current setup was flawed. Paradox (version 10) often crashed, and only one user could be in the database at a time. I took a look at the system and decided that moving to a real client-server database would be a good move. This would also allow them to move to a different client if they ever decided to get Access installed, or possibly a local web server. This document attempts to detail the issues I ran into and the steps I followed to enable a legacy Paradox application to communicate with a modern RDBMS.

I chose PostgreSQL as the DBMS for the back end. I wasn’t aware at the time that MySQL had recently been freed for commercial use, but I still would have chosen PostgreSQL because of its larger feature set. The client had a Windows 2000 server; we discussed installing a Linux box in addition, but the new hardware costs and increased maintenance risk led me to install PostgreSQL on the Windows 2000 server. With Cygwin‘s installer, it was an easy task. I followed the documentation to get the database up and running after Cygwin installed it. They even have directions for installing the database as a Windows service (it’s in the documentation with the install), but since this was going to be a low-use installation, I skipped that step.

After PostgreSQL was up and running, I had to make sure that the clients could access it. This consisted of three steps:

1. Make sure that clients on the network could access the database. I had to edit the pg_hba.conf file and start PostgreSQL with the -i switch. The client’s computers are all behind a firewall, so I set up the database to accept any connections from that local network without a password.

2. Install the PostgreSQL ODBC driver and create a system ODBC DSN (link is for creating an Access db, but it’s a similar process) for the new database on each computer.

3. Create an alias in Paradox that points to the ODBC DSN.

Once these steps were done, I was able to query a test table that I had created in the PostgreSQL database. One thing that I learned quickly was that two different computers could indeed access PostgreSQL via the Paradox front end. However, in order to see each other’s changes to the database, I had to hit Ctrl-F3, which refreshed from the server.
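
As an aside, once the -i switch and the pg_hba.conf entry are in place, network access can also be sanity-checked from any machine without involving Paradox or ODBC at all. The sketch below assumes the standard PostgreSQL JDBC driver and a hypothetical database name and test table; it wasn’t part of the actual setup, just a quick way to confirm the server accepts TCP connections from the local network:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PgSmokeTest {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            // Hypothetical host, database, and table names; the point is only that
            // a TCP connection from the local network succeeds without a password.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://server:5432/clientdb", "appuser", "");
            try {
                Statement st = conn.createStatement();
                ResultSet rs = st.executeQuery("SELECT count(*) FROM test_table");
                rs.next();
                System.out.println("Rows in test_table: " + rs.getInt(1));
                rs.close();
                st.close();
            } finally {
                conn.close();
            }
        }
    }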

The next step was to move the data over. There are several useful articles about moving databases from other RDBMSes to PostgreSQL here, but I used pxtools to output the data to plain text files. I then spent several days cleansing the data with vi. I:

1. Converted the exported table names, which were in mixed case, to lower case. PG handles mixed-case names, but only with double quotes around them, I believe.
2. Tried to deal with a complication from the database structure. I had designed it with two major tables, which shared a primary key. The client had been editing the primary key, and this created a new row in the database for one of the tables, but not the other. In the end, matching these up became too difficult, and the old data (older than a couple of years) was just written off.
3. Removed some of the unused columns in the database.
4. Added constraints (mostly not null) and foreign key relationships to the tables. While these had existed in the previous application, they weren’t captured in the export; the sketch after this list shows the flavor of statements involved.
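
For illustration only, here is the sort of constraint DDL this boiled down to. I’ve wrapped it in a small JDBC program to keep the sketch self-contained; the table and column names are hypothetical, not the client’s actual schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class AddConstraints {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            // Hypothetical connection details and schema, purely to show the kinds of statements used.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://server:5432/clientdb", "appuser", "");
            try {
                Statement st = conn.createStatement();
                // Reinstate a NOT NULL constraint lost in the export.
                st.executeUpdate("ALTER TABLE customer ALTER COLUMN name SET NOT NULL");
                // Reinstate a foreign key relationship between the two main tables.
                st.executeUpdate("ALTER TABLE customer_detail ADD CONSTRAINT customer_fk "
                        + "FOREIGN KEY (customer_id) REFERENCES customer (customer_id)");
                st.close();
            } finally {
                conn.close();
            }
        }
    }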

Then I changed the data access forms to point to the new database. The first thing I did was copy each of the data access forms, so that the original forms would still work with the original database. Most of the forms were very simple to port—they were just lookup tables. I found the automatic form generator to be very helpful here, as I added a few new lookup tables and this quickly generated the needed update/insert forms.

However, I did have one customized form that caused problems. It did inserts into three different tables. After the database rationalization, it only inserted into two, but that was still an issue. Paradox needed a value for the insert into each table (one because it was a primary key, the other because it was a foreign key). I couldn’t figure out how to have Paradox send the key to both inserts without writing custom code. So, that’s what I did. I added code to insert first into the table for which the value was a primary key, and later to insert the value into the table for which it was a foreign key. It wasn’t a pretty solution, and I think the correct answer was to combine the two tables, but that wasn’t an option due to time and money constraints. I also made heavy use of the self.dataSource technique to keep lists limited to known values.
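
The actual fix lived in ObjectPAL inside the Paradox form, but the pattern itself is simple enough to sketch in any language. Here it is as JDBC, with hypothetical table and column names: insert the value into the table where it is the primary key first, then reuse it in the table where it is a foreign key, with both inserts in one transaction so a failure doesn’t leave an orphaned row:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class SharedKeyInsert {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            // Hypothetical connection details, tables, and columns, purely for illustration.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://server:5432/clientdb", "appuser", "");
            try {
                conn.setAutoCommit(false); // both inserts succeed or neither does

                int customerId = 1001; // the shared key value entered on the form

                // 1. Insert into the table where the value is the primary key.
                PreparedStatement parent = conn.prepareStatement(
                        "INSERT INTO customer (customer_id, name) VALUES (?, ?)");
                parent.setInt(1, customerId);
                parent.setString(2, "Acme Widgets");
                parent.executeUpdate();
                parent.close();

                // 2. Insert into the table where the same value is a foreign key.
                PreparedStatement child = conn.prepareStatement(
                        "INSERT INTO customer_detail (customer_id, notes) VALUES (?, ?)");
                child.setInt(1, customerId);
                child.setString(2, "initial contact");
                child.executeUpdate();
                child.close();

                conn.commit();
            } catch (Exception e) {
                conn.rollback();
                throw e;
            } finally {
                conn.close();
            }
        }
    }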

After moving the forms over, I had to move one or two queries over (mostly query-by-example queries, or QBEs, which generated useful tables), but that was relatively straightforward; this was a helpful article regarding setting up some more complicated QBEs. Also, I found a few good resources here and here.

I also updated a few documents that referenced the old system, and tried to put instructions for using the new system onto the forms that users would use to enter data. I moved the original database to a different directory on the shared drive, and had the client start using the new one. After a bit of adjusting to small user interface issues, as well as the idea that more than one user could use the database, the client was happy with the results.

Customer service automation

Customer service, like everything else, has undergone a revolution in the past two hundred years. In the olden days there was a corner grocer who knew people personally and therefore could render excellent, customized service. Then the large department and grocery stores appeared on the scene. These large corporations could offer goods cheaply, but didn’t know or care who their customers were or what those customers were willing to buy. Now, companies are trying, via software and databases, to recreate the corner store’s insight and knowledge, but in a scalable fashion. Whether it is frequent-shopper cards, coupons, or automated book recommendations, companies are trying to use software to scale their ability to know what the customer wants, and hence give it to them.

This is good for the company, because if companies only try to sell what is wanted, they do not have to spend as much money advertising and maintaining unwanted inventory. In addition, such tactics aren’t as likely to annoy the customer as trying to sell the customer something undesired. This also builds customer loyalty, since the company gives the impression of caring about customers’ perceived needs. This is not a false impression: the company does care about customers’ needs, because satisfying these needs is the only way the company makes money.

This is also good for the customer because it gives them what they want, with minimal fuss. It also makes for cheaper goods in the long run, since companies aren’t spending excessive amounts of money on ill-targeted customers; a single twenty-something with no children has no need for diapers, and sending them coupons for diapers only wastes resources.

However, there is a fly or two in the ointment of customer awareness via large databases. In stark contrast to the grocer, large companies are not the peers of their customers, and this inequality can lead to issues. In addition, the quality of the customer service provided by software, while better than no customer service, isn’t a replacement for human interaction. I am explicitly leaving aside the issues of privacy since they are murky and still being defined.

The corner grocer, whom companies are trying to emulate in service, was a member of the community. If he cheated a customer, word got around. If he was doing anything unethical, his peers and customers could apply neighborly pressure to rectify his behavior. And, most importantly, the knowledge he gained about his customers was counterbalanced by their knowledge of him as a neighbor. Few of these constraints operate on modern, large companies in anywhere near the same fashion. I’m not denying that people can affect the behavior of retailers with words, activism and lawsuits. However, changing the behavior of a large corporation is never going to be as easy as changing the actions of a local shop owner.

In addition to the difference in resources and power between customers and companies, it’s also clear that the quality of service suffers. This isn’t strictly related to the gathering of customer data, but the existence of such data inspires companies to cut costs by automating. In general, I believe that the service provided by any software is inferior to that provided by a real live human being. And by building these databases, companies are being seduced by the siren call of reducing human interactions—if software (a [relatively] fixed cost that scales well) can recommend a good book, who needs an employee (a recurring cost that scales poorly)? I realize that this may sound Luddite, but certainly in the current incarnation, I’ve found such software doesn’t match up well with recommendations from real people.

In short, I believe that more and more companies are customizing and tailoring the customer experience in order to cut costs and build loyalty. But I also feel that there are significant downsides to such tailoring and I’m not sure that it’s worth it.