Concurrency, object orientation, and getting software done

The Free Lunch Is Over, via Random Thoughts, is a fascinating look at where CPUs are headed and what effect that has on software development. The subtitle, “The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency,” drives home the author’s belief that concurrency will be the next big thing in software development.

I was struggling to write a relevant post about this topic, because I feel like, at least in the companies I’ve been with, there just wasn’t that much object oriented software being written. I’m working on a project right now that has a minimum of object orientation, even though it is written in Java. I’m definitely more familiar with small scale projects and web applications, but I know there are plenty of applications out there that are written and working well without the benefits of objects.

Or, should I say, that are written and working well without the benefits of objects directly. Servers, operating systems and general purpose platforms are a different beast and require a different skill set. And by building on top of such platforms, normal programmers don’t have to understand the intricacies of object oriented development–they can benefit without that investment. Of course, they’d probably benefit more if they understood things, and there may come a time in their development when they’ll have to. However, the short-term gain of being able to continue on their productive plateau may be worth postponing the learning process (which will take them to a higher plateau at a short-term cost).

In the same way, I think that multi-threading won’t be required of normal business developers. I was struggling with this until the latest NTK came out, with this to say:

CPUs aren’t getting faster! Multi-core is the future! Which means we’ll all need to learn concurrent, multi-threaded programming, or else our software is never going to get faster again! That’s what Herb Sutter’s future shock article in Dr. Dobbs says (below). But before you start re-learning APL, here’s a daring thought: maybe programmers are just too *stupid* to write multi-threaded software (not you of course: that guy behind you). And maybe instead we’ll see more *background* processes springing up – filling our spare CPUs with their own weird, low i/o calculations. Guessing wildly, we think background – or remote – processes are going to be the new foreground.

From the Jan 21 edition, which should be online in a day or so. Those Brits certainly have a way with words.

If you’re a typical programmer, let the brilliant programmers who are responsible for operating systems, virtual machines and application servers figure out how best to use the new speed of concurrent processor execution, and focus on process, on understanding business needs, and on making sure those needs are met by your software. Or, if you have a need for speed, look at precalculation rather than multi-threading.

Expresso and dbobjects and ampersands

If you’re ever pulling a url from a database via an Expresso dbobject (Expresso’s O-R layer) and you find yourself with mysterious &amp; sequences being inserted, you may want to visit this thread and the FilterManager javadoc. Long story short, add this line:

setStringFilter("fieldname", FilterManager.RAW_FILTER);

to any fields of the dbobject that you don’t want ‘made safe’ by the default filter (which screens out dangerous HTML characters). Tested on Expresso 5.5.

(I’m omitting the rant about changing data pulled from the database without making it loud and clear that default behavior is to filter certain characters. But it’s a Bad Idea.)

PL/SQL redux

I’ve written about PL/SQL before but recently have spent a significant amount of time writing stored procedures. Unlike some of my previous experiences, this time PL/SQL seemed like a great fit for the problem set, which was twofold.

In the first case, some of the stored procedures push data from stage tables, which are loaded via ODBC or SQL*Loader, into tables which the application accesses. PL/SQL is great for this type of task because cursors, especially when used with parameters, make row-driven data transformations a pleasure, and fast as well. Handling deltas via updates instead of inserts was alright, and PL/SQL code that manipulates data can be positively terse compared to JDBC PreparedStatements, and at least as fast. In addition, these stored procedures can easily be called over an ODBC connection, giving the client the capability to load new data to the stage tables and then call the stored procedure to update or insert the data as needed. (You could definitely do the same thing with a servlet and have the client hit a URL, but that’s a bit less self-contained.)
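
As a sketch of the pattern (all table, column, and procedure names here are invented for illustration), a parameterized cursor can drive the update-or-insert logic row by row:

CREATE OR REPLACE PROCEDURE push_stage_items (p_batch_id IN NUMBER) IS
   -- parameterized cursor over the (hypothetical) stage table
   CURSOR c_stage (p_batch NUMBER) IS
      SELECT item_id, item_name, price
        FROM stage_items
       WHERE batch_id = p_batch;
BEGIN
   FOR rec IN c_stage(p_batch_id) LOOP
      -- handle deltas: try an update first, insert only if the row is new
      UPDATE items
         SET item_name = rec.item_name,
             price     = rec.price
       WHERE item_id   = rec.item_id;
      IF SQL%ROWCOUNT = 0 THEN
         INSERT INTO items (item_id, item_name, price)
         VALUES (rec.item_id, rec.item_name, rec.price);
      END IF;
   END LOOP;
   COMMIT;
END;
/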

PL/SQL was also used to implement complex logic that was likely to change. Why do that in PL/SQL in the database rather than in java in the application server? Well, changes to PL/SQL programs don’t require a server restart, which can be quite an issue when a server needs high levels of uptime. Instead, you just recompile the PL/SQL. Sure, you can use the reloadable attribute of the context to achieve the same thing (if you’re using Tomcat) but recompiling PL/SQL doesn’t have the same performance hit as monitoring class files for changes.
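
For example, redeploying a changed procedure from SQL*Plus is quick (the script and procedure names here are hypothetical):

-- rerun the updated source
@update_prices.sql
-- or recompile a procedure that a dependency change has invalidated
ALTER PROCEDURE update_prices COMPILE;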

Use the right tool for the job. Even if PL/SQL ties your application to Oracle, a judicious use of this language can have significant benefits.

Under Pressure

In almost every software project of any length that I’ve participated in, the last few weeks before a release are tense and pressure-filled. (Please note that I write custom business software; that’s what these conclusions are drawn from.) Being in the middle of a project release myself, I thought I’d muse on the causes of this pressure. Why are the last few weeks before the deadline so tense? Because software is, above all else, about the details. Joel puts it well in his interview with Salon.com:

The fundamental problem that you’re trying to solve here is that humans think of things in vague, mushy terms. In order to visualize something, they don’t have to actually visualize every part of it. Whereas the programmer, in order to actually implement that thing, to create it, needs to have every part specified.

What happens on projects over a certain level of complexity is that this specification is pushed off, often until a decision must be made, or even past that point. This occurs for a number of reasons: programmers want to start coding; the client doesn’t have the information at the moment the issue is raised, and it is never revisited; the answers to certain questions (or the questions themselves) are dependent on answers to other questions. In the beginning of a project, the big questions are decided, but the small niggling details, which the compiler most certainly needs to know about, are perhaps noted, but not dealt with.

Why not specify how the system will work before building any of it, down to every exacting detail? Some software processes try to do this, but in general, unless the problem is very well understood (in which case the client will almost always be better served by off-the-shelf software), the requirements will change as the project progresses. (Incidentally, if they don’t, the project is a great candidate for offshoring.) The client will better understand the problem and technology, and the software team will likewise better understand the problem and domain space. So specifying the entire system up front will likely leave the customers unhappy or the system unused.

Because business software is actually business process crystallization, it matters very much that things are correct. And because business software is implemented by a group of people with specialized skills and, at best, a different focus from the users (at worst, no understanding of the business), software delivery is unlike other deadline-driven industries: changes are expensive and mysterious. I think every software engineer has an example of a simple change request that turned out to have massive implications throughout the system, an effect that is mysterious to normal users.

What matters is not why the details crop up, but that they do. So the last few weeks of every project consist of mentally running around and nailing down every detail. I expect this is true of every job with fixed deadlines (ever been around a retail store the day before Thanksgiving?). Every issue should be resolved or acknowledged when the software is released, and while some facets are less important than others, no detail is unimportant.

Options for connecting Tomcat and Apache

Many of the java web applications I’ve worked on run in the Tomcat servlet engine, fronted by an Apache web server. Valid reasons for wanting to run Apache in front of Tomcat are numerous and include increased clickstream statistics, Apache’s ability to quickly and efficiently serve static content such as images, the ability to host other dynamic solutions like mod_perl and PHP, and Apache’s support for SSL certificates. This last is especially important–any site with sensitive data (credit card information, for example) will usually have that data encrypted in transit, and SSL is the default manner in which to do so.

There are a number of different ways to deal with the Tomcat-Apache connection, in light of the concerns mentioned above:

Don’t deal with the connection at all. Run Tomcat alone, responding on the typical http and https ports. This has some benefits; configuration is simpler and fewer software interfaces tend to mean fewer bugs. However, while the documentation on setting up Tomcat to respond to SSL traffic is adequate, Apache handling SSL is, in my experience, far more common. For better or worse, Apache is seen as faster, especially when confronted with numeric challenges like encryption. Also, as of Jan 2005, Apache serves 70% of websites while Tomcat does not serve an appreciable amount of http traffic. If you’re willing to pay, Netcraft has an SSL survey which might better illuminate the differences in SSL servers.
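
For what it’s worth, standalone SSL in Tomcat 5.x mostly amounts to enabling the https connector in conf/server.xml; here’s a sketch, with the keystore path and password as placeholders:

<!-- https connector in conf/server.xml; keystore values are placeholders -->
<Connector port="8443" maxThreads="150"
           scheme="https" secure="true" clientAuth="false"
           sslProtocol="TLS"
           keystoreFile="conf/keystore" keystorePass="changeit"/>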

If, on the other hand, you choose to run some version of the Apache/Tomcat architecture, there are a few different options. mod_proxy, mod_proxy with mod_rewrite, and mod_jk all give you a way to manage the Tomcat-Apache connection.

mod_proxy, as its name suggests, proxies http traffic back and forth between Apache and Tomcat. It’s easy to install, set up and understand. However, if you use this method, Apache will decrypt all SSL data and proxy it over http to Tomcat. (There may be a way to proxy SSL traffic to a different Tomcat port using mod_proxy–if so, I was unable to find the method.) That’s fine if they’re both running on the same box or in the same DMZ, the typical scenario. A byproduct of this method is that Tomcat has no means of knowing whether a particular request came in via secure or insecure means. If you’re using a tool like the Struts SSL Extension, this can be an issue, since Tomcat needs that information to decide whether redirection is required. In addition, if any of the dynamic generation in Tomcat creates absolute links, issues may arise: Tomcat receives requests for localhost or some other hidden hostname (via request.getServerName()) rather than the public hostname that Apache proxied, and may generate incorrect links.

Updated 1/16: You can pass through secure connections by placing the proxy directives in certain virtual hosts:

<VirtualHost _default_:80>
# plain http: proxy straight through to Tomcat's http connector
ProxyPass /tomcatapp http://localhost:8000/tomcatapp
ProxyPassReverse /tomcatapp http://localhost:8000/tomcatapp
</VirtualHost>

<VirtualHost _default_:443>
# https: let Apache proxy over SSL to Tomcat's secure port
SSLProxyEngine On
ProxyPass /tomcatapp https://localhost:8443/tomcatapp
ProxyPassReverse /tomcatapp https://localhost:8443/tomcatapp
</VirtualHost>

This doesn’t, however, address the getServerName issue.

Updated 1/17:

Looks like the Tomcat Proxy Howto can help you deal with the getServerName issue as well.

Another option is to run mod_proxy with mod_rewrite. Especially if the secure and insecure parts of the dynamic application are easily separable (for example, if the application was split into /secure/ and /normal/ chunks), mod_rewrite can be used to rewrite the links. If a user visits this URL: https://www.example.com/application/secure and traverses a link to /application/normal, mod_rewrite can send them to http://www.example.com/application/normal/, thus sparing the server the strain of serving pages needlessly encrypted.
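
For the example above, the rule might look something like this sketch (it belongs in the :443 virtual host, and the paths follow the hypothetical /secure//normal/ split):

RewriteEngine On
# send requests for the non-secure chunk back over plain http
RewriteRule ^/application/normal/(.*)$ http://www.example.com/application/normal/$1 [R,L]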

mod_jk is the usual way to connect Apache and Tomcat. In this case, Tomcat listens on a different port and a piece of software known as a connector enables Apache to send requests to Tomcat with more information than is possible with a simple proxy. For instance, certain variables are sent via the connector when Apache receives an SSL request. This gives Tomcat full knowledge of the state of the request, and makes using a tool like the aforementioned Struts SSL Extension possible. The documentation is good. However, using mod_jk is not always the best choice; I’ve seen performance issues with some versions of the software. You almost always have to build it yourself: binary releases of mod_jk are few and far between, I’ve rarely found the appropriate version for my version of Apache, and building mod_jk is confusing. (Even though mod_jk 1.2.8 provides an ant script, I ended up using the old ‘configure/make/make install’ process because I couldn’t make the ant script work.)
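
For reference, here’s a minimal mod_jk configuration sketch; the mount path is hypothetical, and 8009 is Tomcat’s default AJP connector port. In httpd.conf:

LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
# hand everything under /tomcatapp to the ajp13 worker
JkMount /tomcatapp/* ajp13

And in conf/workers.properties:

worker.list=ajp13
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009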

In short, there are plenty of options for connecting Tomcat and Apache. In general, I’d start out using mod_jk, simply because that’s the option that was built specifically to connect the two; mod_proxy doesn’t provide quite the same level of integration.

sqlldr

I’ve been writing SQL*Loader scripts to load a fair bit of data into Oracle. I have a set of load tables with minimal constraints on them, into which SQL*Loader pushes the rows. Then I have written some PL/SQL which pulls from the load tables to the real database.

This architecture was chosen because the PL/SQL procedures can be written to allow incremental as well as full data loads. In the incremental case, it’s conceivable that there’d be a different way of pushing data over to the load tables (via ODBC or JMS, for example). In addition, the load tables can be denormalized, and you can put enough intelligence in the PL/SQL to turn your data structures into something at which a DBA won’t cringe.

Anyway, I thought I’d share a few tips, gleaned through the process. I’m definitely no SQL*Loader guru, but here are some useful links: the sqlldr FAQ, full of good information and recently updated; the Oracle Utilities page, which does a great job of explaining all the options of SQL*Loader; and this case study, which outlines internationalization with sqlldr. All very useful.

Two other tips: If you are loading delimited character data that is longer than 255 characters, you need to specify the length in your control file (declaring the field as char(4000), for example), or else you’ll get an aggravating error message warning that the data you’re loading is longer than the column in which you’re trying to load it. I spent some time looking very carefully at the load table trying to see what I was missing before I googled and found out that char fields do have default sizes in sqlldr control files.
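
For instance, a control file fragment along these lines (table and column names invented) overrides the 255 character default:

LOAD DATA
INFILE 'notes.dat'
INTO TABLE stage_notes
FIELDS TERMINATED BY ','
(
  note_id,
  note_text CHAR(4000)
)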

And the bindsize and rows parameters are related, in terms of the amount of data that sqlldr can push into a table before it commits. You can make rows very, very big, but if bindsize is too small (it defaults to 64k, apparently) the commits will happen sooner than they need to. For more explanation and other performance tips, see this page.
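
Both are command line parameters; an invocation like this one (values purely illustrative) makes the bind array big enough that rows, not bindsize, determines the commit interval:

sqlldr userid=scott/tiger control=load_notes.ctl log=load_notes.log bindsize=20000000 rows=5000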

Overall, I’ve been very happy with how easy it is to load a fair bit of data, quickly (both in terms of load time and in development time) using sqlldr.

javascript and checkboxes

Ran into an interesting problem while I was using javascript today. I had a (dynamically generated) group of checkboxes that I wanted to be able to check and uncheck as a group. This was the code I had originally, which I had cribbed from one of the many fine javascript sites on the web:

function checkAll(field) {
   for (i = 0; i < field.length; i++) field[i].checked = true ;
}

This method was called by a link like this:

<a href="javascript:checkAll(document.form.checkboxes);">Check All</a>

All well and good, as long as the field that is passed into the function is an array of checkboxes. However, since javascript is loosely typed, you can reference any property of any object, and depending on how egregious the error is, the user might never see an error message. In this case, when the dynamically generated group of checkboxes has only one element, document.form.checkboxes is not an array of checkboxes, and its length attribute is undefined. The for loop is not executed at all, and the box is never checked.

The solution is simple enough; just check the type of the object passed in:

function checkAll(field) {
   if (field.type != 'checkbox')
      for (var i = 0; i < field.length; i++) field[i].checked = true;
   else
      field.checked = true;
}

It makes a bit of sense that one checkbox wouldn’t be an array of size one, but the switch caught me a bit off guard. I’m trying to think of an analogous situation in the other dynamic languages I’ve used, but in most cases, you’re either controlling both the calling and receiving code, or, in the case of libraries, the API is published. Perhaps this behavior of the javascript API is documented somewhere–a quick google did not turn anything up for me.