
Computer Security

Computer security has been on people's minds quite a bit lately. With all the new viruses, worms, and schemes for getting information through firewalls, I can see why. These problems cause downtime, and downtime costs money. I recently had a conversation over a beer with an acquaintance who works for a network security company. He'd given a presentation on security at a local business leaders' conference. Did he talk about the latest and greatest in countermeasures and self-healing networks? Nope. He talked about three things average users can do to make their computers safer:

1. Antivirus software, frequently updated.
2. Firewalls, especially if you have an always-on connection.
3. Windows Update.

Computer security isn't a question of imperviousness–not unless you're a bank or the military. In most cases, making it hard to break in is good enough to stop the automated programs and send the less determined criminals on their way. (This is part of the reason Linux and Mac systems aren't as plagued by viruses–they're less common targets, which makes breaking in just hard enough.) To frame it in car terms: keep your CDs under your seat. If someone wants in badly enough, they'll get in, but the average crook is going to find another mark.

What it comes down to, really, is that users need to take responsibility for security too. Just as with automobiles, where alert, aware, and sober drivers combine with seat belts, air bags, and anti-lock brakes to make driving safe, you can't expect technology alone to solve the problem of computer security. After all, as Mike points out, social engineering is a huge security problem, and that's something no program can deal with.

I think science and technology have solved so many problems for modern society that it's a knee-jerk reaction nowadays to look to them for solutions, even when they're not appropriate (the V-chip, the DMCA, Olean), rather than trying to change human behavior.

Update (May 10):

I just can’t resist linking to The Tragedy of the Commons, which does a much more eloquent job of describing what I attempted to delineate above:

“An implicit and almost universal assumption of discussions published in professional and semipopular scientific journals is that the problem under discussion has a technical solution. A technical solution may be defined as one that requires a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality.

In our day (though not in earlier times) technical solutions are always welcome. Because of previous failures in prophecy, it takes courage to assert that a desired technical solution is not possible.”

Will RSS clog the web?

I'm in favor of promoting the use of RSS in many aspects of information management. However, a recent Wired article asks: will RSS clog the web? I'm not much worried. Why?

1. High-traffic sites like Slashdot are already protecting themselves. While testing my RSS aggregator, I hit Slashdot's RSS feed several times in a minute. I was surprised to get back a message to the effect of 'You've hit Slashdot too many times in the last day. Please refrain from hitting the site more than once an hour' (not the exact wording, and I can't seem to get the error message now). It makes perfect sense for them to throttle requests from programs–they aren't getting the same ad revenue from RSS readers. (There's a sketch of this kind of throttling just after this list.)

2. The Wired article makes reference to “many bloggers” who put most of their entries’ content in their RSS feed, which “allow[s] users to read … entries in whole without visiting” the original site. This is a bit of a straw man. If you’re having bandwidth issues because of automated requests, decrease the size of the file that’s being requested by not putting every entry into your RSS feed.

3. The article also mentions polling frequency–30 minutes or less. I too used to poll at roughly that frequency–every hour, on the 44-minute mark. Then it struck me: I usually read my feeds once, maybe twice, a day, and I rarely read anything between midnight and 8am. So I tweaked my aggregator to check for new entries every three hours between 8am and midnight (see the scheduling sketch below). There's no reason to do otherwise with the news stories and blog entries that make up most of the current RSS content. Now, if you're using RSS to get stock prices, you'll probably want more frequent updates. Hopefully your aggregator allows different update frequencies for different feeds; NewsGator 1.1 does.
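On the first point, feed throttling doesn't require anything exotic. Here's a minimal sketch in Java (using the Servlet API) of a per-client limit along the lines of what Slashdot seems to do. This is just my guess at the general idea, not their implementation: the one-hour limit, the class name, and the wording of the message are all made up.

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.*;
import javax.servlet.http.*;

// Refuses RSS requests from clients that fetched the feed less than an hour ago.
public class FeedThrottleFilter implements Filter {
    private static final long MIN_INTERVAL_MS = 60L * 60L * 1000L; // one hour
    private final Map<String, Long> lastHit = new ConcurrentHashMap<String, Long>();

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String client = request.getRemoteAddr();
        long now = System.currentTimeMillis();
        Long previous = lastHit.get(client);

        if (previous != null && now - previous < MIN_INTERVAL_MS) {
            // Too soon: send a polite refusal instead of the full feed.
            response.setHeader("Retry-After", String.valueOf(MIN_INTERVAL_MS / 1000));
            response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
                    "Please don't fetch this feed more than once an hour.");
            return;
        }

        lastHit.put(client, now);
        chain.doFilter(req, res); // serve the feed normally
    }

    public void destroy() {}
}
```

Map a filter like this onto the feed URL alone and the rest of the site is unaffected; aggregators that honor the Retry-After header can back off on their own.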
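And on the third point, the schedule I described is easy to express. Here's a minimal sketch of it in plain Java: a timer that wakes up every three hours and skips the midnight-to-8am window. The feed URLs are placeholders, and a real aggregator would parse each feed and track which entries are new instead of just reading the bytes.

```java
import java.io.InputStream;
import java.net.URL;
import java.util.Calendar;
import java.util.Timer;
import java.util.TimerTask;

// Polls a list of feeds every three hours, but only between 8am and midnight.
public class PollingSchedule {
    // Placeholder URLs, not real subscriptions.
    private static final String[] FEEDS = {
        "http://example.com/index.rss",
        "http://example.org/rss.xml"
    };
    private static final long THREE_HOURS_MS = 3L * 60L * 60L * 1000L;

    public static void main(String[] args) {
        Timer timer = new Timer(); // non-daemon thread keeps the program running
        timer.schedule(new TimerTask() {
            public void run() {
                int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
                if (hour < 8) {
                    return; // nobody's reading between midnight and 8am
                }
                for (int i = 0; i < FEEDS.length; i++) {
                    try {
                        // Fetch the feed; a real aggregator would parse it here.
                        InputStream in = new URL(FEEDS[i]).openStream();
                        byte[] buf = new byte[4096];
                        while (in.read(buf) != -1) { /* discard; sketch only */ }
                        in.close();
                    } catch (Exception e) {
                        System.err.println("Couldn't fetch " + FEEDS[i] + ": " + e);
                    }
                }
            }
        }, 0, THREE_HOURS_MS);
    }
}
```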

This comes back to the old push vs. pull debate. I like RSS because I don't have to give out my email address (or update it, or deal with unwanted newsletters in my inbox) and because it lets me automatically keep track of what people are saying. I think there's definitely room for abuse with RSS spiders, just like with any other automated system; after all, "a computer lets you make more mistakes faster than any invention in human history–with the possible exceptions of hand guns and tequila." I don't think RSS will clog the web–it's just going through some growing pains.

WAP vs J2ME

When I gave my talk about J2ME to BJUG a few weeks ago, one of the points I tried to address was 'Why use J2ME rather than WAP?' This is a crucial question, because WAP is more widely deployed. I believe J2ME offers a better user interface, generates less network traffic, and opens up possibilities for extending the application that just don't exist in WAP. (Though, to be fair, Michael Yuan makes a good point regarding issues with the optional packages standards process.)
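To make the network-traffic point concrete, here's a minimal MIDP 1.0 sketch (not our application; the URL, class name, and payload are all invented) of a client fetching one small piece of data over HTTP and building the screen locally. The wire carries only the data itself, not a page of markup for every screen the user sees.

```java
import java.io.InputStream;

import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

// Fetches a tiny payload and renders it with local UI components.
public class QuoteMIDlet extends MIDlet {
    // Placeholder URL; a real deployment would point at your own server.
    private static final String URL = "http://example.com/quote?symbol=ACME";

    protected void startApp() {
        // A real MIDlet would do the network work on a separate thread, not in startApp().
        Form form = new Form("Quote");
        try {
            HttpConnection conn = (HttpConnection) Connector.open(URL);
            InputStream in = conn.openInputStream();
            StringBuffer data = new StringBuffer();
            int ch;
            while ((ch = in.read()) != -1) {
                data.append((char) ch); // tiny payload: just the value itself
            }
            in.close();
            conn.close();
            form.append(data.toString()); // UI is built on the device, not downloaded
        } catch (Exception e) {
            form.append("Error: " + e.getMessage());
        }
        Display.getDisplay(this).setCurrent(form);
    }

    protected void pauseApp() {}

    protected void destroyApp(boolean unconditional) {}
}
```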

I defended the choice of MIDP 1.0 because we needed wide coverage and don't do many complicated things with the data, but WAP is much more widely supported than J2ME by almost any measure. Unless you have an archaic phone like my Nokia 6160, chances are your phone has a WAP browser. And WAP 2.0 supports images and XHTML, giving the application almost everything it needs without forcing you to learn an entirely new markup language like WML.

So we've decided to support XHTML, and thus the vast majority of existing clients (one reason being that Verizon doesn't support J2ME–at all). I've gotten a quick education in WAP development recently, and I found a quote that sums it up:

“As you can see, this is what Web programmers were doing back in 1994. The form renders effectively the same on the Openwave Browser as it does on a traditional web browser, albeit with more scrolling.”

This quote is from Openwave, a company that specializes in mobile development, so I reckon they know what they're talking about. A few comments:

1. WAP browsers are where the web was in 1994. (I was going to put in a link to a page from 1994, courtesy of the Wayback Machine, but it only goes back to 1996.) I don't know about you, but I don't really want to go back! I like Flash, DHTML and onClick, even though they can be used for some truly annoying purposes.

2. "…albeit with more scrolling" reinforces, to me, the idea that presenting information on a screen of 100×100 pixels is a fundamentally different proposition from a screen where you can expect, at a minimum, 640×480. (And who codes for that anymore?) On the desktop, you have roughly 30 times as much screen real estate, plus a relatively rich language for manipulating the interface on the client. It's no surprise that I'm frustrated when I browse with WAP, since I'm used to browsing in far superior environments.

3. Just like with traditional browsers, every time you want to do something complicated, you have to go back to the server. That's the case with XHTML, at least (not with WML, I believe, though WML has its own issues, like supporting only bitmap images). Round trips aren't so bad when you're dealing with fat pipes, but mobile networks are slow.

4. Fitting in with the carrier is an issue with WAP. Since the carrier provides the browser, you have no control over some important things. For example, one carrier we're investigating requires you to navigate through pages and pages of carrier-imposed links before you can get to your own bookmarks. It's the whole gated-community mindset, and since the UI sucks, it's harder to get around than it would be with Firefox.

In short, use WAP 2.0 if you must, but think seriously about richer clients (J2ME, BREW, or even the .NET Compact Framework). Even though they'll be harder to implement and roll out, such clients will be easier to use, and thus more likely to become a part of your customers' lives.