
Passwords and authentication

Passwords are omnipresent, but just don’t work the way they should. A password should be a private string that only a user could know. It should be easy to remember, but at the same time hard to guess. It should be changed regularly, and only passed over a secure connection (SSL, ssh). At least, that’s what the password policies I’ve seen say. People, however, get in the way.

I have a friend who always has the same password: ‘lemmein’. She is non-technical. Whenever she tries to sign in to a system, she has invariably forgotten her password. She tries different incarnations, and eventually becomes so frustrated, she just types ‘lemmein’ and, voila, she is logged in.

I have another friend who is a computer security professional (or was). He has the same issue with forgotten passwords, but rather than have one insecure password, he keeps all his passwords in a file on a machine that he controls, protected by one master password. In this way, he only has to remember the one password, yet machines aren’t at risk.

I sympathize with both my friends: off the top of my head, I can easily think of ten different passwords that I currently use for various systems and applications. In fact, the growth of web applications (since the address bar is the new command line) has exploded the number of passwords I have to remember.

I’m not as blasé about security as my first friend, nor as organized as my second, so I just rely on my memory. That works, sometimes. For a site I seldom visit, I won’t even bother to try to remember the password; I’ll just use the ‘mail me my password’ functionality that most such sites have.

Sometimes, password changes are imposed on you. I’ve been at places where your password had to be changed every three weeks and had to differ from your previous three passwords. I was only there for a short period of time, but I’m sure some folks there were cycling passwords (‘oh, it’s one of these four, I know it’).

On the other hand, I worked at a place for three years; I had access to a number of web servers, often with sudo, yet I changed my passwords twice. It was just such a tremendous hassle to bring all my passwords into sync. (Yes, yes, we should have had an LDAP server responsible for all those passwords; that would have made changing them easier. There are some technical solutions that can ease password pain, at least within one organization.)

Passwords are even used in the ‘real world’ now. Leaving aside the obvious example of ATM PINs, my bank won’t let me do anything serious to my account over the phone unless I know my password.

Passwords do have tremendous advantages. They let me authenticate myself without being physically present. They’re easy to carry with you. Computers don’t need special hardware or software to authenticate a user via a password. Everyone understands the concept. But passwords are really the least of the evils when it comes to authenticating remote users (or entities). They’re easy to pass around, or steal, since they aren’t physical. And passwords are either easy to forget or easy to crack.

I guess my solution has been to break my passwords into levels. For simple things like logging into web applications, I have one or two very easy to remember passwords, or I use the ‘mail me my password’ functionality mentioned above. For more sensitive accounts that I use regularly (computer logins where I’m an administrator of some kind, my email, web applications where my credit card details are viewable), I’ll have a more complicated password, which may or may not be shared among similar systems. And for systems where I need a good password but don’t use it regularly, I’ll write it down and store it in a safe place.

Passwords are certainly better than using SSN, zip code, or some other arbitrary single token that could be stolen. But they certainly aren’t the optimal solution. I actually used a userid/biometric solution at a client’s office (for the office door) and it rejected me a very small percentage of the time. The overhead to add me to the system was apparently fairly substantial, since it took weeks for this to happen. For situations where the hardware is available and deployed, biometric solutions seem like a good fit.

No one, however, is going to add finger/eye/palm scanners to every machine that I want to access, to say nothing of various interesting remote applications (I want my travelocity!). Some scheme where you log in to a single computer that then generates a certificate uniquely identifying you (something like xauth) may be the best type of solution for general purpose non-physical authentication. But, as a software guy, my mind boggles at the infrastructure needed to support such a solution.
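To make the idea a bit more concrete: the core trick in any key- or certificate-based scheme is proving you hold a secret without ever transmitting it. Here’s a toy sketch in Java of a challenge-response exchange; this is my own illustration, not any particular product’s protocol, and it waves away all the hard key-distribution parts:

import java.security.*;

public class ChallengeResponseDemo {
    public static void main(String[] args) throws Exception {
        // The client generates a key pair once; the public key gets
        // registered with the server, the private key never leaves home.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // The server sends a random challenge (a nonce).
        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge);

        // The client signs the challenge with its private key.
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(challenge);
        byte[] signature = signer.sign();

        // The server verifies with the registered public key. No shared
        // secret crosses the wire, so there's nothing to overhear or forget.
        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(challenge);
        System.out.println("Authenticated: " + verifier.verify(signature));
    }
}

Of course, generating the keys is the easy part; registering, distributing and revoking them is exactly the infrastructure that boggles. Looks like passwords are here to stay for a while.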

Slackware to the rescue

I bought a new Windows laptop about nine months ago, to replace the linux desktop I purchased in 2000. Yesterday, I needed to check whether I had a file or two on the old desktop, but I hadn’t logged in for eight months and had no idea what my password was. Now, I should have had a root/boot disk set, even though floppy disks are going the way of cursive. But I didn’t. Instead, I had the Slackware installation disks from my first venture into linux: an IBM PS/2 with 60 megs of hard drive space, in 1997. I was able to use those disks to load a working, if spartan, linux system into RAM. Then, I mounted the boot partition and used sed (vi being unavailable) to edit the shadow file:

# blank out root's password hash so root can log in with no password
sed 's/root:[^:]*:/root::/' shadow > shadow.new
mv shadow.new shadow

Unmount the partition, reboot, pop the floppy out, and I’m in to find that pesky file. As far as I know, those Slackware install disks are the oldest bit of software I own that is still useful.

New approach to comment spam

Well, after ignoring my blog for a week, and dealing with 100+ comment spams, I’m taking a new tack. I’m not going to rename my comments.cgi script anymore, as that seems to have become less effective.

Instead, I’m closing all comments on any older entry that doesn’t have at least 2 comments. When I go through and delete any comment spam, I just close the entry. This seems to have worked, as I’ve dealt with 2-3 comment spams in the last week, rather than 10+.

I’ve also considered writing a bit of perl to browse through Movable Type’s DBM database to ease the removal of ‘tramadol’ entries (rather than clicking my way to carpal tunnel). We’ll see.

(I don’t even know what’s involved in using MT-Blacklist. Not sure if the return would be worth the effort for my single blog installation.)

Back to google

So, the fundamental browser feature I use the most is this set of keystrokes:
* ctrl-T to open a new tab
* g search term to search for “search term”
(I set up ‘g’ as a keyword bookmark that expands to point at a search engine.)

Periodically, I’ll hear of a new search engine–a Google killer. And I’ll switch my bookmark so that ‘g’ points to the new search engine. I’ve tried AltaVista, Teoma and, lately, IceRocket. Yet I always return to Google. The others have some nice features–IceRocket shows you images of the pages–and their search results are similar enough. What keeps me coming back to Google is how quickly it delivers results. I guess my attention span has just plain withered.

Anyone else have a Google killer I should try?

Freecycling, couchsurfing and easy information transfer

One of the most amazing things about the internet is the manner in which it decreases the costs of information exchange. The focus of this decreased cost is often the business world, because that’s where the money is. However, I’m fascinated by the other forums for information exchange that simply wouldn’t exist without extremely cheap publishing and distribution of information. In the past, I’ve taken a look at the web and government budgets, but I recently came across two other activities that I feel are impressive, and exhibit just what the web can provide: freecycling and couchsurfing.

In the past, when I had something of negligible value that I didn’t want anymore (an excess of garden crops, for example), I had a few options for getting rid of it. In decreasing order of personal preference:

1. Foist it on a friend or family member.

2. Put it on the street with a ‘free’ sign.

3. Give it to Goodwill/Salvation Army.

4. Save it and have a garage sale when I had enough items of negligible value.

5. Give it to a thrift store.

6. Throw it away.

Well, now the internet gives me another option: post to a freecycle email list. There are thousands of these groups. I joined the Boulder list, and it has a simple rule: no trading, just giving. 876 people are subscribed to this list. Freecycling is similar in nature to option #2, except many more people will probably find out about your surplus rutabagas via an email than will drive by your house before they turn into a rotting mess (less effort, too–you can send emails from the comfort of your computer chair, as opposed to hauling produce to the curb). In addition to helping you get rid of stuff, these lists also let you accumulate more crap, easily, and without requiring new production. (I don’t know, there may have been freecycle newsletters circulating around yoga studios and health food stores before email took off. Again, the sheer number of people, who by self-selection are interested in giving and getting new stuff, and the ease of posting and receiving the information, means that email freecycling is a better way.)

Speaking of free stuff, a few years ago I was bumming around down under, and ended up staying with a friend of a friend of a friend of a friend. The free place to stay was sweet, but so was the local knowledge and a friendly face in a strange land. Upon returning to the USA, I decided it’d be great to build a website dedicated to these concepts. Friendster and the other social networking sites give you some of the needed functionality (who’s connected to me, where do they live) but not all of it (search by locality, meet random people). I wanted to call the website ‘findalocal’ and even threw together a PHP prototype before I got sucked into other projects. Well, I was browsing Wired a few days ago, and came upon Couchsurfing.com, a site which does almost exactly what I want, has been around since 1999, and is much more professionally done than what I would have whipped up. The basic premise is that you offer up your local knowledge to anyone who is a member. You can also offer up other services, not least a place to crash for a few nights. For more info, check out the couchsurfing FAQ. Again, this is a service that would have a hard time existing without the easy dissemination of information provided by the web.

In short, I think that, although a lot of excitement revolves around the portions of the internet where you can make gobs and gobs of money, plenty of interesting stuff is going on with no money involved. In fact, the ease of information transfer is even more important when there is no explicit economic value. Invoices are going to be sent to suppliers whether via carrier pigeon or extranet, but giving away my old bicycle has to be nearly as easy as trashing it, or, nine times out of ten, I’ll just throw it away. And couch surfing is even more dependent on free information exchange, due to the geographically dispersed nature of the activity.

Book Review: Dancing with Cats

If you have a chance to read Dancing With Cats by Burton Silver and Heather Busch, don’t bother. Do, however, pick it up and glance through the photos. For it’s in the pictures of cats and humans cavorting, almost impossibly resonant images, that this book shines. (Visit the Museum of Non Primate Art for more.) The text is a bit much, using words like ‘aura’ and ‘negative energy’, and apparently meaning them. But if you like cats and have a sense of the absurd, oh, the pictures–check it out on Amazon.com. I chuckled and chortled through the entire book.

“Dancing With Cats” on Amazon.

OJB and object caching, pt II

Well, I was wrong when I posted that OJB rc4 didn’t support caching. Because of the way the application is architected, there are two places where we grab data from the database. I know, I know, don’t repeat yourself. But when you’re using free JAAS modules and free O/R mapping tools, you can’t be too picky.

The upshot is, when I actually look at the SQL statements for a typical two-user session, I see 21 instances of a certain select statement when caching with the org.apache.ojb.broker.cache.ObjectCacheEmptyImpl class, and only 6 when performing exactly the same user actions with the org.apache.ojb.broker.cache.ObjectCacheDefaultImpl class. Don’t ask me why it’s not a 2 to 1 ratio; I’m looking into it. (Deep are the ways of object caching.)
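For the curious, the comparison boils down to issuing the same lookup twice and watching the SQL log. Here’s roughly what that looks like against the OJB PersistenceBroker API; ‘Account’ and its ‘id’ field are hypothetical stand-ins for one of our mapped classes, and which cache implementation you get is chosen in OJB’s configuration, not in code:

import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;
import org.apache.ojb.broker.query.Criteria;
import org.apache.ojb.broker.query.Query;
import org.apache.ojb.broker.query.QueryFactory;

public class CacheCheck {
    public static void main(String[] args) {
        PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        try {
            Criteria crit = new Criteria();
            crit.addEqualTo("id", new Integer(42)); // hypothetical mapped field

            // The first lookup always goes to the database.
            Query query = QueryFactory.newQuery(Account.class, crit);
            Account first = (Account) broker.getObjectByQuery(query);

            // With ObjectCacheDefaultImpl, this second, identical lookup is
            // served from the cache: no select in the SQL log. With
            // ObjectCacheEmptyImpl, you see another select go out.
            Account second = (Account) broker.getObjectByQuery(query);
        } finally {
            broker.close();
        }
    }
}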

Wireheads and depression

I re-read the first two books of the Ringworld series a few months ago. Great science fiction–cool technology, interesting aliens, cardboard characters, decent plot. In the second one, The Ringworld Engineers, one of the main characters is addicted to electricity. Seriously–he has a device that directly stimulates the pleasure center of the brain. Niven calls this ‘addicted to the wire.’

Recently, however, I ran across this article: Shocking Treatment for Depression. Sure, sure, it will require a prescription, and it only helps “lift [your] mood.” It’s only for folks who are depressed. And Viagra is only for older men with erection problems.

Book Review: Deadly Feasts

Deadly Feasts: Tracking the Secrets of a Terrifying New Plague, by Richard Rhodes, is one scary book. It tracks the discovery of prions, the misshapen proteins responsible for mad cow disease, scrapie, and Creutzfeldt-Jakob disease. Following human cannibals in the jungles of New Guinea in the fifties, bovine cannibals of the British Isles in the eighties, and the bizarre history of sheep scrapie from the 17th century on, Rhodes does a great job of presenting the history and discovery of this bizarre group of diseases. I especially enjoyed the characterizations of the scientists, from the Nobel laureate who so enjoyed New Guinea that he often regretted rejoining civilization, yet brought thirty natives back to the USA and helped them through school, to the hyper-competitive scientist who named the molecules even though he wasn’t quite certain what they were.

But this isn’t just a story of scientific discovery. As the foreboding subtitle blares, Rhodes explores some of the scarier aspects of prions: spontaneous formation, responsible for the known early cases of Creutzfeldt-Jakob disease; trans-species infection, including mad cow disease and scrapie; the extremely long incubation period and lack of immune system response; and the sheer hardiness of the infectious agent. One scary factoid: a scientist took a sample of scrapie, froze it, baked it for an hour at 360 degrees Celsius, and was still able to re-infect other animals from that sample.

For all the uneasiness this book inspires, it certainly doesn’t offer any answers. A condemnation of industrial agriculture, a warning that it’s unknown whether vegetarians are even safe, and a caution against using bone meal for your flower garden do not make a recipe for handling this issue. To be fair, it was printed in 1997–perhaps things are under control now.

Book Review: Java Transaction Processing

Since many financial institutions have standardized on it, I hear Java is the new COBOL. Whether or not this is true, if Java is to become the business language of choice, transaction support is crucial. (By ‘transaction,’ I mean ‘allowing two or more decisions to be made under ACID constraints: atomically, consistently, in isolation, and durably.’) Over the last five years, the Java platform has grown by leaps and bounds, not least in this area.

Java Transaction Processing by Mark Little, Jon Maron and Greg Pavlik, explores transactions and their relationship with the Java language and libraries. Starting with basic concepts of transactions, both local and distributed, including the roles of participant and coordinator, and the idea of transaction context, the book covers much old but useful ground. Then, by covering the Java Transaction API (JTA) as well as OTS, the OMG’s transaction API which is JTA’s foundation, this book provides a solid understanding of the complexities of transactions for Java programmers who haven’t dealt with anything more complex than a single RDBMS. I’d say these complexities could be summed up simply: failures happen; how can you deal with them reliably and quickly?
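To give a flavor of the JTA material the book covers: from the application programmer’s side, JTA is mostly about demarcating transactions via the javax.transaction.UserTransaction interface. A minimal sketch, assuming a J2EE container that binds the transaction manager at the standard JNDI name:

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class TransferExample {
    public void transfer() throws Exception {
        UserTransaction utx = (UserTransaction)
            new InitialContext().lookup("java:comp/UserTransaction");
        utx.begin();
        try {
            // ... transactional work here: JDBC updates, JMS sends, etc. ...
            utx.commit(); // the coordinator runs two-phase commit if needed
        } catch (Exception e) {
            utx.rollback(); // on any failure, everything is undone
            throw e;
        }
    }
}

The few lines of demarcation are the easy part; the book’s point is all the machinery (and failure handling) hiding behind begin() and commit().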

The book then goes on to examine transactions and the part they play in major J2EE APIs: Java Database Connectivity (JDBC), Java Message Service (JMS), Enterprise Java Beans (EJB) and J2EE Connector Architecture (JCA). These chapters were interesting overviews of these technologies, and would be sufficient to begin programming in them. However, they are complex, and a single chapter certainly can’t do justice to any of the APIs. If you’re new to them, expect to buy another book.

In the last section, the authors discuss the future of transactions, especially long running activities (the Java Activity Service) and web services. This was the most interesting section to me, but also is the most likely to age poorly. These technologies are all still under development; the basic concepts, however, seem likely to remain useful for some time. And, if you need to decide on a web service transaction API yesterday, don’t build your own, read chapter 10.

There were some things I didn’t like about Java Transaction Processing. Some of the editing was sloppy—periods or words missing. This wasn’t too big a problem for me, since the publisher provided me a free copy for review, but if I were paying list price ($50) I’d be a bit miffed. A larger annoyance was incorrect UML and Java code snippets. Again, the meaning can be figured out from the text, but it’s a bit frustrating. Finally, while the authors raise some very valid points about trusting, or not, the transaction system software provider, I felt the constant trumpeting of HP and Arjuna technologies was a bit tedious. Perhaps these companies are on the forefront of Java transactions (possible); perhaps the authors are most familiar with the products of these companies (looking at the biographies, this is likely). The warnings—find out who is writing the transaction software, which is probably at the heart of your business, and how often they’ve written such software before—were useful, if a bit repetitive.

That said, this book was still a good read, if a bit long (~360 pages). I think that Java Transaction Processing would be especially useful for an enterprise architect looking to leverage existing (expensive) transactional systems with more modern technology, and trying to see how Java and its myriad APIs fit into the mix. (This is what I imagine, because I’m not an enterprise architect.) I also think this book would be useful to DBAs; knowing about the Java APIs and how they deal with transactions would definitely help a DBA discuss software issues with a typical Java developer.

To me, an average Java developer, the first section of the book was the most useful. While transactions are fairly simple to explain (consider the canonical bank account example), this section illuminated complexities I’d not even thought of—optimizations, heuristic outcomes, failure recovery. These issues occur even in fairly simple setups—I’m working at a client who wants to update two databases with different views of the same information, but make sure that both are updated or neither; this seems to be a typical distributed transaction. The easiest way to deal with this is to pretend that such updates will always be successful, and then accept small discrepancies. That’s fine with click-throughs—money is a different matter.
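In JTA terms, that two-database requirement is the textbook case for XA resources: open connections from two XA-capable DataSources inside one transaction and let the transaction manager coordinate the two-phase commit. A sketch under those assumptions; the JNDI names, tables, and SQL are made up:

import java.sql.Connection;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class DualDatabaseUpdate {
    public void updateBoth() throws Exception {
        InitialContext ctx = new InitialContext();
        UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        DataSource dsOne = (DataSource) ctx.lookup("jdbc/FirstViewDB");  // hypothetical
        DataSource dsTwo = (DataSource) ctx.lookup("jdbc/SecondViewDB"); // hypothetical

        utx.begin();
        Connection one = null;
        Connection two = null;
        try {
            // The container enlists each XA connection with the transaction manager.
            one = dsOne.getConnection();
            two = dsTwo.getConnection();

            Statement s1 = one.createStatement();
            s1.executeUpdate("UPDATE customer_view SET status = 'ACTIVE' WHERE id = 42");
            Statement s2 = two.createStatement();
            s2.executeUpdate("UPDATE customer_summary SET status = 'ACTIVE' WHERE id = 42");

            utx.commit(); // both updates happen, or neither does
        } catch (Exception e) {
            utx.rollback();
            throw e;
        } finally {
            if (one != null) one.close();
            if (two != null) two.close();
        }
    }
}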

However, if you are a typical web developer, I’m not sure this book is worth the price. Borrow it from your company’s enterprise architect instead; reading it will make you a better programmer (as well as giving you a sense of history: transactions have been around for a long time). But after digesting the fundamental distributed transaction concepts, I won’t be referencing this book anytime soon, since the scenarios it covers simply don’t happen that often (and when they do, they’re often ignored, as outlined above).