Book Review: The Social Life of Information

I just finished reading The Social Life of Information, by John Seely Brown and Paul Duguid. It was not the quickest read; it’s a business book, with the obtuse vocabulary that implies. However, if you’re a computer person with any desire to see your work in a larger context, this is a book you should read. In it, they examine eight separate areas in which computers, and the internet in particular, have supposedly changed our lives (this is typically called ‘hype’, though the authors don’t use the word) in the latter years of the 20th century. (The book is copyright 2000.) You probably remember some of these claims: the death of the corporation, of the university, of paper documents, of the corporate office. In each chapter, they review one claim, show how the claim’s proponents over-simplify the issue, and look at the (new and old) responses of people and institutions to the problem the claim was trying to solve. They also examine, in detail, the ways in which humans process information, and how the software that is often touted as a replacement simply isn’t one.

I really enjoy ‘ah-ha’ moments; these are the times when I look back at my experiences in a new light, thanks to a theory that justifies or explains something I didn’t understand. For example, I remember that when I started my first professional job, right out of college, I thought the whole point of work was to, well, work. So I sat in my cube and worked 8 solid hours a day. After a few months, when I still didn’t know anyone at the office but had to ask someone how to modify a script I was working on, I learned the value of social interaction at the office. (Actually, I was so clueless, I had to ask someone to find the appropriate someone to ask.) While examining the concept of the home office, the authors state “[t]he office social system plays a major part in keeping tools (and people) up and running.” It’s not just work that happens at the office; there’s collaboration and informal learning as well.

I’ve worked remotely for the first time in the past year, and anyone who’s worked remotely has experienced a moment of frustration when trying to explain something and wished they were just “there,” to show rather than tell; the authors refer to this process as ‘huddling.’ When someone is changing a software configuration that I’m not intimately familiar with, it’s much easier to judge the correct options and settings if I’m there. The authors explain that “[huddling] is often a way of getting things done through collaboration. At home with frail and fickle technologies and unlimited configurations, people paradoxically may need to huddle even more, but can’t.” This collaboration is even more important between peers.

Reading about the home office and its lack of informal networks (which do occur around the corporate office) really drove home the social nature of work. After a few years at my company, I had cross-departmental relationships (often struck up over beer Friday) that truly eased some of my pain. Often, knowing who to ask a question is more important than knowing the answer to the question. It’s not impossible to build those relationships when you’re working remotely, but it’s much more difficult.

Another enjoyable moment of clarity arose when the authors discussed the nature of documents. I think of a document as a Word file, or perhaps a set of printed-out pages. The explicit information (words, diagrams, etc.) that I can get from the document is the focus (and this is certainly the case in sales pitches for document management systems). But there’s a lot more to a document. How do I know how much to trust the information? Well, if it’s on a website somewhere, that’s a fair bit sketchier than if it’s in the newspaper, which is in turn less trustworthy than if I’ve experienced the information myself. Documents validate information; we’ve all picked up a book, hefted it, examined it, and judged it by its cover. The authors say “readers look beyond the information in documents. … The investment evident in a document’s material content is often a good indicator of the investment in its informational content.” Just as you should probably run the other way when someone says “trust me,” information alone can’t attest to its own veracity. The authors also look at aspects of documents (like history, feel, and layout) that simply aren’t captured when you treat them as streams of bits.

And there are many other examples of ‘hype’ deflated in this book, and a few other ‘ah-ha’ moments as well. As I stated above, this is a great read for anyone who thinks there is a technical answer to any problem (or even most problems). By taking apart various claims, and examining the truth and untruth of those claims in a real-world context, the authors give technology credit where it’s due, while at the same time explaining why some of the older institutions and important factors in our lives will remain with us. Reading this book was hard work, but understanding what the authors say gives me yet another way to relate to non-technical people, as well as to fend off the zealots who claim, in a knee-jerk fashion, that more software solves problems. I majored in physics in college, but minored in politics; it always seemed that the people problems, though squishier, were more interesting. This book is confirmation of that.

People and automation

I read this article with interest. I’ve noticed the creep of automated services in the last ten years. Who goes into a gas station any more, unless they need a candy bar? Given that these machines are a fixed-cost investment (as opposed to an ongoing expense, like labor), I expect to see more and more of them. Reading this article, the question every employee has to ask is ‘Am I an elevator attendant or a bank teller?’

I remember reading a story in Analog, years ago, about a general-purpose robot and the societal dysfunction it caused. These robots could do everything a human being could, but 24 hours a day rather than 8. Of course, this caused riots among workers, afraid their jobs would be turned over to the robots. Luckily, workers’ organizations and employers were able to come to a compromise: businesses couldn’t own these robots, only people could. Businesses would rent them from individuals, who would thus be able to earn a living.

That’s science fiction for you: both the problems and solutions are outlined in black and white. What we see nowadays is greyer–more and more ATMs are installed, yet tellers are being hired. Robots aren’t general purpose (and humanoid)–they’re slipping into the mainstream industry by industry. People aren’t rioting in protest of robots–they’re traveling extra distance to use them.

But the issues raised are still the same. Every machine that replaces a person (or two and a half people) has a very real impact on that employee’s bottom line. At the same time, if a business can cut its labor costs, it will need to do so (especially if its competitors are also heading down the automation path). These tensions revisit the old labor vs. capital divide (wouldn’t Marx and Engels be proud?), and the answers aren’t simple (or completely known, for that matter).

(The same issues arise in offshoring, and Bob Lewis comments here–sorry, you have to register to read the article. He states that national labor and capital economies have been coupled for a long time, but are now being decoupled. He doesn’t have any answers, either.)

Technology has been automating away jobs since the Industrial Revolution, if not before. Things have worked out fine in the past, but it hasn’t always been pleasant to live through.

I don’t see any substantive debate on the nature of labor disempowerment. Perhaps this is because “we’ve seen this before, and it has always worked out” or because it’s an uncomfortable issue (especially in an election year) or because “we don’t have any real leaders anymore” or because we’re all vegetated by the modern opiate of the masses? I don’t know whether labor will riot, but brushing the issue under the rug certainly isn’t going to help.

What the heck is Flash good for?

Flash is a fairly pervasive rich client framework for web applications. Some folks have issues with it; I’ve seen plenty of examples of that, and the Bonnaroo site is an example of how Flash can be misused. Some folks think it’s the future of the internet. I like it when it’s used for a good purpose, and I thought I’d share a few of my favorite Flash applications:

1. Ishkur’s guide to electronic music has an annoying intro, but after that, it’s pure gold. It maps the transitions and transformations of electronic music, complete with commentary and sample tracks; I can’t imagine a better way to get familiar with musical genres while whiling away some time.

2. They Rule is an application that explores the web of relationships among directors on boards of public companies. Using images, it’s much easier to see the interconnectedness of the boards.

3. A couple of short animated pieces: Teen Girl Squad follows the (amateurishly drawn) exploits of, well, a set of four teenage girls, along with a cute movie about love (originally from http://students.washington.edu/k1/bin/Ddautta_01_masK.swf).

Of course, these all raise the question: what is a rich client good for (other than cool movies)? When is it appropriate to use Flash (or ActiveX, or XUL) rather than plain old (D)HTML? I wish I knew the answer, but it seems to me that there are a couple of guidelines.

1. How complicated is the data? And how complicated is the representation of that data? The more complicated, the more you should lean towards rich clients. I can’t imagine the electronic music guide being half as effective if it were done in HTML.

2. How savvy are your users? This cuts both ways–if the users aren’t savvy, then the browser may be a comfortable, familiar experience. However, sometimes rich clients can ‘act smarter’ and make for a better user experience.

3. How large is your userbase? The larger, the more you should tend towards a thin, pervasive client like the browser, since that will ease deployment issues.

I used to think Flash was unremittingly evil, but I’m now convinced that, in some cases, it really makes a lot of sense.

Arrogance

Ah, the arrogance of software developers. (I’m a software developer myself, so I figure I have carte blanche to take aim at the foibles of my profession.) Why, just the other day, I reviewed a legal document, and pointed out several places where I thought it could be improved (wording, some incorrect references, and whatnot). Now, why do I think that I have any business looking over a legal document (a real lawyer will check it over too)? Well, why shouldn’t I? I think that most developers have a couple of the characteristics/behaviors listed below, and that these can lead to such arrogance.

1. Asking questions

Many developers have no fear of, and even take pride in, asking good, difficult questions about technical topics. Asking such questions can become a habit. A developer may ask a question, and feel comfortable doing so, even when he or she is entirely out of his or her depth.

2. Attention to detail

Developers tend to be capable of focusing on one thing to the exclusion of all else. This often means that, whatever idea comes along, a developer will focus on it exclusively. Such focus may turn up issues that were missed by the less attentive, or it may just be nitpicking. (Finding small issues isn’t nitpicking when one is developing–it’s pre-emptive bug fixing.)

3. Curiosity and the desire to learn

Most developers are curious. In part because computers are so new, and in part because software technologies change so rapidly, hackers have to be curious or they’re left behind, coding COBOL (not that there’s anything wrong with that!). This curiosity sometimes spills over into other portions of their lives, whether tweaking their bodies or digging into the mechanics of an IPO.

4. Know something about something difficult

Yeah, yeah, most developers are not on the bleeding edge of software. But tell most people what you do and you’ll usually get some kind of ‘ooh’ or raised eyebrows, conveying an expectation that software development is difficult. (Though this reaction is a lot less universal than it was during the dotcom boom–nowadays, it could just as easily be an ‘ooh’ of sympathy for an out-of-work developer.) Because developers are aware that what they do often isn’t that difficult (it’s just being curious, asking questions, and being attentive), it’s easy to assume that other professions usually thought difficult are similarly overblown.

Now, this arrogance surfaces in other realms; for example, business plans. I am realizing just how far I fall short in that arena. I’ve had a few business plans, but they often fall prey to the problem that the gnomes had in South Park: no way to get from action to profit. I’m certainly not alone in this either.

In the same vein of arrogance, I used to make fun of marketing people, because everything they do is so vague and ill-defined. I always want things nailed down. But, guess what? The real world is vague and ill-defined. (Just try finding out something simple, like how many people are driving Fords, how women use the internet, or how many people truly, truly love Ritchie Valens. You’re reduced to interviewing segments of the population and extrapolating.) And if you ask people what they want, they’ll lie to you. Not because they want to lie, but because they don’t really know what they want.

I guess this is a confession of arrogance on the part of one software developer and an apology to all the marketroids I’ve snickered at over the years (oops, I just did it again :). (I promise to keep myself on a shorter leash in the future.) Thanks for going out into the real world and delivering back desires, which I can try to refine into something I can really build. It’s harder than it looks.

Computer Security

Computer security has been on people’s minds quite a bit lately. What with all the different viruses, worms, and new schemes for getting information through firewalls, I can see why. These problems cause downtime, and downtime costs money. I recently shared a conversation over a beer with an acquaintance who works for a network security company. He’d given a presentation about security to a local business leaders’ conference. Did he talk about the latest and greatest in countermeasures and self-healing networks? Nope. He talked about three things average users can do to make their computers safer:

1. Anti-virus software, frequently updated.
2. A firewall, especially if you have an always-on connection.
3. Windows Update.

Computer security isn’t a question of imperviousness–not unless you’re a bank or the military. In most cases, making it hard to break in is good enough to stop the automated programs as well as send the less determined criminals on their way. (This is part of the reason Linux and Mac systems aren’t (as) plagued by viruses–they’re not as typical and that makes breaking in just hard enough.) To frame it in car terms, keep your CDs under your seat–if someone wants in bad enough, they’ll get in, but the average crook is going to find another mark.

What it comes down to, really, is that users need to take responsibility for security too. Just like automobiles, where active, aware, and sober drivers combine with seat belts, air bags and anti-lock brakes to make for a safe driving experience, you can’t expect technology to solve the problem of computer security. After all, as Mike points out, social engineering is a huge security problem, and that’s something no program can deal with.

I think that science and technology have solved so many problems for modern society that it’s a knee jerk reaction nowadays to look to them for solutions, even if it’s not appropriate (the V-chip, the DMCA, Olean), rather than try to change human behavior.

Update (May 10):

I just can’t resist linking to The Tragedy of the Commons, which does a much more eloquent job of describing what I attempted to delineate above:

“An implicit and almost universal assumption of discussions published in professional and semipopular scientific journals is that the problem under discussion has a technical solution. A technical solution may be defined as one that requires a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality.

In our day (though not in earlier times) technical solutions are always welcome. Because of previous failures in prophecy, it takes courage to assert that a desired technical solution is not possible.”

Will RSS clog the web?

I’m in favor of promoting the use of RSS in many aspects of information management. However, a recent Wired article asks: will RSS clog the web? I’m not much worried. Why?

1. High-traffic sites like Slashdot are already protecting themselves. I was testing my RSS aggregator and hit Slashdot’s RSS feed several times in a minute. I was surprised to get back a message to the effect of ‘You’ve hit Slashdot too many times in the last day. Please refrain from hitting the site more than once an hour’ (not the exact wording, and I can’t seem to get the error message now). It makes perfect sense for them to throttle down the hits from programs–they aren’t getting the same ad revenue from RSS readers.

2. The Wired article makes reference to “many bloggers” who put most of their entries’ content in their RSS feed, which “allow[s] users to read … entries in whole without visiting” the original site. This is a bit of a straw man. If you’re having bandwidth issues because of automated requests, decrease the size of the file that’s being requested by not putting every entry into your RSS feed.

3. The article also mentions polling frequency–30 minutes or less. I too used to poll at roughly this frequency: every hour, on the 44-minute mark. Then it struck me: I usually read my feeds once, or maybe twice, a day, and I rarely read any articles between midnight and 8am. So I tweaked my aggregator to check for new entries every three hours between 8am and midnight. There’s no reason to do otherwise with the news stories and blog entries that make up most of the current RSS content. Now, if you’re using RSS to get stock prices, you’ll probably want more frequent updates. Hopefully, your aggregator allows different update frequencies for different feeds; NewsGator 1.1 does.
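
(The scheduling logic is trivial, by the way. Here’s a minimal Java sketch of the ‘every three hours between 8am and midnight’ policy just described; fetchFeeds() is a hypothetical stand-in for actually hitting and parsing the subscribed feeds.)

import java.util.Calendar;

public class FeedPoller {
 private static final long THREE_HOURS = 3 * 60 * 60 * 1000L;

 public static void main(String[] args) throws InterruptedException {
  while (true) {
   int hour = Calendar.getInstance().get(Calendar.HOUR_OF_DAY);
   if (hour >= 8) { // skip the midnight-to-8am window
    fetchFeeds();
   }
   Thread.sleep(THREE_HOURS); // wake up again in three hours
  }
 }

 // hypothetical placeholder: hit each subscribed RSS feed once and parse it
 private static void fetchFeeds() {
  System.out.println("polling feeds...");
 }
}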

This comes back to the old push vs. pull debate. I like RSS because I don’t have to give out my email address (or update it, or deal with unwanted newsletters in my inbox) and because it lets me automatically keep track of what people are saying. I think there’s definitely room for abuse with RSS spiders, just like with any other automated system; after all, “a computer lets you make more mistakes faster than any invention in human history–with the possible exceptions of hand guns and tequila.” I don’t think RSS will clog the web–it’s just going through some growing pains.

WAP vs J2ME

When I gave my talk about J2ME to BJUG a few weeks ago, one of the points I tried to address was ‘Why use J2ME rather than WAP?’ This is a crucial point, because WAP is more widely distributed. I believe the user interface is better, there is less network traffic, and there are possibilities for application extension that just don’t exist in WAP. (Though, to be fair, Michael Yuan makes a good point regarding issues with the optional packages standards process.)

I defended the choice of MIDP 1.0 because we needed wide coverage and don’t do many complicated things with the data, but WAP is much more widely supported than J2ME, by almost any measure. If you don’t have an archaic phone like my Nokia 6160, chances are your phone has a web browser. And WAP 2.0 supports images and XHTML, giving the application almost everything it needs without requiring an entirely new markup language like WML.

So we’ve decided to support XHTML, and thus the vast majority of existing clients (one reason being that Verizon doesn’t support J2ME–at all). I’ve gotten a quick education in WAP development recently, and I just found a quote that sums it up:

“As you can see, this is what Web programmers were doing back in 1994. The form renders effectively the same on the Openwave Browser as it does on a traditional web browser, albeit with more scrolling.”

This quote is from Openwave, a company that specializes in mobile development, so I reckon they know what they’re talking about. A couple of comments:

1. WAP browsers are where the web was in 1994. (I was going to put in a link to a page from 1994, courtesy of the Wayback Machine, but it only goes back to 1996.) I don’t know about you, but I don’t really want to go back! I like Flash, DHTML and onClick, even though they can be used for some truly annoying purposes.

2. “…albeit with more scrolling” reinforces, to me, the idea that presenting information on a screen of 100×100 pixels is a fundamentally different proposition from a screen where you can expect, at a minimum, 640×480. (And who codes for that anymore?) On the desktop, you have roughly 30 times as much screen real estate (plus a relatively rich language for manipulating the interface on the client). It’s no surprise that I’m frustrated when I browse with WAP, since I’m used to browsing in far superior environments.

3. Just like traditional browsers, every time you want to do something complicated, you have to go to the server. You have to do this with XHTML (though not with WML, I believe; WML has its own issues, like supporting only bitmap images). That’s not bad when you’re dealing with fat pipes, but mobile networks are slow.

4. Fitting in with the carrier is an issue with WAP. Since the browser is provided, you have no control over some important issues. For example, one carrier we’re investigating requires you to navigate through pages and pages of carrier imposed links before you can get to your own bookmarks. It’s the whole gated community mindset; since the UI sucks, it’s harder to get around than it would be with Firefox.

In short, use WAP 2.0 if you must, but think seriously about richer clients (J2ME, BREW, or even the .Net compact framework). Even though they’ll be harder to implement and roll out, such clients will be easier to use, and thus more likely to become a part of your customers’ lives.
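
(To make ‘richer client’ concrete, here’s about the smallest possible MIDP sketch: the screen is built and displayed on the device itself, with no markup and no server round trip for simple interactions.)

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

public class HelloMIDlet extends MIDlet {
 public void startApp() {
  // build and show a screen locally
  Form form = new Form("Hello");
  form.append("A rich client renders and reacts on the device.");
  Display.getDisplay(this).setCurrent(form);
 }
 public void pauseApp() {}
 public void destroyApp(boolean unconditional) {}
}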

What use is certification?

What good are certifications like the Sun Certified Java Programmer (SCJP) and the Microsoft Certified Systems Engineer programs? Unlike the Cisco certifications, you don’t have to renew these every couple of years (at least not the Java certifications–in fact, everything I mention below applies only to the Java certifications, as those are the only ones of which I have more than a passing knowledge). I am an SCJP for Java 2, and I have an acquaintance who is a certified programmer for Java 1.1; a Java 1.1 cert isn’t very useful unless you’re targeting .Net, or writing applets that need to run on most every browser. Yet my acquaintance and I can both continue to call ourselves ‘Java Certified Programmers.’ I realize that there’s an upgrade exam, but I’ve never met a soul who’s taken it, and I don’t believe I’m prohibited from heading further down the Java certification path (and handing Sun more money) just because I’m not an SCJP for the most recent version of Java. In fact, I’m studying right now for the Sun Certified Web Component Developer (SCWCD) exam and plan to take it sometime this summer. Even though these certifications may be slightly diluted by not requiring renewal, I think there are a number of reasons why they are a good thing:

1. Proof for employers.

Especially when you deal with technologies that are moving fast (granted, changes to Java have slowed down in the past few years, but it’s still moving faster than, for example, C++ or SQL), employers may not have the skill set to judge your competence. Oh, in any sane environment you will probably interview with folks who are up to date on technology, but who hasn’t been screened out by HR because of a lack of appropriate keywords? Having a certification is certainly no substitute for proper experience, but it serves as a baseline that employers can trust. In addition, a certification is a concrete example of professional development: always a good thing.

2. Breadth of understanding.

I’ve been doing server-side Java development for web environments for 3 years now, in a variety of business domains and on a variety of application servers. Now, that’s not a long time in programming years, but in web years, that’s a fair stint. Still, studying for the SCWCD, I’m learning about aspects of web application development that I hadn’t had a chance to examine before. For example, I’m learning about writing tag libraries. (Can you believe that the latest documentation I could find on sun.com about tag libraries was written in 2000?) I was aware of tag libraries, and I’d certainly used them–the Struts tags, among others–but learning how to implement one has really given me an appreciation for the technology. Ditto for container-managed security. Studying for a certification definitely helps increase the breadth of my Java knowledge.
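
To give a flavor of what implementing one involves, here’s a minimal sketch of a classic tag handler (assuming the JSP 1.2 API is on the classpath; you’d also need a TLD entry and a taglib directive in the JSP page, which I’ve omitted):

import java.io.IOException;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.TagSupport;

public class HelloTag extends TagSupport {
 public int doStartTag() throws JspException {
  try {
   // pageContext is inherited from TagSupport, set by the container
   pageContext.getOut().print("Hello from a custom tag");
  } catch (IOException e) {
   throw new JspException(e.getMessage());
  }
  return SKIP_BODY; // this tag has no body content to evaluate
 }
}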

3. Depth of understanding.

Another aspect is an increased depth of understanding: actually reading the JSP specification, or finding out the difference between overriding and overloading (and how one of them cares about the type of the object, whereas the other cares only about the type of the reference), or learning in what order static blocks get initialized. (My all-time favorite bit of know-how picked up from the SCJP was how to create anonymous arrays.) The knowledge you gain from certification isn’t likely to be used all the time, but it may save you when you’ve got a weird bug in your code. In addition, knowing some of the methods on the core classes saves you from running to the API docs every time (though, whenever I’m coding, the javadoc is inevitably open). Yeah, yeah, tools can help, but knowing core methods can be quicker (and your brain will always be there, unlike your IDE).
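
To make the overriding/overloading point concrete, here’s a quick sketch: overload resolution happens at compile time against the type of the reference, while overriding dispatches at runtime on the type of the object. (An anonymous array makes a cameo as well.)

public class Dispatch {
 static class Animal { String speak() { return "..."; } }
 static class Dog extends Animal { String speak() { return "woof"; } }

 static String describe(Animal a) { return "some animal"; }
 static String describe(Dog d) { return "a dog"; }

 public static void main(String[] args) {
  Animal a = new Dog();
  System.out.println(a.speak()); // "woof": overriding uses the object's runtime type
  System.out.println(describe(a)); // "some animal": overloading uses the reference's type
  System.out.println(new int[] {1, 2, 3}.length); // 3: an anonymous array
 }
}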

4. A goal can be an incentive.

Personally, I’m goal oriented, and having a certification to achieve gives me a useful framework for expenditure of effort. I know what I’m aiming for and I’m aware of the concrete series of steps to achieve that goal. I can learn quite a bit just browsing around, but for serious understanding, you can’t beat a defined end point. I’d prefer it to be a real-world project, but a certification can be a useful stand in. (Yes, open source projects are good options too–but they may not cover as much ground and certainly, except for a few, are not as widely known as certifications.)

I’ve met plenty of fine programmers who weren’t certified (just as I’ve met plenty of fine programmers who weren’t CS majors). However, I think that certifications can be a useful complement to real world experience, giving job seekers some legitimacy while also increasing the depth and breadth of their understanding of a language or technology.

Inlining of final variables and recompilation

This problem has bitten me in the ass a few times, and I’d love to hear any bright ideas on how y’all avoid it.

Suppose you have an interface that defines some useful constants:

public interface foo {
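 // fields in an interface are implicitly public static final, so the compiler may inline their values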
 int five = 6;
}

and a class that uses those constants:

public class bar {
 public static void main(String[] args) {
  System.out.println("five: "+foo.five);
 }
}

All well and good, until you realize that five isn’t really 6, it’s 5. Whoops, change foo.java and rebuild, right? Well, if you use javac *.java to do this (as you might, if you only have the foo and bar files), you’ll be all right.

But if you’re like the other 99% of the Java development world and you use a build tool, like ant, smart enough to look at timestamps, you’ll still get 6 for the output of java bar. Ant is smart enough to look at the timestamps of .class and .java files to determine which .java files have changed since it last did a compilation. But it is too dumb to realize that the bar class has a dependency on foo, and should thus be recompiled even though bar.java is older than bar.class. (I haven’t looked at the byte code, but I expect the value of five is simply inlined into the bar class because it’s a final variable.) If you’re using a make-based build system, I believe you can use javadeps to build out the correct dependency list, but I haven’t seen anything similar for ant. Another option is to just remember to blow away your build directory any time you change your ‘constants.’

I guess this is why properties files might be a better choice for this type of configuration information: they’re read in anew at startup, and thus cannot be inlined (since they’re a runtime thing). Of course, then you lose the benefits of type checking. I’m not sure what the correct answer is.
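
Here’s a minimal sketch of the properties approach, assuming a constants.properties file in the working directory with a line like five=5:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class bar {
 public static void main(String[] args) throws IOException {
  Properties props = new Properties();
  // the value is read at runtime, so it can never be inlined at compile time
  props.load(new FileInputStream("constants.properties"));
  int five = Integer.parseInt(props.getProperty("five"));
  System.out.println("five: " + five);
 }
}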

Kris Thompson’s review of my talk

Kris Thompson attended my BJUG kick start talk on J2ME development. I wanted to comment on his post.

1. I wouldn’t say that J2ME development has scarred me. But J2ME is definitely a technology (well, a set of technologies, really) that is still being defined. This leads to pain points and opportunities, just like any other new technology. Lots of ground to be broken.

2. Caching–you can do it, but just like in any other situation, caching in J2ME introduces additional complexities. Can it be worth it, if it saves the user time and effort? Yes. Is it worth it for the application I was working on? Not yet.

3. PNG–it’s easy to convert images from GIF/JPEG format to PNG; check out the extremely obtuse JAI.create() method, and make sure you check the archives of the jai-interest mailing list as well. (See the sketch after this list.)

4. Re: Shared actions between MIDP and web portions of the application, I guess I wasn’t very clear on this–the prime reason that we don’t have shared action classes between these two portions was because, other than in one place (authentication) they don’t have any feature overlap. What you can do on the web is entirely separate from what you can do with the phone (though they can influence each other, to be sure).
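
For item 3, here’s roughly what that conversion looks like; a minimal sketch, assuming the JAI jars are on the classpath (the ‘fileload’ and ‘filestore’ operator names are the standard ones, but the jai-interest archives are the place to double-check the details):

import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

public class GifToPng {
 public static void main(String[] args) {
  // load the source image (GIF or JPEG) from disk
  RenderedOp image = JAI.create("fileload", "input.gif");
  // write it back out in PNG format
  JAI.create("filestore", image, "output.png", "PNG");
 }
}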

Anyway, thanks Kris for the kind words.

As a last note, I would reiterate what Kris mentions: “Find out which phones have the features you want/need” and perhaps add “and make sure your service provider supports those features as well.” Unlike in the server side world, where everyone pretty much targets IE, J2ME clients really do have different capabilities and scouting those out is a fundamental part of J2ME development.