What the heck is Flash good for?

Flash is a fairly pervasive rich client framework for web applications. Some folks have issues with it, and I’ve seen plenty of examples of why; the Bonnaroo site is an example of how Flash can be misused. Some folks think it’s the future of the internet. I like it when it’s used for a good purpose, and I thought I’d share a few of my favorite Flash applications:

1. Ishkur’s guide to electronic music has an annoying intro, but after that, it’s pure gold. It maps the transitions and transformations of electronic music, complete with commentary and sample tracks; I can’t imagine a better way to get familiar with musical genres and while away some time.

2. They Rule is an application that explores the web of relationships among directors on boards of public companies. Using images, it’s much easier to see the interconnectedness of the boards.

3. A couple of short animated pieces: Teen Girl Squad, which follows the (amateurishly drawn) exploits of, well, a set of four teenage girls, and a cute movie about love (originally from http://students.washington.edu/k1/bin/Ddautta_01_masK.swf).

Of course, these all raise the question: what is a rich client good for (other than cool movies)? When is it appropriate to use Flash (or ActiveX, or XUL) rather than plain old (D)HTML? I wish I knew the answer, but it seems to me that there are a couple of guidelines.

1. How complicated is the data? And how complicated is the representation of that data? The more complicated, the more you should lean towards rich clients. I can’t imagine the electronic guide to music being half as effective if it were done in HTML.

2. How savvy are your users? This cuts both ways–if the users aren’t savvy, then the browser may be a comfortable, familiar experience. However, sometimes rich clients can ‘act smarter’ and make for a better user experience.

3. How large is your userbase? The larger, the more you should tend towards a thin, pervasive client like the browser, since that will ease deployment issues.

I used to think Flash was unabatedly evil, but I’m now convinced that, in some cases, it really makes a lot of sense.


Arrogance

Ah, the arrogance of software developers. (I’m a software developer myself, so I figure I have carte blanche to take aim at the foibles of my profession.) Why, just the other day, I reviewed a legal document, and pointed out several places where I thought it could be improved (wording, some incorrect references, and whatnot). Now, why do I think that I have any business looking over a legal document (a real lawyer will check it over too)? Well, why shouldn’t I? I think that most developers have a couple of the characteristics/behaviors listed below, and that these can lead to such arrogance.

1. Asking questions

Many developers have no fear of asking good, difficult questions about technical topics, and even take pride in it. Asking such questions can become a habit: a developer may ask a question, and feel comfortable doing so, even when he/she is entirely out of his/her depth.

2. Attention to detail

Developers tend to be capable of focusing on one thing to the exclusion of all else. This often means that, whatever idea comes along, a developer will focus on it exclusively. Such focus may turn up issues that were missed by the less attentive, or it may just be nitpicking. (Finding small issues isn’t nitpicking when one is developing–it’s pre-emptive bug fixing.)

3. Curiosity and the desire to learn

Most developers are curious. In part because computers are so new, and in part because software technologies change so rapidly, hackers have to be curious or they’re left behind, coding COBOL (not that there’s anything wrong with that!). This curiosity sometimes spills out into other portions of their lives, whether that’s tweaking their bodies or digging into the mechanics of an IPO.

4. Know something about something difficult

Yeah, yeah, most developers are not on the bleeding edge of software. But when developers tell most people what they do, it usually elicits an ‘ooh’ or raised eyebrows conveying some expectation of the difficulty of software development. (Though this reaction is a lot less universal than it was during the dotcom boom–nowadays, it could just as easily be an ‘ooh’ of sympathy to an out-of-work developer.) Because developers are aware that what they do often isn’t that difficult (it’s just being curious, asking questions, and being attentive), it’s easy to assume that other professions usually thought difficult are similarly overblown.

Now, this arrogance surfaces in other realms; for example, business plans. I am realizing just how far I fall short in that arena. I’ve had a few business plans, but they often fall prey to the problem that the gnomes had in South Park: no way to get from action to profit. I’m certainly not alone in this either.

In the same vein of arrogance, I used to make fun of marketing people, because everything they do is so vague and ill-defined. I always want things nailed down. But, guess what, the real world is vague and ill-defined. (Just try finding out something simple, like how many people are driving Fords, how women use the internet, or how many people truly, truly love Ritchie Valens. You appear to be reduced to interviewing segments of the population and extrapolating.) And if you ask people what they want, they’ll lie to you. Not because they want to lie, but because they don’t really know what they want.

I guess this is a confession of arrogance on the part of one software developer and an apology to all the marketroids I’ve snickered at over the years (oops, I just did it again :). (I promise to keep myself on a shorter leash in the future.) Thanks for going out into the real world and delivering back desires, which I can try to refine into something I can really build. It’s harder than it looks.


Computer Security

Computer security has been on people’s minds quite a bit lately. What with all the different viruses, worms, and new schemes for getting information through firewalls, I can see why. These problems cause downtime, which costs money. I recently shared a beer and a conversation with an acquaintance who works for a network security company. He’d given a presentation to a local business leaders conference about security. Did he talk about the latest and greatest in countermeasures and self-healing networks? Nope. He talked about three things average users can do to make their computers safer:

1. Antivirus software, frequently updated.
2. Firewalls, especially if you have an always-on connection.
3. Windows Update.

Computer security isn’t a question of imperviousness–not unless you’re a bank or the military. In most cases, making it hard to break in is good enough to stop the automated programs as well as send the less determined criminals on their way. (This is part of the reason Linux and Mac systems aren’t (as) plagued by viruses–they’re not as typical and that makes breaking in just hard enough.) To frame it in car terms, keep your CDs under your seat–if someone wants in bad enough, they’ll get in, but the average crook is going to find another mark.

What it comes down to, really, is that users need to take responsibility for security too. Just as with automobiles, where active, aware, and sober drivers combine with seat belts, air bags, and anti-lock brakes to make for a safe driving experience, you can’t expect technology alone to solve the problem of computer security. After all, as Mike points out, social engineering is a huge security problem, and that’s something no program can deal with.

I think that science and technology have solved so many problems for modern society that it’s a knee-jerk reaction nowadays to look to them for solutions, even when they’re not appropriate (the V-chip, the DMCA, Olean), rather than trying to change human behavior.

Update (May 10):

I just can’t resist linking to The Tragedy of the Commons, which does a much more eloquent job of describing what I attempted to delineate above:

“An implicit and almost universal assumption of discussions published in professional and semipopular scientific journals is that the problem under discussion has a technical solution. A technical solution may be defined as one that requires a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality.

In our day (though not in earlier times) technical solutions are always welcome. Because of previous failures in prophecy, it takes courage to assert that a desired technical solution is not possible.”


Will RSS clog the web?

I’m in favor of promoting the use of RSS in many aspects of information management. However, a recent Wired article asks: will RSS clog the web? I’m not worried much. Why?

1. High-traffic sites like Slashdot are already protecting themselves. I was testing my RSS aggregator, and hit Slashdot’s RSS feed several times in a minute. I was surprised to get back a message to the effect of ‘You’ve hit Slashdot too many times in the last day. Please refrain from hitting the site more than once an hour’ (not the exact wording, and I can’t seem to get the error message now). It makes perfect sense for them to throttle down the hits from programs–they aren’t getting the same amount of ad revenue from RSS readers.

2. The Wired article makes reference to “many bloggers” who put most of their entries’ content in their RSS feed, which “allow[s] users to read … entries in whole without visiting” the original site. This is a bit of a straw man. If you’re having bandwidth issues because of automated requests, decrease the size of the file that’s being requested by not putting every entry into your RSS feed.

3. The article also mentions polling frequency–30 minutes or less. I too used to poll at roughly this frequency–every hour, on the 44-minute mark. Then it struck me: I usually read my feeds once, or maybe twice, a day, and I rarely read any articles between midnight and 8am. So I tweaked my aggregator to check for new entries every three hours between 8am and midnight. There’s no reason to poll more often than that for the news stories and blog entries that make up most of the current RSS content. Now, if you’re using RSS to get stock prices, then you’ll probably want more frequent updates. Hopefully, your aggregator allows different update frequencies for different feeds; NewsGator 1.1 does.
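
As a rough illustration of that kind of schedule, here’s a minimal sketch of the check an aggregator might make before fetching a feed (this isn’t NewsGator’s code or anyone’s actual implementation; the three-hour interval and the 8am-to-midnight window are just the numbers from my own setup):

public class PollPolicy {
 // poll at most once every three hours, and only between 8am and midnight
 private static final long INTERVAL_MILLIS = 3L * 60 * 60 * 1000;

 public static boolean shouldPoll(long lastPollMillis) {
  java.util.Calendar now = java.util.Calendar.getInstance();
  if (now.get(java.util.Calendar.HOUR_OF_DAY) < 8) {
   return false; // skip the overnight hours entirely
  }
  long sinceLastPoll = now.getTime().getTime() - lastPollMillis;
  return sinceLastPoll >= INTERVAL_MILLIS;
 }
}

An aggregator that runs a check like this before every fetch cuts its request volume considerably without the reader ever noticing the difference.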

This comes back to the old push vs. pull debate. I like RSS because I don’t have to give out me email address (or update it, or deal with the unwanted newsletters in my inbox) and because it lets me automatically keep track of what people are saying. I think there’s definitely room for abuse with RSS spiders, just like with any other automated system; after all “a computer lets you make more mistakes faster than any invention in human history — with the possible exceptions of hand guns and tequila.”. I don’t think RSS will clog the web–it’s just going through some growing pains.


WAP vs J2ME

When I gave my talk about J2ME to BJUG a few weeks ago, one of the points I tried to address was ‘Why use J2ME rather than WAP?’ This is a crucial point, because WAP is more widely distributed. I believe J2ME’s user interface is better, it generates less network traffic, and it opens up possibilities for application extension that just don’t exist in WAP. (Though, to be fair, Michael Yuan makes a good point regarding issues with the optional packages standards process.)

I defended the choice of MIDP 1.0 because we needed wide coverage and don’t do many complicated things with the data, but WAP is much more widely supported than J2ME, by almost any measure. If you don’t have an archaic phone like my Nokia 6160, chances are your phone has a WAP browser. And WAP 2.0 supports images and XHTML, giving the application almost everything it needs without forcing you to learn an entirely new markup language like WML.

So, we’ve decided to support XHTML, and thus the vast majority of existing clients (one reason being that Verizon doesn’t support J2ME–at all). I’ve gotten a quick education in WAP development recently, and I just found a quote that sums it up:

“As you can see, this is what Web programmers were doing back in 1994. The form renders effectively the same on the Openwave Browser as it does on a traditional web browser, albeit with more scrolling.”

This quote is from Openwave, a company that specializes in mobile development, so I reckon they know what they’re talking about. A couple of comments:

1. WAP browsers are where the web was in 1994. (I was going to put in a link to a page from 1994, courtesy of the Wayback Machine, but it only goes back to 1996.) I don’t know about you, but I don’t really want to go back! I like Flash, DHTML and onClick, even though they can be used for some truly annoying purposes.

2. “…albeit with more scrolling” reinforces, to me, the idea that presenting information on a screen of 100×100 pixels is a fundamentally different proposition than presenting it on a screen where you can expect, at a minimum, 640×480. (And who codes for that anymore?) On the desktop, you have roughly 30 times as much screen real estate (plus a relatively rich language for manipulating the interface on the client). It’s no surprise that I’m frustrated when I browse with WAP, since I’m used to browsing in far superior environments.

3. Just like traditional browsers, every time you want to do something complicated, you have to go to the server. You have to do this with XHTML (but not, I believe, with WML, which has its own issues, like supporting only bitmap images). That’s not bad when you’re dealing with fat pipes, but mobile networks are slow.

4. Fitting in with the carrier is an issue with WAP. Since the browser is provided, you have no control over some important issues. For example, one carrier we’re investigating requires you to navigate through pages and pages of carrier imposed links before you can get to your own bookmarks. It’s the whole gated community mindset; since the UI sucks, it’s harder to get around than it would be with Firefox.

In short, use WAP 2.0 if you must, but think seriously about richer clients (J2ME, BREW, or even the .Net compact framework). Even though they’ll be harder to implement and roll out, such clients will be easier to use, and thus more likely to become a part of your customers’ lives.
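
For comparison, here’s roughly what the client side looks like in J2ME–a minimal MIDP MIDlet sketch (the class name and text are made up; the point is that the interface lives on the phone, and the network only gets involved when there’s actually data to fetch):

import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

// a bare-bones MIDlet: the phone's application manager drives these lifecycle methods
public class HelloMIDlet extends MIDlet {
 protected void startApp() {
  Form form = new Form("Hello");
  form.append("The UI runs on the handset itself");
  Display.getDisplay(this).setCurrent(form);
 }

 protected void pauseApp() {
  // nothing to release in this sketch
 }

 protected void destroyApp(boolean unconditional) {
  // nothing to clean up in this sketch
 }
}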


What use is certification?

What good are certifications like the Sun Certified Java Programmer (SCJP) and the Microsoft Certified Systems Engineer programs? Unlike the Cisco certifications, you don’t have to renew these every couple of years (at least the Java certifications–in fact, everything I mention below applies only to the Java certifications, as those are the only ones of which I have more than a passing knowledge). I am an SCJP for Java2, and I have an acquaintance who is a certified programmer for Java1.1; a Java1.1 cert isn’t very useful unless you’re targeting .Net, or writing applets that need to run on most every browser. Yet my acquaintance and I can both continue to call ourselves ‘Java Certified Programmers.’ I realize that there’s an upgrade exam, but I’ve never met a soul who’s taken it, and I don’t believe I’m prohibited from heading down the Java certification path and handing Sun more money just because I’m not an SCJP for the most recent version of Java. In fact, I’m studying right now for the Sun Certified Web Component Developer (SCWCD) and plan to take the exam sometime this summer. Even though these certifications may be slightly diluted by not requiring renewal, I think there are a number of reasons why they are a good thing:

1. Proof for employers.

Especially when you deal with technologies that are moving fast (granted, changes to Java have slowed down in the past few years, but it’s still moving faster than, for example, C++ or SQL), employers may not have the skill set to judge your competence. Oh, in any sane environment you will probably interview with folks who are up to date on the technology, but who hasn’t been screened out by HR for lack of the appropriate keywords? Having a certification is certainly no substitute for proper experience, but it serves as a baseline that employers can trust. In addition, a certification is also a concrete example of professional development: always a good thing.

2. Breadth of understanding.

I’ve been doing server-side Java development for web environments for three years now, in a variety of business domains and application servers. Now, that’s not a long time in programming years, but in web years, that’s a fair stint. But, studying for the SCWCD, I’m learning about some aspects of web application development that I hadn’t had a chance to examine before. For example, I’m learning about writing tag libraries. (Can you believe that the latest documentation I could find on sun.com about tag libraries was written in 2000?) I was aware of tag libraries, and I’d certainly used them (the Struts tags, among others), but learning how to implement one has really given me an appreciation for the technology; a minimal sketch of a custom tag is below. Ditto for container managed security. Studying for a certification definitely helps increase the breadth of my Java knowledge.
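
To give a flavor of what writing one involves, here’s a rough sketch of a trivial custom tag handler (the class name and output are made up, and you’d also need a TLD entry mapping a tag name and its ‘name’ attribute to this class, which I’ve left out):

import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.TagSupport;

// a trivial tag that writes a greeting into the page, e.g. <my:hello name="world"/>
public class HelloTag extends TagSupport {
 private String name;

 public void setName(String name) {
  this.name = name;
 }

 public int doStartTag() throws JspException {
  try {
   pageContext.getOut().print("Hello, " + name);
  } catch (java.io.IOException e) {
   throw new JspException(e.getMessage());
  }
  return SKIP_BODY; // this tag has no body content to evaluate
 }
}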

3. Depth of understanding.

Another aspect is an increased depth of understanding: actually reading the JSP specification, or finding out the difference between overriding and overloading (and how one of them cares about the type of the object, whereas the other cares only about the type of the reference), or learning in what order static blocks get initialized. (My all-time favorite bit of know-how picked up from the SCJP was how to create anonymous arrays; a quick example of that and of the overriding/overloading distinction is below.) The knowledge you gain from certification isn’t likely to be used all the time, but it may save you when you’ve got a weird bug in your code. In addition, knowing some of the methods on the core classes saves you from running to the API documentation every time (though, whenever I’m coding, the javadoc is inevitably open). Yeah, yeah, tools can help, but knowing core methods can be quicker (and your brain will always be there, unlike your IDE).
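
As a quick illustration of those two tidbits (the class names here are made up):

// overriding is resolved at runtime by the object's actual type;
// overloading is resolved at compile time by the reference's declared type
class Animal {
 String speak() { return "..."; }
}

class Dog extends Animal {
 String speak() { return "woof"; }
}

public class Dispatch {
 static String describe(Animal a) { return "some animal"; }
 static String describe(Dog d) { return "a dog"; }

 public static void main(String[] args) {
  Animal pet = new Dog();
  System.out.println(pet.speak());   // prints "woof": the object's type wins
  System.out.println(describe(pet)); // prints "some animal": the reference's type wins
  // an anonymous array: created and passed along without ever getting a name
  System.out.println(java.util.Arrays.asList(new String[] {"a", "b", "c"}));
 }
}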

4. A goal can be an incentive.

Personally, I’m goal oriented, and having a certification to achieve gives me a useful framework for expenditure of effort. I know what I’m aiming for and I’m aware of the concrete series of steps to achieve that goal. I can learn quite a bit just browsing around, but for serious understanding, you can’t beat a defined end point. I’d prefer it to be a real-world project, but a certification can be a useful stand in. (Yes, open source projects are good options too–but they may not cover as much ground and certainly, except for a few, are not as widely known as certifications.)

I’ve met plenty of fine programmers who weren’t certified (just as I’ve met plenty of fine programmers who weren’t CS majors). However, I think that certifications can be a useful complement to real world experience, giving job seekers some legitimacy while also increasing the depth and breadth of their understanding of a language or technology.


Inlining of final variables and recompilation

This problem has bitten me in the ass a few times, and I’d love to hear any bright ideas on how y’all avoid it.

Suppose you have an interface that defines some useful constants:

public interface foo {
 // fields declared in an interface are implicitly public, static, and final,
 // so 'five' is a compile-time constant
 int five = 6;
}

and a class that uses those constants:

public class bar {
 public static void main(String[] args) {
  // because five is a compile-time constant, the compiler can inline its value
  // right here, so this keeps printing the old value until bar.java is recompiled
  System.out.println("five: " + foo.five);
 }
}

All well and good, until you realize that five isn’t really 6, it’s 5. Whoops–change foo.java and rebuild, right? Well, if you use javac *.java to do this (as you might, if you only have the foo and bar files), then you’ll be all right.

But, if you’re like the other 99% of the Java development world, and you use a build tool like Ant that is smart enough to look at timestamps, you’ll still get 6 for the output of java bar. Ant is smart enough to compare the timestamps of .class and .java files to determine which .java files have changed since it last did a compilation. But it is too dumb to realize that the bar class has a dependency on foo, and should thus be recompiled even though bar.java is older than bar.class. (I haven’t looked at the byte code, but I expect that the value of five is just inlined into the bar class because it’s a final variable.) If you’re using a make-based build system, I believe you can use javadeps to build out the correct dependency list, but I haven’t seen anything similar for Ant. Another option is to just remember to blow away your build directory anytime you change your ‘constants’.

I guess this is why properties files might be a better choice for this type of configuration information, because they’re always read in anew at startup, and thus cannot be inlined (since they’re a runtime thing). Of course, then you lose the benefits of type checking. Not sure what the correct answer is.
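
For what it’s worth, here’s a minimal sketch of that properties-file approach (the file name app.properties and the key are made up; the point is just that the value is looked up at runtime, so the compiler has nothing to inline):

import java.io.InputStream;
import java.util.Properties;

public class Config {
 private static final Properties props = new Properties();

 static {
  try {
   // loaded from the classpath once, when the class is initialized
   InputStream in = Config.class.getResourceAsStream("/app.properties");
   props.load(in);
   in.close();
  } catch (Exception e) {
   throw new ExceptionInInitializerError(e);
  }
 }

 // callers write Config.getInt("five") instead of referencing a final field
 public static int getInt(String key) {
  return Integer.parseInt(props.getProperty(key));
 }
}

The downside mentioned above is real, though: a typo in the key or a non-numeric value only blows up at runtime rather than at compile time.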


Kris Thompson’s review of my talk

Kris Thompson attended my BJUG kickstart talk on J2ME development. I wanted to comment on his post.

1. I wouldn’t say that J2ME development has scarred me. But J2ME is definitely a technology (well, a set of technologies, really) that is still being defined. This leads to pain points and opportunities, just like any other new technology. Lots of ground to be broken.

2. Caching–you can do it, but just like in any other situation, caching in J2ME introduces additional complexities. Can it be worth it, if it saves the user time and effort? Yes. Is it worth it for the application I was working on? Not yet.

3. PNG–it’s easy to convert images from GIF/JPEG format to PNG. Check out the extremely obtuse JAI.create() method, and make sure you check out the archives of the jai-interest mailing list as well. (There’s also a simpler route via javax.imageio; see the sketch after this list.)

4. Re: shared actions between the MIDP and web portions of the application, I guess I wasn’t very clear on this–the prime reason we don’t have shared action classes between these two portions is that, other than in one place (authentication), they don’t have any feature overlap. What you can do on the web is entirely separate from what you can do with the phone (though they can influence each other, to be sure).
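
On the PNG point, if JAI feels like overkill, here’s a minimal sketch using the standard javax.imageio API instead (available in J2SE 1.4 and later; the file names are made up):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class ToPng {
 public static void main(String[] args) throws Exception {
  // read a GIF (or JPEG) and write it back out as a PNG
  BufferedImage image = ImageIO.read(new File("source.gif"));
  ImageIO.write(image, "png", new File("result.png"));
 }
}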

Anyway, thanks Kris for the kind words.

As a last note, I would reiterate what Kris mentions: “Find out which phones have the features you want/need” and perhaps add “and make sure your service provider supports those features as well.” Unlike in the server side world, where everyone pretty much targets IE, J2ME clients really do have different capabilities and scouting those out is a fundamental part of J2ME development.


Software archeology

I presented this evening on J2ME at the kickstart meeting at BJUG, where Grady Booch was the featured speaker. After unknowingly knocking UML in his presence, I enjoyed a fine talk on software archeology. This discipline involves looking at larger, historical patterns of software development. Essentially, when we build software, we are building artifacts, and just as the plans and meetings of the slave foremen who built the pyramids are not recorded, so there are aspects of present-day software development that are simply lost when the project ends or the programmers die. One of Booch’s projects is to capture as much of that data as possible, because these architectures are full of valuable knowledge that many folks have sweated for. It needs to happen soon, because, in his words, “time is not on our side” when it comes to collecting this kind of data. Man, I could handle that kind of job.

Speaking of architecture, I stumbled on “Effective Enterprise Java”, which looks to be a set of rules for enterprise Java development. I really enjoy “Effective Java”, by Joshua Bloch, so I hope that Ted Neward’s book lives up to its name. And I certainly hope this project doesn’t get stranded like “Interface Design” apparently did.


First Monday

First Monday is a collection of peer-reviewed papers regarding technology, “solely devoted to the Internet”. If you want a feel for how Internet technology is affecting society, presented in a clear, reasoned format, this is one of the places to go. Topics range from “what’s wrong with open source” (which got slashdotted recently) to “how online education affects the balance of power at universities” to “how can we best keep track of information”. Fascinating stuff, and the academic nature of the discourse means that it’s got a solid foundation (as opposed to most weblogs, which are just some opinionated person rambling). You can even sign up for an email list to be notified when they post new articles (ah, listserv). Too bad they don’t have an RSS feed.



© Moore Consulting, 2003-2017