
Google Chrome First Impressions

Here are my thoughts on Google Chrome. Yup, I’m following the blogging pack about a week late. First off, the install process was smooth. The comic book stated that the rendering engine is WebKit, which should make testing relatively easy. This is borne out by the user agent string: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.29 Safari/525.13

They give you the ability to change the search engine and other options easily. It definitely follows a Mac-ish configuration process–you don’t have to apply or save the config changes you make, you just make them and close the options screen.
As Farhad Manjoo mentions, there is a lack of addons. (Addons are pieces of functionality that extend the browser’s behavior [think Adblock], as opposed to plugins, which extend the browser’s ability to handle content [think Flash].) Searching today, I didn’t see much about addons or plugins for Chrome, other than some strong desire for them. I don’t remember any mention of addons in the comic book or on the Chrome website. Also, as Manjoo mentions, opening a new tab by clicking next to the existing tabs doesn’t work (though there is a plus icon up there which should not take too long to get used to).

It looks like there is already a way to create simple desktop applications, like a calculator, that use Chrome as their ‘shell’, JavaScript as the programming language, and HTML for user interface definition. That’s very similar to Adobe AIR (at least the Ajax version) and something like the C#/XAML pairing as well. Let’s hear it for declarative markup for user interfaces!
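Just to illustrate the idea, here’s a toy sketch of what such an application might look like. The markup is mine, and the --app launch flag and file path are my assumptions about how the ‘shell’ mechanism gets invoked, not something from the comic book.

```html
<!-- calc.html: a toy "desktop" calculator. HTML defines the UI, JavaScript
     supplies the logic, and Chrome acts as the shell, launched chromeless
     with something like: chrome.exe --app=file:///C:/apps/calc.html
     (flag and path are my assumptions) -->
<!DOCTYPE html>
<html>
<head><title>Tiny Calculator</title></head>
<body>
  <input id="expr" value="2+2">
  <button onclick="calculate()">=</button>
  <span id="result"></span>
  <script type="text/javascript">
    function calculate() {
      var out;
      // eval is crude but fine for a toy; a real app would parse the input
      try { out = eval(document.getElementById('expr').value); }
      catch (e) { out = 'error'; }
      document.getElementById('result').innerHTML = out;
    }
  </script>
</body>
</html>
```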

The custom start page seems pretty neat, with the ability to have bookmarks not in a pull-down menu, but right on the start page, which also includes the ‘most visited’ sites. Machine learning of this type can be a great time saver.

From a development standpoint, there is a JavaScript/DOM console, which looks similar to Safari’s. It is, however, much more responsive and stable, though I still can’t figure out what the ‘search’ box does. However, the wealth of development tools that I use every day in Firefox (Web Developer, YSlow, Firebug, whois, Live HTTP Headers) will take time to migrate over to Chrome, if they do so at all. This will continue to make developing in Firefox first and testing in other browsers my default strategy.

Finally, Cringely has some interesting comments on Google’s motivation.

[tags]google chrome[/tags]

Google I/O

I’m off to San Francisco for Google I/O, a two-day conference focused on web application development, with a heavy emphasis on Google’s APIs and tools. I’m excited for sessions ranging from “Faster-Than-Possible Code: Deferred Binding with GWT” to “Underneath the Covers at Google: Current Systems and Future Directions”.

I probably won’t be live blogging the sessions, but I will try to write a wrap-up post. If you’re attending, I hope to see you there.

IE7 doesn’t like div classes named ‘content’

I ran into a really bizarre issue today.  A client’s website had paragraph tags separating content, wrapped inside a number of divs.  One of the divs had the class name ‘content’.  In IE7, but not in FF2 or IE6, the first line of the paragraph was indented.  Modifying, even deleting, styles of the parent divs or the paragraph did not seem to change anything.  Adding a break tag as the first child of the paragraph tag fixed the indentation, but led to too much space.

Finally, I tried deleting the class names from the parent divs entirely.  Deleting the class ‘content’ from one of the divs fixed the issue.  Then I tried a class name of ‘contaent’ (note the deliberate typo), and the issue was still fixed.

Apparently IE7 doesn’t like divs with the name ‘content’?  I was able to use ‘content-wrapper’ just fine.  Beats me.  A cursory search of the internet didn’t turn up any suggestions, and I unfortunately don’t have the time to try to plumb the depths of this bug.
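For reference, here’s a simplified sketch of the kind of markup that triggered the behavior; the structure and the surrounding class names are approximated from memory.

```html
<!-- Approximate repro: in IE7 the first line of the first paragraph rendered
     indented; FF2 and IE6 displayed it correctly. -->
<div class="outer">
  <div class="content">  <!-- renaming this class (even to 'contaent') fixed it -->
    <p>The first line of this paragraph was indented in IE7.</p>
    <p>Adding a break tag as its first child fixed the indent, but added too much space.</p>
  </div>
</div>
```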
[tags]IE weirdness[/tags]

Boulder Facebook Developer Garage Notes

So, on Thursday I went to the Facebook developers garage, kindly arranged by Kevin Cawley. It was held at ‘the bunker’, secret home of TechStars (we didn’t even get the precise address until the afternoon of the event. I could tell you where it is, but then I’d have to kill you). It was an interesting evening; it started off, as per usual with tech events, with free pizza and pop and beer (folks! if you want a job where you can get free pizza regularly, consider software). Then there were six presentations by local startups. A number of the companies were ones that I’d seen before at the New Tech Meetup, and the garage felt much like that event. Rather than a deep look at some of the technical or business problems that Facebook applications face, it was a quick overview of how the platform was being used by existing startups. Then there was a video conference call with Dave Morin, the platform development manager for Facebook.

As far as demographics go, the garage had about 40-50 people there, of whom 4-5 were women. One of the presenters asked how many people had apps running on Facebook; I’d say it was about 40%.

While I’ve succeeded in writing a “hello world” application for Facebook, I am by no means a Facebook developer. I attended the garage because the platform is exciting new technology. There are a number of interesting applications that have a chicken and egg problem: the application is cool if there are a lot of people using it, but people will only use it if it is cool. For example, any recommendation engine or classified listing service (incidentally, I started off the night talking to the Needzilla folks, who are hoping to build a next generation Craigslist). Facebook, by making finding and adding applications delightfully easy, helps developers jump the chicken and egg problem. Of course, it’s just as easy to uninstall the application, so it needs to be pretty compelling. And that is why I went to the garage (no, free pizza had nothing to do with it!). Also, note that this meeting was streamed on Ustream and will hopefully be available on one of the video sharing sites sometime soon. I’ve asked to be notified and will post the link when it is available.

The first speaker was Charlie Henderson of MapBuzz. They’re a site looking to make it easier for users to build maps and to create content and community around them. They partner with organizations such as the Colorado Mountain Club and NTEN to build maps and communities. They’re building a number of Facebook apps, mostly using iframes rather than FBML. They had some difficulties porting their existing application to work within Facebook, especially around bi-directional communication (if you add a marker or a note to a map within Facebook, that content will appear on the MapBuzz site, and vice versa), and they also ran into some difficulties with FBJS. I couldn’t find their widget on Facebook as of this writing.

Lijit is a widget you can drop into your blog that does search better, as it draws on more structured information, including trusted sources. They use the Facebook API as another fairly structured source of information to search. They insisted they complied with the Facebook TOS, and they are starting to look at OpenSocial as a data source as well.

Fuser showed a widget that displays MySpace notifications on Facebook. Again, they used the iframe technology, and they actually used a Java applet! (I’m happy that someone is using a Java applet for something more complex than a rent vs buy calculator.) Since MySpace doesn’t have an API, they ended up having that applet do some screen scraping.

Useful Networks had the most technically impressive presentation, though not because of its Facebook integration. This company, part of Liberty Media, is looking to become a platform for cell phone applications with location services (a “location services aggregator”). Eventually, they want to help developers onboard applications to the carriers, but right now they’re just trying to build some compelling applications of their own. Useful Networks has built an application called sniff (live in the Nordic countries, almost live in the USA) which lets you find friends via their mobile phones and plots their locations on a map in Facebook. They had a number of notification options (‘tell me when someone finds me’, etc.) and an option to go invisible–the emphasis on letting people know about their choices surrounding sensitive information like location was admirable. The presenter said that they had been whittling away features to just the minimum for a while. Most importantly, they had a business model–each ‘sniff’ was 50 cents (I’m sure some of that went to the carrier). 80% of the people using sniff did it via their mobile phone, while the other 20% did it from the web. They used GPS when possible, but often cell [tower] id was close enough. And the US beta was live on two carriers. Someone suggested integrating with Fire Eagle for another source of geo information.

Stan James, founder of Lijit, then demoed a Facebook app. I’ve been pretty religious about not installing Facebook applications, but this one seemed cool enough to make an exception. It’s called SongTales, and it lets you share songs with friends and attach a small story to each one. As Stan said, we all like songs that are pretty objectively awful because of the experiences they can evoke. As far as technical difficulties, he mentioned that it was harder to get music to play than to just embed a YouTube video (a probable violation of YouTube’s TOS).

David Henderson spoke about Socialeyes. Actually, first he spoke about Social Media, a company he worked for in Palo Alto that aimed to create AdSense for social networks (watch out, Google!) and that built up its knowledge of Facebook applications by creating a number of them. One of those was Appsaholic, which lets developers track usage of their apps. He showed some pretty graphs of application uptake on Facebook, and made some comments about how the rapid uptake created viral customer interaction that was a dream for marketers. His new venture is an attempt to build Nielsen-style ratings for social networks, measuring engagement and social influence of users as well as application distribution.

After the presenters, Dave Morin skyped in. It was a pretty amazing display of cheap teleconferencing (though people did have to ask their questions to someone in Boulder, who typed them to Dave). He said some nice things about Boulder, and then talked a bit about how the Facebook platform was going to improve. Actually, I think he just said that it will improve, and that Facebook loves developers (sorry, no yelling). Dave shared an anecdote about a girl who built a Facebook application which had 300K users and sold it in three weeks. Then people asked questions.

One person asked about how to make applications stickier (so you don’t have 50K users add your app, then 50K users delete it the next week). No good answer there (that I caught).

Another asked about the SEO announcement, which Dave didn’t know about because he had been in a roundtable with developers all day (and which I couldn’t find any reference to on the internet–sorry).

Someone else asked about changes to the Facebook TOS, in particular letting users grant greater data storage rights to applications (apparently apps right now can only store 24 hours of data). Dave answered that what Facebook currently allowed was in line with the market standards, and that as those changed, Facebook would.

Somebody asked if authenticating mobile Facebook apps was coming, but I didn’t catch/understand the answer.

Someone else asked about the ‘daily user interactions’, and what that meant. After some hemming and hawing about the proprietary nature of that calculation (to prevent gaming the numbers), Dave answered that the number is not interactions per user per day, but the number of unique users who interact with your application over the last day. He also talked about what went into the ‘interaction’ calculation.

Someone asked about monetization solutions. Dave mentioned that the Facebook team was working on an e-commerce API, which he hoped would spawn an entire new set of applications. It will be released in the next couple of quarters. (This doesn’t really help the people building apps now, though.)

Another person asked about the pluses and minuses of FBML versus the iframe. Dave said that FBML was “super native and super fast”, whereas the iframe solution lets you run the application on your own servers, and is more standard.
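For those as new to the platform as I am, the difference in practice looks roughly like this (the uid and URL below are hypothetical placeholders):

```html
<!-- FBML approach: your server returns Facebook-specific markup, which
     Facebook renders on its own servers (uid is a placeholder) -->
<fb:name uid="12345" />

<!-- iframe approach: Facebook embeds a page served from your own servers,
     written in standard HTML and JavaScript (URL is hypothetical) -->
<iframe src="http://apps.example.com/myapp/canvas/"></iframe>
```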

Yet another person asked about whether they should release an app in beta, or get it a bit more polished. Dave said that his experience was that the best option was always to get new features into the hands of users as soon as possible, and that Facebook was still releasing new code once a week.

Someone asked about Facebook creating a ‘preferred applications’ repository and Dave said that there were no plans to do that.

Somebody asked if Facebook was interested in increasing integration between FBJS and Flash, but Dave didn’t quite understand the question, and answered that of course Facebook was happy to have applications developed in Flash.

Dave then signed off. It was nice to be able to talk to someone so in touch with the Facebook platform, but I felt that he danced around some questions. And having someone have to type the questions meant that it was hard to refine your question or have him clarify what you were asking. FYI, I may not have understood all the questions and answers that happened in Dave’s talk because I’m so new to Facebook and the jargon of the platform.

After that, things were wrapping up. Kevin plugged allfacebook. People stood up and announced whether they were looking for work or looking for Facebook developers. By my count, 5 folks, including the developer of the skipass application and mezmo.com, said they were looking for developers; no developers said they were looking for work (hey look, kids! Facebook app dev is the new Ruby on Rails!).

Was this worth my time? Yes, definitely. Good friendly vibe and people, and it opened up my eyes to the wide variety of apps being built for Facebook. The presentations also drove home how you could build a really professional application on top of the API.

Would I go again? Probably not. I would prefer a more BJUG-style presentation, where one can really dive into some of the technical issues (scaling, bi-directional communication, iframe vs FBML vs Flash) that were only briefly touched upon.

[tags]facebook[/tags]

Squid Notes: A fine web accelerator

I recently placed Squid in front of an Apache/Tomcat based web application to serve as a web accelerator. We could have used Apache’s mod_proxy, but Squid has the ability to federate, and that was considered valuable for future growth. (Plus, Wikipedia uses Squid, and it has worked out pretty well for them so far.) I didn’t find a whole lot of other options–Varnish looks good, but its documentation and feature set weren’t quite rich enough.
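The basic accelerator setup only takes a few lines of squid.conf. Here’s a rough sketch in the Squid 2.6 style; the hostname and ports are examples, not our actual configuration.

```
# Listen on port 80 in accelerator (reverse proxy) mode
http_port 80 accel defaultsite=www.example.com

# Send cache misses to the origin Apache/Tomcat server
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=origin

# Only accelerate requests for our own site
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access origin allow our_site
```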

However, when the application generates a page for a user who is logged in, the content can be different than if the exact same URL is visited by a robot or a user who is not signed in. It’s easy to tell whether a user is signed in, because they send cookies. What was not intuitive was how to tell Squid that pages for logged-in users (matching a certain header, or a certain URL pattern) should always be referred to Tomcat. In fact, I asked about this on the mailing list, and it doesn’t seem to be possible at all. Squid caches objects at the page level, and can’t cache just pieces of a page (as, I believe, OSCache, among others, can).

I compromised by deleting the cached object (a page, for example) whenever a logged-in user visits it. This forces Squid to go back to the origin server, guaranteeing that the logged-in user gets the correct version. Then, if a non-logged-in user requests the page, Squid again goes back to the origin server (since it doesn’t have anything in its cache). If a second non-logged-in user requests the same page, Squid serves it out of cache. It’s not the best solution, but it works. And non-logged-in users are such a high proportion of the traffic that Squid is able to serve a fair number of pages without touching the application.
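Mechanically, the deletion can be done with Squid’s PURGE method, enabled with an acl; something like the following sketch (untested, and the URL is an example).

```
# squid.conf: let the local app server evict cached objects via PURGE
acl purge method PURGE
http_access allow purge localhost
http_access deny purge
```

The application (or a script) then evicts a page with a request like squidclient -m PURGE http://www.example.com/some/page.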

Overall I found Squid to be pretty good–even with the above workaround it was able to take a substantial amount of traffic off the main application. Note that Squid was designed to be a forward proxy (for example, a proxy at an ISP caching commonly requested pages for that ISP’s users), so if you want to use it as a web accelerator (in front of your website, to increase the speed of pages you create), you have to ignore a lot of the documentation. The FAQ is a must read, especially the section on reverse proxying and the logs section.

[tags]proxy, squid, increasing web application performance[/tags]

FogBugz world tour, Boulder edition

I went and saw Joel Spolsky’s talk about FogBugz 6 tonight. It seems to be quite the powerful software development tool. But I’m afraid it suffers like every tool–it forces you into certain methods of development. For example, there’s no way to ensure that every bug entered is viewed by QA. Now, that isn’t a problem for the teams I currently work on, but I can see it being a problem for teams I have worked on. Joel gave reasons for this choice, but they only seem valid for the subset of development teams that FogBugz targets.

In fact, as I left, almost every conversation I heard was about the product, and how people could fit it into their process, rather than use the process it gives you. Because FogBugz really is more than a bug tracking system–it now goes from documentation/requirements gathering all the way to estimation to bug tracking to customer support. FogBugz appears to be a tool that is used in almost the entire software development life cycle–hey look, it’s RUP lite.

But I’ve never used version 6, and I’m sure there are significant wins. My other concerns are that the software estimation parts sound like they’re 1.0 features (just from the words he used–at best 1.1, since they used FogBugz 6 to develop FogBugz 6); I’d rather wait until those features are more settled. I’m sure you could use just the bug tracking system, and they’ve certainly taken the ‘Web 2.0’/instant response/make it feel like a desktop application ideas to heart. The cost is another concern; while minimal, it is greater than $0. On many projects I’m on, just using any bug tracker, let alone an entire software development tool, is difficult, and you can’t beat stealth bug tracker installs. (I’m on record as saying “I have to say that I think the open source solutions (Bugzilla and PHPBT) are going to eat the commercial solutions’ lunch for small projects, because they are a cheaper substitute with all the required attributes”, just as an FYI.)

One thing that really surprised me at the talk is how many folks were there evaluating FogBugz as opposed to seeing Joel speak. Around one third of the audience had used or was using FogBugz. Joel opened up the floor to questions, and every single one, except one of mine, was about features or flaws in FogBugz. I mean, this is the guy who wrote the Joel Test, and no one took the opportunity to ask him general development questions, even though he said he’d field them. I don’t know what the deal was.

Will I give FogBugz a try? Not right now. But I’ll keep an eye on what they’re doing.
[tags]software development tools, bug tracking, fogbugz, joelonsoftware[/tags]

Interesting posts about web application performance

A while ago, the good folks over at the YUI blog posted What the 80/20 Rule Tells Us about Reducing HTTP Requests. I bookmarked it, but wanted to point it out to other folks–it’s a nice bit of research, with numbers and graphs and all that good stuff. It opened my eyes to various non-intuitive aspects of web application performance. The whole series is a nice read; part 1 is linked above, and here’s part 2, part 3 and part 4.