Monthly Archives: March 2008

Boulder Facebook Developer Garage Notes

So, on Thursday I went to the Facebook developers garage, kindly arranged by Kevin Cawley. It was held at ‘the bunker’, secret home of TechStars (we didn’t even get the precise address until the afternoon of the event. I could tell you where it is, but then I’d have to kill you). It was an interesting evening; it started off, as per usual with tech events, with free pizza and pop and beer (folks! if you want a job where you can get free pizza regularly, consider software). Then there were six presentations by local startups. A number of the companies were ones I’d seen before at the New Tech Meetup, and that was what the garage felt like: rather than a deep look at some of the technical or business problems that Facebook applications face, it was a quick overview of how the platform was being used by existing startups. Then there was a video conference call with Dave Morin, the platform development manager for Facebook.

As far as demographics, the garage had about 40-50 people there, of which 4-5 were women. One of the presenters asked how many people had apps running on Facebook; I’d say it was about 40%.

While I’ve succeeded in writing a “hello world” application for Facebook, I am by no means a Facebook developer. I attended the garage because the platform is exciting new technology. There are a number of interesting applications that have a chicken and egg problem: the application is cool if there are a lot of people using it, but people will only use it if it is cool. For example, any recommendation engine or classified listing service (incidentally, I started off the night talking to the Needzilla folks, who are hoping to build a next generation Craigslist). Facebook, by making finding and adding applications delightfully easy, helps developers jump the chicken and egg problem. Of course, it’s just as easy to uninstall the application, so it needs to be pretty compelling. And that is why I went to the garage (no, free pizza had nothing to do with it!). Also, note that this meeting was streamed on Ustream and will hopefully be available on one of the video sharing sites sometime soon. I’ve asked to be notified and will post the link when it is available.

The first speaker was Charlie Henderson of MapBuzz. They’re a site that aims to make it easy for users to build maps and to create content and community around them. They partner with organizations such as the Colorado Mountain Club and NTEN to build maps and communities. They’re building a number of Facebook apps, mostly using iframes rather than FBML. They had some difficulties porting their existing application to work within Facebook, especially around bi-directional communication (if you add a marker or a note to a map within Facebook, that content will appear on the Mapbuzz site, and vice versa). They also had some difficulties with FBJS. I couldn’t find their widget on Facebook as of this writing.

Lijit is a widget you can drop into your blog that does search better, as it draws on more structured information, including trusted sources. They use the Facebook API as another fairly structured source of information to search. They insisted they complied with the Facebook TOS and are starting to look at OpenSocial as a data source as well.

Fuser showed a widget that displayed Myspace notifications on Facebook. Again, they used the iframe technology, and they actually used a Java applet! (I’m happy that someone is using a Java applet for something more complex than a rent vs buy calculator.) Since Myspace doesn’t have an API, they ended up having that applet do some screen scraping.

Useful Networks had the most technically impressive presentation, though not because of its Facebook integration. This company, part of Liberty Media, is looking to become a platform for cell phone applications with location services (a “location services aggregator”). Eventually, they want to help developers onboard applications to the carriers, but right now they’re just trying to build some compelling applications of their own. Useful Networks has built an application called sniff (live in the Nordic countries, almost live in the USA) which lets you find friends via their mobile phones and plots locations on a map in Facebook. They had a number of notification options (‘tell me when someone finds me’, etc.) and an option to go invisible–the emphasis on letting people know about their choices surrounding sensitive information like location was admirable. The presenter said that they had been whittling away features to just the minimum for a while. Most importantly, they had a business model–each ‘sniff’ was 50 cents (I’m sure some of that went to the carrier). 80% of the people using sniff did it via their mobile phone, while the other 20% did it from the web. They used GPS when possible, but often cell [tower] id was close enough. And the US beta was live on two carriers. Someone suggested integrating with Fire Eagle for another source of geo information.

Stan James, founder of Lijit, then demoed a Facebook app. I’ve been pretty religious about not installing Facebook applications, but this is one that seemed cool enough to do so. It’s called SongTales and it lets you share songs with friends and attach a small story to each one. As Stan said, we all like songs that are pretty objectively awful because of the experiences they can evoke. As far as technical difficulties, he mentioned that it was harder to get music to play than to just embed a YouTube video (a probable violation of YouTube’s TOS).

David Henderson spoke about Socialeyes. Actually, first he spoke about Social Media, a company he worked for in Palo Alto that aimed to create AdSense for social networks (watch out, Google!) and gained its knowledge of Facebook applications by creating a number of them. One of those was Appsaholic, which lets developers track usage of their apps. He showed some pretty graphs of application uptake on Facebook, and made some comments about how the rapid uptake created viral customer interaction that was a dream for marketers. His new venture is an attempt to build Nielsen-style ratings for social networks, measuring engagement and social influence of users as well as application distribution.

After the presenters, Dave Morin Skyped in. It was a pretty amazing display of cheap teleconferencing (though people did have to ask their questions to someone in Boulder, who typed them to Dave). He said some nice things about Boulder, and then talked a small bit about how the Facebook platform was going to improve. Actually, I think he just said that it will improve, and that Facebook loves developers (sorry, no yelling). Dave shared an anecdote about a girl who built a Facebook application which had 300K users and sold it in three weeks. Then people asked questions.

One person asked about how to make applications stickier (so you don’t have 50K users add your app, then 50K users delete it the next week). No good answer there (that I caught).

Another asked about the SEO announcement, which Dave didn’t know about because he had been in a roundtable with developers all day (and which I couldn’t find any reference to on the internet–sorry).

Someone else asked about changes to the Facebook TOS, in particular letting users grant greater data storage rights to applications (apparently apps right now can only store 24 hours of data). Dave answered that what Facebook currently allowed was in line with the market standards, and that as those changed, Facebook would.

Somebody asked if authenticating mobile Facebook apps was coming, but I didn’t catch/understand the answer.

Someone else asked about the ‘daily user interactions’, and what that meant. After some hemming and hawing about the proprietary nature of that calculation (to prevent gaming the numbers), Dave answered that the number is not interactions per user per day, but the number of unique users who interact with your application over the last day. He also talked about what went into the ‘interaction’ calculation.

Someone asked about monetization solutions. Dave mentioned that the Facebook team was working on an e-commerce API which he hoped would spawn an entire new set of applications. It will be released in the next couple of quarters. (This doesn’t really help the people building apps now, though).

Another person asked about the pluses and minuses of FBML versus the iframe. Dave said that FBML was “super native and super fast”, whereas the iframe solution lets you run the application on your own servers, and is more standard.

Yet another person asked about whether they should release an app in beta, or get it a bit more polished. Dave said that his experience was that the best option was always to get new features into the hands of users as soon as possible, and that Facebook was still releasing new code once a week.

Someone asked about Facebook creating a ‘preferred applications’ repository and Dave said that there were no plans to do that.

Somebody asked if Facebook was interested in increasing integration between FBJS and Flash, but Dave didn’t quite catch the question, and answered that of course Facebook was happy to have applications developed in Flash.

Dave then signed off. It was nice to be able to talk to someone so in touch with the Facebook platform, but I felt that he danced around some questions. And having someone have to type the questions meant that it was hard to refine your question or have him clarify what you were asking. FYI, I may not have understood all the questions and answers that happened in Dave’s talk because I’m so new to Facebook and the jargon of the platform.

After that, things were wrapping up. Kevin plugged allfacebook. People stood up and announced whether they were looking for work or looking for Facebook developers. By my count, 5 folks said they were looking for developers, including the developer of the skipass application and mezmo.com; no developers said they were looking for work (hey look, kids! Facebook app dev is the new Ruby on Rails!).

Was this worth my time? Yes, definitely. Good friendly vibe and people, and it opened up my eyes to the wide variety of apps being built for Facebook. The presentations also drove home how you could build a really professional application on top of the API.

Would I go again? Probably not. I would prefer a more BJUG-style presentation, where one can really dive into some of the technical issues (scaling, bi-directional communication, iframe vs FBML vs Flash) that were only briefly touched upon.

[tags]facebook[/tags]

GWT Mini Pattern: Make Your Links Friendly To All Users

Sometimes GWT widgets are used to provide functionality, like a form submission, that is easily replicable in traditional HTML. Other times, it’s not so easy to create a user interface that degrades well. Luckily, the self-contained nature of widgets means that crucial functionality is typically still available to users with javascript disabled. However, you can take steps to make your site even friendlier to users who have javascript disabled (for whatever reason).

For the first case, where there is a viable HTML analog, the solution is to have your widget home span look something like this:

<span id="requestinfocuspan"><span class="clickable-link" id="requestinfocuspanplaceholder"><a href="/ContactUs.do?ContactUsType=More-info-about-a-listing">Request More Info</a></span></span>

Now, if the user has javascript disabled and they click on the href, they are sent to the appropriate HTML form. However, if the user has javascript enabled, the widget looks for a span to enable, and if it sees the “requestinfocuspan” span, calls code like this:

public static Label buildClickableLink(String span, ClickListener listener) {
    // Element and DOM come from the com.google.gwt.user.client package.
    Element placeHolderLink = DOM.getElementById(span + "placeholder");
    String contentsOfPlaceHolder = "Content unavailable";
    Element placeHolderSpan = DOM.getElementById(span);
    if (placeHolderSpan != null && placeHolderLink != null) {
        // Grab the text of the static link, then remove the link from the DOM.
        contentsOfPlaceHolder = DOM.getInnerText(placeHolderLink);
        DOM.removeChild(placeHolderSpan, placeHolderLink);
    }
    // Replace the link with a label carrying the same text, styled to look
    // like a link, with the GWT click listener attached.
    Label clickme = new Label(contentsOfPlaceHolder);
    clickme.setStyleName(Constants.CLICKABLE_LINK_STYLE_NAME);
    clickme.addStyleName(Constants.INLINE_STYLE_NAME);
    clickme.addClickListener(listener);
    return clickme;
}
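
For context, here’s a minimal sketch of how a GWT entry point might wire this up on page load. The module name and the onClick behavior are hypothetical; only buildClickableLink comes from the code above:

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.DOM;
import com.google.gwt.user.client.ui.ClickListener;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;

public class RequestInfoModule implements EntryPoint {
    public void onModuleLoad() {
        // If this page doesn't contain the placeholder span, do nothing;
        // the plain HTML link keeps working for everyone else.
        if (DOM.getElementById("requestinfocuspan") == null) {
            return;
        }
        // buildClickableLink is the method shown above, assumed to be in this class.
        Label link = buildClickableLink("requestinfocuspan", new ClickListener() {
            public void onClick(Widget sender) {
                // Hypothetical: show the GWT request-info form in place,
                // rather than navigating to the HTML fallback page.
            }
        });
        RootPanel.get("requestinfocuspan").add(link);
    }
}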

This code takes the contents of “requestinfocuspan”, grabs the text of the span inside it, and replaces the whole of the contents with a label. That label has the same text and a click listener attached, so there’s no bouncing of text. If a designer wants to change the text of this link, they can do so by modifying the JSP without touching (or redeploying) GWT components.

The second case mentioned above, where there is no easy way to build an HTML interface for GWT functionality, can be handled by placing a link to a page explaining to the user that javascript is required, and possibly how to turn it on. An easy way to show that link to non javascript users is to use the noscript tag:

<script type="text/javascript">document.write("<a href='/psaf&city=Boulder'>Advanced Search Form</a>");</script>
<noscript><a href='/HomePage.do?state=javascriptNeeded'>Advanced Search Form</a></noscript>

This is a viable option here because there is no need to run the ‘buildClickableLink’ method. We don’t really need a click listener, because we have a plain href to a page that the GWT component will run on.

The other option is to use the placeholder span method discussed above along with the buildClickableLink method:

<span id="searchspan"><span class="clickable-link" id="searchspanplaceholder"><a href="/HomePage.do?state=javascriptNeeded">Advanced Search</a></span></span>

While accessibility to non javascript users is not built into GWT (although it might be on the way), you can take steps to make your widgets more friendly to these users. This is especially true if you’re merely ‘ajaxifying’ existing functionality. In addition, users with javascript enabled get a better user experience if you use some of the above methods because the text on the page doesn’t ‘jump around’ when GWT code is executed after the page is loaded.


On writing a book

I saw this great post on the nuts and bolts of writing a book on the BJUG mailing list. Well worth a read.

I have sympathy in particular with his ‘writing schedule’ comment. I started to put together a book about software contracting with some friends, and it was hard to keep things moving. We ended up not moving forward with the book and instead placed the content on a blog (which has proven hard to update as well).

[tags]authorship,inertia[/tags]

Finding bugs from software history: a talk I attended at CU

I went to an absolutely fascinating talk today at CU. Sunghun Kim gave a talk titled “Predicting Bugs by Analyzing Software History”. The basic premise is that you can look at historical unstructured information (emails, bug reports, checkin comments), and if you can identify the bugs related to that unstructured information, you can use it to find other bugs.

He talked about two different methods to ‘find other bugs’. The first is change classification. Based on a large number of factors, including attributes of the program text, complexity metrics, and source control metadata like time of checkin (don’t check code in on Friday!) and committing developer, he was able to identify whether or not a given checkin introduced a bug. (A question was asked about looking at changes at the token level, and he said that would be an interesting place for further research.) This was 94% precise (if the system said a bug was introduced, there was a 94% chance it was) and had 70% recall (it missed 30% of real bugs introduced, but caught 70% of them).
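
To keep those two terms straight, here is the arithmetic behind them, as a toy sketch with made-up counts (these are not Kim’s data):

// Toy sketch: precision and recall from raw classification counts.
// The counts in main() are invented for illustration only.
public class Metrics {
    public static double precision(int truePositives, int falsePositives) {
        // Of the checkins flagged as bug-introducing, how many really were?
        return (double) truePositives / (truePositives + falsePositives);
    }

    public static double recall(int truePositives, int falseNegatives) {
        // Of the checkins that really introduced bugs, how many were flagged?
        return (double) truePositives / (truePositives + falseNegatives);
    }

    public static void main(String[] args) {
        System.out.println(precision(70, 4)); // ~0.95
        System.out.println(recall(70, 30));   // 0.70
    }
}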

They [he collaborates a lot] were able to judge the probability that a future change would introduce a bug by feeding all the attributes I mention above, for known bugs, to a machine learning program. Kim said there were ways of automating some of the collection of this data, but I can imagine that the quality of bug prediction depends massively on the ability to find which bugs were introduced when, and to tie known bugs to those attributes. The numbers I quote above were based on research from a number of open source projects like Apache and Mozilla, and varied quite a bit. Someone asked about the difference in numbers, and he said that commit habits were a large cause of that variation. Basically, the more targeted commits were (one file instead of five, etc.), the higher the precision that could be attained. Kim also mentioned that this type of change classification is similar to using a GPS for directions around a city–the more unfamiliar you are with the code, the more useful it would be. He mentioned that Apple and Yahoo! were using this system in their software development.

The other interesting concept he talked about was a bug cache. If you’ve developed for any length of time on a given project, you know there are places developers fear to tread. Whether it is complicated logic, fragile interfaces with legacy systems or just plain fugly code, there are sections of any codebase where change is a bit scary. Kim talked about the Windows Server 2003 team maintaining a list of such modules, so that anytime anyone changed something on that list, more review than normal would take place. This list is what he’s trying to replicate in an automated fashion.

If you place files in a cache when they are identified as having a bug, and also place other files that are close in checkin time to those files, you can build a cache of files to review closely. After about 50-100 files for the 200-file Apache project, that cache of 20 files (10%) contained a significant portion of future bugs. Across several open source projects, the range of bugs contained in the cache was 73-95%. He also talked about using this at the method level as opposed to the file level.
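
Here is a toy sketch of that caching idea. This is my reconstruction, not Kim’s actual algorithm; the 10% sizing and the least-recently-implicated eviction policy are assumptions:

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy bug cache: files recently implicated in bugs deserve extra review.
public class BugCache {
    private final int capacity;
    private final LinkedHashSet<String> files = new LinkedHashSet<String>();

    public BugCache(int projectFileCount) {
        // Size the cache at roughly 10% of the project's files.
        this.capacity = Math.max(1, projectFileCount / 10);
    }

    // When a bug implicates a file, cache it along with files committed
    // close to it in time ("closeness" here is just co-occurrence in checkins).
    public void recordBug(String buggyFile, List<String> nearbyCheckinFiles) {
        add(buggyFile);
        for (String f : nearbyCheckinFiles) {
            add(f);
        }
    }

    private void add(String file) {
        files.remove(file); // re-adding moves the file to the newest position
        files.add(file);
        if (files.size() > capacity) {
            // Evict the file least recently implicated in a bug.
            files.remove(files.iterator().next());
        }
    }

    public Set<String> filesToReviewClosely() {
        return files;
    }
}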

In both these cases, machine learning that happens on one project is not very useful for others. (When they trained on the Mozilla codebase and then turned the system loose on the Eclipse codebase, it wasn’t good at finding bugs.) Kim speculated that this was due to project and personal coding styles (some people are from Mars, others write buffer overflow bugs), since the machine trained on Apache 1.3 was OK at finding bugs in the Apache 2.0 codebase.

Kim talked about several other interesting projects that he has been part of, including the ‘Memory of Similar Bugs’, which found that up to 40% of bugs are recurring patterns, and ReCrash, a probe that monitors an application for crash conditions and, when it finds one, automatically writes a unit test that can reproduce the crash. How cool is that? The cost, of course, is the high overhead (a 13-64% increase) that ReCrash imposes to monitor the application.

This was so fascinating to me because anything we can do to automate the bug finding process lets us build better software. There are always data input problems (GIGO still rules), but it seemed like Kim had found some ways around that, especially when the developers were good about comments on checkin. I’m all for spending more time building cool features and better business logic. All in all, a great talk about an interesting topic.

[tags]the bug in the machine, commit early and often[/tags]

Squid Notes: A fine web accelerator

I recently placed Squid in front of an Apache/Tomcat based web application to serve as a web accelerator. We could have used Apache’s mod_proxy, but Squid has the ability to federate, and that was considered valuable for future growth. (Plus, Wikipedia uses Squid, and it has worked out pretty well for them so far.) I didn’t find a whole lot of other options–Varnish looks good, but wasn’t quite rich enough in documentation and features.

However, when the application generates a page for a user who is logged in, the content can be different than if the exact same URL is visited by a robot or a user who is not signed in. It’s easy to tell if a user is signed in, because they send cookies. What was not intuitive was how to tell Squid that pages for logged-in users (matching a certain header or a certain URL pattern) should always be referred to Tomcat. In fact, I asked about this on the mailing list, and it doesn’t seem to be possible at all. Squid caches objects at the page level, and can’t cache just pieces of a page (as I believe OSCache, among others, can).

I compromised by deleting the cached object (a page, for example) whenever a logged-in user visits it. This forces Squid to go back to the origin server, guaranteeing that the logged-in user gets the correct version. Then, if a non-logged-in user requests the page, Squid again goes back to the origin server (since it no longer has anything in its cache). If a second non-logged-in user requests the same page, Squid serves it out of cache. It’s not the best solution, but it works. And non-logged-in users are such a high proportion of the traffic that Squid is able to serve a fair number of pages without touching the application.
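
For illustration, here is roughly how an application could evict a page from Squid. This is a sketch, not our production code: it assumes you have enabled the PURGE method in squid.conf with an ACL, and it uses a raw socket because java.net.HttpURLConnection refuses non-standard HTTP methods like PURGE. The host and path are hypothetical:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

public class SquidPurger {
    // Sends "PURGE /path HTTP/1.0" to Squid and returns the status line:
    // typically "HTTP/1.0 200 OK" if purged, a 404 if the object wasn't cached.
    public static String purge(String squidHost, int port, String path) throws Exception {
        Socket socket = new Socket(squidHost, port);
        try {
            Writer out = new OutputStreamWriter(socket.getOutputStream(), "US-ASCII");
            out.write("PURGE " + path + " HTTP/1.0\r\n");
            out.write("Host: " + squidHost + "\r\n\r\n");
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), "US-ASCII"));
            return in.readLine();
        } finally {
            socket.close();
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical host and path.
        System.out.println(purge("localhost", 80, "/some/cached/page"));
    }
}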

Overall I found Squid to be pretty good–even with the above workaround it was able to take a substantial amount of traffic off the main application. Note that Squid was designed to be a forward proxy (for example, a proxy at an ISP caching commonly requested pages for that ISP’s users), so if you want to use it as a web accelerator (in front of your website, to increase the speed of pages you serve), you have to ignore a lot of the documentation. The FAQ is a must read, especially the section on reverse proxying and the logs section.

[tags]proxy, squid,increasing webapplication performance[/tags]

GIS Geeks Unite

Or at least come to Boulder and have a beer.  The FRUGOS (Front Range Users Group of GIS Open Source?) folks are meeting at the Boulder Beer Company tomorrow night.  More info.  I’m thinking of attending, but the Boulder Denver New Tech Meetup is happening at the same time in a different place, so I’m torn.