
Musings on php development as a career path

I was at a TedXBoulder preparty last night.  Ran into some really interesting folks–the usual tech folks, but also Charles, a high flying audio engineer (we’re talking Wembley stadium), Emily, a money manager bizdev lady ($30 million, minimum, please) and Donna, an engineer on leave from a big aerospace firm who is interested in entrepreneurialism.  Really looking forward to the talks on Saturday (tickets apparently still available).  I also ran into an old friend, roommate and colleague.

We chatted about a wide variety of topics, but one stuck out in my mind.  His brother is getting back into software development, and is starting out doing a lot of php.  Fair enough–it’s a great language, I’ve done a fair bit of it, and one can write good, maintainable, fast code with it.  But last night, we agreed that if you don’t want to be competing against, how do I say this politely, the lowest common denominator, it is wise to develop your software dev skills along one of three paths.  I thought it’d make a good blog post.  As I see it, the three options are

  • a compiled language–C#, Java, c, erlang: these tend to be used by large companies
  • a sexy dynamic language–Ruby, javascript (especially server side), groovy, python, clojure, lisp: my feeling is that these are more used by startups
  • particular packages in php–magento, drupal: these are often more configuration than coding, but can be customized to produce astonishingly powerful applications

The end goal, to be clear, is not to avoid php, but just to avoid competing against developers who are likely to undercut you.  For example, I knew of someone, in the US, who was doing contract php work for $18/hr a few years ago.  I just don’t think that’s someone with whom you want to be competing for business (I certainly don’t!).  Following one of the above career development paths will help you avoid that.  I personally have followed the first and third paths, with some dabbling in the second.

One customer’s experience with offshoring

I’ve been thinking about offshoring for some time, but haven’t had many interactions with someone who really used an offshore developer. Recently, I ran into a client who did try to develop some software using an offshore developer, and was able to ask him some questions about the experience.  (Disclaimer: this client owns the dating site I blogged about recently).  Plus, outsourcing/freelancing is on my mind, due to the pick.im launch.
——————-

Dan: What did you ask your offshore developer to do? Did they succeed in that?

Michael:  We asked that a dating and friendship site be created for adults with mental illness and that it mimic sites similar to Match.com. We were assured that this would be no problem, but that was far from how things transpired. We did end up with a finished product, but I wouldn’t say it was a success. Nightmare would define things more accurately.

D:  How exactly was the finished product deficient? Did it not perform well? Did it not meet your expectations? Did it not do what the developer said it would do? Were there unfixed bugs? Did it look bad?

M: There were certain features that never worked, and the developer kept maintaining that everything worked just fine.  No, it did not meet the expectation that we would have a fully operational site with the features we had requested along with our bid that was accepted.  He also promised support for his work, but when we started experiencing bugs he would only fix for additional fees.  We didn’t know anybody else and really felt trapped.  … The overall aesthetics were fine, but the internal workings had issues.

D: What were your motivations for using them? What was the cost savings?

M: Money was the only motivation. I have to say that the cost savings were significant up front. However, we were constantly investing money to fix issues that plagued the site from day one. So, we probably ended up spending more, because we eventually scrapped the website and started over.

D: Did the offshore developer build the ASP website we recently scrapped (for skadate), or was there a previous iteration?

M: No, that was the site that the offshore developer originally built.  However, we found a local guy who made some modifications and was able to fix many of the issues.

D: How did you manage them? How were the requirements documented?

M: This was very loose and managed through email. I have to say, we were very ignorant and really should have sought out the advice of an expert before proceeding with the project. We knew what we wanted the site to do, but we really had no clue what we were doing…shame on us. What  can I say though, using an offshore freelancer sounded pretty good for a social worker trying to make things happen on a shoestring budget.

D: So, did you have a master checklist of requirements, or did you just kind of email the developer feature by feature?

M: There was a list of basic features that was provided when we posted the job, but I wouldn’t call it a master list.  That would make it sound like we were prepared!  We didn’t even know what language the site should be written in.  We did not know that asp was kind of a dinosaur language…

D: How did you find them?

M: getafreelancer.com

D: How were they lacking, and/or not lacking?

M: Let me try to recap a bit. Things started out nicely. There was a lot of email communication and even some discussion over skype (although the language barrier made this difficult, so we primarily stuck with email). The great communication didn’t last though. The email responses slowed to a trickle and then stopped altogether.

Fortunately, the payment funds had been deposited into an escrow account, and we refused earlier requests to release part of the funds until the work had been completed. This was a recommendation by the freelance site, and were we glad we at least had this leverage. When we finally resumed communication with the developer he threatened to stop work on the site until we released the funds. We were so exhausted that we called his bluff, and told him to go ahead and keep the site. To make a long story short, he completed the project, and we went our separate ways. Oh, and the project took twice as long as he had agreed.

Retelling this is making my blood pressure rise, so I think I’ll leave it there…

D: What would you do differently next time to increase your chances of success?

M: I would not go offshore, and to be honest, I probably wouldn’t choose someone who I couldn’t drive over to their house. I’d also make sure I sucked it up and paid for someone with development knowledge who could  help manage the project.

D: Do you think that your project failed primarily because you went offshore, or primarily because you were a relative novice at managing a technical project, or some combination of the two?

M: I’m sure a bit of both, but it definitely would have helped if we would have had someone with development expertise that could have managed the project for us.

——————-

Wow.  What a nightmare!  (For a contrasting experience, read this excellent article about using RentACoder.)

So, as I alluded to in my last couple of questions, I think that this project would have been hard for any developer, anywhere.  Obstacles to overcome include:

  • shoestring budget
  • lack of technical knowledge
  • missing/unknown requirements
  • lack of project management.  It sounds like the client wasn’t able to do this, and the developer couldn’t or didn’t.

However, I think the issues above were exacerbated by offshoring because of

  • language and timezone barriers
  • missing trust building opportunities (face to face meetings, local get togethers)
  • no real reputation risk for the developer if his work was not up to snuff (sure, his rating on a site might go down, but that’s different than running into someone you screwed at a networking event)

Anyway, a fascinating look at how offshore development can go awry.  Try to think about the total costs, rather than just the hourly wage.

Finding supported GWT user.agent values

The GWT compile process has been taking longer and longer, as I’ve moved from 1.0 to 2.0, because they keep adding optimizations and functionality.  You could deal with this in a number of ways.  You could write less GWT.  You could buy a faster computer.

More realistically, you could live in your IDE, and run in development (née hosted) mode.  However, sometimes you just have to test with javascript.  The external browser development environment can be tough to set up correctly, especially if your application depends on external resources, and you can run into weird bugs.  I’m currently dealing with an issue where I depend on an external library that consistently throws an assertion exception in development mode but runs fine in production mode.

If I am compiling to javascript repeatedly, I often compile for just one user-agent, which saves about 80% of the compile time.  When the code is further along, I compile for the rest of the browsers.  This process is not flawless, as I am currently debugging a cross browser issue (something that works fine on FF doesn’t work on Safari), but I probably wouldn’t have tested on Safari until after I was through the lion’s share of development anyway.

The way to restrict the compile to one or more browsers is to put this line in your module (.gwt.xml) file:

<set-property name="user.agent" value="safari,ie6,gecko1_8" />

How do you find valid values?  Via this thread, I found that you look in UserAgent.gwt.xml, part of the gwt-user.jar file.  This has the javascript code that looks at navigator.userAgent.  It is not as fine grained as the ubiquitous browser detect script, but shows you what different browsers are known to GWT.
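One convenient way to use this (a common pattern, not something from that thread) is to keep a separate development-only module that inherits your real module and pins user.agent, so the production module stays untouched.  The module and class names below are hypothetical:

```xml
<!-- MyAppDev.gwt.xml: a hypothetical dev-only module -->
<module rename-to="myapp">
    <!-- pull in the real application module -->
    <inherits name="com.example.MyApp" />
    <!-- compile only the Firefox permutation during development -->
    <set-property name="user.agent" value="gecko1_8" />
</module>
```

Point your development launch config at this module, and the full permutation set only gets built when you compile the real module for deployment.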

For other tips on speeding up the compile process, check out this series of posts.

[tags]gwt,user.agent,gwt compiler, slow as molasses[/tags]

Palm Pre ‘Hello World’ error on Windows

If you’re running through the Palm WebOS ‘Hello World’ application on Windows (I use XP) with the command line tools, you’ll want to change the line this.controller.pushScene("first"); to this.controller.pushScene("First");, as outlined in this forum post.  Apparently, case matters somewhere in there, and the palm command line tools generate upper case view and assistant file names.

This oversight is a bit embarrassing/peculiar, since the ‘Hello World’ example is often the first thing developers turn to when learning a new platform, and it was reported in July and acknowledged by a Palm employee that same month. I can only surmise that this works fine on other platforms (which is definitely possible, given that the palm tools don’t lowercase the first letter of the scene, as expected, on Windows).

I’m not sure about whether this affects the Eclipse plugin. This is using the 1.3.1-314 SDK.

[tags]palm pre,hello world[/tags]

Using APIs to move time entries from FreshBooks to Harvest

I recently was working for a client who has their own time tracking system–they use Harvest.  They want me to enter time that I work for them into that system–they want more insight into my time use than a monthly invoice provides. However, I still use my own invoicing system, FreshBooks (more on that choice here) and will need to invoice them as well.  Before the days when APIs were common, or if either of these sites did not have an API, I would have had three, equally unsavory, choices:

  • Convince the client to use my system or at least access it for whatever data they needed
  • Send reports (spreadsheets) to the client from my system and let them process it
  • Enter my time in both places.  This option would have won, as I don’t like to inconvenience people who write me checks.

Luckily, both Harvest and FreshBooks provide APIs for time tracking (Harvest doco here, FreshBooks doco here). I was surprised at how similar the time tracking data formats were.  With the combination of curl, gnu date, sed, Perl and bash, I was able to write a small script (~80 lines) that

  • pulled down my time data for this client, for this week, from FreshBooks (note you have to enable API access to your account for this to work)
  • mapped it from the FreshBooks format to the Harvest format
  • then posted it to Harvest.
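The middle step is little more than renaming fields and translating project and task ids.  A minimal sketch in Python (all field names and id mappings here are made-up illustrations, not the actual FreshBooks or Harvest schemas):

```python
# Hypothetical id mappings; in practice these came from setting up
# FreshBooks projects/tasks to mirror the Harvest ones.
PROJECT_MAP = {"fb-proj-1": "harvest-proj-9"}
TASK_MAP = {"fb-task-2": "harvest-task-7"}

def to_harvest(fb_entry):
    """Map one FreshBooks-style time entry dict to a Harvest-style one."""
    return {
        "project_id": PROJECT_MAP[fb_entry["project_id"]],
        "task_id": TASK_MAP[fb_entry["task_id"]],
        "spent_at": fb_entry["date"],   # both formats use YYYY-MM-DD dates
        "hours": fb_entry["hours"],
        "notes": fb_entry["notes"],
    }

entry = {"project_id": "fb-proj-1", "task_id": "fb-task-2",
         "date": "2010-01-04", "hours": 1.5, "notes": "API work"}
harvest_entry = to_harvest(entry)
```

The real script did this same translation with sed and Perl over the XML payloads, but the shape of the transformation is the same.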

A couple of caveats:

  • I still log in to Harvest to submit my time (I didn’t see a way to submit my time in the API documentation), but it’s a heck of a lot easier to press one button and submit a week’s worth of time than to do double entry.
  • I used similar project and task codes in both systems (or, more accurately, I set up the FreshBooks tasks and projects to map to the Harvest ones, since FreshBooks is what I had control over).  That mapping was probably the most tedious part of writing the script.

You can view my script here, or at least a sanitized version thereof.  It took about an hour and a half to do this. Double entry might have been quicker in the short term, but now I’m not worried about entry mistakes, and submitting my time every week is easy!  I could also have used XSLT to transform from one data format to the other, but they were so similar it was easier to just parse the text.

[tags]getharvest,freshbooks,time tracking, process automation[/tags]

GWT and complex javascript overlays

I remember hearing about javascript overlays and the Google Web Toolkit at the 2008 Google I/O conference, so when I had a chance to use them for a complicated JSON dataset, I thought it would be a good fit.  I use JSON in a number of widgets around a client’s site, mostly because it is the easiest way to have a client be cross site compatible (using JSONP for server communication); the data ranges in size from 13K to 87K.

But as I looked for examples, I didn’t see a whole lot.  There is, of course, the canonical blog post about javascript overlays, but other than that, there’s not much out there.  I searched the gwt google group, but didn’t see anything about complex javascript overlays.

Perhaps I should define what I mean by complex javascript overlays.  I mean JSON objects that have embedded objects and arrays; something like this: var jsonData = [{"id":52,"display":{"desc":"TV","price":125},"store":[1,2,3,4,5]}, {"id":2,"display":{"desc":"VCR","price":25},"store":[1,2,5]}];

The key is to realize that just as you can return a string from a native method, like the canonical blog post does: public final native String getFirstName() /*-{ return this.FirstName; }-*/;, you can also return any other javascript object, including arrays (helpfully available as the JsArray class).  So, for the above JSON example, I have a Display class that looks like this:

package com.mooreds.complexoverlay.client;

import com.google.gwt.core.client.JavaScriptObject;

public class Display extends JavaScriptObject {
    protected Display() {}
    public final native String getDesc() /*-{ return this.desc; }-*/;
    public final native int getPrice() /*-{ return this.price; }-*/;
}

Code for the array ("store") looks like this:

package com.mooreds.complexoverlay.client;

import com.google.gwt.core.client.JsArrayInteger;

public class StoreList extends JsArrayInteger {
    protected StoreList() {}
}
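To tie these together, you need a top-level overlay for each array element.  The class below is my own sketch (the name Item and the parse helper are mine, not from the tarball), with accessors matching the JSON example above; it returns the nested object and array directly, and uses a JSNI eval to turn the JSON text into a JsArray:

```java
package com.mooreds.complexoverlay.client;

import com.google.gwt.core.client.JavaScriptObject;
import com.google.gwt.core.client.JsArray;

// Hypothetical top-level overlay for the jsonData example above.
public class Item extends JavaScriptObject {
    protected Item() {}

    public final native int getId() /*-{ return this.id; }-*/;
    public final native Display getDisplay() /*-{ return this.display; }-*/;
    public final native StoreList getStore() /*-{ return this.store; }-*/;

    // Evaluate the raw JSON text into an array of overlays.
    // eval is fine for trusted, same-origin data; don't use it on untrusted input.
    public static native JsArray<Item> parse(String json) /*-{
        return eval(json);
    }-*/;
}
```

With that in place, jsonData.get(0).getDisplay().getDesc() walks the nested structure without any hand-written parsing.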

Running code that shows you how to parse and display the entire json string (using GWT 1.6.4) is available in this tarball.  I haven't yet tried it, but I imagine that it would be pretty easy to make a JSONP call work in a similar manner to the HTML embedded JSON in this example.

Update June 29: Note that if you want to use any overlay types in collections, that JavaScriptObject, which you must inherit from, has final hashCode and equals methods.  The hashCode method "uses a monotonically increasing counter to assign a hash code to the underlying JavaScript object" and the equals method "returns true if the objects are JavaScript identical (triple-equals)".  Both of these may cause issues with collections such as maps.
[tags]gwt, javascript overlays, google web toolkit, is anyone still using technorati?[/tags]

Sending (and receiving) more than a file with Flex FileReference

The Flex FileReference object makes it very easy to send a file to the server.  I had a situation where I wanted to send some additional data and also get back server output.  This is possible, but not entirely intuitive, so I wanted to document this for others.  In the process of making this work, this post and this post were very helpful to me.

To send more than a file, you use a URLVariables object, like you normally would.  The key is to realize that you also have to set the URLRequest.method to URLRequestMethod.POST, otherwise these variables get lost.  (Makes sense–no one sends files via GET, but it was not obvious to me.)

var request:URLRequest = new URLRequest(url);
var variables:URLVariables = new URLVariables();
variables.docname = docName.text;
request.data = variables;
request.method = URLRequestMethod.POST;
try {
    fileRef.upload(request);
} catch (error:Error) {
    trace("Unable to upload file.");
}

To get any response from the server (like a success message or filename), you have to attach a listener to the DataEvent.UPLOAD_COMPLETE_DATA event, like so:

fileRef.addEventListener(DataEvent.UPLOAD_COMPLETE_DATA, onUploadCompleteData);

...
private function onUploadCompleteData(event:DataEvent):void {
    var myData:String = new String(event.data);
    // do something with your serverside data...
}

[tags]flex,file upload[/tags]

Boulder Facebook Developer Garage Notes

So, on Thursday I went to the Facebook developers garage, kindly arranged by Kevin Cawley. It was held at ‘the bunker’, secret home of TechStars (we didn’t even get the precise address until the afternoon of the event. I could tell you where it is, but then I’d have to kill you). It was an interesting evening; it started off, as per usual with tech events, with free pizza and pop and beer (folks! if you want a job where you can get free pizza regularly, consider software). Then, there were six presentations by local startups. A number of the companies were ones that I’d seen before at the New Tech Meetup, and that was what the garage felt like. Rather than a deep look at some of the technical or business problems that Facebook applications face, it was a quick overview of how the platform was being used by existing startups. Then there was a video conference call with Dave Morin, the platform development manager for Facebook.

As far as demographics, the garage had about 40-50 people there, of which 4-5 were women. One of the presenters asked how many people had apps running on Facebook; I’d say it was about 40%.

While I’ve succeeded in writing a “hello world” application for Facebook, I am by no means a Facebook developer. I attended the garage because the platform is exciting new technology. There are a number of interesting applications that have a chicken and egg problem: the application is cool if there are a lot of people using it, but people will only use it if it is cool. For example, any recommendation engine, or classified listing service (incidentally, I started off the night talking to the Needzilla folks, who are hoping to build a next generation Craigslist). Facebook, by making finding and adding applications delightfully easy, helps developers jump the chicken and egg problem. Of course, it’s just as easy to uninstall the application, so it needs to be pretty compelling. And that is why I went to the garage (no, free pizza had nothing to do with it!). Also, note that this meeting was streamed on Ustream and will hopefully be available on one of the video sharing sites sometime soon. I’ve asked to be notified and will post the link when it is available.

The first speaker was Charlie Henderson of MapBuzz. They’re a site looking to make it easier for users to build maps and to create content and community around them. They partner with organizations such as the Colorado Mountain Club and NTEN to build maps and communities. They’re building a number of Facebook apps, mostly using iframes rather than FBML. They had some difficulties porting their existing application to work within Facebook, especially around bi-directional communication (if you add a marker or a note to a map within Facebook, that content will appear on the Mapbuzz site, and vice versa). They had some difficulties with FBJS. I couldn’t find their widget on Facebook as of this time.

Lijit is a widget you can drop in your blog that does search better, as it draws on more structured information, including trusted sources. They use the Facebook API as another source of information to search that is fairly structured. They insisted they complied with the Facebook TOS and are starting to look at OpenSocial as a data source as well.

Fuser showed a widget that displayed Myspace notifications on Facebook. Again, they used the iframe technology, and actually used a java applet! (I’m happy that someone is using a java applet for something more complex than a rent vs buy calculator). Since Myspace doesn’t have an API, they ended up having that applet do some screen scraping.

Useful Networks had the most technically impressive presentation, though not because of its Facebook integration. This company, part of Liberty Media, is looking to become a platform for cell phone applications with location services (a “location services aggregator”). Eventually, they want to help developers onboard applications to the carriers, but right now they’re just trying to build some compelling applications of their own. Useful Networks has built an application called sniff (live in the Nordic countries, almost live in the USA) which lets you find friends via their mobile phones and plots locations on a map in Facebook. They had a number of notification options (‘tell me when someone finds me’, etc) and an option to go invisible–the emphasis on letting people know about their choices surrounding sensitive information like location was admirable. The presenter said that they had been whittling away features to just the minimum for a while. Most importantly, they had a business model–each ‘sniff’ was 50 cents (I’m sure some of that went to the carrier). 80% of the people using sniff did it via their mobile phone, while the other 20% did it from the web. They used GPS when possible, but often cell [tower] id was close enough. And the US beta was live on two carriers. Someone suggested integrating with Fire Eagle for another source of geo information.

Stan James, founder of Lijit, then demoed a Facebook app. I’ve been pretty religious about not installing Facebook applications, but this is one that seemed cool enough to do so. It’s called SongTales and it lets you share songs with friends and attach a small story to each one. As Stan said, we all like songs that are pretty objectively awful because of the experiences they can evoke. As far as technical difficulties, he mentioned that it was harder to get music to play than to just embed a Youtube video (a probable violation of Youtube’s TOS).

David Henderson spoke about Socialeyes. Actually, first he spoke about Social Media, a company he worked for in Palo Alto that aimed to create adsense for social networks (watch out, Google!), and has a knowledge of Facebook applications after creating a number of them. One of those was Appsaholic, that lets a developer track usage of their app. He showed some pretty graphs of application uptake on Facebook, and made some comments about how the rapid uptake created viral customer interaction that was a dream for marketers. His new venture is an attempt to build a Nielsen ratings for social networks, measuring engagement and social influence of users as well as application distribution.

After the presenters Dave Morin skyped in. Pretty amazing display of cheap teleconferencing (though people did have to ask their questions to someone in Boulder, who typed them to Dave). He said some nice things about Boulder, and then talked a small bit about how the Facebook platform was going to improve. Actually, I think he just said that it will improve, and that Facebook loves developers (sorry, no yelling). Dave shared an anecdote about a girl who built a Facebook application which had 300K users and sold it in three weeks. Then people asked questions.

One person asked about how to make applications stickier (so you don’t have 50K users add your app, then 50K users delete it the next week). No good answer there (that I caught).

Another asked about the SEO announcement, which Dave didn’t know about because he had been in a roundtable with developers all day (and I couldn’t find reference to on the internet–sorry).

Someone else asked about changes to the Facebook TOS, in particular letting users grant greater data storage rights to applications (apparently apps right now can only store 24 hours of data). Dave answered that what Facebook currently allowed was in line with the market standards, and that as those changed, Facebook would.

Somebody asked if authenticating mobile Facebook apps was coming, but I didn’t catch/understand the answer.

Someone else asked about the ‘daily user interactions’, and what that meant. After some hemming and hawing about the proprietary nature of that calculation (to prevent gaming the numbers), Dave answered that the number is not interactions per user per day, but the number of unique users who interact with your application over the last day. He also talked about what went into the ‘interaction’ calculation.

Someone asked about monetization solutions. Dave mentioned that the Facebook team was working on an e-commerce API which he hoped would spawn an entire new set of applications. It will be released in the next couple of quarters. (This doesn’t really help the people building apps now, though).

Another person asked about the pluses and minuses of FBML versus the iframe. Dave said that FBML was “super native and super fast”, whereas the iframe solution lets you run the application on your own servers, and is more standard.

Yet another person asked about whether they should release an app in beta, or get it a bit more polished. Dave said that his experience was that the best option was always to get new features into the hands of users as soon as possible, and that Facebook was still releasing new code once a week.

Someone asked about Facebook creating a ‘preferred applications’ repository and Dave said that there were no plans to do that.

Somebody asked if Facebook was interested in increasing integration between FBJS and flash, but Dave didn’t quite get the question correctly, and answered that of course Facebook was happy to have applications developed in flash.

Dave then signed off. It was nice to be able to talk to someone so in touch with the Facebook platform, but I felt that he danced around some questions. And having someone have to type the questions meant that it was hard to refine your question or have him clarify what you were asking. FYI, I may not have understood all the questions and answers that happened in Dave’s talk because I’m so new to Facebook and the jargon of the platform.

After that, things were wrapping up. Kevin plugged allfacebook. People stood up and announced whether they were looking for work, or if they were looking for Facebook developers. By my count, 5 folks said they were looking for developers, including the developer of the skipass application and mezmo.com; no developers said they were looking for work (hey look, kids! Facebook app dev is the new Ruby on Rails!).

Was this worth my time? Yes, definitely. Good friendly vibe and people, and it opened up my eyes to the wide variety of apps being built for Facebook. The presentations also drove home how you could build a really professional application on top of the API.

Would I go again? Probably not. I would prefer a more BJUG style presentation, where one can really dive into some of the technical issues (scaling, bi-directional communication, iframe vs FBML vs Flash) that were only briefly touched upon.

[tags]facebook[/tags]