
“Blending Social Media with Traditional Marketing” Presentation

I have some interest in social media–I obviously blog, but I also have a Twitter account as well as some other ‘social media’ interaction.  It’s also the reason I attend the Boulder New Media Breakfast.  One of the ways I keep up to date on topics of interest is using Google Alerts (it’s not just for Craigslist).  I also use FiltrBox, mostly because I have some friends who work there.

The FiltrBox folks are hosting a one-hour webinar tomorrow titled “Blending Social Media with Traditional Marketing” which looks to be pretty interesting.  From the description, they’ll discuss:

how social media marketing and traditional marketing integrate, how your company can leverage social media, along with best practices, how to listen, monitor, engage and interact, with highlights of specific case studies.

Not sure how much marketing material you might get from signing up, but you can always use a throwaway email address.

Denver BDNT Sep 2009

I helped Brian Timoney, of the Timoney Group, present last night at the Boulder Denver New Tech Meetup.  It was my second experience presenting at BDNT.  (I presented in Jan of 2008 on GWT.)  But it was my first time at BDNT Denver–down at the Tivoli.

Co-presenting is always different from presenting alone.  I actually had a pretty small role in the presentation–I mostly just drove the demo (underwater navigation with Google Earth to visualize sonar coverage data–it’s very cool, but I don’t feel comfortable putting the demo login up–contact me if you want to see it).  I worked with Brian on the presentation format.  Brian has deep knowledge of GIS concepts (he recently ran a workshop at GIS In the Rockies), but he’s used to having more time to cover concepts, and 5 minutes just enforces a certain brevity.

We had a mentor–Josh Fraser of EventVue took some time to run through our presentation with us.  It was really great to have a third party, especially one in tune with BDNT, give us feedback.  As I told Robert Reich last night, we went into the mentoring session with one presentation, and left with an entirely different one.  If you’re thinking about presenting at BDNT, please get a mentor (and you might have to ping the organizers a few times to get one–we did).

If I ever present at BDNT again, I’ll follow the format we arrived at:

  • 15 sec intro
  • 1 min talking about problem
  • 2-3 min demoing software solving problem
  • wrap up
  • contact info on screen during questions

However, one of the difficulties in presenting for 5 minutes to a varied audience is that it is hard to know what knowledge to assume (about, say, GIS).  I talked to some people after the presentation, and it seemed like we assumed our exposition of the problem was better than it actually was.  I guess one way to address that would be to have a 30 sec intro spiel that you could deliver or not deliver based on a show of hands.  Not sure if there are other ways to deal with this issue.

Finally, we were the only formal presentation last night.  It sounds like BDNT Denver isn’t as supported by the community as BDNT Boulder, in terms of participation.  I hope it doesn’t end–so, if you’re in Denver, consider attending this meetup–it’s a great place to network and get excited about tech.  Here’s the calendar of meetups.

Instead of other presentations, we went unconference style, a la BarCamp.  People broke into 5 groups and discussed a tech issue (personalization, structured data, real time web) in detail for 10-15 minutes.  Then someone from each group presented 1-3 minutes.  The twitter feedback seemed pretty favorable.  I like BarCamp formats, and enjoyed the change.  I found that everyone in my group had lots to say about personalization, including some pretty creepy personal stories about advertising on the net.  I believe someone was going to write up the resulting presentations–will link to it when I find it.

[tags]bdnt is the new barcamp?, denver, the timoney group, underwater visualization[/tags]

Amazon AMI search

It’s interesting to me that there is no Amazon Machine Image (AMI) search.  AMIs are virtual machine images that you can run on EC2, Amazon’s cloud computing offering.  Sure, you can browse the list of AMIs, but that doesn’t really help.  Finding an image seems to be haphazard, via a Google search (how I found this alfresco image) or via the community around a product on an image (like this image for pressflow, a high performance drupal).

I’m not the only person with this complaint.  The Amazon EC2 API only provides limited data about various images, but surely some kind of search mechanism wouldn’t be too hard to whip up, if only on the image owner and platform fields.
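To make the idea concrete, here’s a sketch of what client-side filtering on those two fields might look like.  This is my own illustration in Python: the records mimic the shape of what a DescribeImages call returns, but the catalog data and field spellings here are made up.

```python
# Hypothetical sketch of an AMI search: filter image records (shaped
# like DescribeImages results) on the owner and platform fields.
# The catalog data below is invented for illustration.

def search_images(images, owner=None, platform=None):
    """Return the images matching the given owner and/or platform."""
    matches = []
    for image in images:
        if owner is not None and image.get("ownerId") != owner:
            continue
        if platform is not None and image.get("platform") != platform:
            continue
        matches.append(image)
    return matches

catalog = [
    {"imageId": "ami-1111aaaa", "ownerId": "amazon", "platform": "windows"},
    {"imageId": "ami-2222bbbb", "ownerId": "alfresco", "platform": ""},
]

print(search_images(catalog, owner="amazon"))
```

Nothing fancy–which is the point: given that the API already exposes these fields, a basic search service seems like low-hanging fruit.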

Does anyone know where this exists?  My current best solution for finding a specific AMI is to use the fantastic ElasticFox Firefox plugin and just search free form on the ‘Images’ tab.

[tags]amazon, ec2, can I get a ‘search search’[/tags]

Samasource: Outsourcing as social enterprise

One of my clients (Twomile Heavy Industries) is building a large website.  He is involved in the nonprofit technology world (NTEN, CTNC, etc.), and ran across Samasource, a social enterprise that brings outsourcing work into the developing world.  And not just Bangalore–they have work centers in refugee camps.  Samasource will be providing some testing services for this project.

Samasource offers a wide variety of services, all via screened partners.  I see them as a cross between Odesk (which an interviewee touches on here), Elance (which I joined, mostly to see what kind of jobs are available) and Kiva (which I learned about via Andrew Leonard).  See Samasource’s take on comparisons to Odesk and Elance in their FAQ.

What a great idea!  Very ‘the world is flat’.  This type of social enterprise overcomes my main objection to Kiva, because Samasource could provide a cost savings to their clients; compare that to Kiva, which provides no monetary return to lenders.

Anyhow, I’ll try to update when I’ve actually engaged with the folks that Samasource led us to, but it was such a cool business model I had to give them a shout out.

[tags]social enterprise, outsourcing, developing world[/tags]

A survey of CDNs for use with Drupal

I have spent some time researching Content Delivery Networks (CDNs) and how they can integrate with Drupal.  Note that I have not yet implemented a CDN solution, so my experiences and opinion may change….  I will try to do a second post or update when we’ve actually rolled something out live.

Here are some criteria I’d use in selecting a Drupal module for CDN management:

  • Do you need a CDN?  This is the key question, as a CDN can speed up your site, but introduces a layer of management and expense that might not be worth the hassle.
  • Do you mind patching Drupal core?  This might be a maintenance issue going forward.
  • Do you want to have just images on your CDN, or javascript and CSS as well?  What about video?
  • How contained within the drupal interface do you need your interactions with a CDN to be?  Are you comfortable using a third party tool sometimes?
  • Do you have an existing CDN to work with, or are you selecting a CDN from scratch?  Obviously, you have more flexibility in the second case.
  • Do you mind coding? Some of these modules seem like they are 75% of the solution, but you might need to write some code to finish things up.

There are a number of modules that attempt to integrate a CDN into Drupal, or might help doing so.  All of these had a release for Drupal 6.

  • CDN: this seems like a great fit.  Active development, good sized issue queue, support for multiple CDNs.  It also patches core. Here’s a list of CDNs used with this module.
  • media_mover: this module seems like it might be useful if you need to move image and/or video files to a CDN.  That might require some coding, although I remember there being some S3 and FTP support.
  • creeper: this module is all about Amazon API integration, including CloudFront.  Plus, what a great name!
  • parallel: a fairly new module that changes the source hostnames of image, CSS, and JavaScript HTML tags.  Therefore, they can be served off a CDN, another web server, etc.
  • storage_api: this is a general storage service with a CDN focus, but doesn’t appear to be well documented or supported as of this time.
  • cloudfront: adds Amazon CloudFront support to the imagecache module
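The hostname-rewriting approach that a module like parallel takes can be sketched pretty simply.  This is my own illustration in Python (not the module’s actual PHP), and cdn.example.com is a placeholder hostname:

```python
import re

# Sketch of the hostname-rewriting idea: point root-relative static
# asset references at a CDN hostname so browsers fetch them from the
# CDN. Naive regex matching, for illustration only.

ASSET_RE = re.compile(r'(src|href)="(/[^"]+\.(?:png|gif|jpe?g|css|js))"')

def cdnify(html, cdn_host="cdn.example.com"):
    """Rewrite root-relative image/CSS/JS references to use cdn_host."""
    return ASSET_RE.sub(
        lambda m: '%s="http://%s%s"' % (m.group(1), cdn_host, m.group(2)),
        html)

print(cdnify('<img src="/files/logo.png"> <a href="/about">About</a>'))
```

Note how only asset references are rewritten; regular page links (like the /about link above) are left alone.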

These all seem to be useful in their own ways.  The current project I’m working on is already invested in the Amazon infrastructure, mainly because of Project Mercury, so cloudfront is our current choice.

Did I miss any key modules?

[tags]drupal cms, cdns rock[/tags]

NearlyFreeSpeech.net: pay only for the hosting you use

I had a friend tell me about NearlyFreeSpeech.net.  Much like with Amazon’s cloud computing services, you pay only for what you use.  Unlike Amazon, there’s no complicated infrastructure or proprietary protocols to get familiar with.  I doubt it has the reliability and scalability of Amazon either.

The pricing is pretty crazy: a penny a day for a website, $1/GB for your first GB of transfer, etc.  There’s a calculator to give you an idea of what you’d pay.
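To make that math concrete, here’s a back-of-the-envelope estimate using just the two rates mentioned above.  Real NearlyFreeSpeech.net pricing is tiered and may change, so treat this purely as illustration:

```python
# Back-of-the-envelope cost estimate from the rates in the post:
# $0.01/day baseline for a site, $1.00/GB for the first GB of
# transfer. Static sites skip the baseline charge entirely.
# Real NearlyFreeSpeech.net pricing has more tiers than this.

def monthly_estimate(days=30, transfer_gb=1.0, static=False):
    baseline = 0.0 if static else days * 0.01
    return baseline + transfer_gb * 1.00

print("dynamic: $%.2f" % monthly_estimate())            # 30 days, 1 GB
print("static:  $%.2f" % monthly_estimate(static=True))
```

At these rates, a small dynamic site with a gigabyte of monthly transfer runs about $1.30, which squares with the "might start at $3/month" ballpark below.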

For a certain type of user–one for whom my ‘web presence in two hours’ method just won’t work, and who can spend time in lieu of money–this seems like a great solution.  I’m thinking, for example, of non-profits that are just trying to get a web presence and who don’t want to use one of the blog sites for reasons of design control.  If all you have is a static site, this can be very affordable:

“Static sites don’t have any baseline charges at all; you pay only for the storage and bandwidth you use, making them incredibly affordable if you’re on a limited budget and you’re working with a prebuilt website like those produced by many of the most popular web design programs.”

I don’t have any idea what kind of support or uptime they offer, but I love the idea of hosting that might start at $3/month, but can scale up easily and transparently.  They’ve been around since 2002, so they must be doing something right.

[tags]hosting, don’t buy what you can’t afford[/tags]

Using APIs to move time entries from FreshBooks to Harvest

I was recently working for a client who has their own time tracking system–they use Harvest.  They want me to enter the time I work for them into that system–they want more insight into my time use than a monthly invoice provides.  However, I still use my own invoicing system, FreshBooks (more on that choice here), and will need to invoice them as well.  Before the days when APIs were common, or if either of these sites did not have an API, I would have had three equally unsavory choices:

  • Convince the client to use my system or at least access it for whatever data they needed
  • Send reports (spreadsheets) to the client from my system and let them process it
  • Enter my time in both places.  This option would have won, as I don’t like to inconvenience people who write me checks.

Luckily, both Harvest and FreshBooks provide APIs for time tracking (Harvest doco here, FreshBooks doco here).  I was surprised at how similar the time tracking data formats were.  With the combination of curl, GNU date, sed, Perl and bash, I was able to write a small script (~80 lines) that

  • pulled down my time data for this client, for this week, from FreshBooks (note you have to enable API access to your account for this to work)
  • mapped it from the FreshBooks format to the Harvest format
  • then posted it to Harvest.
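The mapping step was the interesting part.  Here’s a minimal sketch of the idea in Python (my actual script was shell/sed/Perl, and the field names and lookup-table entries here are invented for illustration, not the real API fields):

```python
# Sketch of the FreshBooks-to-Harvest mapping step. The project and
# task lookup tables encode the correspondence I set up between the
# two systems; all names below are hypothetical.

PROJECT_MAP = {"Client Site Rebuild": "Site Rebuild"}  # FreshBooks -> Harvest
TASK_MAP = {"Development": "Programming"}

def freshbooks_to_harvest(entry):
    """Translate one FreshBooks-style time entry into Harvest form."""
    return {
        "project": PROJECT_MAP[entry["project_name"]],
        "task": TASK_MAP[entry["task_name"]],
        "hours": entry["hours"],
        "spent_at": entry["date"],
        "notes": entry["notes"],
    }

entry = {"project_name": "Client Site Rebuild", "task_name": "Development",
         "hours": 2.5, "date": "2009-09-14", "notes": "API glue work"}
print(freshbooks_to_harvest(entry))
```

Because the two formats were so similar, the whole translation boils down to renaming fields and looking up project/task codes.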

A couple of caveats:

  • I still log in to Harvest to submit my time (I didn’t see a way to submit my time in the API documentation), but it’s a heck of a lot easier to press one button and submit a week’s worth of time than to do double entry.
  • I used similar project and task codes in both systems (or, more accurately, I set up the FreshBooks tasks and projects to map to the Harvest ones, since FreshBooks is what I had control over).  That mapping was probably the most tedious part of writing the script.

You can view my script here, or at least a sanitized version thereof.  It took about an hour and a half to do this.  Double entry might have been quicker in the short term, but now I’m not worried about entry mistakes, and submitting my time every week is easy!  I could also have used XSLT to transform from one data format to the other, but they were so similar it was easier to just parse the text.

[tags]getharvest,freshbooks,time tracking, process automation[/tags]

Using phpMyAdmin without the “Show Databases” privilege

phpMyAdmin is a pretty cool piece of software, and a very useful tool. If you haven’t used it before, it’s a full featured web-based interface to MySQL databases. You can use it to add data, update data, delete data, export data, and basically any other management of the database you might need.

I ran into an issue the other day. I was trying to allow phpMyAdmin to access a database on a remote host. The user that phpMyAdmin was connecting as didn’t have the “show databases” privilege (I imagine this is common in shared hosting environments, which is what this was). This, apparently, is what phpMyAdmin uses to populate the drop-down of databases on the left-hand side after you log in. Since it didn’t display that drop-down, there was no way of selecting the database to which this user did have access.

I searched for a while, but couldn’t find anyone else with this problem. So I thought I would post the solution I found.

The solution is to hard code authentication data for the remote server in my config.inc.php file.  Then, you append the server and the database that you want to connect to onto the phpMyAdmin URL.

In the config.inc.php file:
$cfg['Servers'][$i]['host'] = 'xxx.xxx.xx.xx';
$cfg['Servers'][$i]['user'] = 'user';
$cfg['Servers'][$i]['password'] = 'pass';

In the URL that normal users use to access the database:
http://phpMyAdmin.example.com/phpMyAdmin/index.php?db=databasename&server=xxx.xxx.xxx.xxx

The left-hand side still isn’t populated with any databases. But this allows you to go directly to the database to which you do have access, and perform whatever tasks you want (and have permissions for). I tried adding the “db” parameter to the config.inc.php file, but that didn’t seem to work. I didn’t try any form of authentication other than hardcoded, as I wanted to make this as simple as possible for the phpMyAdmin users.

I was running an older version of phpMyAdmin (2.11.9.5); I didn’t test this on newer versions.  If you do, please post a comment and let me know if this still works.

[tags]phpmyadmin,mysql privileges,remote database access[/tags]

Article about using hibernate with GWT

I just read this article about the Google Web Toolkit and hibernate, and I’m thrilled that someone wrote this. A few years ago, when I was just starting to use GWT and hibernate, the ORM tool, I thought about writing something similar myself. I could never get over the hump of writing about setting up all the infrastructure necessary, something which the author does quite nicely.

I think this article gives a great overview of some of the complexities of using hibernate with the GWT client. The author essentially talks about three possible solutions to the primary problem when using hibernate objects in a GWT system: hibernate enhances your POJO code, and thus you cannot send objects returned from hibernate queries down the wire to the JavaScript client.  The JRE emulation simply can’t handle it.

I especially enjoyed the explanations of how to use some of the tools to make mapping between GWT capable objects and hibernate objects easier. I’d heard of hibernate4gwt, now Gilead, but never used it. For most of my RPC calls, I end up using the first approach the author explores, custom DTO creation. Oftentimes, I won’t create a special DTO object, but rather reuse the POJO that represents the domain object. This way, you can scrub subsidiary objects (though you lose lazy loading when you do this) and send those down as well.  As long as the POJO doesn’t have too many extraneous members, this seems to work fine, and removes the need for an extra class.

I was a bit frustrated, however, that the author ignored the delete case. This seems like a situation where tools like Gilead might really shine. I have often run into issues where I have to add a ‘deleted’ boolean flag to the hibernate object.  I do this because when an object gets deleted from a collection on the GWT side, my server-side code has no way of knowing this without some additional complexity (rerunning the query and doing a comparison of results). Adding such a ‘deleted’ boolean flag solves one set of problems, but raises additional complexity, because you end up having to check whether or not an object exists before you try to insert it in the database.

For example, imagine you have a user with a set of CDs, which you display in a grid.  If you want to allow the user to correct the name of one of the CDs and send it back, the server side has the modified record, hopefully with an ID, and can simply save it.  But if you delete one of the CDs from the collection, the server side does not have the modified object, and so has to figure out which one to delete.  Gilead, with its knowledge of the object graph, seems at first glance like it could solve this problem elegantly (though a quick search on the Gilead site shows nothing that I could see).
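The “rerun the query and compare” alternative mentioned above is simple enough to sketch.  This is in Python for brevity (the real code would be server-side Java):

```python
# Sketch of the "rerun the query and compare" approach: diff the IDs
# the database knows about against the IDs the client sent back.
# Whatever is missing on the client side must have been deleted.

def find_deleted_ids(persisted_ids, submitted_ids):
    """IDs in the database but absent from the client's collection."""
    return set(persisted_ids) - set(submitted_ids)

# The user deleted CD 3 from the grid before submitting:
print(sorted(find_deleted_ids([1, 2, 3], [1, 2])))
```

The logic is trivial; the complexity the post worries about is the extra database round trip and plumbing this forces into every save path, which is exactly what a graph-aware tool could hide.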

Also note that while RPC is fantastic for GWT applications, if you’re thinking about using GWT for widgets, I would suggest using something that gives you a bit more flexibility, like JSONP. Because GWT RPC depends on XMLHttpRequest, it is fundamentally limited to sites where the JavaScript and RPC services are on the same host.  Obviously, since using JSONP means serializing hibernate objects to strings, none of these tools are appropriate.  (See my survey of Google Web Toolkit client-server communication strategies for more.)

All that said, if you’re thinking about using hibernate and GWT in the same project, reading this paper and running through the examples will be a worthwhile use of your time.

[tags]hibernate,gwt,useful articles[/tags]

Useful Tools: piwik, a worthy web statistics package

I recently installed an open source web analytics tool called piwik.  (You can demo it at that site.) I found out about it via the sourceforge.net mailing list; it was the featured project for July 2009. It bills itself as an alternative to Google Analytics (GA) (actually, right now, the home page states “Piwik aims to be an open source alternative to Google Analytics.”), and I can see why. The architecture is similar–JavaScript executing on every page and sending data to a server–and the interface is similar as well, with lots of whizzy Web 2.0, JavaScript heavy features and detailed data.

I had been using the Wusage installation that came with my web hosting service. piwik was quite a step up from that, with richer graphs, results and UI. Plus, because it relies on JavaScript execution, I was assured that every visit was an actual visit by an actual person. And since it’s hosted on my server, I control all the data, which was a sticking point for me when considering Google Analytics.

I recently upgraded to 0.4.2, which broke the dashboard, but I’ve been assured a fix is in SVN (Update Aug 4: They no longer plan to fix the bug, but there is a workaround in that thread.).  If you want to get the latest code, go here.  You can download 0.4.1, the last working version I know of, here. I’ll update this to point to the piwik website when they have a release up that works. For some reason they don’t have a release archive that I could find.

So what’s good about piwik?  Well, compared to what–Google Analytics, or other website analytics tools? This is a fundamental question, because if you are using GA just for the web stats piece, or are using some other static logfile analysis tool, piwik is well worth reviewing.

In comparison to Google Analytics

The downsides are:

  • you have to maintain another server/database, etc.  I imagine that someone will offer piwik via SaaS sometime soon, though I couldn’t find anyone doing that right now.
  • it’s a beta product and is not as mature as Google Analytics, as evidenced by the 0.4.2 issue above.
  • some key GA features are missing (goals, funnels, etc.).

In comparison to the other website analytics tools I’ve used–AWStats (which I’ve written about before, and which is open source) and Wusage (not open source, but free with my hosting contract)–piwik has:

  • a slicker user interface
  • JavaScript execution, so you know you’re getting a real browser instead of a bot (the JavaScript browser guarantee)
  • easier click-out tracking
  • easier configuration
  • available JavaScript widgets

This is obviously not intended to be a full, detailed analysis of all the differences between these tools, but I think that piwik has a lot of promise.  They have a roadmap full of planned features, but they definitely aren’t yet an alternative to Google Analytics for anyone who uses the more advanced features of that product. Funnels, the click overlay, and goals are all unsupported in piwik as of this version. In the forums, I saw several requests for such richer analysis tools, and in the roadmap I saw a goal tracking plugin listed as a blocker for version 1.0, so the team is aware of the lack.

When browsing around doing research for this post, I saw a post (sorry, couldn’t find it again) arguing that piwik features would be developed mainly for smaller websites because it’s an open source alternative, but I believe that the support of OpenX (an ad server company that I wrote about in the past), which is funding at least one of the developers, will prevent such feature capture.  In addition, I find that open source projects that have an existing product to model themselves on (like GA) tend to try to reach feature parity.  If piwik continues on its current path of replicating Google Analytics features, then I think it will live up to its aim.

If you’re simply using Google Analytics to see who referred traffic to your sites, or which keywords search engines are showing your site for, and you want something more open or more control of your data, piwik is a good fit.  If you use any other web stats tool and want a slicker admin interface or the JavaScript browser guarantee, piwik is also worth a look.

Update, 7/31: A friend pointed out this broad survey of the current state of free (as in beer) web analytics options
[tags]piwik,the javascript browser guarantee,google analytics, piwik vs google analytics, web stats[/tags]