
Trying to schedule a meeting? Try Doodle

A friend turned me on to Doodle for distributed meeting scheduling.  Four steps to a url you can send out to an unlimited number of people.  You select dates and times.

Anyone who has the URL can then, without logging in, vote for a date/time that works for them.

There are other options available, but this is enough to set up the occasional lunch with busy friends, which is what I use it for.

If you need to do any scheduling of multiple parties, give Doodle a try.

StackOverflow and Community

“Hey, have you heard?  StackExchange is the new faq/forum.  It’s the cat’s pajamas, with SEO friendly urls, lots of web 2.0 features (including a realtime wysiwyg editor) and social goodness baked in.” — Dan, trying on his hipster hat

If you’re a programmer, and you use the google to look for answers to your programming questions, you’ve probably seen stackoverflow.com pop up in the search results.  This site, started as a collaboration between Joel Spolsky and Jeff Atwood last year, is a better way to do question and answer sites, aka FAQs.  It opens the FAQ asking and answering process to anyone with a browser, has anti-spam features, some community aspects (voting, editing answers, reputation, commenting, user accounts), and great urls.  And, incidentally, a great support staff–I emailed them a question about my account and they responded in less than 24 hours.

They’ve done a good job of generalizing the platform, and now you can create your very own.  There are a wide variety of stack sites: real estate, your pressing Yoda needs, small business, space exploration.  Here’s a list.  I love the fact they are charging for this software–$129/month for 1M pageviews is not very much for software that lets you build your community and lets your community share knowledge.

And that’s the key.  Like most other social software, what you get out of a stack site is highly correlated to what you put into it.  If, like the folks at Redmonk, you create a stack site about a topic on which you have expertise and publicize it where you know interested people will hear about it and spend time answering questions on it, I imagine you have a good chance to build a community around it. And once you get to a certain threshold, it will take on a life of its own.  But you need to provide that activation energy–it’s an organizational commitment.
If, on the other hand, you create a stack site and don’t have a community which can get excited about it, or don’t do a good job reaching out to them, you end up with an abandoned stack site (worse than an abandoned blog, imho).  I’m hoping that Teaching Ninja won’t be in this state for long, but right now there are only 3 questions and no answers there.

The proliferation of social software infrastructure sites (I’m looking at you, ning) has made it easier than ever to create the foundations for communities online.  But, you need to have people for community software to have any value!  Because it is so easy, getting others involved is not a case of ‘if you build it they will come’ (if it ever was).  There are too many competing sites for others’ time.  Software can make it easier and easier to build the infrastructure around community, but it’s the invisible structures (bonds between you and your users, and between them) that will actually create ongoing value.

If you’re looking for an outward facing FAQ site and willing to invest the time in it, a stack site seems to be one of the best software platforms for building that right now.  (I have some qualms about who owns the data, but it seems like they are planning export functionality.)  Just don’t believe the hype: “The Stack Exchange technology is so compelling, sites can take off right away.”  No software can make a social site ‘take off right away’.

[tags]no silver bullet for community,ask yoda,stackoverflow,community[/tags]

Tips: Deploying a web application to the cloud

I am wrapping up helping a client with a build-out of a drupal site to EC2. The site itself is a pretty standard CMS implementation–custom content types, etc. The site is an extension to an existing brand, and exists to collect email addresses and send out email newsletters. It was a team of three technical people (there were some designers and other folks involved, but I was pretty much insulated from them by my client) and I was lucky enough to do a lot of the infrastructure work, which is where a lot of the challenge, exploration and experimentation was.

The biggest attraction of the cloud was the ability to spin up and spin down extra servers as the expected traffic on the site increased or decreased. We chose Amazon’s EC2 for hosting. They seem a bit like the IBM of the cloud–no one ever got fired, etc. They have a rich set of offerings and great documentation.

Below are some lessons I learned from this project about EC2. While it was a drupal project, I believe many of these lessons are applicable to anyone who is building a similar system in the cloud. If you are building a video processing supercomputer, maybe not so much.

Fork your AMI

Amazon EC2 running instances are instantiations of a machine image (AMI). Anyone can create a machine image and make it available for others to use. If you start an instance off an image, and then the owner of the image deletes the image (or otherwise removes it), your instance continues to run happily, but, if you ever need to spin up a second instance off the same AMI, you can’t. In this case, we were leveraging some of the work done by Chapter Three called Project Mercury. This was an evolving project that released several times while we were developing with it. Each time, there was a bit of suspense to see if what we’d done on top of it worked with the new release.

This was suboptimal, of course, but the solution is easy. Once you find an AMI that works, you can start up an instance, and then create your own AMI from the running instance. Then, you use that AMI as a foundation for all your instances. You can control your upgrade cycle. Unless you are running against a very generic AMI that is unlikely to go away, forking is highly recommended.

Use Capistrano

For remote deployment, I haven’t seen or heard of anything that compares to Capistrano. Even if you do have to learn a new scripting language (Ruby), the power you get from ‘cap’ is fantastic. There’s pretty good EC2 integration, though you’ll want to have the EC2 response XML documentation close by when you’re trying to parse responses. There’s also some hassle involved in getting cap to run on EC2. Mostly it involves making sure the right set of ssh keys is in the correct place. But once you’ve got it up and running, you’ll be happy. Trust me.

There’s also a direct capistrano/EC2 integration project, but I didn’t use that. It might be worth a look too.

Use EBS

If you are doing any kind of database driven website, there’s really no substitute for persistent storage. Amazon’s Elastic Block Storage (EBS) is relatively cheap. Here’s an article explaining setting up MySQL on EBS. I do have a friend who is using EC2 in a different manner that is very write intensive, that is having some performance issues with his database on EBS, but for a write seldom, read often website, like this one, EBS seems plenty fast.

EC2 Persistence

Some of the reasons to use Capistrano are that it forces you to script everything, and makes it easy to keep everything in version control. The primary reason to do that is that EC2 instances aren’t guaranteed to be persistent. While there is an SLA around overall EC2 availability, individual instances don’t have any such assurances. That’s why you should use EBS. But, surprisingly, the EC2 instances that we are using for the website haven’t bounced at all. I’m not sure what I was expecting, but they (between three and eight instances) have been up and running for over 30 days, and we haven’t seen a single failure.

Use ElasticFox

This is a FireFox extension that lets you do every workaday task, and almost every conceivable operation, to your EC2 instances. Don’t delay, use this today.

Consider CloudFront

For distributed images, CloudFront is a natural fit. Each instance can then reference the image, without you needing to sync files across instances. You could use this for other files as well.
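One way to apply this is to route all static asset references through a single helper, so switching to (or away from) CloudFront is a one-line change. This is a minimal sketch; the CloudFront domain below is a made-up placeholder, not a real distribution.

```python
# Sketch: rewrite site-relative image paths to point at a (hypothetical)
# CloudFront distribution, so every EC2 instance serves the same copy
# of a file without syncing files between instances.

CLOUDFRONT_DOMAIN = "d1234abcd.cloudfront.net"  # hypothetical distribution

def cdn_url(path):
    """Map a site-relative static asset path to its CloudFront URL."""
    return "http://%s/%s" % (CLOUDFRONT_DOMAIN, path.lstrip("/"))

print(cdn_url("/sites/default/files/logo.png"))
# http://d1234abcd.cloudfront.net/sites/default/files/logo.png
```

The same helper works for CSS, JavaScript, or any other file you push to the CDN.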

Use Internal Network Addressing where possible

When you start an EC2 instance, Amazon assigns it two addresses–an external one that can be used to access it from the internet, and an internal one. For most contexts, the external name is more useful, but when you are communicating within the cloud (pushing files around, or a database connection), prefer the internal address. It looks like there are some performance benefits, but there are definitely pricing benefits. “Always use the internal address when you are communicating between Amazon EC2 instances. This ensures that your network traffic follows the highest bandwidth, lowest cost, and lowest latency path through our network.” We actually used the internal DNS name, but it makes more sense to use the IP address, since you don’t get any abstraction benefit from an internal DNS name that you don’t control–that took a bit of mental adjustment for me.
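In practice this means your configuration needs to know both addresses for each host and pick one based on where the caller runs. A rough sketch of that idea, with illustrative hostnames (these are not real instances):

```python
# Sketch: pick the right hostname for a connection depending on where
# the caller runs. Intra-EC2 traffic should use the internal address,
# which is cheaper and stays on Amazon's network. Hostnames are made up.

HOSTS = {
    "db1": {
        "public":   "ec2-203-0-113-10.compute-1.amazonaws.com",
        "internal": "ip-10-251-1-10.ec2.internal",
    },
}

def connect_host(name, caller_in_ec2):
    """Return the address a caller should use to reach the named host."""
    addrs = HOSTS[name]
    return addrs["internal"] if caller_in_ec2 else addrs["public"]

print(connect_host("db1", caller_in_ec2=True))   # ip-10-251-1-10.ec2.internal
print(connect_host("db1", caller_in_ec2=False))  # ec2-203-0-113-10.compute-1.amazonaws.com
```

Centralizing this choice in one place keeps your deploy scripts and application config from hardcoding the wrong address.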

Consider reserved instances

If you are planning to use Amazon for hosting, make sure you explore reserved instance pricing. For an upfront cost, you get significant savings on your runtime costs.
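The break-even point is simple arithmetic: divide the upfront fee by the per-hour savings. The prices below are purely illustrative–check Amazon’s current price sheet before deciding.

```python
# Back-of-the-envelope break-even for reserved vs on-demand pricing.
# All dollar figures are hypothetical illustrations, not real AWS prices.

ON_DEMAND_HOURLY = 0.10    # $/hour, hypothetical
RESERVED_UPFRONT = 227.50  # one-time fee, hypothetical
RESERVED_HOURLY  = 0.03    # $/hour, hypothetical

def breakeven_hours():
    """Hours of uptime after which the reserved instance is cheaper."""
    return RESERVED_UPFRONT / (ON_DEMAND_HOURLY - RESERVED_HOURLY)

hours = breakeven_hours()
print("break-even after %.0f hours (~%.1f months)" % (hours, hours / 730.0))
# break-even after 3250 hours (~4.5 months)
```

For an always-on web server like ours, anything that breaks even inside the reservation term is a clear win.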

On Flexibility

You have a lot of flexibility with EC2–AMIs are essentially yours to customize as you want, starting up another node takes about 5 minutes, you control your own DNS, etc. However, there are some things that are set at startup time. Make sure you spend some time thinking about security groups (built in firewall rules)–they fall into this category. Switching between AMIs requires starting up a new instance. Right now we’re using DNS round robin to distribute load across multiple nodes, but we are planning to use elastic IPs, which allow you to remap a routable IP address to a new instance without waiting for DNS timeouts. EBS volumes and the instances they attach to must be in the same availability zone. None of this is groundbreaking news; it’s really just a matter of reading all the documentation, especially the FAQs.

Documentation

Be aware that there is a ton of documentation for EC2 and the other web services that Amazon provides: one set for each API release. Rather than starting with Google, which often leads you to an outdated version of the documentation, you should probably start at the AWS documentation center. This is especially true if you’re working with any of the newer systems, which may not have as stable an API.

In the end

Remember that, apart from new tools and a few catches, using EC2 is not that different from using a managed server where you don’t have access to the hardware. The best document I found on deploying drupal to EC2 doesn’t talk about EC2 at all–it focuses on the architecture of drupal (drupal 5 at that) and how to best scale that with additional servers.

[tags]ec2,amazon web services,capistrano rocks[/tags]

Interesting GWO Case Study

I’ve written before about Google Website Optimizer.  But it’s always nice to see hard data.

Here’s an interesting GWO Case Study I found online, via a presentation by Angie Pascale.  It focuses on optimizing landing pages for a college system.  Conclusions:

Although the SEM agency did not find a correlation between brain lateralization and form location, they did succeed in optimizing Westwood’s program landing pages. On average, the program pages saw a 39.87% conversion rate improvement, with 83.1% being the highest upgrade. After significant results were revealed, the agency stopped each experiment and changed the format for every page to reflect the best-performing contact form location.
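The headline number in that quote is a relative lift: the new conversion rate compared against the baseline rate. A tiny sketch of the calculation, using made-up rates for illustration:

```python
# What a "39.87% conversion rate improvement" means: the relative lift
# of the new rate over the baseline rate. Rates below are made up.

def relative_lift(old_rate, new_rate):
    """Relative improvement of new_rate over old_rate, as a percentage."""
    return (new_rate - old_rate) / old_rate * 100.0

# e.g. a landing page that went from a 5% to a 7% conversion rate
print("%.1f%% improvement" % relative_lift(0.05, 0.07))  # 40.0% improvement
```

Note that a large relative lift can still be a small absolute change, which is why you want GWO to confirm statistical significance before declaring a winner.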

[tags]gwo, case study[/tags]

Using phpMyAdmin without the “Show Databases” privilege

phpMyAdmin is a pretty cool piece of software, and a very useful tool. If you haven’t used it before, it’s a full featured web-based interface to MySQL databases. You can use it to add data, update data, delete data, export data, and basically any other management of the database you might need.

I ran into an issue the other day. I was trying to allow phpMyAdmin to access a database on a remote host. The user that phpMyAdmin was connecting as didn’t have the “show databases” privilege (I imagine this is common in shared hosting environments, which is what this was). This, apparently, is what phpMyAdmin uses to populate the drop-down of databases on the left-hand side after you login. Since it didn’t display that drop-down, there was no way to select the database to which this user did have access.

I searched for a while, but couldn’t find anyone else with this problem. So I thought I would post the solution I found.

The solution is to hard code authentication data for the remote server in my config.inc.php file.  Then, you append the server and the database that you want to connect to onto the phpMyAdmin URL.

In the config.inc.php file:
$cfg['Servers'][$i]['host'] = 'xxx.xxx.xx.xx';
$cfg['Servers'][$i]['user'] = 'user';
$cfg['Servers'][$i]['password'] = 'pass';

In the url that normal users use to access the database:
http://phpMyAdmin.example.com/phpMyAdmin/index.php?db=databasename&server=xxx.xxx.xxx.xxx

The left hand side still isn’t populated with any databases. But, this allows you to go directly to the database to which you do have access, and perform whatever tasks you want (and have permissions for). I tried adding the “db” parameter to the config.inc.php file, but that didn’t seem to work. I didn’t try using any other form of authentication than hardcoded, as I wanted to make this as simple as possible for the phpMyAdmin users.
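If you are generating these deep links for several users or databases, it is easy to get the query string wrong by hand. A small sketch of building the URL programmatically; the host and database names are placeholders:

```python
# Sketch: build the direct phpMyAdmin URL described above, passing the
# database and server as query parameters. Host/db names are placeholders.
from urllib.parse import urlencode

def phpmyadmin_url(base, server, db):
    """Deep-link into phpMyAdmin for one server/database pair."""
    query = urlencode({"db": db, "server": server})
    return "%s/index.php?%s" % (base.rstrip("/"), query)

print(phpmyadmin_url("http://phpMyAdmin.example.com/phpMyAdmin",
                     "203.0.113.5", "databasename"))
# http://phpMyAdmin.example.com/phpMyAdmin/index.php?db=databasename&server=203.0.113.5
```

Using urlencode also takes care of escaping any characters in the database name that aren’t URL-safe.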

I was running an older version of phpMyAdmin (2.11.9.5); I didn’t test this on newer versions.  If you do, please post a comment and let me know if this still works.

[tags]phpmyadmin,mysql privileges,remote database access[/tags]

Useful Tools: piwik, a worthy web statistics package

I recently installed an open source web analytics tool called piwik.  (You can demo it at that site.) I found out about it via the sourceforge.net mailing list. It was the featured project for July 2009. It bills itself as an alternative to Google Analytics (GA) (actually, right now, the home page states “Piwik aims to be an open source alternative to Google Analytics.”) and I can see why it does so. The architecture is similar, JavaScript executing on every page and sending data to a server; the interface is similar as well, with lots of whizzy Web 2.0, JavaScript heavy features and detailed data.

I had been using the Wusage installation that came with my web hosting service. piwik was quite a step up from that, with richer graphs, results and UI. Plus, because it executes JavaScript in the browser, I was assured that every visit was an actual visit by an actual person. Since it’s hosted on my server, I control all the data, which was a sticking point for me when I was considering Google Analytics.

I recently upgraded to 0.4.2, which broke the dashboard, but I’ve been assured a fix is in SVN (Update Aug 4: They no longer plan to fix the bug, but there is a workaround in that thread.).  You can download 0.4.1, the last working version I know of, here. I’ll update this to point to the piwik website when they have a release up that works. For some reason they don’t have a release archive that I could find.

So what’s good about piwik?  Well, compared to what: Google Analytics, or other website analytics tools? This is a fundamental question, because if you are using GA just for the web stats piece, or are using some other static logfile analysis tool, piwik is well worth reviewing.

In comparison to Google Analytics

The downsides are:

  • you have to maintain another server/database, etc.  I imagine that someone will offer piwik via SaaS sometime soon, though I couldn’t find anyone doing that right now.
  • it’s a beta product and is not as mature as Google Analytics, as evidenced by the 0.4.2 issue above
  • some key GA features are missing (goals, funnels, etc.)

In comparison to the other website analytics tools I’ve used, AWstats (which I’ve written about before and is open source) and wusage (not open source, but free with my hosting contract), piwik has

  • a slick user interface
  • JavaScript execution, so you know you’re getting a real browser instead of a bot (the javascript browser guarantee)
  • easier tracking of click-outs
  • easier configuration
  • javascript widgets available

This is obviously not intended to be a full, detailed analysis of all the differences between these tools, but I think that piwik has a lot of promise.  They have a roadmap full of planned features, but they definitely aren’t yet an alternative to Google Analytics for anyone who uses some of the more advanced features of that product. Funnels, the click overlay, and goals are all unsupported in piwik as of this version.  In the forums, I saw several requests for such richer analysis tools, and in the roadmap I saw a goal tracking plugin as a blocker for version 1.0, so the team is aware of the lack.

When browsing around doing research for this post, I saw a post (sorry, couldn’t find it again) arguing that piwik features would be developed only for smaller websites because it’s an open-source alternative, but I believe that the support of openX (an ad server company that I wrote about in the past), which is funding at least one of the developers, will prevent such feature capture.  In addition, I find that open source projects that have an existing project to model themselves on (like GA) tend to try to reach feature parity.  If piwik continues on its current path of replicating Google Analytics features, then I think it will live up to its aim.

If you’re simply using Google Analytics to see who referred traffic to your sites, or for which keywords search engines are showing your site, and you want something more open or more control of your data, piwik is a good fit.  If you use any other web stats tool, and want a slicker admin interface or the javascript browser guarantee, piwik is also worth a look.

Update, 7/31: A friend pointed out this broad survey of the current state of free (as in beer) web analytics options
[tags]piwik,the javascript browser guarantee,google analytics, piwik vs google analytics, web stats[/tags]

Data.gov–freely accessible data in standard formats from the USA federal government

Have you ever wondered where the world’s copper smelters are?  Or pondered reservoir storage data for the Colorado river?  Had questions about the residential energy use of US households?

Now you can find the answers to these questions, using data.gov, the stated purpose of which is “to increase public access to high value, machine readable datasets generated by the Executive Branch of the Federal Government.”  I’ve been writing for a while about the publishing power of the internet, but data.gov takes this to a whole new level.

It’s definitely a starting point, not an end, as there are only 47 raw datasets that you can access.  They cover a wide range of data and agencies, and were apparently chosen to kick things off because they “already enjoy a high degree of consensus around definitions, are in formats that are readily usable, include the availability of metadata, and provide support for machine-to-machine data transfer.”  The four main formats for data provided by data.gov are XML, CSV, KML and ESRI.  (There are also a number of widgets, and tools you can use, including the census factfinder.)

More datasets can be requested, and I’m hoping that they will be rolled out soon.  What a playground! Go take a look!

Update, 4:55: Here’s a great article on the whole process and problem.
[tags]data.gov,public data[/tags]

FreshBooks Review: An Invoicing System for Freelancers

Last November, I started using FreshBooks (full disclosure: if you click on that link, sign up and end up paying them money, I get some money), which is online invoicing software… Wait, let’s back up a minute.

When I started contracting and consulting (oh yes, they’re different), I originally was doing all my invoices in Word docs (actually, OpenOffice Writer), and it seemed natural to track my time there as well. Don’t repeat yourself, right?

After a while, it was tedious to have to open up each invoice every time I switched the client I was working for, so I started using an OpenOffice Calc spreadsheet. On the left hand column were dates, and on the top row were projects/tasks (“work order 13”, “troubleshooting database performance issues”). I ended up having a second row, which was client identifier.

This worked well for a while. When invoicing time came around, I would take the hours and task data from the spreadsheet and put it in the custom invoice. I’d export the custom invoice as a PDF, and put it in a directory. When the client paid, I’d move the invoice to a ‘paid’ directory. This differentiation let me send gentle reminders to clients who hadn’t paid yet.

I realized early on that this wasn’t the most efficient system. Several times, I did research on invoicing systems. I looked around SourceForge and FreshMeat, but the invoicing systems I found there were aimed at invoicing for products, not hours. I had a friend who’d used GnuCash but it seemed to be so much more than I needed.

So, each time, I went back to my old friends, the spreadsheet and the word document.

Mid last year, I was slammed with work. This made me realize, again, how inefficient my invoicing system was. I thought, ‘hey, I’ll outsource it’. A friend had a good person doing some administrative work for him. I hired her to do my invoicing, once a month. I talked her through my system, and she got it. I double checked her first invoice, and it was great. The price was reasonable, I could still use my familiar system, I was happy.

Then this lady decided to raise her prices fifty percent, in a rather high handed and arbitrary manner. Now, I was not a big client. Frankly, I can understand how 1-3 hours of work a month was not very attractive (though I did throw her some research work at least once). However, I was slightly offended at the price increase, and decided to take my work elsewhere. This shows the power of the initial price, because if she’d quoted the hourly fee at the higher rate originally, I would not have blinked.

I tried to find someone else to do the job. I found a relative, who could do the job, but required a fair amount of hand holding. The relative kept talking about the automated accounting systems she had used in the past, so I undertook my search again.

This led me to last October. I looked around for web based invoicing system that would work with me the way I wanted to work:

  • mostly time and materials billing
  • some fixed bid, including for clients that I do t&m for
  • varying hourly rates
  • ~10 clients
  • pdf invoices I could email
  • web based
  • time tracking included
  • professional looking invoices generated

I narrowed it down to three contenders (this was as of October, 2008; there may be more now): QuickBooks Online, Cashboard, and FreshBooks.

I took one look at QuickBooks Online, saw that it was IE only, and discarded that option (I may run Windows, but I want to support the open web).

I spent significant time investigating both Cashboard and FreshBooks. I liked the Cashboard interface better, but there was a huge stumbling block. Some of my clients have both fixed bid and t&m work, during the same month. I don’t ever want an invoice to show how much time it took to do a fixed bid project–that’s my business and not the client’s concern (that’s why it is fixed bid). Cashboard had no way of putting both a fixed bid project and t&m work (which must show hours spent) on the same invoice. I even asked Cashboard support about this, and got this answer back: “At the preset(sic) time there is no way you can show time for some projects on an invoice, and hide it for others.” Updated, 4/30/2009, 20:25: Apparently I was wrong, Cashboard can do both types of line items on the invoice; see discussion below.  Now I’m not sure exactly why I chose FreshBooks over Cashboard.

FreshBooks lets you do exactly that, so I signed up for their free account (full disclosure: if you click on that link, sign up and end up paying them money, I get some money). And I’ve been using it since last November. After I decided to use it, I upgraded to get more client accounts.

In general, I’ve been happy with FreshBooks. It is always an adjustment to change your business processes and/or software but I’ve been happy with a lot of what FreshBooks has to offer.

First, though, the gripes and caveats:

FreshBooks is not a full accounting package (and they don’t pretend to be one). I still don’t have one–other than the accountant who does my taxes. This means that I don’t have a precise view of my business’s health all the time. I do find out my profit and loss numbers once a year (tax time), and I find that is enough for me. The business’s expenses and income just aren’t that complicated. I had about 30 deposits into the business checking account, and about 50 checks written against it (and a number of EFTs and fees as well).

The web interface is cumbersome and non-intuitive at times. It can be learned, but the crazy urls and heavy use of javascript are occasionally issues for me. For example, editing time that you already have entered, especially changing the day the time was entered for, took me a while to figure out. I also have to re-login to my account every day, even if the browser window has been open for the entire time.

There is no free widget/desktop app for Windows XP users to use for FreshBooks time tracking (you can always use notepad). There is one you can pay a modest fee for. I think it’d be cool to write an AIR FreshBooks time tracking app–maybe sometime…

What I like about FreshBooks:

This last quarter I qualified for the report card, which is free quarterly data comparing your invoicing statistics with others in your industry. Statistics include number of invoices, amount invoiced, % revenue from new clients, and average time to collect payment. This data is great, and I don’t know how I’d find it otherwise. (As an aside, one of my other project ideas is to have a local Boulder/Denver survey of rates for web software development. I’d pay for that–would you?)

I like the reporting available, including which hours have been invoiced and which haven’t (though I have issue with a fixed bid project that I’ve invoiced, but can’t seem to mark invoiced). If you had more than one employee, this reporting would rapidly increase in value.

The cost is reasonable–I have a Shuttle Bus plan, which is 14 dollars a month. (My bank does charge me a bit extra, because FreshBooks is a Canadian company).

They have a great blog.

FreshBooks has an API, which lets you develop any number of widgets (including the time tracker mentioned above) and/or access your data from other programs.

I want to emphasize that the FreshBooks invoicing software is no one-size-fits-all solution. I am running a one person software/consulting business with a fairly stable set of clients and minimal expenses. FreshBooks has many features I don’t use (postal mail invoices, basecamp integration, expense tracking, estimates).

The biggest component that I don’t use is client login. Freshbooks makes it easy to create client accounts. Clients can then login and view documents, see outstanding invoices, contest them, and even pay them with online payment systems. This seems nice, but doesn’t fit with my client expectations. I may try this with new clients, but I hate to ask someone who is paying me money to login to yet another system, just to make things easier for me. Besides, I like to thank my clients in a personal email around the first of the month (along with sending them my invoice)–it never hurts to thank your clients.

So far, FreshBooks has been a great choice for me. If you’re in the Excel/Word invoicing world, or have an invoicing system you’re looking to dump, check it out.

[tags]invoicing,business process, freshbooks[/tags]

Boulder New Media Breakfast Notes: A Presentation by John Jantsch

I went to the second Boulder New Media Breakfast last week (this will be a monthly event, but this particular talk was delayed by a week due to weather).  It was interesting–a 15 minute networking session over bagels and coffee, then an hour presentation.  The catch is that it started at 7:45 in the morning–so you still had a full day left when you were done.

It was an interesting presentation and crowd and I think I’ll attend in the future.  It was a much smaller crowd (30-40 people) and  far more focused on marketing than the typical group I attend (New Tech, BJUG, CU Colloquia [which incidentally is having an interesting talk on “leveraging social networks in information systems” on Apr 7]).  I talked to a couple of people who were PR folks interested in technology, which isn’t my typical networking group.  I also talked to a fellow named Joe, who often asks hard questions, but always wears the great hat, at the Boulder Denver New Tech meetup.  I also got a chance to talk to Dave Taylor (of elm and Ask Dave Taylor fame [who answered a tough question well enough it got emailed around to me])–it was interesting to talk to him about his move from software developer to strategic business consultant.

After the networking, we all sat down for a presentation on reputation management by John Jantsch.  The following are my scrawled notes from that presentation (any sentences that start with I are my thoughts).

Notes:

Lots of people use the internet to do research for products and services.  36% of people think more positively of companies with a blog.  I don’t know how many people think less positively of such companies.

As a company, says John, you need to have a policy on digital conversations.  Such conversations with customers will happen, so you need a policy, and HR is the right department to produce one.  He discussed three types of conversation: person to person (like Dell customer service reps answering questions on forums), thought leaders (like a blog from a industry heavy weight who happens to be employed by IBM) and company communication (like an official blog from the USPS).

John also mentioned that it may make sense to, in the same way that sales folks sign non compete agreements, to have  customer service reps that interact through social media sign such agreements.  After all, if someone is the face of the company, and then they switch jobs, do they (and their new company) have a right to all the followers on twitter that were acquired through the original company’s hours?  Who owns the facebook profile?  I never have liked non competes, but the idea follows logically from the personalization of customer service on company time.

Another concept that is important is transparency.  Given the proliferation of digital communication, transparency into a company is here–the question now is how you can influence it.  The best way to influence it is to host your conversations as much as possible.  In addition, be proactive in responding to issues (i.e., customer complaints).

As a company, you need to have coherence in your branding across your internet presence.  Just as the website used to be ignored 10 years ago, facebook profiles are now often ignored and grow up from the ranks.  This leads to lack of message and branding consistency.

Now John moved on to cover some tools that are useful.  Most of the tools are free, but he did mention a few paid services.  He mentioned several free alert services that help you search for keywords in various areas of the internet.

He referred to twitter as a “stream of sewage” and said that tools to filter that stream are needed. (As an aside, this video commentary on the twittersphere is hilarious.) Twitter has a location-specific search option (in advanced search) that you should definitely leverage for competitive analysis.

John also talked about making sure your online presence is high quality. That means not only updating your website/blog/facebook profile/twitterstream/etc regularly with good content, but also taking advantage of tools that aggregators like search engines provide. For example, if you have a local business that appears in Google’s local search, you can update the entry using the local business center. This lets you claim the listing, add pictures and verify other information. Other search engines have analogous processes, and it is well worth your time to try to stand out. I don’t quite know what will happen when everyone does this updating; the value of accurate content will remain, but having a picture won’t be enough to stand out.

Then we went into a question and answer period. One person asked for examples of good corporate users of twitter; John gave Dell and Radian6 as examples.

Another person asked about the personal/business divide: if you’re running a small business, do you want to provide info in your twitter stream (or other digital media) that identifies you as a person (“I like to tele ski”) or just have it focus on business issues.  John answered that the line is still blurry and being defined.  I personally try to keep my blog focused on business, but I think it depends on what you’re selling.  If I were selling socks, tales of adventures in my socks would be appropriate.  Since I’m selling software services, you probably don’t want to hear about the killer desert hike I went on last year.

End notes

I really enjoyed the breakfast and encourage anyone with an interest in digital media to try it out. The next presentation (at the end of April) will be by Terry Morreale on personal digital security (I believe).

[tags]boulder new media breakfast,twitter,duct tape marketing,it’s at 7:45??[/tags]

Popups in GWT and IE8

Just starting to test some GWT applications against IE8. (I’m using IE Collection, which includes standalone versions of IE from IE1 through IE8. Very useful, even though its version of IE8 doesn’t include the developer tools.)

The only issue I’ve seen so far is that popups don’t work correctly.  Some appear, but not where they are supposed to.  Others, particularly with the lightbox we’re using (from the GWT-Widget project), just don’t appear at all.

For the latter, you get the very helpful message:

Line: 2618
Char : 324
Error: Not implemented
Code: 0
File: url to your GWT cache.js file.

(This is with GWT 1.5.3.) After recompiling the GWT application with “-style DETAILED”, I looked at the precise line causing the error message.

It was in this method:

function com_google_gwt_user_client_ui_impl_PopupImplIE6_$onShow__Lcom_google_gwt_user_client_ui_impl_PopupImplIE6_2Lcom_google_gwt_user_client_Element_2(popup){
  var frame = $doc.createElement($intern_1350);
  frame.src = $intern_1351;
  frame.scrolling = $intern_1352;
  frame.frameBorder = 0;
  popup.__frame = frame;
  frame.__popup = popup;
  var style = frame.style;
  style.position = $intern_1314;
  style.filter = $intern_1353;
  style.visibility = popup.style.visibility;
  style.border = 0;
  style.padding = 0;
  style.margin = 0;
  style.left = popup.offsetLeft;
  style.top = popup.offsetTop;
  style.width = popup.offsetWidth;
  style.height = popup.offsetHeight;
  style.zIndex = popup.style.zIndex;
  /*
  style.setExpression($intern_110, $intern_1354);
  style.setExpression($intern_111, $intern_1355);
  style.setExpression($intern_91, $intern_1356);
  style.setExpression($intern_93, $intern_1357);
  style.setExpression($intern_1358, $intern_1359);
  */
  popup.parentElement.insertBefore(frame, popup);
}
You can see where I commented out the style.setExpression calls, which seemed to fix the issue (the $intern strings are obfuscated constants; css property names like 'left'). Obviously, hand-editing compiled output isn’t sustainable. The other fix available right now is to add this meta tag to the HEAD section of your HTML documents, which tells IE8 to render in IE7 compatibility mode:
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
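If editing every HTML page is impractical, the same directive can also be delivered as an HTTP response header. Here’s a minimal sketch assuming an Apache server with mod_headers enabled (the header name and value are the same ones the meta tag uses):

```apache
# Send the IE7-compatibility directive as a response header
# instead of a per-page meta tag (requires mod_headers)
Header set X-UA-Compatible "IE=EmulateIE7"
```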

For more information on this, follow the GWT issue tracker bug 3329.
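For what it’s worth, the underlying problem is that IE8 in standards mode no longer implements setExpression (hence the “Not implemented” error). A more durable fix would be to guard the calls with a capability check; here’s a hypothetical sketch (this is not GWT’s actual patch, and applyExpressions is a made-up helper name):

```javascript
// Hypothetical sketch: only call setExpression where the browser
// actually implements it (IE7 and earlier, or IE8 in compatibility
// view). Elsewhere, skip the calls instead of throwing.
function applyExpressions(style, expressions) {
  // IE8 standards mode (and non-IE browsers) lack setExpression
  if (typeof style.setExpression !== 'function') {
    return false;
  }
  for (var prop in expressions) {
    if (expressions.hasOwnProperty(prop)) {
      style.setExpression(prop, expressions[prop]);
    }
  }
  return true;
}
```

With a guard like this in the popup’s onShow code, IE8 in standards mode would simply skip the expressions rather than throwing, at the cost of whatever dynamic positioning the expressions provided.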