
Book Review: Google Maps API V2

Seven months ago, I wrote about Google Maps Gotchas. I mentioned Scott Davis’ Google Maps API Pragmatic Friday article, published by the Pragmatic Programmer folks. Well, a few things have happened since then. In April, Google released version two of their maps API (though they still haven’t set a date when version one will no longer be supported), Scott revised his article, and I spent a tax-deductible $8.50 to give it a read. What you’ll find below is my take on his article.

The good: first, the ordering was easy, and I received my custom PDF (complete with “Prepared Exclusively for Daniel Scott Moore” as a footer on every page) in less than 20 minutes. Scott explains in a very easy to understand fashion how to create a map. He also covers each of the API’s javascript objects and how to use them. In particular, I thought the list of events and the objects that fire them (in the ‘Events’ chapter) was a good reference. Google provides a class reference too, but Scott’s explanations are a bit easier to follow. Here’s a comparison, for the GMarker class:

Google API:

A GMarker marks a position on the map. It implements the GOverlay interface and thus is added to the map using the GMap2.addOverlay() method. A marker object has a point, which is the geographical position where the marker is anchored on the map, and an icon. If the icon is not set in the constructor, the default icon G_DEFAULT_ICON is used.

After it is added to a map, the info window of that map can be opened through the marker. The marker object will fire mouse events and infowindow events.

Davis’ Book:

In the Core Objects section, we introduced the GLatLng. A GLatLng stores a Latitude / Longitude coordinate, but it doesn’t offer you a way to visualize it on a map. A GMarker is the way to add GLatLngs to the map for display purposes. The GMarker constructor takes a GLatLng as the only required argument. Once we have the marker, we need to tell the map to display it; map.addOverlay(myMarker) should do the trick. (Objects that you superimpose over the map are called Overlays.) You can remove the marker using map.removeOverlay(myMarker). To remove all overlays, use map.clearOverlays().

var myPoint = new GLatLng(38.898748, -77.037684);
var myMarker = new GMarker(myPoint);
map.addOverlay(myMarker);

Theoretically a map can support an unlimited number of markers, but anecdotal evidence suggests that performance starts to slow down significantly after a hundred or so markers. (File under, “Doc, it hurts when I do this.”)

I liked the real-world examples–the fact that you could click through and see the code Scott was writing about in action on his website is a real plus. In addition, he builds a decently complex example in Chapter 7, where the user can add and delete cities. He also gives a good warning about examples that use GMap rather than GMap2.

However, there were some issues. Scott’s coverage of the upgrade to version two of the API is, unfortunately, rather spotty; timing may be partly to blame, since his blog mentions a June feature release, while the book was revised in April. He also doesn’t cover GDownloadUrl, a convenience method for XMLHttpRequest processing, or the GUnload method. I’ll freely admit that the maps API is a moving target, and some of the omissions above may be due to that.
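
For readers who haven’t used it, the GDownloadUrl idiom looks something like this (a sketch: the URL and the marker-building logic are made up, but the two-argument callback matches the API):

GDownloadUrl("markers.xml", function(data, responseCode) {
  // 'markers.xml' is a hypothetical URL for illustration
  if (responseCode == 200) {
    var xml = GXml.parse(data); // parse the response text into a DOM
    // walk the DOM and call map.addOverlay() for each marker found
  }
});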

However, there are other problems. Though billed as a beginner book, he omits what I consider to be one of the fundamental challenges of Google Maps development–the performance obstacles posed by large numbers of database-driven markers (beyond the comment quoted above in the GMarker excerpt). In addition, he doesn’t cover design options, nor cross-browser issues (like the transparent PNG problem in IE).

In the last chapter, he mentions good examples of mapping websites, but omits references to useful developer resources–something that even dead tree books manage. In particular, he doesn’t mention mapki.com (a wiki full of useful user-provided data) nor the Google Maps group (which some users consider a primary differentiator between Google and Yahoo Maps).

One final gripe: the 75 pages of content I expected were really only 45, since text filled only about 60% of the column width. I expect that in articles I read for free on the web, but in books I pay for, I like a higher content-to-page ratio.

In short, this ebook is a good choice for the first-time Google Maps builder, due to the tutorial nature of much of the book, the examples, and the explanation of typical good javascript practice, such as using anonymous functions for event handlers (see the sketch below). It is not entirely adequate in covering version 2 of the API, possibly due to API changes, and it ignores some of the more complex aspects of the API.
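
For example, the anonymous-function idiom looks something like this (a sketch, reusing the hypothetical myMarker from the excerpt above):

// attach the click handler as an anonymous function rather than a named global
GEvent.addListener(myMarker, "click", function() {
  myMarker.openInfoWindowHtml("You clicked the marker!");
});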

If you’re looking for a folksy introduction to the Google Maps API, it’s worth the $8.50 to have a coherent guide. If you’ve already muddled through one Google Maps project, piecing together things from the API docs and various blogs, it becomes less worthwhile. And if you want a discussion of the more complex Google Maps issues, this document is not the right place to look.

[tags]Scott Davis, Google Maps, Pragmatic Fridays[/tags]

11 Tips for Managing Virtual Employees

Via WebWorkerDaily, here are 11 tips for managing virtual workers. (Update, 6/2009: that link is dead, but here’s the Wayback Machine’s archived version.)
I have been working virtually on and off for over three years (mostly as a contractor). I think that most of Scott’s comments are spot on, except two of them.

#1 (It’s not cheaper) says that you’ll need hardware like webcams and wireless keyboards. I’ve done perfectly well with a laptop and cellphone. In fact, one of the great things about working virtually is that everyone can choose equipment to suit their needs.

#4 (Metrics, Metrics, Metrics) is one I’d amend, rather than totally discount. I think what Scott is really saying is that you need some means of verifying that work is getting done. And if it is not, you want to know sooner rather than later, so the more incremental feedback you can get, the better. In software, this can be accomplished by metrics, but frequent releases can also serve the same purpose.

That said, all of his post is worth reading.

[tags]working virtually,virtual software development[/tags]

On recruiters and job boards

Joel, of Joel On Software fame, has created a jobs board with some rather different rules for posting: no hidden company names (no ‘anonymous jobs’) and a money back guarantee. The main goal is to make a niche job board where folks from the JoS community can find jobs at good companies. Requiring company names also screens out many recruiters.

I think intermediaries in the hiring process can provide high value (I love reading the insights of Nick Corcodilos’ “Ask The Headhunter”). I’ve gotten contracts through recruiters before. Such recruiters can add value by screening candidates and pulling from a wider pool than a company might have available. It’s outsourced HR.

However, I have a resume on Monster. Often, I’ll get an email from a recruiter about a job that is obviously a wrong fit. For example, I have some ATG Dynamo experience and some StoryServer experience, both from several years ago. I’ve received emails detailing jobs where extensive experience with the latest version of ATG Dynamo or StoryServer is required. If the emailer had bothered to read my resume after the keyword match, they’d know I was not a good fit. It is job spam. I’ve hidden my resume on Monster, which helps, but before I did so, my email was sucked into the recruiter databases.

Now, you may be saying “Boo-hoo, Dan! You’re being offered jobs that aren’t a good fit. Just delete the emails!” It’s true, I can treat those emails like any other spam. And the costs are the same as any other spam–my attention. (Occasionally it’s useful to see what positions and technologies are being recruited for, which keeps me from sending all the job spam directly to the trash). The business model for these recruiters is also the same–send out enormous quantities of (nearly costless) email and play the numbers. Employees do this kind of thing all the time when they send out massive numbers of resumes, and it’s broken.

Back to Joel’s jobs board. After a few days, someone violated the no ‘anonymous jobs’ rule, and Joel’s request for suggestions on how to deal with it sparked this discussion, which is pretty lively. Pretty much every response fell into one of two camps: don’t allow ‘anonymous jobs’ lest the board become another Monster.com, or allow recruiters but make anyone submitting an anonymous job check a box, and let users hide such entries. Some folks also suggested charging recruiters more (sometimes much more) than companies, or creating a job board just for recruiters, or creating some kind of rating system that would let ‘good’ employers (whatever that means) float to the top of the listings. It does look like Joel has decided to keep the ‘no anonymous jobs’ rule. The JoS job board is now an intermediary, even though it provides nothing more than aggregation and a bit of screening.

I said before that intermediaries in the hiring process can add value. I believe this is true of any intermediary in any process, whether it be a job board, recruiter, real estate agent, car salesman, travel agent or anything else. But (cue the Jaws theme) disintermediation due to decreased information distribution costs means intermediaries can no longer add value just by having access to information (whether it be MLS listings or phone numbers for hotels in Australia). Now they need to provide something beyond what the internet can provide, whether that be deep experience in a particular city’s real estate, a good relationship with a hiring manager, or the aggregation of interesting eyeballs.

(“And that’s something that job spammers simply cannot provide.” Well, that was my last sentence until I re-read my post. But job spammers do actually aggregate interesting eyeballs; they just do it inefficiently.)

[tags]joel on software, job boards, recruiters, disintermediation[/tags]

Book Review: Fallout

This graphic novel, subtitled “J. Robert Oppenheimer, Leo Szilard, and the Political Science of the Atomic Bomb”, is a good quick read. It’s hard for my generation, raised with the fall of the Soviet Union, to appreciate how stupendous the atomic bomb really was. But this book does a great job of making the history of that period accessible. The book is not that short–around 200 pages–but, due to its graphic nature, is very easy to read.

Fallout is really divided into two major sections. The first is concerned with the idea and creation of the atomic bomb, starting from Szilard’s ideas in the 1930s and ending with the Trinity test in 1945. The second is concerned with the inquiry into Oppenheimer’s advisory position to the Atomic Energy Commission, which occurred in the political climate of the 1950s. Both are worth reading, but the second, which has much more text–portions of letters are printed along with the graphics–is a chilling reminder of the craziness of that time.

With 6 different authors listed on the cover (and more in the back pages), the illustrations change often enough that you do have to pay attention to know who is speaking. Additional difficulties arise because there are so many characters. I think the book would be stronger if one author had been responsible for all of the graphic content because the characters would be easier to keep track of.

One very nice aspect of this book is the end notes. At the back of the book, extensive text outlines what parts are true and what parts are surmise. As the front of the book says, “many of the quotes and incidents that you’ll think most likely to be made up are the best documented facts.” For example, Teller, one of the scientists, denies his similarity to Dr. Strangelove, and another, Szilard, devises his own cancer treatment using radiation.

All in all, if you’re in for a light introduction to the history of one of the heaviest subjects, Fallout is a good choice.

“Fallout” at Amazon.

[tags]atomic bomb, graphic novel, oppenheimer[/tags]

Eclipse and existing CVS modules

On a project I’m working on, we just made the move from local CVS via a Windows share to remote CVS via ssh. I’m a big fan of that move, because I was seeing some weird behavior from the drive, and because Eclipse doesn’t support local repositories.

The Eclipse/CVS FAQ had some good information, but I had a bit of trouble importing CVS source into a Java project (which is important so you get code completion, etc.).

The easiest way I’ve found to get things working in Eclipse with previously existing CVS modules is to do this:

  • Make sure everything is checked into CVS using command line tools (see the sketch after this list).
  • Open eclipse.
  • Delete the project(s). When it prompts you “Are you sure you want to
    delete project ‘xxx’?” choose “Do not delete contents” and click “Yes”.
  • For each project:
    • On the menu, Go to File/New Project.
    • Choose “Java Project”.
    • Give the project name you’ve used for the module.
    • On the same screen, choose “Create project from existing source”. Put in the path of the directory of the previously checked-out module.
    • Click finish.
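
For step one, an ordinary command line commit does the job; a sketch, with a hypothetical working copy path:

cd ~/work/mymodule   # the previously checked-out working copy
export CVS_RSH=ssh   # tunnel CVS traffic over ssh
cvs commit -m "checkpoint before Eclipse import"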

Eclipse is smart enough to connect to CVS by looking in the CVS subdirectories, although you may need to change your project’s CVS settings. I had to change it from the ext to the extssh connection method.
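
If you’re curious where Eclipse finds that information, look at the CVS/Root file in any checked-out directory. After a command line checkout over ssh, mine held a single line along these lines (username, host, and path are placeholders):

:ext:username@cvs.example.com:/path/to/cvsroot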

This was with Eclipse version 3.1.2 Build M20060118-1600 on Windows, with cygwin.

[tags]Eclipse, cvs[/tags]

Google Projects?

Update, 9:57am: Well, I feel a bit foolish. Looks like Google is planning to host some open source projects.

Brian thinks the world needs google projects. Unlike Brian, I am not a committer on any open source project, but I do use a fair number. I also have believed in the principles of open source for years.

Does the world need yet another projects website? As I said above, I can’t speak for committers, but as a user of their work, I’d like the environment to be as productive as possible for them (how generous of me, no?).

But as a user, I can tell you that a real problem is just finding some of the great work that has already been done. As an example, Google and Yahoo had no idea that the CalendarTag library was out there, at least in the first 30 pages of results. Perhaps I was choosing poor keywords, but I had a tough time finding what I was looking for. Perhaps a microformat for open source software projects would help?

SourceForge has flaws, but the fact is that its search found what I was looking for, probably because SourceForge search is by nature limited to software projects. The fact that you have to go through three pages to download a tarball is an annoyance, but not a capital one. It’s outweighed by the fact that SF has an enormous number of projects (126,520, as of today) and is relatively fast.

Brian and I are approaching the problem of a new projects database from different perspectives, but I believe that one very real problem in the management of open source projects is locating the appropriate project–search, in other words. Sites like CMSMatrix are a start, but don’t work as well for smaller components. And adding Google Projects to the mix isn’t going to solve this problem.

[tags]Google, project management, SourceForge[/tags]

Moving to WordPress

Well, I finally decided to move to a more modern blogging platform. I have used Movable Type 2.64 for almost three years, but it was time to move on:

* I had turned off comments because of blog spam. But I’ve recently heard from several folks that they’d wanted to comment. I love comments and the discussion that ensues, so I wanted a more sophisticated commenting workflow.

* I wanted easy support for tagging posts. How Web 2.0™!

* General cruft from a 3-year-old program: MT is well designed and I have had few problems with it, but I wanted to see what the current state of blogging software was.

I don’t know whether I could have had such features with a more modern version of Movable Type, but it certainly seemed to me that WordPress has more mindshare, plus it’s open source. And it is supported by my ISP. So, I moved from Movable Type 2.64 to WordPress 2.0.2. I followed these fantastic directions and, for importing my 350+ entries with correct permalinks, I followed these directions.

I ran into only a few problems.

* The directions on codex.wordpress.org appear to be for a slightly different version of WordPress and reference import-mt.php, rather than mt.php.

* I ended up having to edit my php.ini file to raise the memory limit so I could import my 1.5 MB MT export. 10M wasn’t enough; 50M was plenty.
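
For reference, the php.ini directive in question is a single line:

memory_limit = 50M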

* The directions for preserving your MT search engine entries are great, but I ran into one problem. Because I have an old version of Apache, this RewriteRule did not work:

RewriteRule archives/0*(\d+).html /uri/to/blog/index.php?p=$1

Instead, I had to use plain old character classes:

RewriteRule archives/0*([0123456789]+).html /uri/to/blog/index.php?p=$1

Here’s my entire RewriteEngine entry:

RewriteEngine on
RewriteRule weblog/archives/0*([0123456789]+).html wordpress/index.php?p=$1
RewriteRule weblog/index.rdf /wordpress/index.php?feed=rdf
RewriteRule weblog/index.rss /wordpress/index.php?feed=rss
RewriteRule weblog/index.xml /wordpress/index.php?feed=rss2
# http://www.mooreds.com/weblog/archives/2004_10.html to
# http://www.mooreds.com/wordpress/?m=200410

RewriteRule weblog/archives/([0123456789][0123456789][0123456789][0123456789])_([0123456789][0123456789]).html /wordpress/index.php?m=$1$2

# http://www.mooreds.com/weblog/archives/cat_books.html to 3
RewriteRule weblog/archives/cat_books.html /wordpress/index.php?cat=3
RewriteRule weblog/archives/cat_java.html /wordpress/index.php?cat=5
RewriteRule weblog/archives/cat_mobile_technology.html /wordpress/index.php?cat=7
RewriteRule weblog/archives/cat_programming.html /wordpress/index.php?cat=6
RewriteRule weblog/archives/cat_technology.html /wordpress/index.php?cat=4
RewriteRule weblog/archives/cat_technology_and_society.html /wordpress/index.php?cat=2
RewriteRule weblog/styles-site.css /wordpress/wp-content/themes/ocadia/style.css
RewriteRule weblog/ /wordpress/

* Users I imported, even if I gave them the Editor role, weren’t able to edit posts they owned. I may figure this out later, but right now I just made every user an admin.

So far I’ve been very happy with my decision, if for no other reason than the built-in comment moderation and the UI advances. Let’s see if WordPress lasts for three years.

[tags]wordpress, weblog migration, moveabletype[/tags]

Notes from a talk about DiamondTouch

I went to another University of Colorado computer science colloquium last week, covering Selected HCI Research at MERL Technology Laboratory. I’ve blogged about some of the talks I’ve attended in the past.

This talk was largely about the DiamondTouch, but an overview of Mitsubishi Electronic Research Laboratories was also given. The DiamondTouch is essentially a tablet PC writ large–you interact through a touch screen. The biggest twist is that the touch screen can actually differentiate users, based on electrical impulses (you sit on special pads which, I believe, generate the needed electrical signatures). To see the DiamondTouch in action, check out this YouTube movie showing a user playing World Of Warcraft on a DiamondTouch. (For more on YouTube licensing, check out the latest Cringely column.)

What follows are my loosely edited notes from the talk by Kent Wittenburg and Kathy Ryall.

[notes]

First, from Kent Wittenburg, one of the directors of the lab:

MERL is a research lab. They don’t do pure research–each year they have a numeric goal of business impacts. Such impacts can be a standards contribution, a product, or a feature in a product. They are associated with Mitsubishi Electric (not the car company).

Five areas of focus:

  • Computer vision–2D/3D face detection, object tracking
  • Sensor and data–indoor networks, audio classification
  • Digital Communication–UWB, mesh networking, ZigBee
  • Digital Video–MPEG encoding, highlights detection, H.264. Interesting anecdote–realtime video processing is hard, but audio processing can be easier, so they used audio processing to find highlights (GOAL!!!!!!!!!!!!) in sporting videos. This technology is currently in a product distributed in Japan.
  • Off the Desktop technologies–touch, speech, multiple display calibration, font technologies (some included in Flash 8), spoken queries

The lab tends to have a range of time lines–37% long term, 47% medium and 16% short term. I think that “long term” is greater than 5 years, and “short term” is less than 2 years, but I’m not positive.

Next, from Kathy Ryall, who declared she was a software person, and was focusing on the DiamondTouch technology.

The DiamondTouch is multiple user, multi touch, and can distinguish users. You can touch with different fingers. The screen is debris tolerant–you can set things on it, or spill stuff on it and it continues to work. The DiamondTouch has legacy support, where hand gestures and pokes are interpreted as mouse clicks. The folks at MERL (and other places) are still working on interaction standards for the screen. The DiamondTouch has a richer interaction than the mouse, because you can use multi finger gestures and pen and finger (multi device) interaction. It’s a whole new user interface, especially when you consider that there are multiple users touching it at one time–it can be used as a shared communal space; you can pass documents around with hand gestures, etc.

It is a USB device that should just plug in and work. There are commercial developer kits available in C++, C, Java, and ActiveX. There’s also a Flash library for creating rapid prototype applications. DiamondSpin is an open source Java interface to some of the DiamondTouch capabilities. The folks at MERL are also involved right now in wrapping other APIs for the DiamondTouch.

There are two sizes of DiamondTouch–81 and 107 (I think those are the diagonal measurements). One of these tables costs around $10,000, so it seems limited to large companies and universities for a while. MERL is also working on DiamondSpace, which extends the DiamondTouch technology to walls, laptops, etc.

[end of notes]

It’s a fascinating technology–I’m not quite sure how I’d use it as a PC replacement, but I could see it (once the cost is more reasonable) as a bulletin board replacement. Applications that might benefit from multiple-user interaction and a larger screen (larger in size, but not in resolution, I believe), like drafting and gaming, would be natural fits for this technology too.