A web presence in two hours

Being a computer guy is a bit like being a doctor.  Not in the vast amount of learning (I don’t even have a degree in CS), nor in the paycheck (no mid six figure salary here).  Rather, on finding out that you have some job associated with computers, people will ping you with questions:

  • Why is my computer running slow?
  • How do I send video with my email?
  • Can you make me a website?

I usually beg off the first two questions claiming lack of expertise.  The third question always perks up my ears–building websites and web applications is my area of expertise.

However, many times, I find that people don’t need a website, they need a web presence.  This is entirely different–websites imply a level of maintenance that is often more than is needed.  A web presence is perfect for an offline business that simply wants to establish a beachhead on the internet, but likely won’t interact with users via the web (more likely via phone or email).  In other words, a web presence is like a brochure for the company–designed once, changed rarely, and read many times.

(A quick note: I believe most small businesses will benefit by using the internet to enable new ways of interacting with customers, but such applications can be a big leap–starting out with a web presence can be a great first step.)

A friend of mine often recommends leveraging other websites, like myspace or facebook, to provide a web presence for small companies and independent operators.  I prefer blogs.

Here are the steps to set up a web presence in two hours.  I assume you can create a word document, have an email address, and have access to a web browser.

  1. Gather your page content together.  Typically, you should have a home page explaining what you do and what you offer.  You should have a contact page, explaining how people can get in touch with you, including phone number, maps, etc.  Any other pages are optional, but other useful ones may include how to donate to your organization, a portfolio of your services, or a links page pointing to other useful information about your niche.
  2. Choose a blogging site.  Two main ones are wordpress.com and blogspot.com.  WordPress has a better post editor and handles pages more gracefully; it will in general be easier for neophytes to use, but does not allow advertising.  It is also easier to migrate to a full featured website later.  Blogspot gives you more options, in terms of layout control and look and feel.  I’ve heard that content on blogspot is more easily found by Google.  Either one of these is a fine place to start.
  3. Sign up for your blog.  Make sure you get a name that makes sense: joesmithesquire.wordpress.com, for example.
  4. Work through a tutorial/faq: Here’s one for blogspot (called blogger in this tutorial–they’re the same).  Here’s one for wordpress.
  5. Add your content in pages via cut and paste.  If you are using blogspot, you will need to set up the ‘Blog Posts’ section of the layout to only show the last post, and then make sure your home page is the last page you add.  If you are using wordpress.com, you should just create pages.  Set your homepage by going to ‘settings’ then ‘reading’ and set ‘Front page displays’ to ‘a static page’.
  6. Choose a theme from one of the many options provided by the blogging sites.  It won’t be perfect, but it will be free.  You can always spend money to customize the look and feel later (to re-do your brochure).
  7. If you want, you can register a domain name, like joesmithesquire.com, (GoDaddy will be happy to help with this for about $10/year) and point it to the blog.  wordpress.com charges $10/year for this feature; blogspot offers it for free.

That should be it.  It’s not perfect, but setting up a web presence gets your organization worldwide publicity for the cost of a few hours of learning.  You can now maintain it; if any information changes, you can make those modifications without paying expensive people like me 🙂 .

[tags]web presence[/tags]

Interview: SEO basics for web developers with Ashley Rader

I just recently did an interview with an old high school friend, Ashley Rader, who is now in the SEO business.  Since I build custom web applications and websites, but know next to nothing about SEO, I took the opportunity to ask questions that I think web developers of my background would appreciate.  The interview follows.

Dan Moore: My understanding of SEO is that it is how you show up on the first page in search engine results.  How is that understanding incorrect?

Ashley Rader: That is mostly correct, yes.   Although if you were to ask Google that question, their aim is to deliver sites at the top of the search engines that are most relevant to the search.   They like quality sites, with unique content, and with a level of “authority” where google has found other sites to reinforce that relevance.   SEO is the process by which webmasters and site owners can attempt to increase the likelihood that Google thinks their site is relevant to a particular search. Sites can still rank at the top of google for particular results without SEO techniques, but it is a lot harder to do so.

Dan Moore: Why is it harder?

Ashley Rader: Without SEO you would have to have some other form of publicity working in your favor.   For example a new movie comes out.   There are commercials on TV and buzz amongst movie gurus.   They might naturally begin linking their websites and blogs to the movie’s website, which in turn yields higher search engine results for that website.    The webmaster of the site probably doesn’t have to do anything “unnatural” to enhance their search engine rankings because other forms of marketing are working on their behalf to enhance their search ranking for a particular phrase.   Most ordinary website owners and business owners don’t have that kind of marketing available to them so in those cases, they would need to use SEO techniques to get the same result.

Dan Moore: So, if I understand you correctly, SEO is a form of marketing, just like TV ads?

Ashley Rader: Not exactly, there are forms of marketing that you can use as a part of an SEO or SEM campaign.   Buying text or banner links and advertising on a page might be part of an SEM campaign.   However even when it comes to marketing your site in other mediums online, there are different goals for purchasing an ad.   For our network of sites we sometimes purchase ads in order to gain traffic from a particular site, and in other cases we might purchase advertising for the sole purpose of search engine optimization and having google see a link from a particular site to our site. SEM (search engine marketing) could be considered a form of marketing, that is the broader name for it.   SEO is the process of in a sense manipulating the way google (or the other search engines) might view your site and the topics or search phrases that it might deem it relevant for.

Dan Moore: Is it worth targeting anyone other than Google?

Ashley Rader: Not worth spending a lot of time on anyone but Google, no. That’s why I keep mentioning google.   Some of the things you do will enhance the results for Yahoo, Live, etc, but not worth spending hours and hours trying to rank for both.

Dan Moore: Can you elaborate a bit on how you started out in the SEO field?

Ashley Rader: I started my website back in 2005 and really started researching online, how do all these sites get to the top of the search engine results.   I didn’t have a lot of money for advertising my website so I was more interested in finding out how to do it for FREE (or with the only cost being my own sweat and labor).   There are a million ebooks out there on how to do SEO and I was lucky enough to find one from a guy who really knew what he was talking about, and is one of the experts in the industry.

Dan Moore: How many ebooks/methods did you try before you found the one that worked?  And how did you know it worked?

Ashley Rader: Actually just the one.  As I started to implement some of the things I was reading about, my first site started moving up the search results pretty quickly.    Since then the author of the original ebook that I read started an SEO/SEM membership organization (called Stompernet) of which I’ve been a faithful member since 2006.   SEO has changed completely even since I started in 2005 so it is really important to stay current on the kinds of things that work today.   The things that I did SEO-wise for my site back in 2005 might get me banned from Google today, so it’s always changing and definitely keeps things interesting.

Dan Moore: So even now, you are constantly overhauling your website(s) to take advantage of new techniques that you read about or discover?

Ashley Rader: Not exactly overhauling, but tweaking – yes, I’m constantly tweaking.   If you start off the bat by implementing good structure you likely won’t need to completely overhaul it, but there are always little tweaks and things that tomorrow might require a different kind of title tag, or a different linking structure.

Dan Moore: A different kind of title tag?  What do you mean–different phrasing, or what?

Ashley Rader: Changing up the keywords, changing the way it is written, changing the “call to action”.    Since title tags control what appears on the primary line of the search results, it’s important that they are optimized not only for SEO purposes – so including your main keyword(s) for the page.   You also need to balance that with the fact that it is your main line of advertising in the search results, so you want it to be compelling for people to click on. Title tags written without keywords, or with a bunch of extraneous unrelated text, are often not clicked on.

Dan Moore: You mentioned implementing good structure at the beginning of website development–what does that typically look like?

Ashley Rader: Title tags – those are probably the most important on page factors related to SEO results.   You want them to be unique (unique from other pages as duplication in the title tags can lead to penalties), and keyword rich.    You should think of each page of a site not as a subpage of the entire website, but as a single page that has the potential to rank on its own.   Keyword and Description tags should be filled as best as possible (and not left blank), but they don’t affect SEO nearly as much as they used to.   Beyond that, the internal linking on the page.  Your most important pages should be no more than 2 links away from the homepage if possible.   Having good navigation that google can follow (so no javascript) is critical to the ability for google to index and revisit your site. Also using nofollow tags for links that point to pages that you don’t care about being indexed or ranking should be used when possible to boost the link authority of the other links on your page.
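A minimal HTML sketch of the on-page elements Ashley describes (all page names, keywords, and URLs here are hypothetical, for illustration only):

```html
<!-- Each page gets a unique, keyword-rich title: the most important
     on-page factor mentioned above. -->
<head>
  <title>Personalized Wedding Favors | Joe's Favor Shop</title>
  <!-- A filled-in description tag: less ranking weight than it once
       carried, but it should not be left blank. -->
  <meta name="description" content="Hand-made, personalized wedding favors.">
</head>
<body>
  <!-- Plain HTML navigation (no javascript links), so google can follow
       it; keep important pages within two clicks of the homepage. -->
  <a href="/favors/">Favor Catalog</a>
  <!-- nofollow a link you don't need indexed, to conserve the link
       authority of the other links on the page. -->
  <a href="/login/" rel="nofollow">Customer Login</a>
</body>
```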

Dan Moore: There are ‘validation’ programs to show malformed HTML on websites; are there any analogous SEO programs?

Ashley Rader: Yes, there are a TON of programs out there that can look at many of the factors that influence ranking.   All should be taken with a grain of salt, as the ultimate determining factor is how you actually rank.  The one that I use is iBusinessPromoter.   It will analyze many of the on-page factors and tell you if you are over- or under-optimizing in comparison to the sites that are top ranked for your particular keyword.  I have a number of colleagues who focus SO much on tweaking their on-page factors and aren’t looking at the big picture of their overall SEO campaign, and they are just spinning their wheels wondering why their site is still not ranking well.   These tools can help, but are not the end-all, be-all. Also, Google Webmaster Tools provides some fantastic information as to why a site might not be ranking well, whether it be the site layout, lack of incoming links, or if maybe a penalty has been imposed.

Dan Moore: I always tell people that the best way to get better placement in search results is to produce regular, interesting content about whatever niche they are in.  You have been talking about some techniques that can produce better results, so is my advice incorrect?  If so, do you have a similar (or better!) short capsule of advice?

Ashley Rader: Your advice is 100% correct.   There are other things you can do to reinforce that, but google LOVES regularly updated, unique, quality content. Google loves blogs for that very reason.

Dan Moore: Do you blog?

Ashley Rader: I have a number of blogs (I think maybe 6 or 7 for some of the niches that my websites are in), although at this time I have so many fires in the pot that I am not contributing to them as much as I should.   That is one of the areas I’m working on consolidating and improving on.

Dan Moore: 6 or 7, that’s quite a few.  How do you mean “consolidating”?  Generating more multipurpose content?  Getting other contributors?

Ashley Rader: Other contributors.  Many of them are set up right now with RSS feeds from different sources but that is not ideal.  Matt Cutts from google says that RSS feeds are ok to include, but blogs still need original content.   I have a writer from Odesk who does daily posts for 2 of my blogs.

Dan Moore: Ahh, ok.  Can you talk about the future of SEO–what developments on the horizon excite you?

Ashley Rader: Well I’m not 100% sure what lies in the future, SEO is kind of an “in the moment” type of science.   Since we don’t know what changes Google is going to make tomorrow to their ranking algorithm it is hard to say.   But since I have started, they are moving more and more towards rewarding sites that provide really unique, valuable content to their users.   Actually, one thing they are beta testing right now is the concept of “social search”.   Search results that are different for every user based upon their surfing patterns and sites they visit and elect to come up higher.   I can log into my google account and all of my websites (even my brand new ones) are all #1 on google for their search terms!   🙂   Right now google says these voted results aren’t affecting the current algorithm, but I would guess in the future they will.  Social networks seem to be the way that google is moving, so gone are the days where you could put crappy, scraped content on your site and spam it to rank well.   You need to provide a valuable experience for your website visitors and in turn google will begin to reward you for that with higher rankings.

Dan Moore: Are there any SEO campaigns that you are particularly proud of and/or would point to as proof of your expertise?

Ashley Rader: My main website is www.momentsofelegance.com.   Currently (just checked) #4 on google for “wedding favors” which is a very popular and competitive search phrase. During the summer months which is the height of wedding season, we get about 3500 hits a day and 98% of those are from organic google search results. Sorry it’s #5 on google not #4.

Dan Moore: Cool, well, thank you very much for your time. This was very helpful; any parting thoughts?

Ashley Rader: The most important factors I would say are make sure you know and understand your industry, the search phrases that people are using to find the types of products or services that you offer (and to research this, as what you “think” people are searching for is not always what they are actually looking for).  Use those keywords in your site design – not in a spammy way but a natural way.   Then if you are providing good, quality content that appeals to and attracts visitors, your site will gain relevant links and will be rewarded by google with better rankings.  You’ve got the right idea, content is king and with good content you will have more returning visitors and better rankings.

One other small point I should make is not to ignore the “long tail” or more specific keyword searches for a site.  We get about 25% of our traffic from our main search phrase “wedding favors”, but the majority of the traffic comes into our product pages using very specific keywords.   Those are actually the “money phrases” as people who are looking for “wedding favors” are usually just researching, people searching for “hot pink favor boxes with personalized ribbon” are usually ready to buy.   Just an important distinction to make to not ignore the more specific search phrases.

Dan Moore: That’s actually how the search keywords for my blog work out–the majority of the search terms are unique and I get maybe 1 hit a month on them.

Ashley Rader: That’s the good thing about blogs, there are a few basic plugins you can use and your site is pretty much SEO optimized, and your internal blog pages can rank very well for the longer tail search phrases.

Dan Moore: Thanks again!

[tags]seo,interview,ashley rader[/tags]

Need To Identify Your Motherboard? Try CPUID

I was trying to identify a motherboard for relatives so that I could buy them some memory for the holidays.  (What can I say, I’m a romantic!)

I wasn’t able to find it via the Windows Device Manager, but luckily, Google turned up the answer: CPU-Z.  It’s a freeware program that not only tells you any number of useful bits of information about your computer (like the motherboard model and manufacturer), but also will export it to an HTML file suitable for emailing.

This is not information I need every day, but when I did, CPU-Z came through.  Thanks, folks!

[tags]motherboard identification, memory, hardware[/tags]

Squid cache log configuration thoughts

I have found that debugging squid, the excellent web proxy, is difficult.

I believe this is for a number of reasons, but primarily because I’m a developer, and just want to get squid working. I’ve found that I do this with a lot of tasks that are required of the small independent developer with time pressures–CSS/UI development, database design, system administration. All these are things that can be done well by competent specialists, but my clients often don’t have the money to hire them, or time to find them, so they get me. What they get, for example, is not in-depth CSS knowledge or a deep command of the entire user interface problem domain. Rather, clients get my ability to figure out the problem (based, to a large extent, on internet knowledge and, to a lesser extent, on my deep understanding of web applications and how they should behave) and my tenacity to test and test again to ensure that corner cases are dealt with. It’s a tradeoff.

Regardless, squid is hard for me to debug, and I found this page, which lists all the options for the cache.log debugging parameters, useful. In practice, though, it was only moderately helpful: ‘client side routines’, number 33, apparently includes squid ACL parsing. Rather, I’ve found the Squid FAQ to be the single most helpful document for understanding squid well enough to ‘get things done’. However, this time I ended up viewing the access log of my http server, while deleting my browser cache multiple times, to confirm that caching was set up the way I thought I had configured it.
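For reference, cache.log verbosity is controlled per numbered section by the debug_options directive in squid.conf; a sketch of the sort of line one might use (section numbers come from squid’s debug-sections list, and the levels are just a guess at a useful starting point):

```
# squid.conf -- debugging sketch
# Keep all sections quiet at level 1, but turn up section 28
# (Access Control, i.e. ACL checks) and section 33 (Client-side
# Routines) for more detail in cache.log:
debug_options ALL,1 28,3 33,2
```

After editing, `squid -k reconfigure` reloads the configuration without a full restart.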

[tags]squid,getting things done[/tags]

Google Chrome First Impressions

Here are my thoughts on Google Chrome. Yup, I’m following the blogging pack about a week late. First off, the install process was smooth. The comic book stated that the rendering engine is Webkit, which should make testing relatively easy. This is borne out by the user agent string: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.29 Safari/525.13

They give you the ability to change the search engine, and other options, easily. It definitely follows a Macish configuration process–you don’t have to apply or save the config changes you make, you just make them and close the options screen.
As Farhad Manjoo mentions, there is a lack of addons. (Addons are pieces of functionality that extend the browser’s behavior [think adblock], as opposed to plugins, which extend the browser’s ability to handle content [think flash]). I didn’t see much about addons or plugins for Chrome searching today, other than some strong desire for them. I don’t remember any mention of addons in the comic book or on the Chrome website. Also, as Manjoo mentions, opening a new tab by clicking next to the existing tabs doesn’t work (though there is a plus icon up there which should not take too long to get used to).

It looks like there is already a way to create simple desktop applications like a calculator that use chrome as their ‘shell’, javascript as the programming language and HTML for user interface definition. That’s very similar to Adobe AIR (at least the ajax version) and something like the C#/XAML pairing as well. Let’s hear it for declarative markup for user interfaces!

The custom start page seems pretty neat, with the ability to have bookmarks not in a pulldown menu, but right on the start page, which also includes the ‘most visited’ sites. Machine learning of this type can be a great time saver.

From a development standpoint, there is a javascript/DOM console, which looks similar to Safari’s. It is, however, much more responsive and stable, though I still can’t figure out what the ‘search’ box does. However, the wealth of development tools that I use every day in FireFox (web developer, yslow, firebug, whois, live HTTP headers) will take time to migrate over to Chrome, if they do so at all. This will continue to make developing in FireFox first and testing in other browsers my default strategy.

Finally, Cringely has some interesting comments on Google’s motivation.

[tags]google chrome[/tags]

Useful Tools: Skype

Skype is something that I’m relatively late to the game on, but now I can’t believe I lived without it. I use it as a secondary line for my business.  (I know, I know, next thing you know I’ll be telling y’all that the “internet” is the next big thing.) I have a relatively old plan and get nailed whenever I go over my minutes–35 cents a minute. I am planning to get a new plan and phone sometime soon, but want to make sure I get a great phone, and haven’t had time to comparison shop.

Skype is letting me postpone getting a new phone by letting me use my minutes more effectively. Check out my sweet old phone–a super solid Nokia:

Phone

At first, I only had Skype Out, which let me call people in the US for the measly sum of $6 a month. Just recently, I signed up for Skype In, which is actually less expensive–$24/year–and lets me receive phone calls.
However, I won’t be passing out my new Skype number–I want people to associate me with the same number I’ve had since I first got my cell phone. Instead, I’ll just forward all calls from my phone to my Skype number when I’m in front of a computer.

I originally got Skype because of the cost savings, but it also makes me more effective on the phone. With a headset that I bought for $40 a year ago, I can type and talk at the same time, as opposed to using speaker phone and/or craning my neck.

The only thing I use my phone for that Skype doesn’t replace is text messages–I use teleflip.com for that, because it is free.

I haven’t really explored the other features of Skype–computer to computer calls or conference calling–but right now Skype does what I need it to do.

Useful Tools: Password Safe

One of my clients has a fairly complicated web application. In any application of this nature, there are a lot of usernames and passwords–for DNS management, databases, accounts for integrated services, etc. Where can you store all these?

Well, you could have a master excel spreadsheet that gets version controlled. You could also go the route that some of my sysadmin friends have–a text file that is PGP encrypted. You could manage all the passwords via extensive sticky notes or social memory (I might recommend against the latter two methods).  Or you could use specialized password management software. I haven’t done extensive research in this area, but so far we’ve been using Password Safe, and I’ve been relatively happy with it.

Good features:

  • You can open up the password file as ‘read only’, preventing you from mistakenly adding or changing data in the file.
  • There’s an option to generate passwords. So, if you’re having a hard time coming up with a secure password, the software can string together a random set of characters.
  • You can copy the password from Password Safe and paste it into your application without even seeing it.
  • Once you give it the master password, it will only stay open for a certain number of minutes.
  • You can group password entries within the tool for better organization.
  • You can associate a URL with a password entry, and use a keyboard shortcut to open that URL when you are viewing the password entry. This is something I thought was kind of useless until I started using it.
  • Password Safe supports Windows and Linux, and there are other projects out there supporting more platforms, like Password Gorilla.

The only issue I can think of is that version controlling the password file can be tedious. Just like version controlling any other binary file, CVS (or any other source management system) can’t do merges. That means that when you change something, you need to make sure that anyone who will change something else in the file updates before they do. The alternative is to not have it version controlled, but I like everything in version control.

If you have many passwords to manage, I’d recommend taking a look!

Useful Tools: wget

I remember writing a spidering program to verify url correctness, about six years ago. I used lwp and wrote threads and all kinds of good stuff. It marked me. It used to be that whenever I wanted to grab a chunk of html from a server, I’d scratch out a 30 line perl script. Now I have an alternative. wget (or should it be GNU wget?) is a fantastic way to spider sites. In fact, I just grabbed all the mp3s available here with this command:

wget -r -w 5 --random-wait http://www.turtleserviceslimited.org/jukebox.htm

The random wait is in there because I didn’t want to overwhelm their servers or get locked out due to repeated, obviously nonhuman resource requests. Pretty cool little tool that can do a lot, as you can see from the options list.
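The same approach generalizes; here is a sketch of a slightly fuller invocation, with the options annotated (the URL is a placeholder, and -A/-np are additions beyond the command above):

```
# Politely mirror the mp3s linked from one page (URL hypothetical):
#   -r             recurse into linked pages
#   -np            never ascend to the parent directory
#   -w 5           wait 5 seconds between requests
#   --random-wait  vary that wait (roughly 0.5x to 1.5x of -w)
#   -A mp3         keep only files with this suffix
wget -r -np -w 5 --random-wait -A mp3 http://www.example.com/jukebox.htm
```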

Useful tools: the catch all email address

When working on a web application that requires authentication, email address is often chosen as a username. It’s guaranteed to be unique, it’s something that the user knows rather than another username they have to remember, and communication to the user is built in–if they’re having trouble, just send them an email.

However, when developing the initial registration portion of a site that depends on email address for the username, you often run through many email addresses as you tackle development and bugs. Now, it is certainly easy enough to get more email addresses through Yahoo or hotmail. But that’s a tedious process, and you’re probably violating their terms of service.

Two other alternatives arise: you can delete the emails you want to reuse from the web application’s database. This is unsavory for a number of reasons. One is that mucking around in a database when you’re in the middle of testing registration is likely to distract you. Of course, if you have the deletes scripted, it’s less of an issue. You’ll need to spend some time ensuring you’ve reset the state back to a truly clean one; I’ve spent time debugging issues that arose from anomalous user state that could never be achieved without access to the back end.
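A sketch of what such a scripted cleanup might look like (the table and column names are hypothetical; adjust to your schema, and delete dependent rows first so you don’t leave orphans):

```sql
-- Remove test registrations so their addresses can be reused.
-- Delete child rows first to keep referential integrity intact.
DELETE FROM user_preferences
 WHERE user_id IN (SELECT id FROM users WHERE email LIKE 'foo%@example.com');
DELETE FROM users
 WHERE email LIKE 'foo%@example.com';
```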

Which is why I favor the other option. If you own your own domain name and have the catch all option set, all email for your domain that does not have a specified user goes to the catch all account. (I wasn’t able to find out much about how this is set up, other than this offhanded reference to the /etc/mail/virtusertable file.)
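For sendmail, this appears to be a one-line mapping; a sketch (domain and account names are hypothetical, and the map database must be rebuilt after editing):

```
# /etc/mail/virtusertable
# Specific addresses are matched first; the bare-domain line is
# the catch-all that collects everything else for the domain.
jaas@example.com        devnull-alias
@example.com            catchall-account
```

Rebuild the map with `makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable` (a sendmail restart may also be needed).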

I find having this available tremendously useful. You have an infinite number (well, perhaps not infinite, but very large) of addresses to throw away. At times, the hardest part is remembering which address I actually used, which is why having a system of some kind is useful. For example, for my dev database on my current project, I start all users with foo and then a number. For the stage database, I start all users with bar and then a number.

In addition to helping with development, it’s useful to have these throwaway email addresses when you are signing up for other web applications or posting on the web. For example, my jaas@mooreds.com account, which was posted on my JAAS and Struts paper, is hopelessly spammed. If I had put my real address on that paper, I would have much more spam than I do now, as jaas@mooreds.com simply goes to /dev/null courtesy of procmail. I’ve also used distinctive email addresses for blog comments and for subscribing to various mailing lists; this way I can find out if everyone really keeps their data as private as they say they will. Of course, there are free services out there that let you have throwaway email addresses but having your own domain gives you a bit more security and longevity.
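The /dev/null routing mentioned above takes only a few lines of procmail; a sketch (the address here is a placeholder):

```
# ~/.procmailrc
# Silently discard everything sent to a hopelessly spammed address.
# ^TO_ is procmail's special macro matching destination headers.
:0
* ^TO_jaas@example\.com
/dev/null
```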

All in all, I find that having a catch all email address set up for a given domain is a very useful development tool, as well as a useful general web browsing technique.