
Advice for startup founders

On an email list I’m on, someone was recently accepted to an incubator program. They asked for advice about startups. I couldn’t resist!

I wrote this (lightly edited) and wanted to post it here so it’d have a permanent spot on the internet.


I’ve been a founder or early employee/contractor at six startups over my two-plus decades in the industry. Some are still kicking, but none have had an exit. Please consider that when contemplating my advice.

When building an early-stage startup:

  • talk to customers, talk to customers, talk to customers
  • revenue > funding
  • know your business domain. If possible, co-found with someone who is an expert in the area if you are not
  • choose boring technology whenever possible
  • choose technology that you know whenever possible
  • be prepared to consult or find other ways to keep the company alive if you don’t have immediate revenue
  • take advantage of in-person support systems to keep your spirits up
  • exercise: it’ll help you be a better founder
  • when you have an ask for someone, make it easy for them to say yes (be specific, do your research, scale the ask to the relationship strength)
  • remember, building a company is a marathon not a sprint (but sometimes you need to sprint!)
  • VCs and founders have shared incentives (both want a successful business)
  • VCs and founders also have misaligned incentives (you get one bet, they get N bets; you need $X, they need $10X-100X)
  • understand your financial runway
  • understand your emotional runway
  • if you have a spouse or partner, make sure they are on board with the big decisions you make
  • being lucky is usually more of a factor than being good
  • you can sometimes make your own luck through hard work
  • all founders should take part in the sales process, no excuses
  • your company’s biggest competitor is customer inertia (or Excel)
  • all advice is contextual, always understand the context of the giver
  • talk to customers
  • talk to customers

Best of luck!
Dan

PS Talk to customers.


Thoughts on a freemium software product

At FusionAuth, we have a free software product that is a critical part of our business model.

A free product is pretty common in the software space because of two things:

  • Software needs to be used to determine its efficacy; a software package is not like a shovel. With a free option, money is no longer a barrier (though time is).
  • Software has zero marginal cost; once you put the effort in to build the first copy, you can create 1M copies for essentially the same cost.

However, supporting a free software product sure isn’t free. This post covers what you need to think about in terms of investing in a free product.

If FusionAuth were a public company, this is where there’d be lawyerese talking about forward-looking statements and safe harbors and whatnot. Suffice it to say that this entire post is my personal opinion.

Side note: a free product is often but not always open source. Free can mean a tier of a SaaS, an open source project, or a downloadable product. Open source has additional complexities, so I’m going to focus on products that are “free as in beer” in this post. FusionAuth has a downloadable product that devs can run for free, within constraints.

We have internal discussions about tweaking the free version of the product. Options include:

  • Keeping the product as-is, but making investments in the community version. This includes bug fixes and feature improvements. Since our free product is production ready and feature rich, this is a solid choice.
  • Improving the free product. There are multiple dimensions to doing so, including:
    • Improving discoverability of the free option, such as highlighting it more on the website, advertising it, or investing in additional documentation around its features and usage.
    • Changing the license to make it usable across a greater number of use cases or with different limits.
    • Moving features currently restricted to paid plans into the free product.
  • Degrading or limiting the free product. These are basically the flip side of improving it:
    • Increasing friction to find or use it.
    • Modifying the license to prohibit currently allowed use cases.
    • Clawing back features from the free plan, focusing on features useful to businesses who are likely to pay. For example, the free plan currently allows unlimited SAML connections, which many competitors throttle.

For the foreseeable future, we are following the first path: keeping the product as-is while continuing to invest in it. The free product is usable and robust, and we get substantial benefits from the community’s usage.

Let’s dig into these benefits.

The value of the free users

First, if developers are using our free software to solve their authentication needs, they aren’t using someone else’s. While you can’t pay a mortgage with developer attention, that doesn’t mean it has no value. Such attention expands our mindshare and market share. More mindshare means more people learning about our solution; more market share means they are neither paying a competitor money nor getting more familiar with a competitor’s product.

Second, such users spread the word about our solution. Sometimes they talk about it on social channels and sometimes on a review site. But often it is prompted by us; here is an example of one of my favorite sets of blog posts, entirely drawn from community experiences. Talking to community members has opened my eyes to the wide variety of ways our product is used. Community stories are not, however, useful as case studies for the sales process; they aren’t detailed enough. But they still help spread the word about the product, highlight our community members, and catch long-tail keywords.

Third, free users improve FusionAuth. They do this by:

  • Finding flaws in FusionAuth, such as bugs or regressions. Here’s an example issue, including workarounds.
  • Requesting new features. We leverage the community further by asking users to upvote such requests so we know what the community wants from the product.
  • Exercising the software by performing integrations that we would never have done. This is a variant on finding bugs. For example, a free user reported we aren’t to spec with regard to the SAML relay state; that had never been an issue for the existing SAML integrations.

Finally, free users also offer each other support. While not all community members are active all the time (our community is more of a Google than a Facebook), a few have a presence on our forum, Slack, and GitHub issues.

The conversion of free users to paid users

Free users may purchase the software in the future. Once someone needs paid features, they may stop using the free plan and buy. We’ve built up trust as a solid solution in their mind and they have already integrated us. So a free user will consider paying you when they are looking to purchase.

Of course, you need a product worth paying for above and beyond your free offering. At FusionAuth, we accomplish this in three dimensions:

  • Operating the product. Many of our customers are fine with the features of the free plan but don’t mind paying us to run it. This can include service level agreements (SLAs) as well, which are like catnip for enterprises.
  • Paid features. These are either features that matter once you reach a certain size (like SCIM) or enhancements of features available in the free tier, such as a more customizable registration form or MFA policy. Choosing which features to charge for is critical, but it is hard to get good data on, since the answer is very business specific.
  • Support. Knowing you can ask questions of the engineers behind the product is valuable, especially for larger businesses.

There are two ways for free users to convert.

They can do it directly, where they use the free version to evaluate or run a “proof of concept” to ensure that our product meets their needs before they ever engage with us. We have plenty of customers who say “we’re already a few months into integrating with you” on our purchase kickoff call, and that ability to “get going” with the product without talking to a sales rep or pulling out a credit card makes the decision easier. Again, they trust the solution will solve their problem. They can also see how the company treats free users and the community in general.

There are also people who “kick the tires” with the free product and discover that it doesn’t meet their needs. We don’t hear about as many of those, but I have talked to a few. In this case, both parties win; getting to a quick no is not as good as a “yes” but is still pretty good. There are also people who do a self-led POC and incorrectly determine it isn’t a fit. They might miss features through lack of docs or a conceptual mismatch. We keep trying to improve our docs, education and product, but you can’t win them all!

There’s also an indirect conversion path, alluded to in the above mindshare point. Free users may use our software on a side project or to learn about authentication and OAuth in general. This lets them add FusionAuth to their toolkit. Then, later, perhaps years later, they can bring us into a project as an option to evaluate or to recommend us for purchase.

In general, free products let you build trust and de-risk purchases.

Keeping up with the Joneses, err the competition

Finally, competition matters. Lots of our competitors have a free offering. Again, this is due to the nature of software. Yay to zero marginal cost!

This is also compounded by the nature of developer tools. Devs are looking for tools to solve a problem, but once they’ve integrated one, it sticks around. I once picked a bug tracking system in a few days (phpbt, what what!) and it was used at our company for years. This means that if you have a friction-free evaluation process, you stand a good chance of being embedded.

Offering a free product matters for our market positioning.

Nothing sells quite like free.

In conclusion

When you make a free product available, you are offering users something of value. Resources are scarce, and supporting the free product isn’t free.

Determining how much time and effort to expend supporting it depends on the value your company gains. You won’t always be able to calculate it in dollars and cents, but it exists nonetheless.

20 years

Wow, it’s been 20 years since I started this blog. The world and my life have changed quite a bit, but I still like to muse on technology, business and more.

Thanks for reading! I appreciate everyone who takes the time to listen to me, whether you scan it, share it, or reach out to me to let me know.

I still think blogging is one of the best things you can do if you want to:

  • engage with people
  • understand a subject
  • build a business
  • create a name for yourself
  • think deeply about a topic
  • find a job (but not quickly!)

Thanks again for 20 great years!

Docker thoughts

Docker is amazing tech for developer productivity.

You can package up the dependencies for your application into one file (a Dockerfile) or more than one file (a docker-compose file). You might use the former for an application or service and the latter if your application depends on a database, cache, or other external architectural component.
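Here’s a minimal, hypothetical sketch of both (the base image, port, and service names are invented for illustration, not from any real project):

# Dockerfile: package a single application and its dependencies
FROM node:18-slim
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]

# docker-compose.yml: the same application plus a database it depends on
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example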

I am definitely late to the Docker party, as I remember Eric Norlin interviewing Solomon Hykes in the early 2010s at Gluecon. In 2018 I used it to stand up some consulting projects, but questioned whether the value justified the required investment.

I’ve recently been using it extensively at work to set up quickstarts. These tutorials let folks stand up our software quickly for evaluation purposes. Docker is a fantastic fit for this use case.

I’ve been a big fan of detailed documentation and readmes for setting up development environments, but Docker takes all that documentation (including tedious things like ‘install python 3.8.12; 3.8.11 doesn’t work’) and puts it into code that is accessible with a single command.

I have a foggy memory of detailed readmes, hours of setup, and dependency hell; Docker removes all of that. That doesn’t mean your Docker images and compose files can’t get stale or won’t need love and attention, but it does mean you control when you test and upgrade such dependencies, rather than discovering problems when a new developer joins the team and can’t get their environment set up.

Shipping software to production via containers (Docker or otherwise) has operational benefits, but I’m less familiar with those. One of the nice things about developer relations is you’re separated a bit from operations. (It’s one of the downsides as well.) I haven’t recently worked with a production system where containers were used, but I have read nice things about the approach. Here’s an article from Google which covers the technical benefits.

What about the licensing? This business model change rocked the world of happy Docker users, but when you take funding, you have to make money. Actually, if you want to survive as a company, you have to make money; if you take funding, you have to try to make a LOT of money. That’s the game; make sure you know it before you start to play. But that’s a different post.

As far as Docker licensing goes, as a Docker user you have two options:

  • pay Docker when you reach the licensing threshold (at the time of writing, Docker Desktop requires a paid subscription for companies with more than 250 employees or more than $10M in annual revenue)
  • use a Docker alternative such as Finch or Podman

As the previous paragraph implies, Docker is just one container technology. The benefits I mention above apply to container-based systems in general, not just Docker; Docker is simply the solution I’m most familiar with.

Docker, and containerization in general, feels like as big a leap in developer productivity as Stack Overflow, git, or memory-managed languages.

Ending a side project

I think side projects are great. They let you experiment, scratch an itch, and learn in a safe, low-pressure environment. They can raise your professional profile too, and they are great fodder for a blog. Of course, they also take precious, precious free time.

These all mean that it is entirely okay to shut down a side project when it is no longer serving you.

The side project could be:

  • less fun than it used to be
  • taking more time than you want
  • an area you used to be interested in but are no longer
  • related to a job you have left

In all of these cases, you have NO obligation to keep doing something simply because you used to.

So, shut it down.

How you do so depends on what the side project is. A blog should be treated differently than a webapp which should be treated differently than a library.

There are some things you can do for any side project:

  • Set a date to end your efforts–this is something for you internally.
  • Run through the finish line–keep up the good work.
  • Announce the end and what it means for the project.
  • Thank everyone who helped you.
  • Optionally make it available for a while longer.

What a “while” is depends on the effort and money required to make the project available, as well as how useful it is in the end state. A library can live on GitHub forever, especially if you archive it so it is clear that it won’t be getting updates from you. It’ll still be available to fork and may be useful to others. A directory of local restaurants offering patios, on the other hand, will decay in usefulness pretty quickly as businesses start and close.

Ending a project is just as natural as starting it. Make sure you do it right.

Protecting a CDN source using basic auth

I have a website that is behind a content delivery network (CDN). I want to protect it from being crawled by any robots. I want all access to go through the CDN for reasons. There may be errant links to the source; I don’t care if they continue to work.

An .htaccess file and basic auth are perfect for this situation.

I added an .htaccess file that looks like this:

AuthType Basic
AuthName "Secure Content"
AuthUserFile /path/to/.htpasswd
require valid-user

I needed to make sure the path and the file are readable by the web server user.

Then, I added a .htpasswd entry that looks like this:

user:passwdvalue

If you don’t have access to htpasswd, the typical program used to generate the password value, this site will generate one for you.
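If you do have it, a single command generates an entry (the username and password here are hypothetical):

htpasswd -nbB someuser somepassword

The -n flag prints the entry to stdout instead of writing a file, -b takes the password from the command line, and -B hashes it with bcrypt.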

Then, I had to configure my CDN to give it the appropriate header.

Use the Authorization header, and make sure to pass the username and the password. This site will generate the appropriately base64-encoded value.
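If you’d rather generate the value yourself, the header is just the base64 encoding of username:password, using the plaintext password rather than the hashed value stored in .htpasswd. With a hypothetical user and password:

printf '%s' 'user:password' | base64
# dXNlcjpwYXNzd29yZA==

The CDN then sends this with each request:

Authorization: Basic dXNlcjpwYXNzd29yZA==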

Voila. Only the CDN has access.

Now, the flaws:

  • Depending on how the CDN accesses the site, it may be possible to snoop out the username and password
  • If you ever want to access the origin site directly over HTTP, you’ll need the username/password

Setting up password gorilla on an ARM Mac

I love Password Gorilla. It’s a portable, locally hosted password manager that is compatible with Password Safe. It has all the features I need in a password manager:

  • account groups
  • password generation
  • metadata storage (so you can add a note)
  • keyboard shortcuts for copy and pasting the url, username, and password
  • local storage of secrets

It doesn’t have fancy browser integration, remote backup, or TOTP, but I don’t need those. For browser integration, I use the clipboard. For remote backup, pwsafe files can be copied anywhere. And for TOTP, the whole point of MFA is that each factor is separate.

But recently, I started setting up an Apple M1 machine, and the Password Gorilla downloads failed to start. I looked at the GitHub issues for the repository and didn’t find a ton of help, though there were some related issues.

After some poking around, here’s what I did to get Password Gorilla running on my Apple M1:

  • installed tcl-tk using brew: brew install tcl-tk
  • cloned the password gorilla repository: git clone https://github.com/zdia/gorilla.git
  • wrote a small shell script and added the directory where it lived to my path.
#!/bin/sh

# launch Password Gorilla using the Homebrew-installed Tcl/Tk
cd <cloned repository directory>/sources
/opt/homebrew/opt/tcl-tk/bin/tclsh gorilla.tcl
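If you save the script as, say, gorilla (the name is up to you) in a directory on your path, one command makes it runnable from anywhere:

chmod +x gorilla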

That’s it. Now I can run this script any time and Password Gorilla starts right up.

LLM training data, or, a broken virtuous cycle

Have you used ChatGPT?

I have and it’s amazing.

From the whimsical (I asked it to write a sonnet about SAML and OIDC) to the helpful (I asked for an example of a script with async calls using Typescript), it’s impressive.

However, one issue I haven’t seen mentioned is training data. Right now, there are sets of training data used to teach these large language models (LLMs) what is correct about the world and what is not. This podcast is a great intro to the model families and training process.

But where does that training data come from? I mentioned that question here, but the answer is that humans provide it. Human effort and knowledge are gathered on Reddit, Wikipedia, and other places.

Why did humans spend so much time and effort publishing that knowledge? Lots of reasons, but some include:

  • Making money (establishing yourself as an expert can lead to jobs and consulting)
  • Helping other humans (feels good)
  • Internet points (also feels good)

In each case, the human contributing is acknowledged in some way. Maybe not by the end user, who doesn’t, for example, read through the Wikipedia edit history. But someone knows. Wikipedia editors know and celebrate each other. Here’s a list of folks who have edited that site for a decade or more.

What about search engines? Google reifies knowledge in a manner similar to ChatGPT. But, answer cards notwithstanding, Google offers a reputational reward to publishers. It may be money (AdWords) or site authority. Tools like Ahrefs help you understand that authority, and I can tell you as a devrel that high search engine ranking is valuable.

ChatGPT offers none of that, at least not out of the box. You can ask for links to sources, but the end user must choose to do so. I doubt most do, and, in my minimal experience, the links are often broken or made up.

This fact breaks the fundamental virtuous cycle of internet knowledge sharing.

Before, with search engines:

  • Publisher/author writes good stuff
  • Search engine discovers it
  • User reads/enjoys/learns from it on the publisher’s site
  • Publisher/author gains value, so publishes more
  • Search engine “sees” people are enjoying the publisher’s content, so promotes it
  • More users read it
  • Back to step one

After, with LLMs:

  • Publisher writes good stuff
  • LLM trains on it
  • User reads/enjoys/learns from it via ChatGPT
  • … crickets …

The feedback loop is broken.

Now, some say that the feedback loop is already broken because Google over-optimized AdWords. Content farms, SEO-focused garbage sites, and ranking tricks are hard to stomach, but they do make money from Google’s traffic. This is especially acute with products and product reviews, because the path to monetization is so clear; end users are looking to buy, and being on page 1 will result in money. I agree with this critique; I’m not sure the current knowledge sharing experience is optimal, but humans have been working around Google’s limitations.

More human labor helps with this. I’ve seen this happen in two ways, especially around products.

  • Social media, where searchers are relying on curation from experts. Here end users aren’t searching so much as browsing from a subset of answers.
  • Reddit, where searchers are relying on the moderators and groups of redditors to keep spam out of the system. Who among us hasn’t searched for “<product name> review reddit” to avoid trash SEO sites? This also works with other sites like Stack Overflow (for programming expertise).

In contrast, the knowledge product disintermediation of ChatGPT is complete. I’ll never know who helped me with Typescript. Perhaps I can’t know, because it was one million little pieces of data all broken up and coalesced by the magic of matrix algebra.

This will work fine for now, because a large corpus of training data is out there and available. But will it work forever? I dunno. The cycle has been broken, and we will eventually feel the effects.

In the short term, I predict that within the next three months, there will be a creative commons type license which prohibits the usage of published content by LLMs.

Using GPT to automate translation of locale messages files

At my current employer, FusionAuth, we have extracted all the user-facing messages into properties files. These files are maintained by the community and cover over fifteen languages.

We maintain the English language version. Whenever new user-facing messages are added, the English properties file is updated. Sometimes the community-contributed messages files fall out of date.

In addition, there are a number of common languages that we simply haven’t had a community member offer a translation for.

These include:

  • Korean (80M speakers)
  • Hindi (691M)
  • Punjabi (113M)
  • Greek (13.5M)
  • Many others

(All numbers from Wikipedia.)

While I have some doubts and concerns about AI, I have been using ChatGPT for personal projects and thought it would be interesting to use OpenAI APIs to automate translation of these properties files.

I threw together some Ruby code using ruby-openai, the community OpenAI library for Ruby that had been updated most recently.

I also used ChatGPT for a couple of programming queries (“how do I load a properties file into a ruby hash”) because, in for a penny, in for a pound.

The program

Here’s the result:


require "openai"
key = "...KEY..."

client = OpenAI::Client.new(access_token: key)

def properties_to_hash(file_path)
  properties = {}
  File.open(file_path, "r") do |f|
    f.each_line do |line|
      line = line.strip
      next if line.empty? || line.start_with?("#")
      key, value = line.split("=", 2)
      properties[key] = value
    end
  end
  properties
end

def hash_to_properties(hash, file_path)
  File.open(file_path, "w") do |file|
    hash.each do |key, value|
      file.write("#{key}=#{value}\n")
    end
  end
end

def build_translation(properties_in, properties_out, errkeys, language, client)
  properties_in.each do |key, value|
    sleep 1
# puts "# translating #{key}"
    message = value
    content = "Translate the message '#{message}' into #{language}"
    response = client.chat(
      parameters: {
        model: "gpt-3.5-turbo", # Required.
        messages: [{ role: "user", content: content}], # Required.
        temperature: 0.7,
      }
    )
    if not response["error"].nil?
      errkeys << key #puts response 
    end 

    if response["error"].nil? 
      translated_val = response.dig("choices", 0, "message", "content") 
      properties_out[key] = translated_val 
      puts "#{key}=#{translated_val}" 
    end 
  end 
end

# start the actual translation 
file_path = "messages.properties" 
properties = properties_to_hash(file_path) 
#puts properties.inspect 
properties_hi = {} 
language = "Hindi" 
errkeys = [] 

build_translation(properties, properties_hi, errkeys, language, client) 
puts "# errkeys has length: " + errkeys.length.to_s 

while errkeys.length > 0
# retry again with keys that errored before
  newprops = {}
  errkeys.each do |key|
    newprops[key] = properties[key]
  end

  # reset errkeys
  errkeys = []

  build_translation(newprops, properties_hi, errkeys, language, client)
  # puts "# errkeys has length: " + errkeys.length.to_s
end

# save file
hash_to_properties(properties_hi, "messages_hi.properties")

More about the program

This script translates 482 English messages into a different language. It takes about 28 minutes to run; 8 minutes of that is the sleep statement, of which more below. To run it, I signed up for an OpenAI key and a paid plan. The total cost was about $0.02.

I tested it with two languages, French and Hindi. I used French because we have a community provided French translation. Therefore, I was able to spot check messages against that. There was a lot of overlap and similarity. I also used Google Translate to check where they differed, and GPT seemed to be more in keeping with the English than the community translation.

I can definitely see places to improve this script. For one, I could wrap it in a loop over different languages, letting me support five or ten more languages with one execution; see the sketch below. I also had the messages file present in my current directory, but using Ruby to retrieve it from GitHub, or running this code in the cloned project, would be easy.
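Here’s a minimal sketch of that loop, reusing the functions from the program above; the particular languages and locale codes are just examples:

languages = { "Hindi" => "hi", "Korean" => "ko", "Greek" => "el" }

languages.each do |language, locale|
  translated = {}
  errkeys = []
  build_translation(properties, translated, errkeys, language, client)
  # ... retry loop for errkeys, as in the main program ...
  hash_to_properties(translated, "messages_#{locale}.properties")
end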

The output occasionally needed to be reviewed and edited. Here’s an example:

[blank]=आवश्यक (āvaśyak)
[blocked]=अनुमति नहीं है (Anumati nahi hai)
[confirm]=पुष्टि करें (Pushṭi karen)

Now, I’m no expert on Hindi, but I believe I should remove the romanized text in parentheses above. One option would be to exclude certain keys or to refine the prompt I provided; see the sketch below. Another would be to find someone who knows Hindi to review the output.
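A refined prompt might ask explicitly for the translation alone. This is untested, so whether it reliably suppresses the transliteration is an open question:

content = "Translate the message '#{value}' into #{language}. " \
          "Return only the translated text, with no transliteration or romanization."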

About that sleep call: I built it in because, on my initial attempt, I saw error messages from the OpenAI API and was trying to slow down my requests so as not to trigger them. I didn’t dig too deeply into the reason for the exception below; at first glance it appears to be a networking issue.


C:/Ruby31-x64/lib/ruby/3.1.0/net/protocol.rb:219:in `rbuf_fill': Net::ReadTimeout with #<TCPSocket:(closed)> (Net::ReadTimeout)
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/protocol.rb:193:in `readuntil'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/protocol.rb:203:in `readline'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http/response.rb:42:in `read_status_line'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http/response.rb:31:in `read_new'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http.rb:1609:in `block in transport_request'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http.rb:1600:in `catch'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http.rb:1600:in `transport_request'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http.rb:1573:in `request'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http.rb:1566:in `block in request'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http.rb:985:in `start'
        from C:/Ruby31-x64/lib/ruby/3.1.0/net/http.rb:1564:in `request'
        from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/httparty-0.21.0/lib/httparty/request.rb:156:in `perform'
        from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/httparty-0.21.0/lib/httparty.rb:612:in `perform_request'
        from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/httparty-0.21.0/lib/httparty.rb:542:in `post'
        from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/httparty-0.21.0/lib/httparty.rb:649:in `post'
        from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/ruby-openai-3.7.0/lib/openai/client.rb:63:in `json_post'
        from C:/Ruby31-x64/lib/ruby/gems/3.1.0/gems/ruby-openai-3.7.0/lib/openai/client.rb:11:in `chat'
        from translate.rb:33:in `block in build_translation'
        from translate.rb:28:in `each'
        from translate.rb:28:in `build_translation'
        from translate.rb:60:in `<main>'

(Yes, I’m on Windows, don’t hate.)

Given this was a quick and dirty program, I added the sleep call, but then later added the while errkeys.length > 0 loop, which should help recover from any network issues. I’ll probably remove the sleep in the future.

I signed up for a paid account because I was receiving “quota exceeded” messages. To their credit, OpenAI has some great billing features; I was able to limit my monthly spend to $10, an amount I feel comfortable with.

As I mentioned above, translating every message into Hindi using GPT-3.5 cost about $0.02. Well worth it.

I used GPT-3.5 because GPT-4 was only in beta when I wrote this code. I didn’t spend too much time mulling that over, but it would be interesting to see if GPT-4 is materially better at this task.

Worries

Translating these messages was a great exploration of the power of the OpenAI API, but I think it was also a great illustration of this tweet.

I had to determine what the problem was, how to get the data into the model, and how to pull it out. As Reid Hoffman says in Impromptu, GPT was a great undergraduate assistant, but no professor.

Could I have dumped the entire properties file into ChatGPT and asked for a translation? I tried a couple of times, and it timed out. When I reduced the number of messages, I was unable to figure out how to get it to ignore the comments in the file.

One of my other worries is around licensing. I’m not alone. This is prototype code running on my personal laptop, and the license for all the localization properties files is Apache2. But even so, I’m not sure my company would integrate this process, given the unknown legal ramifications of using OpenAI GPT models.

In conclusion

OpenAI APIs expose large language models and make them easy to integrate into your application. They are a super powerful tool, but I’m not sure where they fit into the legal landscape. Where have we heard that before?

Definitely worth exploring more.

How to have great guest posts on a blog

I have another blog that I’ve been running for a few years, called Letters to a New Developer.

It is full of advice I wish I had had at the beginning of my software development career. I even turned it into a book.

One thing I’ve done the entire time I’ve been writing it is ask for guest posts. No advice is generic; it’s all based on where and who you are. By asking guests to share their experiences, I expose my readers to a wider variety of viewpoints, which leads to a better understanding of the wide world of software.

After all, if you want to work for a FAANG, it’s better to hear from someone who prepped, interviewed, and was hired at such a company than from me, who will never work for one. If you want to understand what it’s like to work in open source, it’s better to hear from someone who has made it their career. Have imposter syndrome? Others have struggled with it too. The same goes for techniques; it’s great to hear from someone who is a methodical journaler, for example.

I bet you get the point. There’s no way I could have written any of those articles.

Guest posting also helps spread the word about my blog, since a guest poster usually shares their writing with their friends and network.

For the right kind of blog, guest posts can be great. Here are my criteria:

  • Does the blog have decent traffic? If not, folks probably won’t want to guest post. I’d say at least 25 visits a day.
  • Have you been doing this for a while? A year’s worth of regular posting shows you’re serious.
  • Does the blog present a variety of viewpoints? I wouldn’t want a guest post for this blog, because it’s all me, all the time.
  • Is the blog non-corporate? If it is a company blog, guest posts should be paid for with money, or be part of a co-marketing project.

If you want to have great guest posters, here’s my recipe for finding them and fostering them:

  • Don’t accept inbounds, unless you know the person or can vet what else they’ve written.
  • Don’t accept any content that is off-topic.
  • Set a target guest. For me, it is someone who is or was a developer with a different viewpoint and perspective than mine. I was especially interested in highlighting the voices of people who were not white men.
  • Keep an eye out for interesting posts or interviews. Reach out to the authors and see if they might be game to guest post.
  • Be okay with a cross-post. I’d say about 60% of my guest posts are original content, but more recently authors have wanted to post the content on their own blog first. Offer a link back.
  • Be okay with a repost, especially if the author is prominent. Review existing content and find one or two articles that fit with your blog’s theme. Ask if you can re-publish. Offer a link back.
  • Some folks will say no. They may do so for a variety of reasons (they don’t want to write, they don’t syndicate their content, etc.), and you have to be okay with that.
  • If someone says “I can’t right now, sorry,” reply with something like “Totally get it. I’ll follow up in 6 months, unless you’d rather I didn’t.” Then follow up. Sometimes they’ll ignore you; sometimes they’ll have more time.
  • If they are writing original content for you, offer to edit it, and find out how much editing they want. I’ve over-edited a few times, and that’s a bad thing when someone is volunteering their time and knowledge.
  • If they ask about topics, advise them to focus. Lots of people want to write about “5 things you need to do,” but in my experience, articles that focus on one topic are better received.
  • Write up guidelines you can easily share. These can include audience, benefits, formatting, delivery mechanism, word count, topic ideas, and more.

Hope this helps. Happy guest blogging!