
Use Cameo for dev-focused marketing

Recently, we used a Cameo for a developer-focused announcement. If you are not familiar with this service, it lets you request a short video from an actor. You send the actor your idea, pay them, they send you the video, and you can use it for a limited number of purposes. If you, or someone you know, has a favorite actor, it can make for a really fun birthday message. But it is also fun for marketing messages and can help you stand out from the crowd.

My experience below is based on one business Cameo. We plan to do more, so there may be updates.

Why consider a Cameo

It is still relatively novel. I’ve seen a few celebrity endorsements of technical products via Cameo, but not that many. This means it stands out in a fun way. Using Cameo also gets you easy access to a famous or semi-famous person. All you have to do is submit a form and pay some dollars. Compare this to any kind of commercial, which may involve a casting director, an ad agency and other parties.

It is also relatively cheap. I looked at a few actors and none cost more than $2000 for commercial usage (more about that below). While that isn’t cheap, I also saw actors for a couple hundred bucks. We ended up choosing an actor who worked for $500.

Note that a Cameo is a pure brand marketing play. It is fun for shock or surprise value, rather than a call to action. You’re unlikely to get deep technical analysis, either. This playful nature fit with our brand, but make sure it fits with yours.

How it works

You can check out the Cameo site FAQs, but here’s how the process worked for me.

  • Browse actors and come up with a shortlist.
  • Filter out actors who won’t do commercial messages. (Some actors won’t, so check before you get excited.)
  • Decide on a topic to be covered.
  • Review licensing terms for commercial use.
  • Sign up for an account.
  • Put in a credit card.
  • Submit a request on the website. This was limited to 250 characters. (Not 250 words. 250 characters. So the guidance was general.)
  • Install the application to get messaging. (The actor enabled free messaging so he could ask questions.)
  • Go back and forth with the actor and answer clarifying questions, maybe two rounds of them. This had to be done on the Cameo app (boo!).
  • Accept the delivered video.
  • Promote and share it.

Note what wasn’t in there:

  • Any writing talent. I did talk to a number of writers and even selected one. However, after reviewing the constraints, we mutually decided it didn’t make sense. There just isn’t a lot of room for a complex storyline or even a funny line or two. That’s probably why Cameo has the limit.
  • A specific storyline. I was able to convey one message to the actor, but otherwise it was in his hands.
  • A lot of back and forth or workshopping. I think I talked about this internally for maybe 15 or 30 minutes and definitely had a good idea of what we wanted to cover. But other than some questions, it wasn’t super collaborative. And, to be honest, that was fine. I believe any actor on Cameo is funnier and knows more about speaking to the camera than I do.

I do wonder whether all the actors would have the same attention to detail. As mentioned above, the actor enabled free messaging and really dug into the topic. Everyone who watched the video was delighted.

After it is delivered

After it is delivered, it’s time to promote. At the time we bought the Cameo, you could put it on one social media platform or your website for 30 days. We chose Twitter. Then I realized that the actor had recorded 5+ minutes. You aren’t allowed to edit the videos, and the maximum length of a Twitter video is 2:20. So we posted it as an unlisted YouTube video and shared that link. Check out the current terms (search for Business CAMEO Videos).

I submitted it to a few online communities, shared it on social networks and basically did any other kind of promotion you’d do with an interesting video. It was shared to several email lists and Slacks as well. We also bought some traffic.

It didn’t go viral, but it got roughly 10x the retweets and interactions of our normal tweets. It’s unclear if any business came from it.

What I wish we had done differently

  • Understood my limits earlier. I spent a lot of time talking to writers before I realized that 250 characters meant sending over an idea and trusting the actor. It would have been less stressful to know that earlier.
  • Been a bit more familiar with the actor’s work. One of the best parts of the Cameo was made in response to an offhand request from a co-worker who was more familiar with his work than I was. I should have done a bit more research.
  • While I focused on the topic and asked the actor to do it in character, I should have included the following in my pitch:
    • How to pronounce the brand.
    • Whether or not I should be mentioned (I was, but that was unnecessary).
    • The optimal length of time (2:20).

At the end of the day, this is a fun alternative (or complement) to the normal boring press release. If you have a character who is in line with your brand or product usage, do check it out.

You should use forums rather than Slack/Discord to support your developer community

Hot take after a year or so of trying to build a developer community. If you can pick only one, use forum software rather than synchronous chat software for community building around a developer platform.

While there are tradeoffs in terms of convenience and closeness, for most developer communities a public, managed forum is better than a private, unsearchable Slack.

There are a few key differences between forum software (which includes packages like NodeBB, Forem, Discourse, and others) and chat software (like Slack, Facebook Groups, or Discord).

First, though, it’s important to know what you are trying to accomplish. If you are trying to get immediate feedback from a small set of users, then synchronous solutions are better. You can be super responsive, your users will feel loved, and you’ll get feedback quickly. However, synchronous solutions go beyond chat and include phone and video calls. The general goal of validating user feedback at an early stage is beyond the scope of this post.

But as you scale with a chat solution, major problems for the longevity and value of the community emerge.

Problem #1: the memory hole

This occurs when there’s a great answer to a common question, but it isn’t available or is hard to find. This matters more for community Slacks than other synchronous solutions, since Slack limits free plans to 10,000 messages.

It still exists for solutions such as Discord, because older messages scroll out of view and search isn’t great. As a participant, it feels easier to just re-ask the question. If the community is vibrant and willing to help newcomers, such questions get answered. If not, they languish or are ignored, frustrating the new participants. Not good.

You can work around the memory hole as a community member by extracting and reifying interesting chat posts. I have done this by generalizing and publishing the messages as blog posts, to a newsletter, or even to a Google Sheet. But this is additional work that may not be done regularly, or at all.

Side note: for some communities, discussing current events or just chatting with friends, this is actually a feature, not a bug. Who needs to remember who said what six months ago when conversing with friends?

For developer communities, friendly chat is important, but so is sharing knowledge; the memory hole actively thwarts the latter.

Forums, on the other hand, are optimized for reading. NodeBB even suggests related posts as you begin a topic, actively directing people to older posts that may solve their issue without them ever posting.

And if published on the open internet, forum posts can be found via Google.

Problem #2: Google can’t see inside chats

Google is the primary user interface for knowledge gathering among developers.

I hope this statement isn’t controversial. It is based on personal experience and observation, but there’s also some research.

I have seen many, many developers turn to Google as soon as they are confronted with a problem. YouTube, books, a specific site’s own search: these are all far less used alternatives when a developer has a question.

Google has made it so easy to find so much good information that most developers have been trained, when they face a problem, to open a new tab, type in a search term and trust the first page of results.

Chat systems don’t work well with this common workflow, because all the content is hidden.

This means when you use a chat for a developer community, you don’t get compounding benefits when someone, either a team member or a community member, answers a question well or has an insightful comment that would be worth reading. Very few folks ever benefit in the future.

With chat, people who aren’t present at message posting or soon thereafter never learn from that knowledge.

Problem #3: synchronous communication is synchronous

When you are in a chat system, the information is ephemeral. This means that valuable comments can be lost if there is a flurry of other messages.

People can feel ignored even though the reality is that they just posted at an inopportune moment. This feeling can be intimidating; I’ve definitely felt miffed as a question or comment I posted was ignored or unseen and other people’s questions were answered. “Was it me? My topic? Are other people more welcome here than I am?”

People who like to answer questions may feel the need to do so quickly. This may interfere with time for deep work. The Pavlovian response is real; I’ve felt it myself. It feels “better” to write a response to help someone than it does to write a document that will help many, because the former is so concrete and immediate.

When you pick a chat solution, you are optimizing for this kind of response.

Problem #4: less capable moderation tools

Forums have been around a long time. It’s a well understood problem space. There’s a rich set of functionality in most of them for handling the more frustrating aspects of online community management (see also “A Group Is Its Own Worst Enemy”).

Chat applications have uneven support for this aspect of community management; I have heard Discord is pretty good, but Slack is not.

Remember that when you are running a community, you will inevitably attract trolls and spammers. Make sure you have the tools to protect the community from abuse.

In addition, make sure you have the time/energy. The community may be able to police itself when it gets to a certain size, but initially and for a long while, with a chat solution you may need to be ready to jump in and moderate.

Forums still require attention, don’t get me wrong, but the tooling and the separation of topics mean they aren’t quite as vulnerable. However, because they are indexed by Google, they are a higher value target.

Problem #5: you’re missing out on long tail content

This issue is related to problem #2, but slightly different. When you are building developer tools, there is a wide surface area of support needed. Questions from developers help define that space. When those questions go to a chat, someone needs to capture them and make the answers public, for example in formal docs, to help future developers.

When using a forum, the answers are made available to the long tail of searchers without any effort at all. A company I worked for got about 5-6% of its traffic from its forum pages.

That traffic was essentially free because the time to answer the questions was required with either solution. (This assumes the question would have been asked in either a chat or a forum.)

Problem #6: questions can be flippant

When I am talking in person to someone and they ask a question, I don’t expect them to have done a ton of research or thinking about it. It’s a conversation, after all.

The same attitude occurs during real time chat.

For technical questions, this can be frustrating because you want to help immediately (see problem #3) and yet you don’t have all the information you need. In async discussions, because they are async, more context is typically provided by the questioner.

This makes it easier for people who want to help to do so.

Why do people use Slack/Discord/etc?

Wow, so many problems with chat, and all these reasons are why forum software is better.

So why do so many folks building developer communities choose solutions like Slack or Discord?

There are two motivations that I can see: one from the company perspective and one from the developer’s.

From the company side: it helps build community between members.

I don’t know about you, but I am much more likely to pitch in and help when I have had a conversation with someone than if some rando drops by with a question and leaves.

A Slack can begin to feel like a real community, where you know people. It doesn’t feel as transactional when I see a question in a Slack where I’ve seen the questioner post other messages or share a bit about themselves. This type of interaction can happen in a forum, but seems more common in a Slack. It makes the community stickier and people more likely to help. A minor benefit is that chat can be hosted elsewhere for free, so the startup cost and friction are low.

From the developer’s side: when I run into an issue or problem, I want an answer as soon as frickin’ possible. It’s blocking me, otherwise I wouldn’t have asked it.

Sure, I can context switch, but that has its own costs. So there’s tremendous value from a knowledge seeker’s side to pick a synchronous method of asking questions.

If you had a burning question to ask, which would you prefer: hopping on the phone or sending a physical letter? That’s the allure of the chat platforms.

I will say that some forum software has chat built in, but that isn’t going to get you an answer immediately.

What’s right for you?

Well, what do you want to emphasize? Long term aggregation of knowledge and a culture of completeness, or community and a culture of immediacy?

As alluded to initially, you can of course use both tools at different times in your community’s evolution. I think the longer you build, the more you’ll move to a forum or other public knowledge sharing solution.

Here’s a Twitter survey that I ran a month ago asking how developers wanted to get tech help. (Judging from the thread responses, “something else” turned out to mostly be “well written documentation”.)

 

Slack failing to open with NS_ERROR_DOM_ABORT_ERR error

I use Firefox as my primary browser (version 84 as of today). As part of this, I stay logged into a number of Slack workspaces. These are some of my favorite communities and I go there to chat with folks, learn about interesting topics and hunt for neat links to resources I didn’t know about. Occasionally I’ll even ask a question or two.

Recently, I saw some weird behavior. The Slack workspaces weren’t updating. The notice at the bottom of the browser bar was something like “Slack is trying to connect”.

Hmm, I thought, that’s weird. I tried reloading the page, but couldn’t. That is, even hitting ctrl-F5 didn’t change anything. However, I could open other websites just fine.

I tried quitting all my browser tabs and restarting. Made sure Firefox was up to date. Checked the Slack status page to see if Slack was down. Opened a new tab and typed the Slack workspace URL into the address bar.

Nothing.

So, I popped open the dev tools and looked at the network tab. I saw this error when loading app.slack.com:

NS_ERROR_DOM_ABORT_ERR

That turned up this bug. From a scan, several other folks were having this issue, with Slack and other sites. This comment shared the solution:

Clearing storage for app.slack.com fixed the issue, and the Slack workspace loads correctly now.

All I had to do was clear the storage for app.slack.com and Slack started working again, magically. Even though I was warned that it might force me to log in again, I didn’t have to do so.

Terraform with multiple workspaces and environments

I was recently setting up a couple of AWS environments for a client. This client had a typical web application which talked to an RDS database. There was DNS, a CDN and other components involved. We wanted to use Terraform to maintain traceability and replicability, and to have the same configuration for production and staging, with perhaps small differences like EC2 instance size. We also wanted to separate the components into their own Terraform workspaces to limit the blast radius (so if one component had changes that caused issues or Terraform state corruption, it wouldn’t affect the others). Finally, we wanted each environment to have its own Terraform backend, again to separate the environments.

I wasn’t able to complete this project due to external factors (I left the position before testing could be completed), but wanted to share the concepts. Obviously I can’t share the working code, but I set up an example project which is simpler. That’s the project I’ll be examining in this post. I also want to be clear that while I’ve tested this as much as I could and have validated the ideas with others who have more Terraform experience, this hasn’t been run in production. You have been warned. (Here’s the Terraform docs about setting up modules, workspaces and repositories.)

Using a tool like Terraform is great for a number of reasons, but my favorite is that it lets you track changes to cloud infrastructure. More than once I’ve wandered into an AWS account and wondered why certain resources were set up in the way they were, and what might break if I changed them. There are occasionally comments, but it is far better to examine a commit. Even better to review the set of commits and see the customer request or bug tied to it. (Bonus link: learn more about Terraform and other cloudy tools in this podcast episode with the creator of Terraform.)

So this simpler example project has a lambda function that writes to an SQS queue. For now, it just writes the date of invocation, but obviously you could have it reach out to an external API, read from a database, or do some kind of calculation. The SQS queue can then be read by an EC2 instance, which processes the message and perhaps updates a database. There are three components to the system:

  • The lambda function
  • The SQS queue
  • The EC2 instance (implementation of which is left as an exercise for the reader)

The SQS queue is shared infrastructure and needs to be accessed by both of the other systems. However, the SQS system doesn’t need to know about either the lambda or the EC2 instance. Using Terraform, we can create each of these components as their own workspace. Each of the subsidiary systems can evolve or change (for instance, the EC2 instance could be replaced with an autoscaling group) with minimal impact on other systems. They could be managed by different teams as well if that made sense.

To enforce this separation, set up each component as a separate Terraform workspace. (All code is on GitHub here.) I use remote state so that more than one person can manage the Terraform state, and use the S3/DynamoDB backend because we are targeting AWS and want a free, scalable solution. This post assumes you know how to set up Terraform using S3/DynamoDB as remote state storage.
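
As a point of reference, a minimal backend block for one workspace might look something like the sketch below. The key, lock table, and region mirror values used later in this post; the bucket name and profile are placeholders you’d swap for your own.

terraform {
  backend "s3" {
    bucket         = "my-terraform-remote-state-bucket" # placeholder; use your own bucket
    key            = "sqs/terraform.tfstate"            # unique key per workspace
    region         = "us-east-2"
    dynamodb_table = "terraform-remote-state-locks"     # lock table for concurrent runs
    encrypt        = true
    profile        = "trsstaging"                       # staging AWS profile
  }
}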

Here’s the outputs of the SQS system:

output "queue_url" {
  value = "${aws_sqs_queue.myqueue.id}"
}

output "queue_arn" {
  value = "${aws_sqs_queue.myqueue.arn}"
}
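
For context, the queue these outputs come from is a plain SQS resource. A minimal sketch might look like the following; the name and retention setting are illustrative rather than the example repo’s exact code.

resource "aws_sqs_queue" "myqueue" {
  name                      = "example-queue" # illustrative name
  message_retention_seconds = 86400           # keep messages for one day
}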

I explicitly define the output variables so I can pull them in from the lambda and EC2 workspaces. Here’s how to do that from the lambda workspace:

...
data "terraform_remote_state" "sqs" {
  backend = "s3"
  config = {
    bucket = "${var.terraform_bucket}"
    key = "sqs/terraform.tfstate"
    encrypt = true
    dynamodb_table = "terraform-remote-state-locks"
    profile = "${var.aws_profile}"
    region = "us-east-2"
  }
}
...
resource "aws_lambda_function" "mylambda" {
...
  environment {
    variables = {
      sqs_url = "${data.terraform_remote_state.sqs.outputs.queue_url}"
    }
  }
}

The terraform_remote_state block defines the location of the previously defined sqs workspace, and ${data.terraform_remote_state.sqs.outputs.queue_url} references that workspace’s queue URL output. That value is then injected as an environment variable into the lambda, which reads it and uses the URL to create an SQS client. It can then post whatever message it wants.

You can see how this would work with any number of configuration parameters. If you have a typical three-tier, database-driven application with a separate caching layer, you can create each of these major components in its own workspace and inject the values into either the environment (for lambda) or the userdata (for EC2). I’m not sure I’d use this with a microservices architecture, because a services registry might be more appropriate.
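
To illustrate the userdata path (the EC2 component is left as an exercise above, so this is a hypothetical sketch rather than code from the example repo), you could reuse the same terraform_remote_state data source and write the queue URL out at boot. The var.worker_ami and var.instance_type variables are assumptions for the sketch.

resource "aws_instance" "worker" {
  ami           = "${var.worker_ami}"    # hypothetical variable holding the AMI id
  instance_type = "${var.instance_type}" # e.g. smaller in staging, larger in production

  # Write the queue URL somewhere the processing script can read it at startup.
  user_data = <<-EOF
    #!/bin/bash
    echo "SQS_QUEUE_URL=${data.terraform_remote_state.sqs.outputs.queue_url}" >> /etc/environment
  EOF
}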

Note that the lambda component has a rudimentary lambda function (you have to define something). It also uses Terraform to deploy the lambda code. That’s fine for the toy example, but for production you will want to use a real CI/CD system to deploy your lambdas.

Now, suppose you want to run production and staging environments, because you are ready to launch. Here are the constraints you’d want:

  • Production and staging run the same config (except when staging is changing, of course)
  • Production and staging may differ in a few details (the size of the EC2 instance, for example)
  • Production and staging execute in different AWS accounts to limit access and issues. You don’t want an error in staging to affect production. This is handled by creating different profiles which have access to different accounts.
  • Production and staging execute in different Terraform backends for the same reason as the separate AWS accounts.

Staging and production can use the same git repository, but when pulled down they are kept in two places on the filesystem. This is because you need to specify the profile and the bucket when using terraform init. So you end up running something like these two commands:

git clone git@github.com:mooreds/terraform-remote-state-example.git # staging
git clone git@github.com:mooreds/terraform-remote-state-example.git production-terraform-remote-state-example # production

I set up the project so that staging can be managed by normal Terraform commands (since that will happen more often), and production uses either special incantations or a script. For the initialization of the production Terraform environment, this looks like: terraform init -backend-config="profile=trsproduction" -backend-config="bucket=bucketname". For staging, it’s just terraform init. I didn’t have a lot of luck switching between these two Terraform backends in the same filesystem location, so having two trees was a straightforward workaround.

Any changes between production and staging are each pulled out to a variable, with the staging value as the default. Then each workspace has a script which applies the Terraform configuration to the production environment. The script sets variables to be the correct value for production. Here’s an example for the lambda workspace:

terraform apply -var aws_profile=trsproduction -var terraform_bucket="mooreds-terraform-remote-state-example-production" -var env_indicator="production" -var lambda_memory_size=256

We pass in the production terraform_bucket in case any references need to be made to the remote state (to pull in the SQS queue URL, for example). We also pass in an increased lambda memory size because, hey, it’s production. Other things might vary between environments as well: VPC or subnet IDs, API endpoints, and S3 bucket names, for example.
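
The corresponding variable declarations for the lambda workspace might look like this sketch, with staging values as the defaults; the specific default values here are illustrative.

variable "aws_profile" {
  default = "trsstaging"
}

variable "terraform_bucket" {
  default = "mooreds-terraform-remote-state-example" # illustrative staging bucket name
}

variable "env_indicator" {
  default = "staging"
}

variable "lambda_memory_size" {
  default = 128 # the production apply above bumps this to 256
}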

For simplicity, we just use two profiles for staging and production (in ~/.aws/credentials), but any way of getting credentials that works with Terraform will work:

[trsstaging]
aws_access_key_id = ...
aws_secret_access_key = ...

[trsproduction]
aws_access_key_id = ...
aws_secret_access_key = ...

This lets us separate out who has production access. Some users can have both staging and production profiles (perhaps operations), and others can have only staging profiles (perhaps developers). You can pass region values in via variables as well.

Using this system, the workflow for a change would be:

  • Check out the terraform git repository
  • Create a feature branch (including an issue identifier) and make your changes
  • Pull request and approval
  • Run terraform apply to apply to staging
  • Run any additional tests
  • Merge to master
  • Run prodapply.sh

Again, I want to be clear that I’ve only implemented this partially; I didn’t get a chance to run it fully in production. I tested all these concepts with the simple system mentioned above (and you can stand up your own using the code on GitHub). There will be issues that I haven’t experienced. But I hope this post helps illuminate the complexity of managing multiple workspaces and environments within a single Terraform GitHub repository.

Ever felt like your codebase was out of control?

I certainly have. A couple of times in my career the combination of technical debt, business model shift and lack of time for a proper fix have left me feeling out of control.

But reading this post on Hacker News made me realize that it all could have been so so much worse. A couple of “best ofs”:

To give you some examples, I originally came on as a contractor because they had some refactoring they wanted done. The entire system was home built (including the programming language) and there was a file size limit of 32,767 lines. They had many functions that were approaching this limit and they didn’t know what to do, so they hired me.

and:

Once upon a time, there was a search product and one of the data sources that it could search was a Solr/Lucene database. This should be no problem, since search is what Solr does. It should be as simple as passing the user’s query through to Solr and then reading the response. The problem was, it was important to know exactly which parts of any matched records were relevant to the search.

 

The Guy Before Me™ decided that the best way to implement this would be to split the user’s search into individual words, perform a separate search query through Solr’s HTTP API for each individual word, and then do a bunch of very clever and complex post-processing on the result sets to combine them into a single set of results.

and (last one, I promise):

At my first gig I teamed up with a guy responsible for a gigantic monolith written in Lua. Originally, the project started as a little script running in Nginx. Over the course of several years, it organically grew to epic proportions, by consuming and replacing every piece of software that it interfaced with – including Nginx.

 

There were two ingredients in the recipe for disaster. The first is that Lua comes “batteries excluded”: the standard library is minimalist and the community and set of available packages out there is small. That’s typically not an issue, as long as one uses Lua in the intended way: small scripts that extend existing programs with custom user logic (e.g. Nginx, Vim, World of Warcraft). The second is that Lua is a dynamic language: it’s dynamically typed, and practically everything can be overridden, monkey patched and hacked, down to the fundamental iterators that allow you to traverse data structures.

Shivers. There, but for the grace of God.

Easily extracting conversations from a Slack group

Slack is an amazing productivity tool when used correctly. One of the primary uses I’ve seen is for open source projects to provide support (Craft CMS, OG-AWS) or for communities to be built (Techfriends, Denver Devs). If you don’t have the luxury of the owner of your Slack being Slack’s VP of engineering, the costs of $x/month/user can cause these types of Slacks to remain on the free plan.

Which means that you are limited to the last 10k messages.

And that’s fine for the vast majority of messages. Sometimes, however, a discussion is so good that it deserves to be indexed and shared, which means it needs to be pulled out of the Slack walled garden and onto the web (I also wrote about how to do this with the Facebook Groups walled garden last year). Sometimes you might just want to save it beyond the 10k message limit for your own selfish reasons.

You can of course do this extraction manually (I did so here and here). But that’s a lot of work.

Another option is to use Zapier. The Slack integration is trivial to set up and has a number of options. From there you can push to a Google spreadsheet (if you want to do further reification) or directly to WordPress (or any of the other integrations).

The nice part is that the Zapier Slack integration gives you a variety of options that can trigger publishing a message to a spreadsheet:

  • a post of a public message in a specific channel
  • a post of a public message in any channel
  • starring of a message by you
  • attachment of a certain reaction emoji (I picked a floppy disk) to a message, no matter who adds the emoji

I’ve just started doing this but am excited to have a low friction way to pull high value conversations out of Slack. Slack is great for synchronous communication and easy discussion. When real knowledge drops, it should be shared with the future and anyone who can type into a search box. Do make sure to let folks know, because there may be some expectation of privacy that you’ll want to respect.

Obstacles to building high availability software systems

Open sign
Is your system available?

I saw a discussion on a Slack about obstacles to high availability systems and wanted to record the edited version for posterity (mostly for future me, as I blog for myself). Note that any mention of high availability systems would be remiss if it didn’t include the Google SRE book, which is slow reading but free and full of great information.

First, what is high availability? I like this definition from Digital Ocean:

In computing, the term availability is used to describe the period of time when a service is available, as well as the time required by a system to respond to a request made by a user. High availability is a quality of a system or component that assures a high level of operational performance for a given period of time.

Design considerations of a system that will hinder high availability fall into two categories.

The first category is actions that you don’t take, but could take:

  • single points of failure: if you have a piece of your system which is unique and it fails (and everything fails, all the time), the entire system’s availability will be affected.
  • missing or incomplete automation: if you need human beings to resurrect failed parts of your system, it will take meaningful amounts of time and will be error prone.
  • failing to build in elasticity and scalability of resources: when usage increases, new resources should be automatically brought online. Failure to do so will impact system performance, and that could impact system availability.
  • missing or incomplete system instrumentation: if you don’t monitor your system, you won’t be able to even know its availability (until you hear from your users).
  • application statefulness (on the compute nodes): this impacts your ability to use elastic resources and to grow parts of your system that are under load. (If you aren’t designing a greenfield system, this may be an externally imposed requirement due to existing software.)

The second is in actions you can’t take because of external requirements on the system:

  • data sovereignty: if you are legally limited to certain data centers, you have fewer options for your system, which can hinder building for high availability.
  • tenancy: if you need to have single tenancy for security or legal reasons, you may have fewer options for elastic solutions.
  • data models and authority requirements: if your application requires that certain operations come from the source of record (permissions checks, for example), then a poorly performing source data model can impact performance, which in turn can impact availability.
  • latency: if you have a highly latency sensitive system, then you may need to trade availability for decreased latency. Availability often means geographic dispersion (to avoid disasters impacting multiple pieces of a system), which works against tight latency requirements.
  • cost: high availability systems, because they have no single points of failure, cost more.

Again, this was a discussion from a Slack of AWS instructors, but the commentary is mine, as are any mistakes. Thanks to Chad, Richard, Jon, Ryan and everyone else!

Hipster Hosting at BSW, Tomorrow Only

Lady with computer mouse
She doesn’t look like she needs hosting, does she?

I’m doing a short presentation with a few other people at Boulder Startup Week on hosting. Tomorrow, Thursday, at 10am MT.

Would love to see you there. Feel free to heckle.

If you can’t make it, here is the salient point of my presentation: startups are hard, so you should host your code and infrastructure at the highest level of abstraction that you can, so that your developers can focus on delivering business value through new features rather than doing ops. In practice, prefer hosting options in this order:

  • serverless
  • platform-specific hosting (WP Engine, etc.)
  • general purpose PaaS (Heroku, Elastic Beanstalk)
  • cloud VMs
  • colo
  • server in the closet

Of course, all advice is context dependent; my advice is aimed at small startups, and the more flexibility your developers need around aspects of the technology, the lower on the list you’ll have to go.

Anyway, looking forward to a good discussion.

Imposter syndrome

This article resonated with me. I became familiar with imposter syndrome when my SO spoke on it several times (she’s available to speak to your group if you’d like).

When you are deep in a discipline, it can be very easy to “know what you don’t know” and downplay your expertise. I am often asked to support desktop computers because I work in software (a la this post). But I know how little I know about that problem.

I think the issue is also exacerbated by the continuous flow of information that the internet offers all of us. It makes it very easy to compare ourselves with what other folks choose to share (typically, though not always, their best sides and successes). That comparison, I will be honest, makes me feel inadequate. Why didn’t I learn more about k8s? Why haven’t I built a successful SaaS business? Why haven’t I worked at scale like that? Why haven’t I built a React Native app? And so on and so on.

And when someone asks me “can you do that?” I always have that moment of fear and have to force myself to say yes.

My answer is to breathe, take chances, remember that failure is an option, and recall that while we see other people’s successes, we rarely see their failures. It isn’t fair to me to compare my “inside” with someone else’s “outside”.