

The Challenger Sale

I just finished reading The Challenger Sale, a book about consultative selling. I really appreciated its data-driven approach. The book, written in 2011, outlines a new approach to selling that is fundamentally about bringing the seller’s business knowledge to bear to provide value to the customer. But not just any value: value that is both striking (something new the customer hasn’t thought of before) and that emphasizes the product the seller has to offer. An example they give is Grainger, which sells parts. Grainger did research and determined that a large portion of the dollars spent with them went to unplanned part purchases, which can be expensive in both purchase price and staff time. They worked with customers to take advantage of Grainger’s sprawling inventory to better plan parts purchases.

They cover the different kinds of sales techniques that their research uncovered, as well as tactics to help people adopt “challenger” traits to become more successful. They also cover how to sell this methodology to front line sales managers.

Two things really stood out for me. The first is that every company needs to answer why their customers should purchase from them, as opposed to anyone else. This can be a hard conversation to have, because once you strip away all the “innovation” and “customer centricity”, sometimes you aren’t left with much. I know that when I was a contractor, I would have had a hard time with this; my best answer would probably have been “I’m trusted, available, knowledgeable and local”, which kind of sounds like a cop-out.

The other great part of the book was at the very end, when they talked about how these techniques could be used for the “selling” of internal services (IT, HR, market research, R&D). I found that really interesting in the context of larger corporations, where some of these functions aren’t valued for strategic insight, but rather are order takers from the business. I have, in fact, been an order taker myself. It’s easy, but not as fun as being part of the strategic conversation.

Book Review: The Economists’ Hour

I recently finished The Economists’ Hour, a book about the rise of economics professionals in public policy. It focuses primarily on the USA during the 1960s-2010s, but it does cover some other countries (Taiwan and Chile primarily). It covers a variety of topics including monetary policy, deregulation, shock doctrine, and inflation. The book also focuses on personalities, from the more prominent like Milton Friedman to the more obscure (at least to me) like Alfred Kahn, and uses them to humanize the economic topics by framing the economics through the human beings who argued for and against them.

This book was a bit heavy going at times, but given the breadth of topics and time it covered, I found it pretty compelling. I lived through part of these times, but there were many things I learned, including the impetus behind US airline deregulation (the power of the airline industry relative to trucking and the success of intrastate carriers in CA and TX) and how Taiwan became an electronics powerhouse (a meeting over coffee and strong industrial policy). If you’re interested in the intersection of economics and government policy, this book is highly recommended. (Here’s a great podcast with the author as a bonus.)

Not delivering end user value

When you’ve worked in software for as long as I have, you have made mistakes. I’m going to catalog some here intermittently so I can analyze them and hopefully avoid them in the future. I may change identifying details or use vague descriptions.

When you interact with a client, especially if they are knowledgeable about their domain, it can be hard to come in and seize the reins. You want to be respectful of what they know and what you don’t. But at the same time, they hired you for what you know, and sometimes you have to stop hurrying forward and take a look around, especially if the project is not running smoothly. This is a story about a time when I didn’t seize the reins, and the problems that followed.

I was working on a long running project for a client. It was a bear of a project, with a ton of domain knowledge and some complicated, partially done software. The team working for the client had churned, including both employees and consultants. When I was brought on, I didn’t have a full picture of the problem space, and it felt like we didn’t have time to gather it. No one had this global view. Meetings with the client would often go down rabbit holes and into the weeds. The project was a rewrite, and the system we were replacing kept evolving. This original software system powered the business, yet didn’t operate with normal software development practices like version control. When looking at the code, it was unclear what code was being used and what was not.

But most importantly, other than diagrams and meetings, we didn’t deliver anything of value to the client on a regular basis. We would spin in circles trying to understand previous work and take a long time to make small changes. We did write a testable new system and followed other SDLC best practices, including version control, CI/CD and deployment environments.

This project finally started to turn around when we shifted from trying to replace the entire running system to replacing the smallest part that could possibly work end to end. This gave the team a clear vision, and showed the client a path to business value. We accomplished more in a couple weeks with this perspective than we had in the previous couple of months. This also let us commit to a real project plan and timeline. Unfortunately, the client wasn’t happy about the projected and past expense, and shut down the project weeks after the development team was starting to show traction.

Lesson: I wish we had taken the “smallest bit that could possibly work” approach from the very beginning. I also wish I’d had the insight to call a halt once I’d observed for a few weeks that we were on a path that clearly wasn’t working, rather than waiting a few months.

Tips for meetup speaker wranglers

So I’ve been a speaker wrangler for the Boulder Ruby Meetup for the past year. This means I screen, find and schedule speakers for the meetup. It’s been a lot of fun. You get to meet new people and often help push people past their comfort zone. For many developers, public speaking is a hardship, but a meetup is the perfect place to start. At the Boulder Ruby Meetup, we have between 10 and 40 friendly people, and talks can range from 10 minutes to 60+.

I wanted to capture some tips around doing the speaker wrangling for technical meetups.

Think about what your audience wants to hear, and how they want to hear it.

  • You need to want to attend (not every night, but most nights). This is substantially easier if you work with the technology, because then you’ll naturally be motivated to attend.
  • It also helps to hang out at the meetup for a few months. You learn who the regulars are, which people are really knowledgeable, and what kind of talks the community likes and is used to.
  • Think of alternatives to the traditional 30-40 minute talks. Panels, social nights and lightning talks are all alternate ways to have people share their knowledge.
  • Tap your personal network, but not just your network.
  • If you see something work in a different meetup, steal it!
  • Leverage external events. We move our meeting every year to happen during Boulder Startup Week, which is good for BSW (more sessions) and good for us (more attendees and visibility).
  • Don’t be afraid to stray outside of your core technology. We focus on Ruby, but have had popular talks on
    • Interviewing
    • AI and ML
    • User experience
    • General software design
    • CDNs
  • If you have facilities for it, remote presentations are great. This opens up your pool of potential speakers to a lot more people. We’ve had guests from Google and AWS and the founder/owner of Sidekiq join us, at zero additional cost.
  • Recording talks is something that I think has a lot of value, but we’ve had a hard time getting it done. If you do record a talk, make sure to get permission (some speakers are OK with it, some are not).

Actually finding the speakers is of course crucial.

  • Whenever possible, schedule the talks as far ahead of time as you can. I just use a Google spreadsheet to keep track of speakers and follow up a month or two ahead of time.
    • Sometimes people cancel (travel and personal events happen) and it’s nice to know about it ahead of time.
  • Since you know who the experts are in your group, you can often ask them to fill in if a speaker has to bail. (It’s extra nice if one of the meetup organizers has a talk in their back pocket.)
  • To find speakers, put the call out where people are:
    • Slack workspaces and channels around the technology
    • On a website (this is super low effort once you have a website up). A website is a great place to put topic ideas, audience size, expected length, etc.
    • At the meetup. At every meetup I put a plug in for speaking.
    • Twitter is full of people that might be good speakers.
    • Anyone you have coffee with.
  • I also always ask people that I meet. You know those “so what do you do” conversations you have? Always be on the lookout for someone who is doing something that might be interesting to your meetup.
  • Ask folks new to development as well as experienced developers. Newer folks may feel more comfortable with a shorter timeslot, but they also deserve the chance to speak.
    • Remember that the chance to speak professionally is a benefit. By asking people to speak you are actually doing them a favor.
    • Reach out to heroes or other big names that you want to build some kind of relationship with. They may ignore you, but so what.
  • Some meetups have a form on their website where people can submit talk proposals. I haven’t seen much success with that.
  • You can even do outreach. If you see a company in your area posting articles or job listings on Slack workspaces, Stack Overflow or Hacker News, reach out and ask if they have anyone who would be interested in speaking.

Don’t forget to run through the finish. Make sure your speakers have a great time speaking and that you set them up for success.

  • Reach out to them a few months ahead of time to make sure they are still interested and available. Get their email address and talk description (so you can post it ahead of time).
  • The week of:
    • tweet about them speaking.
    • reach out to them about recording or anything else. If you have another volunteer who handles that, this is a great time to hand off. I always hand off via email because everyone has that.
  • The day of:
    • make sure you greet them when they come to the meeting and thank them for their time.
    • have a good question or two up your sleeve if no one else does.
  • The day after, tweet thanking them for their time.

Getting good speakers is a key part of any meetup. There’s a lot else that goes into a successful meetup (a good space, sponsors for food and drink, publicity) but finding and scheduling speakers is important. Hopefully some of these tips will be helpful to you.

Heading to AWS re:Invent 2019

I’m excited to be heading to AWS re:Invent this week. I’ve never been to Las Vegas (other than stopping at a Chipotle on the outskirts on the way to SoCal), so I’m looking forward to seeing the Strip. I’ve heard it’s a bit of a madhouse, but I did go to the Kentucky Derby this year, so we’ll see how it compares.

I’m also excited to re-connect with people I’ve met at other conferences or only online. There are a number of AWS instructors I’ve interacted with only over email and Slack whom I hope to meet face to face. (If you want to meet up with me, feel free to connect via Twitter.) This is also my first conference “behind the booth”. I have been to plenty of conferences where I was the one wandering around the expo, kicking tires and talking to vendors, so I’m interested to be on the other side.

Finally, I’m excited to get feedback on the new direction Transposit is taking. We’ll be showing off a new tool we’ve built to decrease incident downtime. I wish I’d had this tool when I was on-call, so I’m really looking forward to seeing what people think.

The lifechanging magic of a separate work computer

For a span from 2002 to 2019, I almost never had a work computer. There were one or two times when a contract provided a computer. But for the most part, my work computer, where I did, you know, my work, and my home computer, where I worked on side projects, did my writing and accessed the internet for personal things, were one and the same.

At Transposit, where I recently started, I have a separate work computer and a personal computer.

This is huge.

Here’s what it means. (I work from home, so boundaries are a bit more fluid.)

  • I’m no longer tempted to work (or even look at Slack) when I pick up my computer to, say, write a blog post.
  • I can set down my work computer at the end of the day and feel “done”.
  • When I pick up my personal computer to work on a personal project, I’m more focused.

Working is such an endorphin rush sometimes. Having a separate work computer and not installing any work software (not email, not Slack, not anything) on my personal computer helps me maintain work-life balance. This means that when I’m working, I am working, and when I’m not, I am not.


16 years

Wow, 16 years of blogging. For the record, this is the 992nd post.

Does this mean my blog can drive?

16 years ago I wrote my first post, about RSS. I recently added RSS support to a static site generator that has a blog component. What is old is new again, indeed.

Blogging has taught me so much. How to write. How to promote. How to investigate a problem. How to describe what I do to non technical people. How to handle gobs of traffic. How to handle tumbleweeds (aka no one visiting my blog).

I tell everyone I meet to blog. Not for other people, but for yourself.

Having a record of my life (not the whole thing, but at least some aspect of it). Having a chance to help other people. Even just making the time to sit down and think about something deeply. All huge benefits.

Sometimes my posting schedule is less frequent (especially when I’m writing elsewhere, as I am now on the Transposit blog); other times I speed up (I wrote 100 blog posts in 100 days, which was great).

Either way, I’ve enjoyed this blog immensely, and I appreciate everyone who swings by to read a post, leave a comment, submit a post to an aggregator, or subscribe to new posts.

Thanks for 16 great years!

Develop Denver Recap

I attended Develop Denver in August and it was a great experience. It’s a really fun conference. There are a number of things I liked about it.

  • There is a real culture of inclusivity.
  • They have speakers across the spectrum, including experienced speakers, new speakers, and speakers from underrepresented groups.
  • It is entirely volunteer run.
  • There is a fun tradition, the Ballmer Peak Hackathon.
  • They have both social and technical events.
  • A large number of the speakers are voted on by attendees.
  • The topics are broader than at the typical conference, ranging from product to development to career to design.
  • The venues are spread across the RiNo district, so you walk between them. This makes it easier to start conversations and gives you a breath of fresh air.
  • It’s affordable for a two-day, multi-track tech conference (< $400).
  • The community is rooted in a slack and meetup, so there’s year round engagement if you want it.
  • It was big enough (~450 attendees) that I could meet new people, but small enough that I recognized folks from when I attended last year.

Definitely enjoyed it. It got me out of the Boulder Bubble as well, which was a plus.

It’ll be coming back in August of 2020. I’m not sure when they’ll be opening registration, but I’d check back in May 2020.

PS Here is one overview and a second overview of the conference worth reading.

Joining Transposit

I am starting a new job today. I joined Transposit as a developer advocate.

I’m excited for two main reasons.

I think that the company is in the right place to solve a real customer pain point. In my mind, this stands at the intersection of Heroku and Zapier. I love both these companies and have used them, but sometimes you need something that is more customizable than a chain of Zaps (perhaps something that maintains state or interacts with an API action that Zapier doesn’t support) and yet you don’t want to be responsible for the full SDLC of an app running on Heroku, including all the pain of deploying and building authentication. Even with Rails, you still need to snap together a number of components to build a real application on Heroku. You might reach for AWS Lambda, especially if you are only working within the AWS universe, but what if you need access to other APIs? You can pull down an SDK, but you just put yourself back in the land of more complexity.

I’ve encountered this myself and understand how much software doesn’t get built for these reasons. (Or it gets built and does half the job it could, or it gets built and turns into a maintenance problem in a year or two.)

Transposit threads this needle by creating a low code solution. You have all the power of JavaScript (with the perils as well). It handles some of the things that pretty much every application is going to need (authentication, scheduled jobs, per-user settings) and hosts your application for you. The big win, however, is the API composition abstraction. Every API they integrate with (full list) is just a database table. The syntax can be a bit weird at times, but the abstraction works (I’ve created a few apps). Authentication with an API is managed by Transposit as well (though you have to set it up), and you have the option of making the authentication per-user or application-wide.

I think that Transposit is going to make it much easier to build software that will help automate business and make people’s lives easier. That’s something I’ve been thinking about for a long time. It’s free to sign up and kick the tires, so you can go build something, like a slackbot that fits into a tweet.

The second reason I’m excited to join Transposit is because I’ll be shifting roles. After a couple of decades as a developer, CTO, engineering manager, tech lead and technology instructor (not all at the same time!) I’ll be trying out the developer advocate role. I’ll be doing a lot more writing and interaction with Transposit’s primary users, developers, to help make the platform into the best solution it can be.

PS, we’re hiring.

Terraform with multiple workspaces and environments

I was recently setting up a couple of AWS environments for a client. This client had a typical web application which talked to an RDS database. There were DNS, a CDN and other components involved. We wanted to use Terraform to maintain traceability and replicability, and to have the same configuration for production and staging, with perhaps small differences like EC2 instance size. We also wanted to separate out the components into their own Terraform workspaces to limit the blast radius (so if one component had changes that caused issues or Terraform state corruption, it wouldn’t affect the others). Finally, we wanted each environment to have its own Terraform backend, again to separate the environments.

I wasn’t able to complete this project due to external factors (I left the position before testing could be completed), but wanted to share the concepts. Obviously I can’t share the working code, but I set up an example project which is simpler. That’s the project I’ll be examining in this post. I also want to be clear that while I’ve tested this as much as I could and have validated the ideas with others who have more Terraform experience, this hasn’t been run in production. You have been warned. (Here’s the Terraform docs about setting up modules, workspaces and repositories.)

Using a tool like Terraform is great for a number of reasons, but my favorite is that it lets you track changes to cloud infrastructure. More than once I’ve wandered into an AWS account and wondered why certain resources were set up in the way they were, and what might break if I changed them. There are occasionally comments, but it is far better to examine a commit. Even better to review the set of commits and see the customer request or bug tied to it. (Bonus link: learn more about Terraform and other cloudy tools in this podcast episode with the creator of Terraform.)

So this simpler example project has a lambda that writes to an SQS queue. For now, it just writes the date of invocation, but obviously you could have it reach out to an external API, read from a database, or do some kind of calculation. The SQS queue could then be read from by an EC2 instance, which processes the message and perhaps updates a database. You have three components of the system:

  • The lambda function
  • The SQS queue
  • The EC2 instance (implementation of which is left as an exercise for the reader)

The SQS queue is shared infrastructure and needs to be accessed by both of the other systems. However, the SQS system doesn’t need to know about either the lambda or the EC2 instance. Using Terraform, we can create each of these components as their own workspace. Each of the subsidiary systems can evolve or change (for instance, the EC2 instance could be replaced with an autoscaling group) with minimal impact on other systems. They could be managed by different teams as well if that made sense.

To enforce this separation, set up each component as a separate Terraform workspace. (All code is on GitHub here.) I use remote state so that more than one person can manage the Terraform state, and I use the S3/DynamoDB backend because we are targeting AWS and want a free, scalable solution. This post assumes you know how to set up Terraform using S3/DynamoDB as remote state storage.
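For orientation, here is roughly what the SQS workspace might contain: a backend block pointing at the shared state bucket, plus the queue itself. This is a sketch rather than code from the example repository; the bucket and queue names are placeholders, while the key and lock table match the remote state configuration shown further down.

terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket" # placeholder bucket name
    key            = "sqs/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-remote-state-locks"
    encrypt        = true
  }
}

# Minimal queue definition; the resource name matches the myqueue
# referenced by the outputs below.
resource "aws_sqs_queue" "myqueue" {
  name = "my-example-queue" # placeholder queue name
}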

Here’s the outputs of the SQS system:

output "queue_url" {
  value = "${aws_sqs_queue.myqueue.id}"
}

output "queue_arn" {
  value = "${aws_sqs_queue.myqueue.arn}"
}

I explicitly define the output variables so I can pull them in from the lambda and EC2 workspaces. Here is how the lambda workspace does that.

...
data "terraform_remote_state" "sqs" {
  backend = "s3"
  config = {
    bucket = "${var.terraform_bucket}"
    key = "sqs/terraform.tfstate"
    encrypt = true
    dynamodb_table = "terraform-remote-state-locks"
    profile = "${var.aws_profile}"
    region = "us-east-2"
  }
}
...
resource "aws_lambda_function" "mylambda" {
...
  environment {
    variables = {
      sqs_url = "${data.terraform_remote_state.sqs.outputs.queue_url}"
    }
  }
}

The terraform_remote_state block defines the location of the previously defined sqs workspace, and the ${data.terraform_remote_state.sqs.outputs.queue_url} references that url. That is then injected as an environment variable into the lambda, which reads it and uses the url to create an SQS client. It can then post whatever message it wants.
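The queue_arn output can be consumed the same way. For example, here is a hedged sketch of granting the lambda permission to send messages to the queue; the execution role referenced here is an assumption, not a resource from the example repository.

resource "aws_iam_role_policy" "lambda_sqs_send" {
  name = "lambda-sqs-send"
  role = "${aws_iam_role.lambda_exec.id}" # assumed execution role, not in the example repo

  # Allow sending to exactly the queue managed by the sqs workspace.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "${data.terraform_remote_state.sqs.outputs.queue_arn}"
    }
  ]
}
EOF
}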

You can see how this would work with any number of configuration parameters. If you have a typical three-tier, database-driven application with a separate caching layer, you can create each of these major components as its own workspace and inject the values into either the environment (for lambda) or the user data (for EC2). I’m not sure I’d use this with a microservices architecture, because a service registry might be more appropriate there.
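For the EC2 case, a minimal sketch of injecting a value via user data might look like the following; the AMI and instance type are placeholders, not values from the example project.

resource "aws_instance" "worker" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"              # placeholder size

  # Write the queue URL somewhere the application on the instance can read it.
  user_data = <<EOF
#!/bin/bash
echo "SQS_QUEUE_URL=${data.terraform_remote_state.sqs.outputs.queue_url}" >> /etc/environment
EOF
}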

Note that the lambda component has a rudimentary lambda function (you have to define something). It also uses Terraform to deploy the lambda code. That’s fine for the toy example, but for production you will want to use a real CI/CD system to deploy your lambdas.

Now, suppose you want to run production and staging environments, because you are ready to launch. Here are the constraints you’d want:

  • Production and staging run the same config (except when staging is changing, of course)
  • Production and staging may differ in a few details (the size of the EC2 instance, for example)
  • Production and staging execute in different AWS accounts to limit access and issues. You don’t want an error in staging to affect production. This is handled by creating different profiles which have access to different accounts.
  • Production and staging execute in different Terraform backends for the same reason as the separate AWS accounts.

Staging and production can use the same git repository, but when pulled down they are kept in two places on the filesystem. This is because you need to specify the profile and the bucket when using terraform init. So you end up running something like these two commands:

git clone git@github.com:mooreds/terraform-remote-state-example.git # staging
git clone git@github.com:mooreds/terraform-remote-state-example.git production-terraform-remote-state-example # production

I set up the project so that staging can be managed by normal terraform commands (since that will happen more often), and production uses either special incantations or a script. For the initialization of the production Terraform environment, this looks like: terraform init -backend-config="profile=trsproduction" -backend-config="bucket=bucketname". For staging, it’s just terraform init. I didn’t have a lot of luck switching between these two Terraform backends in the same filesystem location, so having two trees was a straightforward workaround.

Any value that differs between production and staging is pulled out into a variable, with the staging value as the default. Then each workspace has a script which applies the Terraform configuration to the production environment. The script sets the variables to the correct values for production. Here’s an example for the lambda workspace:

terraform apply -var aws_profile=trsproduction -var terraform_bucket="mooreds-terraform-remote-state-example-production" -var env_indicator="production" -var lambda_memory_size=256

We pass in the production terraform_bucket in case any references need to be made to the remote state (to pull in the SQS queue URL, for example). We also pass in an increased lambda memory size because, hey, it’s production. Other things that might vary between environments include VPC or subnet IDs, API endpoints, and S3 bucket names.
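To make that concrete, here is a sketch of the variable declarations that would back the command above. The staging defaults shown are illustrative assumptions rather than values from the example repository.

variable "aws_profile" {
  description = "AWS credentials profile to use"
  default     = "trsstaging" # staging is the default environment
}

variable "terraform_bucket" {
  description = "S3 bucket holding remote state for this environment"
  default     = "mooreds-terraform-remote-state-example" # assumed staging bucket name
}

variable "env_indicator" {
  default = "staging"
}

variable "lambda_memory_size" {
  default = 128 # assumed staging value; production passes 256
}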

For simplicity, we just use two profiles for staging and production (in ~/.aws/credentials), but any way of getting credentials that works with Terraform will work:

[trsstaging]
aws_access_key_id = ...
aws_secret_access_key = ...

[trsproduction]
aws_access_key_id = ...
aws_secret_access_key = ...

This lets us separate out who has production access. Some users can have both staging and production profiles (perhaps operations), and others can have only staging profiles (perhaps developers). You can pass region values in via variables as well.
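For completeness, here is a sketch of how the profile variable ties a workspace to the right account. The region is shown hardcoded to match the backend configuration above, though, as mentioned, it could also be passed in as a variable.

provider "aws" {
  profile = "${var.aws_profile}" # trsstaging by default, trsproduction when overridden
  region  = "us-east-2"
}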

Using this system, the workflow for a change would be:

  • Check out the terraform git repository
  • Create a feature branch (including an issue identifier)
  • Pull request and approval
  • Run terraform apply to apply to staging
  • Run any additional tests
  • Merge to master
  • Run prodapply.sh

Again, I want to be clear that I’ve implemented this only partially, and I didn’t get a chance to run it fully in production. I tested all these concepts with the simple system mentioned above (and you can stand up your own using the code on GitHub). There will be issues that I haven’t experienced. But I hope that this post helps illuminate the complexity of managing multiple workspaces and environments within a single Terraform GitHub repository.