What I wish I had known when I was starting out as a developer

As I get older, I wish I could reach back and give myself advice. Some of it is life advice (take that leap, kiss that girl, take that trip) but a lot of it is career advice.

There’s so much about being a software developer that you learn along the way. This includes technical knowledge like how to tune a database query and how to use a text editor. It also includes non-technical knowledge, what some people term “soft skills”. I tend to think they are actually pretty hard, to be honest. Things like:

  • How to help your manager help you
  • Why writing code isn’t always the best solution
  • Mistakes and how to handle them
  • How to view other parts of the business

These skills took me a long time to learn (and I am still learning them to this day) but have had a material impact on my happiness as a developer.

I am so passionate about this topic that I’ve written over 150 blog posts. Where are these? They’re over at another site I’ve been updating for a few years.

And then I went and wrote a book. I learned a bunch of lessons during that process, including the fact that it’s an intense effort. I wrote it with three audiences in mind:

  • The new developer, with less than five years of experience. This book will help them level up.
  • The person considering software development. This book will give them a glimpse of what being a software developer is like.
  • The mentor. This book will serve as a discussion guide for many interesting aspects of development with a mentee.

The book arrives in August. I hope you’ll check it out.

Full details, including ordering information, over at the Letters To A New Developer website.


A quick look at xkit

I was prototyping a small app in xkit and wanted to document this useful tool. When I first saw this launch on Hacker News, I couldn’t quite understand what the purpose was. But now that I’ve spent a bit of time playing with it, I understand it a bit more.

Suppose you are writing a recipe management SaaS and realize that you want to integrate with some other services. Perhaps you want to be able to export the steps of a recipe to a Trello board, or to a Google doc, or to a PDF.

These are all services available on the internet with an API which will allow end users to give your application access to their accounts. This lets you publish to each user’s Google docs account or Trello board.

(I’m not as familiar with services offering PDF generation functionality, but a quick search turns up some options, including some that you can self host.)

There’s a fair bit of hoop jumping in terms of setting up API keys and OAuth consent screens, however.

And this is the problem that xkit solves. If they’ve already written the connection (here’s a list), it is quite simple to add the ability for a user to connect to the service. With no previous experience, I was able to connect to Trello in about an hour. The user experience of connecting the external SaaS application is really smooth and far better than something I could whip up quickly.

If they haven’t written a connector, I don’t believe you can write one yourself. For example, for that PDF service, you’d need to contact the xkit folks and ask them to add one.

This is different than, say, Zapier, because it’s operating at a different level. Zapier is excellent (and has been for years) at letting users connect their apps. But xkit lets you let your users connect apps, basically letting you build a mini Zapier (in terms of connectivity, not functionality).

You can also host your own app catalog if you want to. I didn’t get into this too much, though, so it’s unclear what the benefits of that are.

They provide a user data store out of the box, but also integrate with a number of other providers (including FusionAuth). This means you can leverage your existing auth solution and still get the easy integration with other third party APIs.

Their pricing seems reasonable, given what they take off your plate.

Nothing’s perfect, however. I found a few documentation bugs, which I let them know about (they host their docs on readme.com and I found the suggestion process delightful). When I tried to sign up, the service was down, but a quick Tweet exchange resolved the issue within 30 minutes.

It is bizarre to me that an authentication-focused company doesn’t have a “forgot password” link on their login pages. The documentation is JavaScript-heavy, with nary a mention of other languages, but that’s understandable as they’re just starting out. It’s also strangely video-heavy, which I found a bit distracting; that, however, could just be my learning style.

All in all, if you are looking to integrate third party APIs which require OAuth interactions on the part of your users, you’d be well served to take a look at xkit.


Book Review: A Memory Called Empire

I was at a bookstore the other day and was rummaging around for an escape book. I had picked up one book based on staff recommendation, but came across another that had won the Nebula. A Memory Called Empire appeared to be an award winning space opera novel.

I thought, why not, and picked it up. I was not disappointed.

There are two main cultures of different power and viewpoints. The action follows a new, unprepared ambassador from one culture to the other. The cultures are coherent and yet alien. Alien to each other and to me. Competitive poetry, internal reflection, and constant political intrigue define one culture. We get drips and drabs of the other, less powerful civilization, but you learn enough to be appreciative of their scrappiness and reverence for the collective.

The cultures are never described to the detriment of the action. The ambassador is dropped into a political mess and acts and reacts to help save her nation and herself. Whether she is trying to meet powerful people to negotiate for her people, reading correspondence, or escaping danger, there’s no downtime. The entire book happens in the span of about two weeks.

I also enjoyed the character development. Many of the characters pop in and out of the storyline, but you follow a few main characters for a while. You get to understand and appreciate the way they interact with each other. It doesn’t feel forced at all. At the same time, the “otherness” of the ambassador provides a constant tension which is understandable to anyone who has been dropped into an uncomfortable situation.

The plot revolves around a mystery. But even when it is unveiled, there’s still plenty of excitement, as a confluence of outside, political events ensnare the protagonists.

Some books are so good they keep you reading late into the night. This was one of them.


Thoughts on static site generators and WordPress

In my last few jobs, I’ve done a lot more writing. I’ve learned to work with static site generators (SSGs), such as Jekyll and 11ty. I even moved a database driven side project to Netlify. Here’s an interesting survey about SSGs from Redmonk, if you want to learn a bit more. I am also a longtime WordPress user, and think it has some tremendous strengths. I wanted to capture my thoughts on these two options for building your website.

Here’s why I’d pick WordPress for a content heavy site:

  • Avoid any kind of compilation pipeline or step
  • The authors don’t have technical chops and want WYSIWYG
  • You want more than a blog, with additional functionality pulled from the wide world of plugins. Also, themes!
  • You want to allow people to just write, without software getting in the way

Here’s why I’d pick a static site generator for the same:

  • Need performance and scalability at a low low price
  • Want ‘set it and forget it’ security (you can’t hack a static HTML page)
  • Authors are technical enough to write markdown and to leverage the data driven possibilities of an SSG
  • No need for interactive functionality, beyond what JavaScript/JAMStack can provide

Like any other choice in software engineering, these two solutions offer tradeoffs.

I think that SSGs are far better for simpler websites, and in some ways they are a more sophisticated version of very early websites. I remember writing a Perl program to take a Usenet file and turn it into a set of web pages in the late 1990s. (It was jokes, if you must know the content.) When you build and deploy a static site, you know exactly what you’re getting, but you can also extract common functionality to shared files. You will have to compile it, however, and typically use a version control system, which can be a lot for some non-technical folks.

WordPress is fantastic for just getting going. I can get a blog started in 15 minutes that can be used by anyone who knows the basics of Microsoft Word. The flip side of the speed to first page is operational complexity, slower performance (all things being equal), as well as a more complicated application to secure.

I will say that as a developer, SSGs are growing on me.

PS I know some people have combined them both, and using WordPress as the backend and publishing a set of static pages is really appealing. I’ve done some preliminary work on this, but haven’t found a great solution out there.


Use managed services. Please.

“Use managed services.”

If there was one piece of advice I wish I could shout from the mountains to all cloud engineers, this would be it.

Operations, especially operations at scale, are a hard problem. Edge cases become commonplace. Failure is rampant. Automation and standardization are crucial. People with experience running software and hardware at this scale tend to be rare and expensive. The mistakes they’ve made and situations they’ve learned from aren’t easy to pick up.

When you use a managed service from one of the major cloud vendors, you’re getting access to all the wisdom of their teams and the power of their automation and systems, for the low price of their software.

A managed service is a service like AWS Relational Database Service (RDS), Google Cloud SQL or Azure SQL Database. With all three of these services, you’re getting best of breed configuration and management for a relational database system. There’s configuration needed on your part, but hard or tedious tasks like setting up replication or backups can be done quickly and easily (take this from someone who fed and cared for a MySQL replication system for years). Depending on your cloud vendor and needs, you can get managed services for key components of modern software systems like:

  • File storage
  • Object caches
  • Message queues
  • Stream processing software
  • ETL tools
  • And more

(Note that these are all components of your application, and will still require developer time to thread together.)

You should use managed services for three reasons.

  • It’s going to be operated well. The expertise that the cloud providers can provide and the automation they can afford to implement will likely surpass what you can do, especially across multiple services.
  • It’s going to be cheaper. Especially when you consider employee costs. The most expensive AWS RDS instance is approximately $100k/year (full price). It’s not an apples to apples comparison, but in many countries you can’t get a database architect for that salary.
  • It’s going to be faster for development. Developers can focus on connecting these pieces of infrastructure rather than learning how to set them up and run them.

A managed service doesn’t work for everyone. If you need to be able to tweak every setting, a managed service won’t let you. You may have stringent performance or security requirements that a managed service can’t meet. You may also start out with a managed service and grow out of it. (Congrats!)

Another important consideration is lock-in. Some managed services are compatible with alternatives (Kubernetes services are a good example). If that is the case, you can move clouds. Others are proprietary and will require substantial reworking of your application if you need to migrate.

If you are working in the cloud and you need a building block for your application like a relational database or a message queue, start with a managed service (and self host if it doesn’t meet your needs). Leverage the operational excellence of the cloud vendors, and you’ll be able to build more, faster.


When in doubt, test it out

When I taught AWS certification courses, I’d often get questions about how a service behaved under load or other unusual circumstances. Frequently I could answer from personal experience or by asking other instructors; occasionally class members provided their insights. Sometimes I could dig up relevant vendor documentation.

However, my default answer was:

“Test it for yourself. There’s no substitute for testing.”

This is one of the great advantages of the cloud. When you have a question about the performance or behavior of a service or system, spin it up and test it. This will cost you money and some time configuring the system, but certainly will be cheaper than ordering hardware, racking it and then also configuring the system. When you’re done with your testing, you can tear down the infrastructure and never worry about it again. Sure beats shipping a server back to the manufacturer.

Of course, no testing scenario can replicate production perfectly. But you can get pretty close (especially if you can reuse production traffic).

When you do test, start by documenting what you want to achieve. What is the question you are trying to answer? Make sure to seek feedback from other team members and/or search online, as it’s possible someone has already answered your question. If you do find answers, understand under what circumstances the tests were performed, as the cloud and the offered services change over time.

Some examples of cloud infrastructure questions you might want to answer:

  • How do EBS volumes of different sizes and types perform under load?
  • When a Kubernetes cluster running on GKE is under load, what happens when you add an additional node? An additional pod?
  • What happens when you turn off a NAT gateway while a file is being uploaded to S3 from an EC2 instance in a private subnet (without an S3 VPC endpoint)?
  • What is the cold start time for an empty Azure function? What about a function loading your dlls?

Think about what steps you are going to take to try to answer the question.

With your question and methodology spelled out, spin up your testing environment. Having your infrastructure represented as code will make this quick, especially if you have a complicated environment. If you are creating the test environment manually, record settings and other configuration in a text file to be able to re-create the environment later.

Run your tests. If you are load testing, find an open source or commercial load testing tool. What you need depends on your goals: you need a different tool to test 100k+ simultaneous users on a website than you do when trying to understand how an internal API handles 100 requests/second.
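For quick, low-volume questions, even a shell loop can give a first answer before you reach for a dedicated load testing tool. A minimal sketch (the endpoint URL and request count are placeholders; a real test would use a purpose-built tool for meaningful request rates):

```shell
#!/bin/sh
# Quick-and-dirty request loop: send a fixed number of requests and report
# the total wall time. This only answers "roughly how does it behave?";
# use a real load testing tool (ab, hey, k6, ...) for anything serious.
ENDPOINT="${ENDPOINT:-http://localhost:8080/health}"  # placeholder URL
REQUESTS="${REQUESTS:-100}"

start=$(date +%s)
i=0
while [ "$i" -lt "$REQUESTS" ]; do
  # -s silences progress, -o /dev/null discards the body, -m 2 caps each
  # request at two seconds; || true keeps the loop going on failures
  curl -s -o /dev/null -m 2 "$ENDPOINT" || true
  i=$((i + 1))
done
end=$(date +%s)
echo "Sent $REQUESTS requests in $((end - start))s"
```

Run it against your test environment, note the numbers, and graduate to a proper tool once you know which question you’re actually asking.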

Review the data to see if your questions are answered. More questions or areas of interest may appear. Adjust your tests to answer them.

Once you have your answers to the desired level of certainty, tear down your testing infrastructure.

Document what you tested, how you tested and your results. Circulate this internally to help your team. If possible, publish it on your company blog to both help others in the same boat and to boost your company’s standing in the community.

All the vendor documentation in the world is no substitute for rolling up your sleeves and testing.


Joining FusionAuth

I wanted to let y’all know that I’ve joined FusionAuth as a developer advocate. I’ll be working to help our customers succeed and promote the virtues of standards based user management systems. I get to write a lot of content and example applications against a full featured API.

I’ve built enough systems to know two things:

  • Users and their behavior are almost always a key part of any software application.
  • User management is difficult to get right, especially if you want to use secure best practices and standards such as OAuth.

FusionAuth wants to elevate everyone’s user identity management system. The community edition is free and will always be. (It’s important to note that it is free as in beer, not free as in speech, but almost all of the development happens in the open.) If you want to run FusionAuth on your own forever, that’s great! You get a secure user store that supports OAuth, SAML and two factor authentication, free forever. We’ll happily provide you “best effort” support in our forums and we’ve seen the community help each other out too (most notably in the creation of helm charts to run FusionAuth on Kubernetes).

If, on the other hand, you find value in FusionAuth and want guaranteed support, custom development, or hosting, we’re happy to sell that to you. The price is often a fraction of the other solutions out there. Another differentiator for FusionAuth is that you can host it wherever you want: in your data center, in your cloud, or on our cloud servers. Not every client needs that level of control, but many do.

I really love the business model of providing a ton of value to your end users and monetizing only a small percentage of them with unique needs. (I’ve been involved in this type of business before.) The business thrives and there’s a ton of consumer surplus generated.

I’m really excited about this opportunity. It’s a nimble company with a passionate team based in Denver. If you need a user identity management system built from the ground up for developer happiness, please check us out.


Book Review: Am I Being Too Subtle?

I recently finished “Am I Being Too Subtle”, which is a business book by Sam Zell. He is an American businessman and real estate tycoon.

I quite enjoyed the book. We learn a bit about his parents and upbringing, and then he plunges right into the deals. Some of the details are great (he was able to complete a transaction once because he found out the owner of the property had buried a favorite dog in the backyard, and added a clause to the contract allowing the pet to be exhumed). He addresses his successes (helping professionalize the REIT market, selling the equity group, the loyalty of his employees) and his failures (the bankruptcy of the Tribune company in 2008).

The single biggest lesson that I took away was that to succeed you need to be looking in places that others aren’t and that eventually what worked for you will stop working and you’ll need to change. There are other pieces of wisdom in the book, and it’s less than 250 pages. A great fun read if you enjoy learning about business and deals.


Creating a CircleCI orb to authenticate a user during a build

I’m a big fan of automating all the things and I also believe in the DRY principle. I’ve been using CircleCI for years and noticed that they had added a way to abstract away some repeated configuration called orbs. I recently built one and wanted to share my experience.

If you are thinking about building an orb, first take a look at the list of existing orbs. After all, it’s better to reuse code someone else will maintain if it does what you need.

In my case, I wanted to explore building an orb. I dreamed up a use case for which no one else had written an orb. The situation is that you store user data in FusionAuth. Each time a build runs, you want to verify that the user running the build is active and valid before continuing the build. If the user can’t authenticate, fail the build.

Set up the authentication server

I set up a FusionAuth server using their instructions on EC2. It can’t be on localhost because CircleCI needs to communicate with it during the build. The server didn’t run well on a t2.micro so I ended up springing for a t2.large, where it worked fine. I also had some difficulty installing MySQL on the Amazon Linux AMI, but this SO answer helped me out. I added a FusionAuth application and user via the admin panel. I also got an API key. I limited the API key drastically; it could only post to the login endpoint. I tested access with curl like so (ALL_CAPS strings are placeholders you’d want to replace with real values):

curl -s -o /dev/null -w "%{http_code}" -XPOST -H 'Authorization: API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "loginId": "USERNAME", "password": "PASSWORD", "applicationId": "APPLICATION_ID" }' \
http://FUSION_AUTH_SERVER_HOSTNAME:9011/api/login|grep 202 > /dev/null

FusionAuth returns a 404 status code if the user isn’t authenticated successfully (they don’t exist or have an incorrect password) and 202 if the user logs in successfully (more API information here). Note that this login process may skew some of the reporting (around DAUs, for example) so you’ll want to create a separate application just for CI/CD.

The curl + grep statement above exits with a value of 0 if the user successfully authenticates and with a value of 1 if authentication fails. The former will allow the build to continue and the latter will stop it.
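The pass/fail mechanics can be sketched without a live FusionAuth server: grep exits 0 when the expected status code appears and 1 otherwise, and that exit status is what the CI step reports. The helper function below is illustrative, standing in for the real curl call:

```shell
#!/bin/sh
# grep's exit status drives the build: 0 (match) lets the build continue,
# nonzero (no match) fails the step. check_login stands in for the real
# curl call, taking the HTTP status code that curl -w "%{http_code}"
# would print (202 on success, 404 on failure).
check_login() {
  echo "$1" | grep 202 > /dev/null
}

if check_login 202; then echo "202: user authorized, build continues"; fi
if ! check_login 404; then echo "404: authentication failed, build stops"; fi
```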

The check I want to run worked. Now I just needed to build and publish the orb.

Building the orb

An orb is reusable CircleCI configuration. You have three kinds of configuration you can re-use:

  • jobs: a normal CircleCI job, but you can also pass in parameters. A great option if you have a job you want to use across different projects; something like ‘run this formatting tool’.
  • executors: an environment in which to execute code (docker container, vm, etc).
  • commands: a set of steps that can be reused. It is lower level than a job and can be used across different jobs.

I chose to create a single command. I also created a job to run while I was developing (I added it as a project in CircleCI). You can see the full source code here.

Here are the interesting bits where I define the command to be shared (ah, yaml):

commands:
  verifyauth:
    parameters:
      username:
        type: string
        default: "user"
        description: "FusionAuth username to try to validate"
      applicationid:
        type: string
        default: "appid"
        description: "FusionAuth application id"
      hostbaseurl:
        type: string
        default: "http://ec2-52-35-2-20.us-west-2.compute.amazonaws.com:9011/"
        description: "FusionAuth host base url"
      password_env_var_name:
        type: env_var_name
        default: BUILDER_PASS
        description: "The user's FusionAuth password is stored in this environment variable"
      fusionauth_api_key_env_var_name:
        type: env_var_name
        default: FUSION_AUTH_API
        description: "The FusionAuth API key is stored in this environment variable"
    steps:
      - run: |
         curl -s -o /dev/null -w "%{http_code}"   -XPOST -H "Authorization: ${<< parameters.fusionauth_api_key_env_var_name >>}"  -H 'Content-Type: application/json' -d '{ "loginId": "<< parameters.username >>", "password": "'${<< parameters.password_env_var_name >>}'", "applicationId": "<< parameters.applicationid >>" }' << parameters.hostbaseurl >>api/login |grep 202 > /dev/null
      - run: |
         echo User authorized

The command verifyauth takes parameters. These have defaults and descriptions. Anything that you wouldn’t mind seeing checked into a source code repository can be passed as a parameter. You then call the command in your job and pass parameters as needed (we’ll see that below).

However, some values are secrets which need to be stored as environment variables (in the project or the context): API keys or passwords, for example. I still wanted to make them configurable by whoever uses the orb. Enter the env_var_name parameter type. This type lets the user specify the name of the environment variable. If I set the password_env_var_name to AUTH_CHECK_PASS, then I need to make sure there is an AUTH_CHECK_PASS environment variable set somewhere in my project containing the password with which we’ll authenticate against FusionAuth. This lets the orb be both configurable and secure.
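To make the indirection concrete: CircleCI substitutes the parameter value into the command string before the shell runs, so ${<< parameters.password_env_var_name >>} becomes ${BUILDER_PASS} (or whatever name you configured), which the shell then expands normally. A bash sketch of the same two-step lookup, with illustrative variable names:

```shell
#!/bin/bash
# The orb parameter holds the NAME of an environment variable, not its value.
export AUTH_CHECK_PASS="s3cret"          # set in the CircleCI project or context
password_env_var_name="AUTH_CHECK_PASS"  # what you'd pass as the orb parameter

# After CircleCI's template interpolation, the orb's command contains
# ${AUTH_CHECK_PASS}; plain bash can emulate the same indirection with ${!name}:
password="${!password_env_var_name}"

# Print the length, not the secret itself, to keep it out of build logs:
echo "resolved a ${#password}-character password from \$$password_env_var_name"
```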

Finally, you can see that the first step of the command is posting login data to the authentication server. Again, if we see anything other than 202 we fail and the build stops. (You’ve seen that curl command before.)

Publishing the development orb

To be able to use the orb with a different project, I needed to publish the orb (I could have developed the orb inline to avoid this). The publishing instructions are here. The only issue I ran into was that I had to update my CircleCI organization settings and allow “Uncertified Orbs” before I could create a namespace. After that I was able to publish a development version of my orb:

circleci orb publish .circleci/config.yml mooreds/verifyauth@dev:testing

I was in the directory of my orb code and referenced my config. mooreds is my orb namespace, verifyauth is the orb name (which is arbitrary and not connected to the source repository name in any way) and dev:testing is the version of the orb. Note that there are two types of orb versions: production versions, which strictly follow semantic versioning, and development versions, prefaced by dev: and followed by a string that can contain “up to 1023 non whitespace characters”. Development orbs have other limitations: they are not public, are mutable and only last for 90 days. You’ll want to publish your orbs with production versions if you are using them for any purpose other than prototyping or exploration.

I published my orb via the command line, but the docs outline publishing via a CircleCI job.

Testing the orb

Now I wanted a second project to test the orb. Here’s the project source code. Here’s the interesting code:

...
orbs:
  verifyauth: mooreds/verifyauth@dev:testing
jobs:
  build:
    steps:
      - verifyauth/verifyauth: # when called from an external job, the command is namespaced by the name of the orb
          username: "circlecimooreds"
          applicationid: "98113cee-d1a8-4abf-baf5-a6ea742f80a1"
  ...

You can see that I pull in the orb at the development version, which I’d previously published. Then I call the namespaced command with some parameters. For this command to work, I also needed to set up required environment variables (in this case, BUILDER_PASS and FUSION_AUTH_API because I didn’t pass in any of the env_var_name parameters). If you don’t set those environment variables (or, alternatively, set the parameters to different values, and then set those environment variables), the build will fail no matter what, as the API call won’t succeed.

I then pushed this sample project up to CircleCI and ran a few builds to make sure the parameters were being picked up.

Publishing the production orb

Now that I had an orb that was parameterized and exposed the command I wanted to share, I needed to publish it for everyone to use. Note that your configuration code is entirely exposed if you publish an orb. You can see the source of any orb via the circleci orb source command. circleci orb source mooreds/verifyauth@0.0.2 will show you the entire source of my sample orb. They warn you a number of times about this.

To promote to production an orb that you have published to development, update the dev version: circleci orb publish .circleci/config.yml mooreds/verifyauth@dev:testing to catch any changes and then promote it: circleci orb publish promote mooreds/verifyauth@dev:testing patch.

Note that the patch argument at the end of the promote command bumps the patch number (0.0.1 -> 0.0.2) but you can also bump the minor and major numbers. Any changes you make to a production orb require you to publish and promote it again; production orb versions are immutable. For instance, I wanted to update the description of some parameters, but had to publish an entirely new version.

After publishing, you’d want to update any projects that use the orb to use the production version.


The listing of my published orb

Areas for further work

This was a slightly contrived example. I wanted to gain some experience both with FusionAuth and with CircleCI (I have friends who work at both companies). There are a number of areas where this could be improved:

  • authenticate against a different authentication server (LDAP, Okta, AWS IAM)
  • store additional information about the user in the authentication database (for instance, which projects they can build) and convert the authentication curl command into an authorization command
  • run the identity server over SSL (I just used HTTP because it was easier to get up and running, but that’s obviously a production no-no)
  • pull the user and password from the build environment. It’s pretty clear to me how you’d pull the user (there’s a CIRCLE_USERNAME environment variable) but I’m not sure how to pass the password. I can think of a couple of solutions:
    • don’t login at all, just allow the API key to pull user data and match on the username (this is probably the best option)
    • pass the password via a pipeline parameter, which means you’d have to set up an API call to build
    • have one common password for all users in the FusionAuth system, and use it only for access control to the build pipeline
    • make the password the same as the username in the FusionAuth system, and use it only for access control to the build pipeline

In conclusion

If you want to interact with external services from within CircleCI, check out the list of existing orbs.

If you have a service that you want to make it easier for CircleCI users to interact with and use, create an orb and publish it.

If you are working with CircleCI and have duplicate configuration that you want to share between projects, setting up your own orbs is a great idea. Orbs are flexible and easy to parameterize. If you’re OK with your configuration being public (it wasn’t clear to me if there was any way to have the configuration kept private), you can encapsulate your build and deploy best practices in an easy to consume manner.


The Case For Space

I recently read “The Case For Space”, by Robert Zubrin. It was great. Now, I have a degree in physics, but it’s been a long time since I did any math more complicated than algebra. So I can’t speak to the nuances of his calculations–I didn’t verify them. (A Google search for reviews where people work through the math doesn’t turn up anything either.)

But I thoroughly enjoyed this overview. This book is in two parts.

The first covers where we are in space exploration now, and where the physics can let us be. He spends a lot of time on what we could do right now. But he also writes a number of chapters on where we can be if there are scientific advances in energy generation (namely fusion). The author starts with low earth orbit (LEO) and then heads to the moon, Mars, the asteroid belt, the gas giants and beyond.

This section is a lot of fun. Zubrin covers areas that I never considered to be related to space (the power of transorbital flight to make the earth even smaller than it is today for travelers). He also talks about the economics of space flight and colonization. What exactly will the moon settlers or Martians have that they can sell? There’s a reference to a three way trade between Earth, Mars and the asteroid belt. I really enjoyed the details he dove into. For example:

“In the almost Earth normal atmosphere of Titan, you would not need a pressure suit, just a dry suit to keep out the cold. On your back, you could carry a tank of liquid oxygen, which would need no refrigeration in Titan’s environment…and could supply your breathing needs for a weeklong trip…”

The second part is supposed to be more inspirational, as if getting to space wasn’t inspirational enough. He covers various reasons (freedom, security, survival) that we should be getting off Earth. I found this section less exciting, but I can understand why he included this–if you aren’t excited about space travel for the adventure, Zubrin might convince you in this section.

There were parts of this book that dragged on a bit, but for the most part the prose is accessible and the math skippable. The real world plans that he outlines, especially at the beginning of the book focused on reusable spacecraft technology, LEO, the moon and Mars, are fascinating for their audacity and specificity. Recommended.



© Moore Consulting, 2003-2020