
DevOps

GitHub Actions Are Amazingly Easy

GitHub Workflows are automated jobs that can be triggered by various events against a GitHub repository. They are pretty awesome.

GitHub Actions are a way to encapsulate configuration and functionality in a way that can be easily reused in GitHub Workflows.

I was thinking it’d be fun to create some GitHub Actions (yes, I’m the life of the party), so I sat down a few mornings ago to do this. I was shocked at how easy it was.

I followed a few lines of this tutorial to create a workflow. Then I created an action by following this tutorial. Finally, I edited my workflow to use the new action. That was it.
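For reference, a minimal workflow that runs on a push and calls an action looks something like this (a sketch; the action reference and input name are placeholders, not the actual repositories from the tutorials):

name: sample-workflow
on: [push]

jobs:
  run-my-action:
    runs-on: ubuntu-latest
    steps:
      # check out the repository the workflow is running against
      - uses: actions/checkout@v2
      # run the custom action, pinned to a tag
      - uses: YOUR_GITHUB_USER/YOUR_ACTION_REPO@v1.0.0
        with:
          some-input: "hello"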

It was amazingly simple and took me about 30 minutes. I ran into one unrelated issue: to set the executable bit on a shell script on Windows, I had to modify the shell script contents to ensure the change was pushed to the remote repo.
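For what it’s worth, there is a way to flip the executable bit in the git index without touching the file contents (I didn’t know about it at the time); something like the following, where entrypoint.sh stands in for whatever the action’s script is called:

# mark the script as executable in the index, even on Windows,
# so the mode change is committed and pushed
git update-index --chmod=+x entrypoint.sh
git commit -m "make entrypoint executable"
git push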

If you take a look, you’ll see these are both toy repositories, to be sure. However, the ability to write jobs which will be executed on a git push, pull request or other events is great and removes toil. Being able to extract common functionality to an action is even better. Finally, the ability to share the action publicly by adding it to the GitHub marketplace is fantastic.

I’ve liked CircleCI for a long time, but if I were them I’d be worried.

One issue I found is that the testing/release cycle is pretty tedious (I’ve mentioned before that action debugging has been an issue for a while).

While I was troubleshooting my executable bit error, I had to do the following every time I wanted to test a change:

  • make a change in the action repository
  • create a new tag
  • push it to the remote
  • switch to the workflow repository
  • bump the action version
  • push to the remote
  • wait for the workflow to complete
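In shell terms, each iteration looked roughly like this (a sketch; the tag, paths and version are placeholders):

# in the action repository
git commit -am "tweak the action"
git tag v0.0.2                      # new tag for every attempt
git push && git push --tags

# in the workflow repository: bump the version referenced in the
# workflow yaml (uses: YOUR_USER/YOUR_ACTION@v0.0.2), then
cd ../my-workflow-repo
git commit -am "bump action version"
git push
# ...and wait for the workflow run to finish in the GitHub UI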

Not horrific, but pretty tedious. I don’t know if there are other options, such as running the action locally, that would shorten that cycle, but that would be swell.

Other than that, 10 out of 10, would write more actions.

Creating a CircleCI orb to authenticate a user during a build

I’m a big fan of automating all the things and I also believe in the DRY principle. I’ve been using CircleCI for years and noticed that they had added a way to abstract away some repeated configuration called orbs. I recently built one and wanted to share my experience.

If you are thinking about building an orb, first take a look at the list of existing orbs. After all, it’s better to reuse code someone else will maintain if it does what you need.

In my case, I wanted to explore building an orb. I dreamed up a use case for which no one else had written an orb. The situation is that you store user data in FusionAuth. Each time a build runs, you want to verify that the user running the build is active and valid before continuing the build. If the user can’t authenticate, fail the build.

Set up the authentication server

I set up a FusionAuth server using their instructions on EC2. It can’t be on localhost because CircleCI needs to communicate with it during the build. The server didn’t run well on a t2.micro instance so I ended up springing for a t2.large, where it worked fine. I also had some difficulty installing MySQL on the Amazon Linux AMI, but this SO answer helped me out. I added a FusionAuth application and user via the admin panel. I also got an API key, which I limited drastically; it could only post to the login endpoint. I tested access with curl like so (ALL_CAPS strings are placeholders you’d want to replace with real values):

curl -s -o /dev/null -w "%{http_code}" -XPOST -H 'Authorization: API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{ "loginId": "USERNAME", "password": "PASSWORD", "applicationId": "APPLICATION_ID" }' \
  http://FUSION_AUTH_SERVER_HOSTNAME:9011/api/login | grep 202 > /dev/null

FusionAuth returns a 404 status code if the user isn’t authenticated successfully (they don’t exist or have an incorrect password) and 202 if the user logs in successfully (more API information here). Note that this login process may skew some of the reporting (around DAUs, for example) so you’ll want to create a separate application just for CI/CD.

The curl + grep statement above exits with a value of 0 if the user successfully authenticates and with a value of 1 if authentication fails. The former will allow the build to continue and the latter will stop it.

The check I wanted to run worked. Now I just needed to build and publish the orb.

 


Building the orb

An orb is reusable CircleCI configuration. You have three kinds of configuration you can re-use:

  • jobs: a normal CircleCI job, but you can also pass in parameters. A great option if you have a job you want to use across different projects; something like ‘run this formatting tool’.
  • executors: an environment in which to execute code (a Docker container, a VM, etc.).
  • commands: a set of steps that can be reused. It is lower level than a job and can be used across different jobs.

I chose to create a single command. I also created a job to run while I was developing (I added it as a project in CircleCI). You can see the full source code here.

Here are the interesting bits where I define the command to be shared (ah, yaml):

commands:
  verifyauth:
    parameters:
      username:
        type: string
        default: "user"
        description: "FusionAuth username to try to validate"
      applicationid:
        type: string
        default: "appid"
        description: "FusionAuth application id"
      hostbaseurl:
        type: string
        default: "http://ec2-52-35-2-20.us-west-2.compute.amazonaws.com:9011/"
        description: "FusionAuth host base url"
      password_env_var_name:
        type: env_var_name
        default: BUILDER_PASS
        description: "The user's FusionAuth password is stored in this environment variable"
      fusionauth_api_key_env_var_name:
        type: env_var_name
        default: FUSION_AUTH_API
        description: "The FusionAuth API key is stored in this environment variable"
    steps:
      - run: |
          curl -s -o /dev/null -w "%{http_code}" -XPOST \
            -H "Authorization: ${<< parameters.fusionauth_api_key_env_var_name >>}" \
            -H 'Content-Type: application/json' \
            -d '{ "loginId": "<< parameters.username >>", "password": "'${<< parameters.password_env_var_name >>}'", "applicationId": "<< parameters.applicationid >>" }' \
            << parameters.hostbaseurl >>api/login | grep 202 > /dev/null
      - run: |
          echo User authorized

The command verifyauth takes parameters. These have defaults and descriptions. Anything that you wouldn’t mind seeing checked into a source code repository can be passed as a parameter. You then call the command in your job and pass parameters as needed (we’ll see that below).

There are sometimes secrets that need to be stored as environment variables (in the project or the context): API keys or passwords, for example. However, I still wanted to make them configurable by whoever uses the orb. Enter the env_var_name parameter type. This type lets the user specify the name of the environment variable. If I set password_env_var_name to AUTH_CHECK_PASS, then I need to make sure there is an AUTH_CHECK_PASS environment variable set somewhere in my project containing the password with which we’ll authenticate against FusionAuth. This lets the orb be both configurable and secure.

Finally, you can see that the first step of the command is posting login data to the authentication server. Again, if we see anything other than 202 we fail and the build stops. (You’ve seen that curl command before.)

Publishing the development orb

To be able to use the orb with a different project, I needed to publish the orb (I could have developed the orb inline to avoid this). The publishing instructions are here. The only issue I ran into was that I had to update my CircleCI organization settings and allow “Uncertified Orbs” before I could create a namespace. After that I was able to publish a development version of my orb:

circleci orb publish .circleci/config.yml mooreds/verifyauth@dev:testing

I was in the directory of my orb code and referenced my config. mooreds is my orb namespace, verifyauth is the orb name (which is arbitrary and not connected to the source repository name in any way) and dev:testing is the version of the orb. Note that there are two types of orb versions: production versions, which strictly follow semantic versioning, and development versions, which are prefaced by dev: followed by a string that can contain “up to 1023 non whitespace characters”. Development orbs have other limitations: they are not public, are mutable and only last for 90 days. You’ll want to publish your orbs with production versions if you are using them for any purpose other than prototyping or exploration.

I published my orb via the command line, but the docs outline publishing via a CircleCI job.
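The development publish itself is just a couple of CLI commands; it’s worth running circleci orb validate first to catch schema errors before publishing:

# from the orb repository root
circleci orb validate .circleci/config.yml
circleci orb publish .circleci/config.yml mooreds/verifyauth@dev:testing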

Testing the orb

Now I wanted a second project to test the orb. Here’s the project source code. Here’s the interesting code:

...
orbs:
  verifyauth: mooreds/verifyauth@dev:testing
jobs:
  build:
    steps:
      - verifyauth/verifyauth: # when called from an external job, the command is namespaced by the name of the orb
          username: "circlecimooreds"
          applicationid: "98113cee-d1a8-4abf-baf5-a6ea742f80a1"
  ...

You can see that I pull in the orb at the development version, which I’d previously published. Then I call the namespaced command with some parameters. For this command to work, I also needed to set up the required environment variables (in this case, BUILDER_PASS and FUSION_AUTH_API, because I didn’t pass in any of the env_var_name parameters). If you don’t set those environment variables (or, alternatively, pass different env_var_name parameter values and set the correspondingly named variables), the build will fail no matter what, as the API call won’t succeed.
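As a sketch of that alternative, if the password lived in a variable named AUTH_CHECK_PASS (a hypothetical name), the call would look like:

      - verifyauth/verifyauth:
          username: "circlecimooreds"
          applicationid: "98113cee-d1a8-4abf-baf5-a6ea742f80a1"
          password_env_var_name: AUTH_CHECK_PASS # the project or context must define AUTH_CHECK_PASS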

I then pushed this sample project up to CircleCI and ran a few builds to make sure the parameters were being picked up.

Publishing the production orb

Now that I had a parameterized orb exposing the command I wanted to share, I needed to publish it for everyone to use. Note that your configuration code is entirely exposed if you publish an orb. You can see the source of any orb via the circleci orb source command. circleci orb source mooreds/verifyauth@0.0.2 will show you the entire source of my sample orb. They warn you a number of times about this.

To promote an orb that you have published to development into production, first re-publish the dev version to catch any changes: circleci orb publish .circleci/config.yml mooreds/verifyauth@dev:testing, and then promote it: circleci orb publish promote mooreds/verifyauth@dev:testing patch.

Note that the patch argument at the end of the promote command bumps the patch number (0.0.1 -> 0.0.2) but you can also bump the minor and major numbers. Any changes you make to a production orb require you to publish and promote it again; production orb versions are immutable. For instance, I wanted to update the description of some parameters, but had to publish an entirely new version.
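Putting those together, releasing a change to the production orb looks roughly like this (substituting whatever version the promote step reports):

# re-publish the dev version with the latest changes
circleci orb publish .circleci/config.yml mooreds/verifyauth@dev:testing
# promote it, bumping the patch number (use minor or major for bigger changes)
circleci orb publish promote mooreds/verifyauth@dev:testing patch
# confirm what was published
circleci orb source mooreds/verifyauth@0.0.2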

After publishing, you’d want to update any projects that use the orb to use the production version.

The listing of my published orb

Areas for further work

This was a slightly contrived example. I wanted to gain some experience both with FusionAuth and with CircleCI (I have friends who work at both companies). There are a number of areas where this could be improved:

  • authenticate against a different authentication server (LDAP, Okta, AWS IAM)
  • store additional information about the user in the authentication database (for instance, which projects they can build) and convert the authentication curl command into an authorization command
  • run the identity server over SSL (I just used HTTP because it was easier to get up and running, but that’s obviously a production no-no)
  • pull the user and password from the build environment. It’s pretty clear to me how you’d pull the user (there’s a CIRCLE_USERNAME environment variable) but I’m not sure how to pass the password. I can think of a couple of solutions:
    • don’t log in at all; just allow the API key to pull user data and match on the username (this is probably the best option)
    • pass the password via a pipeline parameter, which means you’d have to set up an API call to build
    • have one common password for all users in the FusionAuth system, and use it only for access control to the build pipeline
    • make the password the same as the username in the FusionAuth system, and use it only for access control to the build pipeline

In conclusion

If you want to interact with external services from within CircleCI, check out the list of existing orbs.

If you have a service that you want to make it easier for CircleCI users to interact with and use, create an orb and publish it.

If you are working with CircleCI and have duplicate configuration that you want to share between projects, setting up your own orbs is a great idea. Orbs are flexible and easy to parameterize. If you’re OK with your configuration being public (it wasn’t clear to me if there was any way to have the configuration kept private), you can encapsulate your build and deploy best practices in an easy to consume manner.

Fixing the RubyGems “Too Many Requests 429” error

A server on which I am working runs this command: /usr/bin/gem install --no-rdoc --no-ri aws-sdk to get the aws-sdk. I was seeing this error message:

Error: Execution of '/usr/bin/gem install --no-rdoc --no-ri aws-sdk' returned 1: ERROR:  While executing gem ... (Gem::RemoteFetcher::FetchError)
    bad response Too Many Requests 429 (https://api.rubygems.org/api/v1/dependencies?gems=aws-sdk-elasticloadbalancingv2)

Every time I ran it I’d see a different gem that triggered the 429 response. There wasn’t much out there when searching, other than a note that I should update to a new version of bundler (which I wasn’t using).

Finally, I figured out how to get past this: I manually ran /usr/bin/gem install -f --no-rdoc --no-ri aws-sdk multiple times, and each time the command got a little further. Eventually all the dependencies had been downloaded, and I was able to run the original command without the -f switch.
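If you hit this yourself, the manual loop can be scripted; this is a sketch of the same blunt approach (retry until the local gem cache is fully populated):

# keep retrying; each attempt downloads a few more dependencies
# before rubygems.org rate limits us again
until /usr/bin/gem install -f --no-rdoc --no-ri aws-sdk; do
  echo "got rate limited, sleeping before the next attempt"
  sleep 30
done
# once everything is cached locally, the original command works without -f
/usr/bin/gem install --no-rdoc --no-ri aws-sdk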

Obstacles to building high availability software systems

Is your system available?

I saw a discussion on a slack about obstacles to high availability systems and wanted to record the edited version for posterity (mostly for future me, as I blog for myself). Any discussion of high availability systems would be remiss if it didn’t mention the Google SRE book, which is slow reading but free and full of great information.

First, what is high availability? I like this definition from Digital Ocean:

In computing, the term availability is used to describe the period of time when a service is available, as well as the time required by a system to respond to a request made by a user. High availability is a quality of a system or component that assures a high level of operational performance for a given period of time.
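To put a number on it, availability is typically the fraction of time the system is up over some period, often quoted in “nines”:

\text{availability} = \frac{\text{uptime}}{\text{uptime} + \text{downtime}}

Three nines (99.9%) works out to roughly 8.8 hours of allowable downtime per year; each additional nine cuts that budget by a factor of ten.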

Design considerations that hinder high availability fall into two categories.

The first category is actions that you don’t take, but could take:

  • single points of failure: if you have a piece of your system which is unique and it fails (and everything fails, all the time), the entire system’s availability will be affected.
  • missing or incomplete automation: if you need human beings to resurrect failed parts of your system, it will take meaningful amounts of time and will be error prone.
  • failing to build in elasticity and scalability of resources: when usage increases, new resources should be brought online automatically. Failure to do so will impact system performance, which could impact system availability.
  • missing or incomplete system instrumentation: if you don’t monitor your system, you won’t be able to even know its availability (until you hear from your users).
  • application statefulness (on the compute nodes): this impacts your ability to use elastic resources and to grow parts of your system that are under load. (If you aren’t designing a greenfield system, this may be an externally imposed requirement due to existing software.)

The second category is actions you can’t take because of external requirements on the system:

  • data sovereignty: if you are legally limited to certain data centers, you have fewer options for your system, which can hinder building for high availability.
  • tenancy: if you need to have single tenancy for security or legal reasons, you may have fewer options for elastic solutions.
  • data models and authority requirements: poorly performing data models can impact performance. If your application requires that certain operations come from the source of record (permissions checks, for example), then a poorly performing source data model can impact performance, which can impact availability.
  • latency: if you have a highly latency sensitive system, you may need to trade availability for decreased latency. Availability often means geographic dispersion (to avoid disasters impacting multiple pieces of a system), which works against tight latency requirements.
  • cost: high availability systems, because they have no single points of failure, cost more.

Again, this was a discussion from a slack of AWS instructors, but the commentary is mine, as are any mistakes. Thanks to Chad, Richard, Jon, Ryan and everyone else!

Who’s Afraid of Continuous Deployment?

Fish leaping to a larger pool

So, who’s afraid of continuous deployment? I am, for one. And I’m not alone. I taught hundreds of people in AWS courses over the past two years. We often discussed continuous delivery and deployment and I asked if this was practiced at their places of work. I’d say about 5-10% of folks said yes. I conducted a very informal survey across two technical slacks as well. Unfortunately I had my terms wrong and asked about continuous delivery:

Wanted to do a quick poll. Can you please give a thumbs up to this message if you or your team does continuous delivery of your software product, and a thumbs down if you don’t. And a :penguin: if it doesn’t apply?

The results were:

  • Did CD: 27
  • Did not do CD: 25
  • Does not apply: 3

In the poll, I defined continuous delivery as “if a change is merged to the mainline branch and passes all the tests, it is deployed to production (or whatever environment your customers see) without human involvement”. This was actually a source of discussion, as some folks were very close to this (they deployed to beta environments where only a few customers saw it, or required one human to push a button to actually release, but everything up to that point was automated). Also, someone shared this link about the difference between continuous delivery and continuous deployment. Turns out I was using the term continuous delivery incorrectly. What I defined as continuous delivery was actually continuous deployment. Whoops!

That said, it was interesting that a large number of folks, almost half, did not deploy code automatically (note that I believe the poll had a bias because I asked in one slack on the #devops channel; the numbers from the other slack had less than half doing continuous deployment). I’ve worked at a number of small startups, some without paying customers, and I’ve never worked in a place with continuous deployment. I’ve been in jobs with continuous integration and continuous delivery (and this provides a lot of value) but not continuous deployment. I wanted to talk about some reasons why.

The first reason is that continuous deployment simply doesn’t apply. If you are building software that is deployed to customer sites (on-prem), or is tied to hardware, then it doesn’t make sense to work toward CD because there will always be a manual delivery component. Another reason why it might not apply is legal compliance. Folks in the slacks pointed out that in some regulatory regimes you legally are required to have a human ‘push a button’ to deploy because more than one person needed to be involved in a code deploy to satisfy the law and the auditors. These are totally legitimate reasons for not doing continuous deployment.

Next, let’s discuss the reasons based on fear or lack of software hygiene (automated tests or a robust type system). Before I step into this, I want to acknowledge that there may be times in the life of your business where such software hygiene is detrimental to your chances of survival–you need to get an MVP out and test your value in the market, for example. However, in my years of experience I’ve found that proper software hygiene is far easier to maintain if it is adhered to from the beginning. If you don’t, the difficulty of changing the system will grow along with its complexity. You can bolt on testing later, but it is difficult.

I also want to emphasize that I’ve been in all these situations myself. In some ways this blog post is a warning for future me when I try to shirk these practices.

  • If you don’t have automated test coverage, continuous deployment is reckless. This often happens in systems where the testing was bolted on after the system had been developed for a while. The solution is to work towards having enough test coverage to give yourself confidence (it swaddles your code).
  • A system may have configuration deeply tied to a database. Many content management systems are in this boat, which makes it very difficult to roll new configuration forward automatically.
  • Not having an automated rollback strategy. If you are going to continuously deploy, you need a way to roll back with confidence, with one script (see the sketch after this list). If you are on Heroku, heroku rollbacks help here. If you are running Rails code, you can use db:rollback, but you’ll need to know how many steps to roll back (I couldn’t find anything that rolled all migrations back to a given timestamp) and you’ll want to be careful about losing data. It may make more sense to run migrations in a different release, and always have the code be backward compatible. Lots of interesting reading about that strategy in the strong_migrations docs. This solution will vary from application to application.
  • Not having enough users to safely canary. One way to know if your new release has problems is to do a blue/green deployment and send just a fraction of your traffic there (you could use a weighted DNS round robin solution). But if you only have a small number of users, the canary userbase won’t adequately run through all the code paths.
  • Fear of breaking key user flows. At a recent company we did basic manual regression tests just before deployment. These could have been easily automated via Selenium and would have made sure that at least basic functionality was available. Also see this post from 2013 on smoke testing.
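To make the rollback point concrete, here is a sketch of what a one-script Heroku rollback might look like (YOUR_APP_NAME is a placeholder, and it assumes the bad release shipped exactly one migration that is safe to reverse):

#!/bin/sh
# roll the application code back to the previous Heroku release
heroku rollback --app YOUR_APP_NAME
# roll back the migration the bad release introduced; STEP must match
# how many migrations that release shipped
heroku run rake db:rollback STEP=1 --app YOUR_APP_NAME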

None of these are really technical issues; they’re prioritization issues. At this point in time most web applications can be continuously deployed. The tooling and the knowledge are out there, given the business and technology teams’ commitment.

However, this in some ways sidesteps the real question. Why is continuous deployment a goal worth prioritizing, especially when the team has to spend time supporting it instead of giving customers more features? CD is extra work to set up, but once it is running you can deliver features at a very rapid pace, and you never have a feature sitting around waiting for other orthogonal features. So, in a way, it actually leads to more features and better development. There are also the long term benefits of software hygiene for the ability of the system to evolve.

Ephemeralization is the future

In the same vein as “write less software”, this is a great post about ephemeralization. To ephemeralize software is to make it disappear, either by pushing it to another provider or just removing the feature. From the post:

Proposals to ephemeralize a component or feature will sometimes be met with emotionally-charged responses from your team. It’s totally reasonable to feel attached to a component everyone has worked hard on and has been important historically. But realize that the component has value because it got you to where you are today, not necessarily because of its ongoing existence in the future.

Path dependence of software is very real. What you’ve done before determines what you can easily do in the future. But you can take large right turns if you are prepared to make the investment. As the above quote notes, it’s not just an investment in time, knowledge and money, it can also be an emotional investment.

Heroku and SSL

Heroku is a great hosting platform. Using it lets you focus on your application and not worry about operations tasks that might otherwise take developer time: tasks like updating the operating system, patching the web server, and configuring third party services like monitoring. There are limits of course–once you reach a certain size it can be pricey. And if you have an app that has special requirements (example here), you might have to jump through some hoops. But if you are building a typical web app backed by a relational database, I’d highly recommend Heroku.

SSL is the technology that ensures data sent over the web is transmitted without being tampered with. Heroku has a number of offerings to make SSL easier to set up; here’s a page to help you decide between the offerings. Last year Heroku announced support for free, automatically renewable SSL certificates. It does have some limits (no wildcard support, for example).

I just finished updating our system from an older SSL solution to the automated certificate management offering. Other than a slight hiccup around DNS being set up incorrectly by me (quickly fixed after a helpful answer from Heroku support), this update went swimmingly. Now rather than having to deal with SSL updates once a year (reviewing my notes, googling around and paying for a third party certificate), the SSL certificate is updated automatically.
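For reference, switching on automated certificate management is a couple of CLI commands (assuming the heroku CLI is installed and DNS already points at Heroku correctly; YOUR_APP_NAME is a placeholder):

# enable automated certificate management for the app
heroku certs:auto:enable --app YOUR_APP_NAME
# check issuance and renewal status
heroku certs:auto --app YOUR_APP_NAME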

This is just another example of Heroku taking something that is key to operating a web application and making it trivial. What a relief.

CircleCI shutting down version 1.0

I’ve been a happy user of CircleCI at multiple companies. Right now we pay them at The Food Corridor and they handle almost all our deployments. (I still deploy to production manually.)

We just got a note that they are shutting down their 1.0 offering and will not support it as of Aug 2018. The 2.0 offering was announced in 2016 and generally available in 2017. So, there will be about one year of overlap. Not too long.

I understand the desire to move forward. Trust me, I do.

I don’t know how much engineering effort it takes to support the two versions, but my guess is that they’ll see some significant customer loss from this. Why? CI is something that you just want to work. You don’t want to think about it. Which is why a SaaS solution makes so much sense. I am happy to just keep paying them month after month for their excellent product.

But, if I have to take some cycles to move from CircleCI 1.0 to CircleCI 2.0, why wouldn’t I take some time and evaluate other solutions too? I assume they’ve run the numbers and the amount of money it takes to support 1.0 must be more than the amount they will lose via churn.

AWS does a good job of this–they never deprecate anything (you can still set up SimpleDB if you want). They just hide it, make other offerings better, and make older offerings more expensive.

In fact, if I were CircleCI, I might offer a ‘legacy’ CircleCI 1.0 plan, where people with significant investments in the older infrastructure can pay more for access to that old codebase. Depending on the amount of support required, that might be some significant free money.

Relatedly, Amy Hoy has a great post on how to get your customers to pay you more money.