Who’s Afraid of Continuous Deployment?

So, who’s afraid of continuous deployment? I am, for one. And I’m not alone. I’ve taught hundreds of people in AWS courses over the past two years. We often discussed continuous delivery and deployment, and I asked whether it was practiced at their places of work. I’d say about 5-10% of folks said yes. I also conducted a very informal survey across two technical Slacks. Unfortunately, I had my terms wrong and asked about continuous delivery:

Wanted to do a quick poll. Can you please give a thumbs up to this message if you or your team does continuous delivery of your software product, and a thumbs down if you don’t. And a :penguin: if it doesn’t apply?

The results were:

  • Did CD: 27
  • Did not do CD: 25
  • Does not apply: 3

In the poll, I defined continuous delivery as “if a change is merged to the mainline branch and passes all the tests, it is deployed to production (or whatever environment your customers see) without human involvement”. This was actually a source of discussion, as some folks were very close to this (they deployed to beta environments where only a few customers saw it, or required one human to push a button to actually release, but everything up to that point was automated). Also, someone shared this link about the difference between continuous delivery and continuous deployment. Turns out I was using the term continuous delivery incorrectly. What I defined as continuous delivery was actually continuous deployment. Whoops!

That said, it was interesting that a large number of folks, almost half, did not deploy code automatically. (Note that I believe the poll was biased because I asked in one Slack’s #devops channel; in the other Slack, fewer than half did continuous deployment.) I’ve worked at a number of small startups, some without paying customers, and I’ve never worked anywhere with continuous deployment. I’ve had jobs with continuous integration and continuous delivery (and these provide a lot of value), but not continuous deployment. I wanted to talk about some reasons why.

The first reason is that continuous deployment simply doesn’t apply. If you are building software that is deployed to customer sites (on-prem), or is tied to hardware, it doesn’t make sense to work toward CD because there will always be a manual delivery component. Another reason it might not apply is legal compliance. Folks in the Slacks pointed out that in some regulatory regimes you are legally required to have a human ‘push a button’ to deploy, because more than one person must be involved in a code deploy to satisfy the law and the auditors. These are totally legitimate reasons for not doing continuous deployment.

Next, let’s discuss the reasons based on fear or a lack of software hygiene (automated tests or a robust type system). Before I step into this, I want to acknowledge that there may be times in the life of your business when such software hygiene is detrimental to your chances of survival–you need to get an MVP out and test your value in the market, for example. However, in my years of experience, I find that proper software hygiene is far easier to maintain if adhered to from the beginning. If you don’t, the difficulty of changing the system will eventually grow along with its complexity. You can bolt on testing later, but it is difficult.

I also want to emphasize that I’ve been in all these situations myself. In some ways this blog post is a warning for future me when I try to shirk these practices.

  • If you don’t have automated test coverage, continuous deployment is reckless. This often happens in systems where testing was bolted on after the system had been developed for a while. The solution is to work towards enough test coverage to give yourself confidence (it swaddles your code).
  • A system may have configuration deeply tied to a database. Many content management systems are in this boat, which makes it very difficult to roll new configuration forward automatically.
  • Not having an automated rollback strategy. If you are going to continuously deploy, you need a way to roll back with confidence, with a single script (see the sketch after this list). If you are on Heroku, its built-in rollbacks help here. If you are running Rails code, you can use db:rollback, but you’ll need to know how many steps to roll back (I couldn’t find anything that rolled all migrations back to a given timestamp), and you’ll want to be careful about losing data. It may make more sense to run migrations in a separate release and always have the code be backward compatible. There’s lots of interesting reading about that strategy in strong_migrations’ docs. This solution will vary from application to application.
  • Not having enough users to safely canary. One way to know if your new release has problems is to do a blue/green deployment and send just a fraction of your traffic there (you could use a weighted DNS round-robin solution). But if you only have a small number of users, the canary userbase won’t adequately exercise all the code paths.
  • Fear of breaking key user flows. At a recent company we ran basic manual regression tests just before deployment. These could have been easily automated via Selenium and would have ensured that at least basic functionality was available. Also see this post from 2013 on smoke testing.
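
To make the rollback point concrete, here’s a minimal sketch of a one-command rollback script for a Heroku-hosted Rails app. The app name is hypothetical, and it assumes an authenticated heroku CLI and a single migration in the bad release:

#!/usr/bin/env ruby
APP = 'my-app' # hypothetical app name

# Roll the application code back to the previous Heroku release.
system("heroku rollback --app #{APP}") or abort('code rollback failed')

# Roll back the schema change shipped with the bad release. STEP must match
# the number of migrations in that release--another reason to keep releases
# (and migrations) small.
system("heroku run rake db:rollback STEP=1 --app #{APP}") or abort('migration rollback failed')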

None of these are really technical issues; they’re prioritization issues. At this point in time, most web applications can be continuously deployed. The tooling and the knowledge are out there, given commitment from the business and technology teams.

However, this in some ways sidesteps the real question. Why is continuous deployment a goal worth prioritizing, especially when the team has to spend time supporting it instead of giving customers more features? CD is extra work to set up, but once it is running you can deliver features at a very rapid pace, and you never have a feature sitting around waiting for other, orthogonal features. So, in a way, it actually leads to more features and better development. There are also the long-term benefits of software hygiene for the system’s ability to evolve.


Pact Testing

I attended the Google Developer Group meetup last week and enjoyed many of the talks. It was a lightning session, so there were ten speakers. In particular, I really enjoyed “Pact Contract Testing” by Claire Chen. The idea behind Pact testing, which has been around since 2013 and has had four major specification releases, is to formalize the contract between an API consumer and producer and allow each side of the API conversation to be developed independently. You record the interactions between each consumer and producer and re-play them during testing to verify that no regressions have occurred. It’s really designed for a situation where you control both the consumer and the producer and want to verify that there are no breaking changes when either of them evolves.

So, this seems like mocks and stubs on steroids with the additional benefit of being cross platform (many languages are supported) and exercising the entire producer or consumer independently. You can also run an external server to maintain all the pacts independently.
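
To make that concrete, here’s a minimal consumer-side sketch using the Ruby pact gem (the service names, endpoint, and given-state are hypothetical, adapted from the pact gem’s documented DSL):

require 'pact/consumer/rspec'
require 'net/http'
require 'json'

# Declare the consumer/producer pair and a mock service that stands in for
# the real producer while the consumer's tests run.
Pact.service_consumer 'ListingClient' do
  has_pact_with 'ListingService' do
    mock_service :listing_service do
      port 1234
    end
  end
end

describe 'fetching a listing', pact: true do
  before do
    # Record the expected interaction; it is written to a pact file that the
    # producer later replays to verify it still honors the contract.
    listing_service
      .given('a listing with id 1 exists')
      .upon_receiving('a request for listing 1')
      .with(method: :get, path: '/listings/1')
      .will_respond_with(
        status: 200,
        headers: { 'Content-Type' => 'application/json' },
        body: { id: 1, title: Pact.like('Commercial kitchen') }
      )
  end

  it 'returns the listing' do
    response = Net::HTTP.get_response(URI('http://localhost:1234/listings/1'))
    expect(JSON.parse(response.body)['id']).to eq(1)
  end
end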

If you are running a microservices architecture, I’d strongly recommend taking a look at this. Next time I’m involved in an API consumer/producer project, I’ll definitely be using this, and will report back then.

See also “convince me that Pact Testing is a good idea” and “what is Pact not good for?”.


Always break Rails migrations into the smallest chunks possible, and other lessons learned

So this was a bit of a sticky wicket that I recently extracted myself from, and I wanted to make notes so I don’t make the same mistake again. I was adding a new table that related two existing tables, with the following migration:

class CreateTfcListingPeople < ActiveRecord::Migration
  def change
    create_table :tfc_listing_people do |t|
      t.integer :listing_id, index: true
      t.string :person_id, limit: 22, index: true

      t.timestamps null: false
    end

    add_foreign_key :tfc_listing_people, :people
    add_foreign_key :tfc_listing_people, :listings

  end
end

However, I didn’t notice that the datatype of the person.id column was `id` varchar(22) COLLATE utf8_unicode_ci NOT NULL.

This led to the following error popping up in one of the non production environments:

2018-02-27T17:10:05.277434+00:00 app[web.1]: App 132 stdout: ActionView::Template::Error (Mysql2::Error: Illegal mix of collations (utf8_unicode_ci,IMPLICIT) and (utf8_general_ci,IMPLICIT) for operation '=': SELECT COUNT(*) FROM `people` INNER JOIN `tfc_listing_people` ON `people`.`id` = `tfc_listing_people`.`person_id` WHERE `tfc_listing_people`.`listing_id` = 42):

I was able to fix this with the following alter statement (from this SO post): ALTER TABLE `tfc_listing_people` CHANGE `person_id` `person_id` VARCHAR( 22 ) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL.

But in other environments, there was no runtime error. There was, however, a partially failed migration, masked by some other test failures and a team handoff. The create table statement had succeeded, but the add_foreign_key :tfc_listing_people, :people statement had failed.

I ran this migration statement a few times from the console (pointer on how to do that): ActiveRecord::Migration.add_foreign_key :tfc_listing_people, :people. Then, via this SO answer, I was able to find the latest foreign key error message:

2018-03-06 13:23:29 0x2b1565330700 Error in foreign key constraint of table sharetribe_production/#sql-2c93_4a44d:
 FOREIGN KEY (person_id)  REFERENCES people (id): Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types in the table and the referenced table do not match for constraint. Note that the internal storage type of ENUM and SET changed in tables created with >= InnoDB-4.1.12, and such columns in old tables cannot be referenced by such columns in new tables.
Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-foreign-key-constraints.html for correct foreign key definition.

So, again, just running the alter statement to change the collation of the tfc_listing_people table worked fine. However, while I could handcraft the fix on both staging and production and did so, I needed a way to have this change captured in a migration or two. I split apart the first migration into two migrations. The first created the tfc_listing_people table, and the second looked like this:

class ModifyTfcListingPeople < ActiveRecord::Migration
  def up
    execute <<-SQL
      ALTER TABLE  `tfc_listing_people` CHANGE  `person_id`  `person_id` VARCHAR( 22 ) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL
    SQL

    add_foreign_key :tfc_listing_people, :people
    add_foreign_key :tfc_listing_people, :listings
  end

  def down
    remove_foreign_key :tfc_listing_people, :people
    remove_foreign_key :tfc_listing_people, :listings
  end
end

Because I’d hand-crafted the fixes on staging and production, I manually inserted a value for this migration into the schema_migrations table to indicate that the migration had already been run in those environments. If I hadn’t had two related but different migration actions, I might not have had to go through these manual gyrations.
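
For the record, that manual insert is a one-liner (the version value below is hypothetical; it must match the timestamp prefix of the migration’s filename):

INSERT INTO schema_migrations (version) VALUES ('20180306000000');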

My lessons from this episode:

  • Pay close attention to any errors and failed tests, no matter how innocuous they seem. This is a variation of the “broken windows” theory.
  • Break migrations into small pieces, which are easier to debug and to roll back and forth.
  • Knowing SQL, and understanding how database migrations work (they are cool, but they aren’t magic, and sometimes they leak), was crucial to debugging this issue.

Speed up development by catching your mail locally

Have you ever been developing an application that sends email? You need to test how the email looks, so you need access to an external SMTP server, and you have to configure your application to use it. You can certainly set up SendGrid or another MTA to send email from your local computer and use a real email address as the target. However, then you need to be online to develop this portion of the application.

Another option I’ve found is the MailCatcher gem. This is a small Ruby program that you can easily configure as your SMTP endpoint. When your development environment sends mail, MailCatcher catches it, and you can visit a URL on your local computer to view the received emails. The emails are lost as soon as MailCatcher shuts down, however.
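
For a Rails app, pointing Action Mailer at MailCatcher is a two-line configuration change (a sketch assuming MailCatcher’s default SMTP port of 1025; the web UI then lives at http://127.0.0.1:1080):

# config/environments/development.rb
Rails.application.configure do
  # Send development mail to the locally running MailCatcher instance.
  config.action_mailer.delivery_method = :smtp
  config.action_mailer.smtp_settings = { address: '127.0.0.1', port: 1025 }
end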

Even though this is a Ruby gem, you can use the app with different languages–as long as you can configure the application to point to an SMTP server, you’re good (in the readme, there are examples for Django and PHP).

One note about it being a gem: don’t put it in your Gemfile if you are building a Rails app, because of possible dependency conflicts. This means that if you manage your Ruby environments via rvm, you’ll need to re-install MailCatcher every time you change your Ruby version.

Bonus: MailCatcher even has an API, so you can use it in your integration test environment to verify that certain actions in your application caused certain emails to be sent.
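
As a sketch of what that might look like (assuming the /messages endpoint documented in the MailCatcher readme, and a hypothetical welcome email):

require 'net/http'
require 'json'

# Fetch all caught messages from MailCatcher's REST API and check that our
# hypothetical welcome email went out.
messages = JSON.parse(Net::HTTP.get(URI('http://127.0.0.1:1080/messages')))
welcome = messages.find { |m| m['subject'] == 'Welcome!' }
raise 'welcome email was not sent' unless welcome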


Serverless Framework

I had coffee with an acquaintance who is doing a lot of event-driven data processing. Whereas ten years ago you might have tackled this problem with an ETL tool like Pentaho or Talend, his process now runs entirely on AWS Lambda functions. He is leveraging the Serverless framework to manage and deploy these applications. As I understand it, there is a thin shim layer between the business logic and the Lambda event handler, but the business logic is isolated and knows nothing about its environment. That makes the business logic very testable.

His description of the Serverless framework intrigued me. As he described it, the framework is driven by a simple yaml file and takes care of, among other tasks, the complicated infrastructure setup needed to tie Lambda functions to a variety of AWS events. I haven’t done it myself, but I’ve heard that setting up a Lambda-to-API-Gateway link is a real bear. Doing so allows a Lambda function to respond to web requests without any AWS authentication, and is a key use case.
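
I haven’t verified this against a real deployment, but based on the docs, a minimal serverless.yml wiring a Lambda function to an API Gateway endpoint looks something like this (the service and handler names are hypothetical):

# serverless.yml
service: hello-api

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    handler: handler.hello   # module and exported function for the Lambda handler
    events:
      - http:                # Serverless creates the API Gateway wiring for this
          path: hello
          method: get

Running serverless deploy then stands up the function, the API Gateway resources, and the permissions tying them together.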

You can write and deploy Lambda functions in any language that AWS Lambda supports (unfortunately, not Java 9 at the moment). Here’s a Java/Maven/Serverless tutorial. The framework also supports multiple cloud providers, though I haven’t done much beyond noting that the documentation exists.

However, using Serverless does require writing code. If you are evaluating a complicated ETL process that non-developers need to be able to understand and support, Serverless would not be a good fit. I’m not aware of any abstraction layers on top of it, though I suppose you could run, for example, Pentaho Kettle jobs within a Lambda function. There’s also an issue around cold start times–when your code hasn’t been invoked for a while, it can take longer to start up when a request or event occurs. Apparently there are partial solutions, but your lambdas still get cycled every few hours regardless.

I worked through some of the tutorials and was impressed at just how easy it was to get started. If I had a simple API or data processing pipeline to build, Serverless would definitely be on my short list of possible implementation options. It is very inexpensive, scales easily and encourages encapsulation.

Incidentally, my acquaintance’s company is hosting a lunch and learn on this technology at the end of the month. More details here.


The power of automated testing

It took me a long time to understand the power of automated testing.  After all, it can end up being a large portion of your codebase and can be brittle.  Sometimes it feels like writing tests “gets in the way” of getting things done.  At one project I worked on, a colleague complained that it felt like you spent 5 minutes changing the production code and an hour changing the tests.  (And to be fair, sometimes that’s true, and there’s a balance to be struck between test code coverage and speed of development.  This can also indicate you need to spend time refactoring your tests, as you have multiple different test components testing the same production code.)

I like to think of tests like a gentle swaddling of your code.  It conforms as the body of your code changes, but changing that code does require some re-work of the tests.  And, if your code fails, it fails into the gentle swaddling, as opposed to the cruel outside world (bleeding all over your production users).  Alright, maybe the analogy fails :).

I write this today because I’m in the middle of a refactor of one of the scariest bits of The Food Corridor.  (Given we’re so young, it’s not that scary, but it’s quite complex–handling the creation and updating of bookings.)  There are many many paths through the code and if I didn’t have automated testing, I’d be far more worried about the changes I’m making.

So, consider this blog post to be a thank you to past me for making future me’s life easier by writing a comprehensive automated test suite.  If you don’t have one, you should.


Leverage

As a software developer, and especially as a senior (expensive) software developer, you need leverage. Leverage makes you more productive.  I find it also makes the job more fun.

Some forms of leverage:

  • test suite. A suite provides leverage both by serving as a living form of documentation (allowing others to understand the code) and as a regression suite, so that changes to underlying code can be made with assurance that external behavior doesn’t change unintentionally.
  • libraries and frameworks, like Rails. By solving common problems, libraries, open source or not, can accelerate the building of your product. Depending on the maturity of the library or framework, they may cover edge cases that you would have to discover via user feedback.
  • IaaS solutions, like AWS EC2. By giving you IT infrastructure that you can manipulate via software, they let you apply software engineering techniques to ensure the validity of your infrastructure and make deployments replicable.
  • PaaS solutions, like Heroku. These may force your application to conform to certain limitations, but they take a whole host of operations tasks off your plate (deployments, patching servers). When a new bug comes out affecting Nginx, you don’t have to spend time checking your servers–your provider does. When you reach a certain scale, those limitations may come back to bite you, but in the early days of a project or company, having those tasks off your plate allows you to focus on business logic.
  • SaaS solutions, like Google Apps or Delighted. You can have an entire business solution available for a monthly fee. These can be large in scope, like Google Apps, or small in scope, like Delighted, but either way they solve an entire business problem. You can trade money for time.
  • experience, aka the mistakes you’ve made on someone else’s dime. Experience gives you leverage by pruning the universe of possibilities for solving problems, based on what’s worked in the past. You don’t spend time doing exploration or spikes. Note that experience may guide you toward or away from any of the points of leverage mentioned above. And experience needs to be tempered with learning, as the software world changes.
  • team. There’s only so much software you can write yourself. A team can help, both in terms of executing against a software design/architecture and improving it via their own experience.

Leverage allows you to be more productive, and the more experienced you get, the more you should seek it.


Testing Dossier Reports in Rails

One of the things I love about developing with Rails is the vast array of open source, free components that you can drop in to extend your application.  Want an invoicing system?  A way to run your javascript tests?  A simple admin portal?  Just drop in a gem, run bundle install, and you are good to go.

One of the gems I’ve used recently was dossier, which lets you write reports in SQL or ActiveRecord, and then generate them in HTML or CSV (or JSON, but I didn’t use that). One tip–if you want your CSV results to have the same formatting as your HTML results, you’ll want to follow the steps in this issue.

I wrote up a couple of SQL reports, linked them into the appropriate admin pages, and called it good. Then, the app moved on, and at one point, the schema changed. (Some of you are shaking your head, knowing what is going to happen next.) Then, the reports failed.

I had forgotten the cardinal rule–write the tests first. I confess I wasn’t sure how to test these reports, but a bit of research revealed that it wasn’t that hard. Here’s one of my spec files:

require 'spec_helper'

describe MonthlyHoursClientReport do

  # all this does is test that the SQL is valid
  it "sql valid" do
    report = MonthlyHoursClientReport.new
    sql = report.sql
    sql = replace_placeholders(sql)
    expect{ActiveRecord::Base.connection.execute(sql)}.to_not raise_error
  end

  def replace_placeholders(sql)
    sql = sql.gsub(":kitchen_id",1.to_s)
  end
end

This just gets the SQL from the dossier report and tries to execute it against the test database. Super simple, but enough to catch the error I encountered. If/when I get more time, I could definitely add tests that load data into the test database to make sure the SQL gives correct results, but I tend to be pretty confident in my SQL queries, especially when they don’t have any group by or having clauses.
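
Such a data-driven test might look something like this sketch (the factories and the hours column are hypothetical):

  it "returns the booked hours for a kitchen" do
    kitchen = FactoryGirl.create(:kitchen)                  # hypothetical factory
    FactoryGirl.create(:booking, kitchen: kitchen, hours: 3)

    # Substitute the real kitchen id for the report's placeholder.
    sql = MonthlyHoursClientReport.new.sql.gsub(":kitchen_id", kitchen.id.to_s)
    rows = ActiveRecord::Base.connection.select_all(sql).to_a

    expect(rows.first["hours"]).to eq(3)                    # hypothetical column name
  end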

Anyway, happy testing.


Perils of ORM caching

So, I was working on a Rails 4 project today and added an after_create method to a model (call it model A) that checked on a related object (call it model B) to see its state and, if it met certain criteria, did something to the model A object being created. The specifics don’t really matter, but I was using zeus to run my RSpec tests.

This caused three tests to fail in entirely unrelated sections of the application.

What on earth was going on?

Well, first I used git bisect to determine the exact commit that caused the issue.  (As far as I’m concerned, the existence of git bisect confirms my belief in ‘commit early, commit often’.)

Then I dug in.  It appears that each of the tests was tweaking the model B object, and testing some aspect of the change, usually through the model A object.  Before I added the after_create method, the model B object wasn’t loaded into ActiveRecord’s in-memory object graph tied to the model A object when the test initially saved the model A object; it was loaded from the database when the method under test executed.

After the after_create method was added, the model B object was loaded into the in-memory object graph tied to the model A object.  Then the test tweaked the model B object in the database, but didn’t reload the model A object, which still held a stale version of the model B object.

A simple reload of model A (and its object graph) fixed it, as did repositioning when I modified the model B object, but it was quite a subtle testing bug to track down.
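
For future me, here’s a self-contained sketch of the failure mode (the model names are hypothetical; it uses an in-memory SQLite database and requires the activerecord and sqlite3 gems):

require 'active_record'

ActiveRecord::Base.establish_connection(adapter: 'sqlite3', database: ':memory:')

ActiveRecord::Schema.define do
  create_table :model_bs do |t|
    t.string :state
  end
  create_table :model_as do |t|
    t.integer :model_b_id
  end
end

class ModelB < ActiveRecord::Base; end

class ModelA < ActiveRecord::Base
  belongs_to :model_b
  # Touching the association here pulls model B into A's in-memory graph.
  after_create { model_b }
end

b = ModelB.create!(state: 'pending')
a = ModelA.create!(model_b: b)

# Change the row in the database, bypassing the cached copy held by `a`.
ModelB.find(b.id).update!(state: 'approved')

puts a.model_b.state        # => "pending" -- the stale in-memory copy
puts a.reload.model_b.state # => "approved" -- reload clears the cached graph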


Python Minesweeper Programming Problem

At an interview, I was asked to build a python program, using TDD, which would output the results of a minesweeper game.  Not a fully functional game, just a small program that would take an array of bomb locations and print out a map of the board with all values exposed.

So, if you have a 3×3 board, and there is a bomb in each corner, it would print out something like this:

x 2 x
2 4 2
x 2 x

Or if you have a 3×3 board and there is only a bomb in the upper left corner, it would print out something like this:

x 1 0
1 1 0
0 0 0

I did not complete the task in the allotted time, but it was a fun programming exercise and I hope illuminating to the interviewers. I actually took it home and finished it up. Here’s the full text of the program:

class Board:
    def __init__(self, size=1, bomblocations=None):
        self._size = size
        # avoid a mutable default argument
        self._bomblocations = bomblocations or []

    def size(self):
        return self._size

    def _bombLocation(self, location):
        for onebomblocation in self._bomblocations:
            if onebomblocation == location:
                return 'x'

    def _isOnBoard(self, location):
        if location[0] >= self._size:
            return False
        if location[1] >= self._size:
            return False
        return True

    def whatsAt(self, location):
        if self._bombLocation(location) == 'x':
            return 'x'
        if not self._isOnBoard(location):
            return None
        return self._numberOfBombsAdjacent(location)

    def _numberOfBombsAdjacent(self, location):
        bombcount = 0
        # change x, then y
        currx = location[0]
        curry = location[1]
        for xincrement in [-1, 0, 1]:
            xtotest = currx + xincrement
            for yincrement in [-1, 0, 1]:
                ytotest = curry + yincrement
                #print 'testing: ' + str(xtotest) + ', ' + str(ytotest) + ', ' + str(bombcount)
                if not self._isOnBoard([xtotest, ytotest]):
                    continue
                if self._bombLocation([xtotest, ytotest]) == 'x':
                    bombcount += 1
        return bombcount

    def printBoard(self):
        x = 0
        while x < self._size:
            y = 0
            while y < self._size:
                print self.whatsAt([x, y]),
                y += 1
            x += 1
            print


def main():
    board = Board(15, [[0, 1], [1, 2], [2, 4], [2, 5], [3, 5], [5, 5]])
    board.printBoard()


if __name__ == "__main__":
    main()

And the tests:

import unittest
import app

class TestApp(unittest.TestCase):

    def setUp(self):
        pass

    def test_board_creation(self):
        newboard = app.Board()
        self.assertIsNotNone(newboard)

    def test_default_board_size(self):
        newboard = app.Board()
        self.assertEqual(1, newboard.size())

    def test_constructor_board_size(self):
        newboard = app.Board(3)
        self.assertEqual(3, newboard.size())

    def test_board_with_bomb(self):
        newboard = app.Board(3, [[0, 0]])
        self.assertEqual('x', newboard.whatsAt([0, 0]))

    def test_board_with_n_bombs(self):
        newboard = app.Board(4, [[0, 0], [3, 3]])
        self.assertEqual('x', newboard.whatsAt([0, 0]))
        self.assertEqual('x', newboard.whatsAt([3, 3]))

    def test_board_with_bomb_check_other_spaces_separated_bombs(self):
        newboard = app.Board(4, [[0, 0], [3, 3]])
        self.assertEqual(1, newboard.whatsAt([0, 1]))
        self.assertEqual(1, newboard.whatsAt([1, 0]))
        self.assertEqual(1, newboard.whatsAt([1, 1]))
        self.assertEqual(0, newboard.whatsAt([1, 2]))
        self.assertEqual(0, newboard.whatsAt([2, 1]))
        self.assertEqual(1, newboard.whatsAt([3, 2]))
        self.assertEqual(1, newboard.whatsAt([2, 3]))
        self.assertEqual(1, newboard.whatsAt([2, 2]))

    def test_check_other_spaces_contiguous_bombs(self):
        newboard = app.Board(4, [[0, 1], [0, 0]])
        self.assertEqual(1, newboard.whatsAt([0, 2]))
        self.assertEqual(2, newboard.whatsAt([1, 0]))
        self.assertEqual(0, newboard.whatsAt([2, 1]))

    def test_off_the_board(self):
        newboard = app.Board(3, [[0, 0], [1, 2]])
        self.assertEqual(None, newboard.whatsAt([3, 3]))
This was written in Python 2.7, and it reminded me of the pleasure of small, from-the-ground-up software (as opposed to gluing together libraries to achieve business objectives, which is what I do a lot of nowadays).


