
Monthly Archives: October 2015

The Deployment Age

If you haven’t read The Deployment Age (and its follow-on post), you should go read it right now.

The premise is that the Internet and the PC are entering the deployment phase of a technology super cycle: the technology will become far more integrated and invisible, and the chief means of financing will be internal company resources. The focus will be on existing markets rather than creating new ones, and on refinement rather than innovation.

If you work in technology and are interested in the big picture, it is worth a read:

Some things we’ve learned over the past 30 years–that novelty is more important than quality; that if you’re not disrupting yourself someone else will disrupt you; that entering new markets is more important than expanding existing markets; that technology has to be evangelized, not asked for by your customers–may no longer be true. Almost every company will continue to be managed as if these things were true, probably right up until they manage themselves out of business. There’s an old saying that generals are always fighting the last war, it’s not just generals, it’s everyone’s natural inclination.

Go read it: The Deployment Age.

Launching

Yesterday, I launched a partial rewrite of a long-running side project connecting people to farm shares. Over the past few months I’d fixed a few pressing bugs and overhauled the software so that a site and data model that previously supported only cities and zip codes as geographic features now supported states as well.

For the past week I’d been hemming and hawing about when to release–fixing “just one more thing” or tweaking one last bit.

But yesterday I finally bit the bullet and released. Of course, there were a couple of issues I hadn’t addressed that only showed up in production. Once I’d squashed those bugs, I completed a few more tasks: moving over SEO (as best I could) and social accounts, making sure that key features and content hadn’t been thrown overboard, and checking the logs to make sure users were finding what they needed.

Why was I so reluctant to release? It’s a big step, shipping something, but in some ways it’s the only step that matters. I was afraid that I’d forget something, screw things up, make a mistake. And of course, I didn’t want to take a site that helped a couple thousand people a month find local food options that work for them and ruin it. I wasn’t sure I’d have time to support the new site. I wanted the new site to be as close to feature parity with the old one as possible. I worked on the old site for five years, and a lot of features crept in (as well as external dependencies and corners of the site that Google and users had found but I had forgotten).

All good reasons to think before I leapt.

But you can only plan for so much.  At some point you just have to ship.

Heroku drains

So, I’ve learned a lot more than I wanted to about heroku drains. These are sinks to which heroku applications can write their logs. Once the logs are out of heroku, you analyze them just as you would for any application living outside of a PaaS. Logs are very useful for seeing long-term trends, debugging, etc. (I’ve worked on both a Rails 3 app and a Java Spring/Camel app that deploy to heroku.)
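
For reference, drains are attached and removed with the heroku toolbelt. A minimal sketch (the hostnames, credentials, and app name below are placeholders, not anything from a real setup):

    heroku drains:add syslog://logs.example.com:514 --app my-app                 # syslog drain
    heroku drains:add https://user:secret@logs.example.com/drain --app my-app    # https drain
    heroku drains --app my-app                                                   # list attached drains
    heroku drains:remove syslog://logs.example.com:514 --app my-app              # detach a drain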

Here are some things I’ve learned:

  • Heroku drains are well documented.
  • You definitely want them for any production application, because only 1500 lines of heroku logs are retained at any one time.
  • They can go to either syslog (great for applications with a lot of other infrastructure) or https (great for applications without as much infrastructure support).
  • They can’t do any kind of authorization.
  • You can’t know what IP address the logs are coming from, so you can’t limit access by IP.
  • There are third-party add-ons you can pay for to avoid dealing with drains at all (I’ve heard good things about Papertrail.)
  • You can use logstash to pull heroku logs from a syslog drain into Elasticsearch.
  • There are numerous GitHub projects that can drain to databases, etc. There’s even one that, with echoes of Ouroboros, drains to another heroku app.
  • Drains have intelligent behavior if your listener (or listeners) fails. From heroku support: “The short answer is yes, the drain will drop logs when the sink is not responsive, but this isn’t really the full story. There are a number of undocumented limits and backoff retries that happen when a drain connection is lost.” And then they go on to explain how the backoff behavior happens. I’m not going to cut and paste their entire answer because I assume it is undocumented for a reason (maybe it changes, maybe they don’t want to commit to supporting this behavior). Ask them yourself 🙂
  • A simple drain can be as easy as <?php error_log(file_get_contents('php://input'), 3, "/var/log/logfile.log"); ?> (a slightly fuller sketch follows this list), but make sure you rotate that log file.
  • If you are bringing servers up and down, you can use puppet (along with the heroku toolbelt and CLI authentication) to manage drains.
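
To make the authorization and one-liner bullets a bit more concrete, here is a slightly fuller sketch of an https drain endpoint. It is only a sketch: the token check, the DRAIN_TOKEN environment variable, and the log file path are my own conventions, not anything heroku prescribes. Because drains can’t authenticate and you can’t filter by IP, a common workaround is to embed a shared secret in the drain URL and verify it on every request:

    <?php
    // drain.php: receives POSTs from a heroku https drain.
    // Add the drain with the secret embedded in the URL, e.g.
    //   heroku drains:add "https://example.com/drain.php?token=SECRET" --app my-app

    // Reject requests that don't carry the shared secret.
    $expected = getenv('DRAIN_TOKEN') ?: 'change-me';
    if (!isset($_GET['token']) || !hash_equals($expected, (string) $_GET['token'])) {
        http_response_code(403);
        exit;
    }

    // Heroku posts batches of syslog-framed log lines; append the raw body to a file.
    // (Same idea as the one-liner above, and it still needs log rotation.)
    error_log(file_get_contents('php://input'), 3, '/var/log/heroku-drain.log');

    http_response_code(200);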

If you are deploying anything beyond a toy app on heroku, don’t forget the ops folks and make sure you set up your drain!