Thoughtworks Radar

Last night at the Boulder Ruby Meetup, Marty from Haught Codeworks walked through the Thoughtworks Technology Radar.  This is an informative way to keep on top of the latest technology trends without spending a lot of time researching and reading.  Sure, you could get the same information from reading Hacker News regularly, but the Radar is less haphazard.

The Radar is published every six months or so, and pulls from experts inside Thoughtworks (a leading software consultancy). You can view techniques, languages and frameworks, tools, and platforms.  Each area has a number of technologies and the proposed level of adoption (hold, assess, trial, adopt).  Within each radar you can dive in and learn more about the technology, like, say, weex. You can also see the movement of a technology through the proposed levels of adoption over time.

You can also search for a given technology, if you are interested in seeing what the status is.  Sometimes technologies are archived, like Elm.

Note that this is not perfect.  The lag is real, but may help you avoid chasing shiny objects.  The Radar is also inherently biased because of the methodology, but given Thoughtworks’ size, scope and leadership, it’s probably one of the best technology summaries.  It’s certainly the best one I’ve run across.

Do you have a first, best, affiliated customer?

I find Ben Thompson’s writing engaging.  In one of his best posts, he discusses how AWS has a first, best customer: Amazon.  Every service that AWS builds has an automatic flagship customer that will stress test the application.

The cost to build AWS was justified because the first and best customer is Amazon’s e-commerce business

I really enjoyed that and have been looking around other places where you can see this same pattern.  8z Real Estate, a brokerage for which I used to work, has a social media offering, zavvie, which I believe all 8z brokers are using.  One of the kitchens that use The Food Corridor has an affiliated restaurant that uses a large amount of kitchen time. Would love to hear of other examples.

All of these companies have been able to make substantial investments in infrastructure knowing that, if nothing else, the benefits will accrue to a business with the same owners.  But at the same time, the subsidiary can benefit from learning from other businesses, though the benefits flow both ways.  If other real estate brokerages use zavvie in different ways, zavvie can incorporate those lessons into the software and then 8z brokers will benefit.  If that restaurant needs a special piece of equipment, the kitchen can buy it, and all other clients of the kitchen will have access to it.

Do competitors have concerns about helping the parent business, even in a roundabout manner?  Sure.  Here’s an article about how startups should be wary of being co-opted by AWS.  The brokerages that are using zavvie have to be aware of the parent company (or, perhaps to be precise, that the owners of zavvie and the owners of 8z are the same).  However, if the subsidiary company can offer a service that is unique and/or very price competitive, that may cause competitors of the parent company to ignore their concerns, at least for the time being.

Once you get to a certain size of business, building an affiliated business which can serve both the original business and other competitors can be a great business model.  You can gain the benefits of marketplace intelligence along with providing the subsidiary business enough runway to survive.

Panel on Net Neutrality

I’ll be participating in a livestreamed panel at 7pm mountain tonight, hosted by Representative Jared Polis.  We’ll be discussing FCC net neutrality actions.  (They’re rolling back ‘common carrier’ status from ISPs as of today.)  (Update 12/12, apparently the reclassification is happening this week, not today.)

Please feel free to join!

Debugging options for AWS Lambda functions

AWS Lambda lets you write a ‘function as a service’, and run code from 100ms to 5 minutes in execution time without maintaining any servers.  This code has few limitations, but one of the issues I’ve always encountered is debugging lambda functions.

I mentioned this problem at a past meetup and heard about a number of possible solutions.

I think the right debugging option depends on the complexity of your code and the urgency of the situation, but could see using all of these at different times.
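
One approach that is always available: anything a Lambda function writes to standard output or standard error ends up in the function’s CloudWatch Logs log group.  Here is a minimal sketch of that style of debugging, assuming Lambda’s Ruby runtime; the handler name and event shape are placeholders, not anything specific.

    require 'json'

    # Minimal handler illustrating log-based debugging: anything written to
    # stdout/stderr is captured in the function's CloudWatch Logs log group.
    def handler(event:, context:)
      # Log the incoming event so you can replay it locally later.
      puts({ msg: 'received event', event: event }.to_json)

      result = { status: 'ok', request_id: context.aws_request_id }

      puts({ msg: 'returning result', result: result }.to_json)
      result
    rescue StandardError => e
      # Log the full backtrace before re-raising so the failure shows up in
      # CloudWatch Logs, not just as a generic Lambda error.
      puts({ msg: 'handler failed', error: e.message, backtrace: e.backtrace }.to_json)
      raise
    end

Emitting structured (JSON) log lines like this also makes the output easier to search later with CloudWatch Logs Insights.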

Bonus: here’s a post on how to continuously deploy your lambda functions.

Supporting your local newspaper

We are subscribers to the local paper (the Daily Camera).  It’s not perfect (and there are of course online subscription options), but the experience of reading a newspaper is different than the experience of reading online.  They both have strengths.

Online:

  • read from anywhere: anywhere that has an internet connection, of course
  • searchability: you can search for a particular word within an article or a specific article within an edition
  • shareability: super simple to pass on an article that you think a friend would enjoy via email, text, etc

Offline/paper:

  • discoverability: often you’ll discover stories that you didn’t know you needed to read
  • focus: when you’re reading the paper, you aren’t distracted by any notifications or other applications
  • browseability: you can easily switch between sections and discover related articles
  • shareability: when you read an article you like, you can point it out to a family member and they can immediately read it if they like

There’s also a comforting ritual about it–wandering out to the driveway every morning and enjoying the early morning across the seasons, younger family members seeking out the funnies, folding the newspaper and reading it on the couch.

Supporting local news organizations is much like supporting local shops or local farms. In all of these cases, you’re trading efficiency for resiliency (the centralized solution is in general more efficient, but concentration means more chokepoints).  If you don’t support local options, eventually they won’t be available. If that’s OK with you, that’s fine, but it’s not OK with me.

AWS Advent Calendar

There’s an AWS advent calendar, where new articles will be posted about various aspects of AWS starting Dec 25. If you’re interested in writing or reviewing the articles, feel free to sign up.  There are also some great posts from 2016, covering topics such as how to analyze VPC flow logs, cost control, lambda and building AMIs with packer.

There are also articles from years past.  I haven’t examined them closely, but I’d be wary of them, simply because the AWS landscape is changing so rapidly, and an article from three or four years ago may or may not be applicable.

Rails Gems: Ahoy

Sometimes you want more analytical detail than Google Analytics or Heap or other analytics offerings allow.  If you have an internal datastore that you want to match visits up with, you can either pull the tracking data from the web analytics tool, push your data up to the tool, or find some other way to get the web analytics data into your internal datastore.

If you choose the third option, ahoy is the rails gem for you.  It’s a simple install and will track visits and visitors (both signed in and anonymous), user agents, time of visit, and more.  You can then use it to correlate with internal goals.  For instance, if you have a funnel: ‘adwords -> visit -> signup -> create profile -> join group -> participate in group’, you may want to track each step of the funnel.  You may want to know how many groups each user who arrived via an adwords click ends up joining.  There may be aspects of your application that are not accessible via the web that you want to correlate with external indicators (‘how often does billing fail for people who use safari’ is a (farfetched) example). Answering these questions may be easier with SQL than with an external tool.
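
To make that concrete, here is a rough sketch of the kind of query this enables once ahoy’s visit data sits next to your own tables.  It assumes ahoy 2.x model names (Ahoy::Visit), and the GroupMembership and BillingFailure models are hypothetical stand-ins for your internal data.

    # How many groups did each user who arrived via an adwords-tagged visit join?
    adwords_user_ids = Ahoy::Visit.where(utm_source: 'adwords')
                                  .where.not(user_id: nil)
                                  .select(:user_id)

    groups_joined_per_user = GroupMembership
      .where(user_id: adwords_user_ids)
      .group(:user_id)
      .count   # => { user_id => number of groups joined }

    # The (farfetched) billing example: which browsers do users with failed billing use?
    failed_billing_browsers = Ahoy::Visit
      .where(user_id: BillingFailure.select(:user_id))
      .group(:browser)
      .count   # => { "Safari" => 12, "Chrome" => 3, ... }

Because everything lives in one database, these are just group-by queries, which is exactly the “correlate with internal data” advantage described above.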

However, cohort analysis and other sophisticated statistical analysis may be harder with this data, and if you are looking at doing that you may want to make the investment in pushing data up to one of the other tools, or tagging your application such that all relevant goals are measured by the web tool.

Regardless, ahoy is simple to set up and if you’re looking to pull web tracking data into your datastore for additional insights, I highly recommend it.

Hiring: Apples and Oranges

I ran across this great article about hiring in my twitter feed.  (Incidentally, the author is seeking employment at the moment.)  Here’s the author’s take on what’s wrong with hiring in tech today:

Job postings focus on the current tooling and products rather than factor in future plans. Interviews focus on the immediate technical knowledge and abilities of candidates, rather than on the candidates’ capabilities for learning and adapting. Companies hire the wrong people for the wrong reasons, then are surprised to find they can’t scale or adapt to meet strategic goals.

It really resonated.  It crystallized what I’ve seen reading and writing job descriptions over the past 20 years, which is typically an overfocus on hard skills and an underfocus on softer skills.  From a higher level, it’s a focus on immediate needs vs long term compatibility (perhaps the emphasis on ‘fit’ is a counter to that).  Job postings focus on current needs to the exclusion of adaptability because it’s a lot harder to measure adaptability.  After all, everyone is capable of change in some aspects of their lives, so how do you know if an employee can handle change in the particular manner you need?

The entire article is worth a read.  Next time you write a job requisition, think about whether you need someone to hit the ground running with a particular technology, or whether you can afford to hire someone who’ll have to either be trained up or transfer skills from similar but not identical concepts (from rails to django or vice versa, for example).  If the former, do they need to be an employee, and if so, what will happen to them when the exact need is done?  If the latter, you’ve vastly expanded your talent pool.

The wonders of outsourcing devops

I have maintained a Jenkins server (actually, it was Hudson, the precursor). I’ve run my own database server.  I’ve installed a bug tracking system, and even extended it. I’ve set up web servers (apache and nginx).

And I’ll tell you what, if I never have to do any of these again, I’ll be happy as a clam. There are so many tools out there that let you outsource your infrastructure.  Often they start out free and end up charging when you reach a certain point.

By outsourcing the infrastructure to a service provider, you let specialists focus on maintaining that infrastructure. They achieve scale that you’d be hard pressed to match. They hire experts that you won’t be able to hire. They respond to vulnerabilities like it is their job (which it is).

Using one of these services also lets you punch above your weight. If you want, with AWS or GCP you can run your application in multiple data centers around the globe. With heroku, you can scale out during busy times, and you can scale in during slow times. With circleci or github or many of the other devtool offerings, you can have your ci/cd/source repository environment continually improved, without any effort on your part (besides paying the credit card bill).  Specialization wins.

What is the downside? You lose control–the ability to fine tune your infrastructure in ways that the service provider may not have thought of.  You have to conform to their view of the world.  You also may, depending on the service provider, have performance impacted.

At a certain scale, you may need that control and that performance.  But, you may or may not reach that scale.

It can be frustrating to have to work around issues that, if you just had the appropriate level of access, you would be able to fix quickly.  It’s also frustrating having to come up to speed on the docs and environment that the service provider makes available.

That said, I try to remember all the other tasks that these services are taking off my plate, and the focus they allow on the unique differentiators of my business.

The power of automated testing

It took me a long time to understand the power of automated testing.  After all, it can end up being a large portion of your codebase and can be brittle.  Sometimes it feels like writing tests “gets in the way” of getting things done.  At one project I worked on, a colleague complained that it felt like you spent 5 minutes changing the production code and an hour changing the tests.  (And to be fair, sometimes that’s true, and there’s a balance to be struck between test code coverage and speed of development.  This can also indicate you need to spend time refactoring your tests, as you have multiple different test components testing the same production code.)

I like to think of tests like a gentle swaddling of your code.  It conforms as the body of your code changes, but changing that code does require some re-work of the tests.  And, if your code fails, it fails into the gentle swaddling, as opposed to the cruel outside world (bleeding all over your production users).  Alright, maybe the analogy fails :).

I write this today because I’m in the middle of a refactor of one of the scariest bits of The Food Corridor.  (Given we’re so young, it’s not that scary, but it’s quite complex–handling the creation and updating of bookings.)  There are many many paths through the code and if I didn’t have automated testing, I’d be far more worried about the changes I’m making.
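
For flavor, here is a sketch of the kind of spec that makes this sort of refactor less nerve-wracking: one example per path through the booking logic.  The BookingCreator class, its interface, and the FactoryBot factories are hypothetical stand-ins, not the actual Food Corridor code.

    require 'rails_helper'

    RSpec.describe BookingCreator do
      let(:kitchen) { create(:kitchen) }
      let(:client)  { create(:client) }

      it 'creates a booking for an open time slot' do
        result = BookingCreator.new(kitchen: kitchen, client: client,
                                    starts_at: 1.day.from_now, hours: 4).call
        expect(result).to be_success
        expect(kitchen.bookings.count).to eq(1)
      end

      it 'rejects a booking that overlaps an existing one' do
        create(:booking, kitchen: kitchen, starts_at: 1.day.from_now, hours: 4)
        result = BookingCreator.new(kitchen: kitchen, client: client,
                                    starts_at: 1.day.from_now, hours: 2).call
        expect(result).not_to be_success
        expect(kitchen.bookings.count).to eq(1)
      end
    end

With a suite like that covering each path, a refactor either keeps the bar green or tells you exactly which path you broke.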

So, consider this blog post to be a thank you to past me for making future me’s life easier by writing a comprehensive automated test suite.  If you don’t have one, you should.