AWS Questions: ASGs and Amazon Inspector

More questions from AWS course students.

  • EC2 instances in auto scaling groups have a warmup period that you can specify (so that the EC2 instance can be fully ready to take traffic directed to it).  I retold a story from another consultant about the warmup period for an ASG increasing over time (due to the increasing number of security patches applied against the base AMI), and one student asked: “Can you set an alarm on instances overrunning the warmup period?”
    • Since you can create custom metrics in CloudWatch and create alarms on those, you can definitely capture the warmup period.  All you’d need to do is, as the last step before an EC2 instance is fully configured, subtract the launch time (obtained via the API) from the current time.  Store that number as your ‘warmup’ metric and set an alarm if it ever gets close to your ASG health check grace period, and you’ll avoid ASG thrashing.  (A sketch of this appears after this list.)
  • “Can the minimum and maximum number of instances of an ASG be changed after initial configuration?”
    • Yes.  The minimum, maximum, and desired capacity of an auto scaling group can all be changed at any time after creation, via the console, CLI, or API.
  • “Can you point Amazon Inspector at non-AWS resources?  In your own data center, for example?”
    • Amazon Inspector is a security tool that looks for vulnerabilities in your EC2 instances.  It requires installing an agent on the instances it monitors, and thus doesn’t work outside of AWS.
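
To capture that warmup metric, here’s a minimal sketch in Ruby using the aws-sdk gem, run as the last step of instance configuration; the region, namespace, and metric name are placeholders I made up:

require 'aws-sdk'    # AWS SDK for Ruby
require 'net/http'

# Ask the EC2 instance metadata service for our own instance id
# (this endpoint only resolves from inside an EC2 instance).
instance_id = Net::HTTP.get(URI('http://169.254.169.254/latest/meta-data/instance-id'))

# Look up our launch time via the API.
ec2 = Aws::EC2::Client.new(region: 'us-east-1')
launch_time = ec2.describe_instances(instance_ids: [instance_id])
                 .reservations[0].instances[0].launch_time

# Warmup = current time minus launch time, in seconds.
warmup_seconds = Time.now.utc - launch_time

# Publish it as a custom metric; you can then alarm on it
# approaching your ASG health check grace period.
cloudwatch = Aws::CloudWatch::Client.new(region: 'us-east-1')
cloudwatch.put_metric_data(
  namespace: 'Custom/ASG',            # hypothetical namespace
  metric_data: [{
    metric_name: 'WarmupSeconds',     # hypothetical metric name
    dimensions: [{ name: 'InstanceId', value: instance_id }],
    value: warmup_seconds,
    unit: 'Seconds'
  }]
)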

AWS Questions: Cloudfront and SQS

I have recently started a contract teaching AWS courses. (None of the following posts speak for my client.) AWS stands for Amazon Web Services.

During every course I teach, I get questions that aren’t directly covered in the course material and that I don’t know the answers to.  I’m going to try to capture some of the questions asked by my students and post the answers.

  • Does SQS have transactional messages akin to JMS?
    • No.  JMS has the idea of transactions over messages, so you can be sure that all or none of the messages were processed.  SQS has no such construct–each message is independent.  If I were going to have multiple units of work done, I’d use one message, perhaps pointing to data in separate datastores if the payload was too big for SQS.  (A sketch of this one-message approach follows this list.)
  • Can you push content to the AWS CDN, Cloudfront, ahead of user requests?
    • No, the content always has to be pulled by a requester.  You can of course configure a crawler to pull the data from the origins through Cloudfront (which will then cache it).
  • Can you configure Cloudfront to pull from origins over SSL/TLS?
    • Yes.  Each origin has an origin protocol policy; set it to “HTTPS Only” (or “Match Viewer”) and Cloudfront will use SSL/TLS when fetching from that origin.
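
On the SQS question, here’s a minimal sketch (Ruby, aws-sdk gem) of bundling multiple units of work into one message; the queue URL and payload fields are invented for illustration:

require 'aws-sdk'
require 'json'

# Bundle related units of work into a single message so a consumer
# processes (or retries) them together.  Anything too big for SQS
# lives in a datastore; the message just carries a pointer.
body = JSON.generate(
  order_id: 1234,                          # hypothetical work item
  steps: ['charge_card', 'send_receipt'],
  payload_s3_key: 'work/1234.json'         # pointer, not the data itself
)

sqs = Aws::SQS::Client.new(region: 'us-east-1')
sqs.send_message(
  queue_url: 'https://sqs.us-east-1.amazonaws.com/123456789012/work-queue',
  message_body: body
)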


Restoring a single table from an Amazon RDS backup

When you use SQL, how do you write delete statements at the database prompt?

A delete statement typically looks like this: delete from table_name where column_name = 'foo';. I usually write it in this order:

  1. delete
  2. delete where column_name = 'foo';
  3. delete from table_name where column_name = 'foo';

Even though this is a pain because you have to move back and forth (I really need to look into vi keybindings for mysql), it prevents you from sending this command by accident: delete from table_name; which deletes all the data in your table.  (Another alternative is to never use the interactive client: always write out your delete statements in a file and run that file to delete data.)

But, recently, I did exactly that, because I forgot.  I deleted all the data from one table in our production database.  It was billing data, so rather important.  Luckily, I am using Amazon RDS and had set up backup retention.

I wanted to outline what I did to recover from this.

  • I took a deep breath.
  • I wrote a message on the slack channel documenting what had happened and the possible customer impact.
  • Depending on which data is removed, it’s possible you will want to put the application in maintenance mode and/or inform your customers of the issues.  What I deleted was used rarely enough that I didn’t have to take these steps.
  • I looked at how to restore an Amazon RDS backup.
  • I restored the missing data.
  • I communicated that things were back to normal to internal stakeholders.

Unfortunately, it wasn’t clear how to restore a single table.  I’m used to being able to download a .sql file and hand edit it, but that’s not an option.  Stack Overflow wasn’t super helpful.  But if there’s any time you want clarity, it’s when you are restoring production data.  You don’t want to compound the problem by screwing up something else.

So, here’s how to restore a single table from an Amazon RDS backup:

  • Note the time just before you deleted the data.  (Another reason the slack message is nice.  chatops ftw.)
  • Start up another instance from that moment.  I named it something obvious like ‘has-data-from-tablename’.  (A scripted version of this step appears after this list.)
  • Twiddle your thumbs anxiously while the new instance starts up.
  • The instance is put into your default security group (as of this writing) which probably doesn’t allow mysql access.  Make sure you modify this security group to allow access.
  • When the instance is up, do a dump of the table you need: mysqldump -t --ssl-ca=./amazon-rds-ca-cert.pem -u user -ppassword -h has-data-from-tablename.c1m7x25w24qor.us-east-1.rds.amazonaws.com -P3306 database_name tablename > restore-table_name.sql; (-t omits the create database/table statements.)
  • If your table has had writes since you deleted everything, you may need to manually pull down the current data from the production system and merge it into restore-table_name.sql; I was able to avoid this step.
  • Load the data using mysql: mysql --ssl-ca=./amazon-rds-ca-cert.pem -u user -ppassword -h production.c1m7x25w24qor.us-east-1.rds.amazonaws.com -P3306 database_name < restore-table_name.sql
  • Review to make sure the data is correct.
  • Test the application.
  • Update the slack channel, and do any other notifications you need to (customers, internal contacts, etc).
  • Revoke the default security group access you allowed above.
  • Delete the ‘has-data-from-tablename’ instance.
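
If you’d rather script the “start up another instance” step than use the console, here’s a sketch with the aws-sdk Ruby gem; the instance names, region, and timestamp are placeholders:

require 'aws-sdk'

rds = Aws::RDS::Client.new(region: 'us-east-1')

# Create a brand new instance from just before the bad delete;
# production is untouched and the new instance gets its own endpoint.
rds.restore_db_instance_to_point_in_time(
  source_db_instance_identifier: 'production',
  target_db_instance_identifier: 'has-data-from-tablename',
  restore_time: Time.utc(2016, 9, 1, 14, 30, 0)  # the moment you noted in slack
)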

Note this only works if you caught your mistake within the backup retention window.  (Make sure you set that up.)  We aren’t multi-AZ or clustered, so I’m not sure how that would affect things.

Happy deep breathing!


Bare minimum of ops tasks for heroku

Awesome, you are a CTO or founding engineer of a newborn startup.  You have a web app up on Heroku and someone is paying you money for it!  Nice job.

Now, you need to think about supporting it.  Heroku makes things way easier (no racking and stacking, no purchasing hardware, no configuring apache) but you still need to set up some operations.

Here is the bare minimum you need to do to make sure you can sleep at night.  (Based on a couple of years of heroku projects, and being really really cheap.)

  • Have a staging environment
    • You don’t want to push code direct to prod, do you?
    • This can be a free dyno, depending on the complexity of your app.
    • Pipelines are nice, as is preboot.
    • Cost: free
  • Have a one line deploy.
    • Or, if you like CI/CD, an automatic deploy or a one-click deploy.  But make it really easy to deploy.
    • Have a deploy script that goes straight to production for emergencies.  (A sketch of such a script appears after this list.)
    • Cost: free
  •  Backups
    • User data.  If you aren’t using a shared object store like S3, make sure you are doing a backup.
    • Database.  Both Heroku Postgres and Amazon RDS have point-and-click solutions.  All you have to do is set them up.  (Test them, at least once.)
    • Cost: freeish, depending on the solution.  But, user data is worth spending money on.
  • Alerting
    • Heroku has options if you are running professional dynos.
    • Uptimerobot is a great free third party service that will check ports every 5 minutes and has a variety of alert options.  If you want SMS, you have to pay for it, but it’s not outrageous.
    • Cost: free
  • Logging
    • Use a logging framework (like slf4j or the Rails logger), and mark error conditions with a string that will be easy to search for.
    • Yes, you can use heroku logs, but having a log management solution like Papertrail will make you much happier.  Plus, it’s free for 2 days of logfiles.
    • Set up alerts with Papertrail as well.  These can be more granular than port checks.
    • Cost: free
  • Create a list of third party dependencies.
    • Sign up for status alerts from these.  If you have pro slack, you can have them push an email to a channel.  If you don’t, create an alias that receives them.  You want to be the person that tells your clients about outages, not the other way around.
    • Cost: free
  • Communication
    • Internal
      • A devops_alert Slack channel is my preferred solution.  All deploys and other alerts go there.
    • External
      • create a mailing list for your clients so you can inform them of issues easily.  Google Groups is fine.  Don’t use an alias in your email–you’ll forget to add new clients.
      • do not use this mailing list for marketing purposes.
      • do make sure you keep this list up to date as you gain or lose clients.
    • Run through a disaster in your mind and make notes on how you would communicate the issue, both internally and externally.  How often do you update your team?  How often do you update your clients?  What about an internal issue (some of your code screwed up) vs an external issue?  This doesn’t need to be exhaustive, but thinking about it ahead of time and making some notes will help you in a crisis.
    • Cost: free
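
Here’s the hypothetical deploy script promised above; it assumes git remotes named ‘staging’ and ‘production’ pointing at your Heroku apps, and that your app runs Rails migrations:

#!/usr/bin/env ruby
# deploy.rb: deploys to staging unless production is explicitly requested.

target = ARGV[0] == 'production' ? 'production' : 'staging'

def run(cmd)
  puts cmd
  system(cmd) || abort("FAILED: #{cmd}")
end

run("git push #{target} master")
run("heroku run rake db:migrate --remote #{target}")
run("heroku restart --remote #{target}")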

All of this is probably a four hour project, max.

But once this is done, you’ll rest easier at night, knowing you have what you need to troubleshoot and recover from production issues.


Machine sympathy vs human constraint

I had beers with a work acquaintance recently. He’s a developer of a large system that helps with contact management. Talk turned, as it so often does in these situations, to the automation of development work. We both were of the opinion that it was far, far in the future. This was three whole decades of experience talking, right? And of course, we weren’t talking our book–ha ha. I’m sure that artisan weavers in the 1800s were positive that their bespoke designs and craftsmanship would mean full employment no matter what kind of looms were developed.

But seriously, we each had an independent reason for thinking that software development would not be fully automated anytime soon.

My reason:

It’s very hard to fully think through all the edge cases of development. This includes failure states, exceptional conditions, and just plain human idiosyncrasies. This is what every automated system must contend with, and there are two options: plan for every detail, or bump exceptional cases up to human beings to make judgments. The former requires a lot of planning and exercising of the system, while the latter slows the system down and introduces labor costs into the mix.

This system definition is hard to do and hard to automate. I’ve seen at least five new languages/IDEs/software platforms over the years that claimed to allow a normal human being to build such robust automatic systems, but they all seem to fail in the short term. I believe that is because normal human beings just don’t think through edge cases, but those edge cases are a key part of software.

His reason:

When systems reach a certain size, abstractions fail (I commented about this years and years ago). Different size, different failures. But just as an experienced car mechanic knows what kinds of system failures are likely under what conditions, experienced software engineers, especially those who understand first principles, have insight into these failures. This intuition (he called it “machine sympathy”) is something that can only be acquired by experience, and, by its very nature, can’t be automated. The systems are so complex and the layers so deep that every failure is likely to be unique in some manner.

So, which one is more likely to remain a relevant issue? It depends on the organization and system size. Moore’s law (and all the corollaries for other pieces of software systems) works both for and against machine sympathy: for, because as hardware gets better, the chances of system breakdown decrease; and against, because as hardware gets better, larger and larger systems become affordable. The human constraint, though, I believe is ever present at all sizes of system (though less so in smaller ones, where there is less concern about ‘bumping up’ issues to humans, or even about not handling edge cases at all).

What do you think?


Dan Moore! Turns 13

Thirteen years ago, I wrote and posted my first blog post, about RSS. Since then, this blog has been a great journey for me: over 750 published posts and over 1300 approved comments.  I can’t even bear to count the number of spam comments!  It has moved across three different blogging software platforms.  The world has obviously changed radically as well.

I’m not going to post any “best of” links, but I will say I’ve enjoyed blogging tremendously.  It’s allowed me to track progress in my career, test out ideas for books and engage with others. And, like all writing, blogging forces me to really think.

I wrote recently about why I blog (for myself), but I’m also very thankful for the emails, the comments and the pageviews.  Thank you, audience!

Who knows what will happen as my blog continues to grow up?


Extending an existing Rails application that wasn’t meant to be extended

I am modifying an existing open source rails 4.2 app and wanted to keep my changes (some of which are to models, some to controllers, some to views) as separate as I can, so that when a new release of the app comes out, I won’t be in (too much) merge hell.

This app was not designed to be extended, which makes things more interesting.

For the views, I’m just doing partials with a prefix (_xxx_user_cta.haml).

For the models and controllers, I started out hacking the code directly, but after some digging around I discovered how to monkey patch (I believe that is what it is called) the classes.

In the config/application.rb file, I added the following code:

config.to_prepare do
  # Load every decorator file so its class_eval runs; require_dependency
  # (rather than require) lets Rails reload these files in development.
  Dir.glob(Rails.root + "app/decorators/**/*_decorator*.rb").each do |c|
    require_dependency(c)
  end
end

And then, if I want to make a change to app/models/person.rb, I add the file app/decorators/models/person_decorator.rb. In that file is something like this:

Person.class_eval do
  # ... changes
end

This lets me add additional relations, helper methods, and other classes to extend existing functionality. I try to prefix things with a unique identifier (xxx_set_timezone rather than set_timezone) to lessen the chances of a collision, because if a method is added to the Person class with the same name as a method in the decorator, the decorator will win.
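
For instance, here’s what a hypothetical decorator following that prefix convention might look like (the relation, attribute, and method names are made up):

# app/decorators/models/person_decorator.rb
Person.class_eval do
  # An additional relation the upstream app doesn't know about.
  has_many :xxx_bookings, class_name: 'Booking'

  # Prefixed so a future upstream set_timezone won't silently collide.
  def xxx_set_timezone(tz)
    self.timezone = tz  # hypothetical timezone attribute
    save!
  end
end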

I write tests around this new functionality so that if anything changes, I’m aware and can more easily troubleshoot.

The downside of this approach is that it is harder to track logic, because instead of everything being in one file, it is now in two. (I don’t know if there are memory or performance implications.) However, that is a tradeoff I’m willing to make to make it easier to keep up with the upstream development and to pull said development in as often as possible.

I’m still fairly new to rails and don’t know if this is the only or best way, but I thought I’d share.


Early Product Lessons

I wanted to jot down some lessons I’ve learned being an early stage technical founder of an unfunded startup, from no product or revenue -> product and revenue. (Of The Food Corridor, if you’re interested in the startup.) I had the luxury of a co-founder who had spent years immersed in the problem space and months researching the niche. If you can find that, it really really helps in product development.

That said, here are some other lessons. For an idea of our timeline, we did a build or buy or both evaluation in March, started building in April, did beta testing in May and launched June 1.


Determine features through demand/pull, rather than push

Once you have a product that you can show users, show it to them!

It will be embarrassing.  Record all their feedback and note patterns (we did a month of beta testing, as noted above). Then, let the user requests pull features from you, rather than push features to them. This serves a couple of purposes:

  • people will know that you are hearing them, and will be more forgiving of inevitable issues
  • you will build features that people want to use
  • you’ll develop a sense of users’ needs
  • you’ll learn to politely say no to requests that are off base/only useful to one user

Everything is broken

Everything is borked, all the time. At an early stage startup you just don’t have time to do everything right (nor should you, because the wrong thing perfectly engineered is a waste). So there will be features that are half done, or edge cases unhandled, or undocumented build systems. Do the best you can, and realize that it gets better, but make your priority getting something out that users can give feedback on. “Usage is like oxygen for ideas.” – Matt Mullenweg

You have to walk a fine line between building something quickly and building something that you can build on later.  Get used to ambiguity and brokenness and apologizing to the customer.  (But not too used to the apologies!)

UX/UI polish is relative

Our app is a number of open source gems smashed together with some scaffolded ruby code.  The underlying framework had a decent look and feel, but there are definitely some UI and UX holes.  I thought I’d have to spend some time working on those, but our customers thought the product was beautiful and useful.  My standards were different than their standards.

That doesn’t mean that the app can look horrible, but a plain old bootstrap theme or one of the other common CSS themes is ok. You need to know your audience–many people are stuck using a mix of software and are used to navigating clunky user interfaces. If your interface is just decent, but still solves the problem, you’ll be OK.  Of course, you’ll want to solve gross UX issues eventually, but a startup is all about balance.  A friend of mine gave me the advice: “don’t allow your users to make any mistakes”.

Favor manual process for complex edge cases

There have been a couple of situations during the build where a lot of work was needed to handle an edge case. For example, prorating monthly plans. Once you start thinking about prorating in depth, it turns out to be a really interesting problem with a lot of edge cases. But guess what?  For your startup, edge cases can be a wild goose chase.
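
To see how quickly it gets hairy, here’s a naive proration sketch in Ruby; it charges a flat daily rate for the days left in the month, and ignores time zones, mid-month plan changes, refunds, and annual plans, which is exactly the point:

require 'date'

# Naive proration: charge only for the remaining days of the month.
def prorated_cents(monthly_price_cents, start_date = Date.today)
  days_in_month  = Date.new(start_date.year, start_date.month, -1).day
  days_remaining = days_in_month - start_date.day + 1
  (monthly_price_cents * days_remaining / days_in_month.to_f).round
end

prorated_cents(10_000, Date.new(2016, 6, 16))  # => 5000, half of a $100.00 plan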

When an edge case rears its head, you should consider the following options, in order of preference:

  • can you outsource the complexity?  (Stripe handles proration, for example, and I guarantee they handle edge cases you don’t.)
  • can you make it a manual process?  If it doesn’t happen that often and/or a real time response is unneeded, you can often get by with a manual solution.  This may be partly automated: for example, an SQL query that generates an email to a human who can handle the exceptional situation.
  • if neither of the above applies, can you defer it?  Maybe for a few months, maybe for just a few weeks.  But sometimes requirements change and you learn things from users that may make this edge case less important.
  • if none of the above apply, you may need to bite the bullet and write code.

Back end and front end development doesn’t have to be synchronized

Most users equate the front end with the complete product. Most developers know that, just like an iceberg, there’s a lot of back end processing hidden in any project. But guess what? When you are getting feedback from users, some of the backend processes need to work, but many don’t. For example, we had a billing system that handled monthly invoices. We didn’t need to build the billing system while we were getting feedback from users on what type of charges they needed to handle. We did, however, need to know that we could build it. So make sure you can build the backend system to support your front end system, perhaps by building one path through, but defer the full build-out until you have to.

What about you?  Any tips for early stage product engineering?


The single biggest obstacle to running self hosted Sharetribe

So, you checked out Can I customize Sharetribe? and determined you need a backend programmer? And that you’re going to self host, whether on Heroku or elsewhere? That’s great!

Let me let you in on the single biggest obstacle to running Sharetribe self hosted. (It’s not the biggest obstacle to building a successful marketplace–that is building the community and getting liquidity–but it is a big one.)

There’s no great option for taking payments on self hosted Sharetribe.

Braintree has limitations and can only support companies in the USA (I believe).

PayPal isn’t supported with the open source version.

No other payment processors are supported ‘out of the box’.

So, if you are charging nothing for your marketplace, or if you monetize your site in some other way such as a listing fee or using the marketplace as a captive market, you’re fine with the self hosted version.

Otherwise you are going to have to pay a developer to integrate with a payment processor. Here is what to think about when you are evaluating payment processors:

  • Do they support the currency I need to support? (Sharetribe only supports one currency per marketplace at this time).
  • Do they handle splitting payments?  That is, if someone pays $100, and the marketplace commission is 10%, $10 goes to the marketplace and $90 to the seller.  How is that handled?  Many payment processors don’t support splitting payments in this manner.  (A sketch of what splitting looks like appears after this list.)
    • If you aren’t splitting payments, how are you going to get your money and make sure the seller gets theirs?
  • If I need payment escrow, do they support that?
  • Are there legal ramifications (taxes, fees, etc) that my marketplace has to handle with regards to taking money?
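
As a point of reference for what split payments look like when a processor supports them, here’s a sketch of a Stripe Connect charge with the stripe Ruby gem; the key, token, account id, and amounts are placeholders:

require 'stripe'

Stripe.api_key = 'sk_test_...'  # your platform's secret key (placeholder)

# A $100.00 charge on the seller's connected account.  application_fee
# keeps $10.00 for the marketplace; the remainder settles to the seller.
Stripe::Charge.create(
  {
    amount: 10_000,        # in cents
    currency: 'usd',
    source: 'tok_...',     # card token from the checkout form (placeholder)
    application_fee: 1_000
  },
  stripe_account: 'acct_...'  # the seller's connected account (placeholder)
)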

It’s worth looking at these:

And if you are interested in having Stripe connect in your system, I’m working on such an integration.  Please sign up for the list to be notified when it is ready to go.


