Useful gem: stripe_event

If you are going to use stripe for payments, you need to set up your webhooks. If you are using rails, the easiest solution I’ve found is stripe_event. This gem mounts a configurable endpoint and takes care of all the authentication you need to receive webhooks. You then set up configuration in an initializer to subscribe to the events you want to handle. Which hooks you want depends on your application, but all the available events are listed here, and the stripe support folks are happy to point you toward interesting ones if you approach them with a problem.
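Here’s a rough sketch of what that looks like, based on the stripe_event readme; the env var names, mount path, and event type are illustrative, not prescriptive:

# config/initializers/stripe.rb
Stripe.api_key = ENV['STRIPE_SECRET_KEY']
StripeEvent.signing_secret = ENV['STRIPE_SIGNING_SECRET']

StripeEvent.configure do |events|
  events.subscribe 'charge.failed' do |event|
    # event.data.object is the Stripe::Charge that failed
    Rails.logger.warn("charge failed: #{event.data.object.id}")
  end
end

# config/routes.rb
mount StripeEvent::Engine, at: '/stripe-webhooks'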

You can (and should) test the stripe events by using fixtures and request tests. I found the most difficult part of that testing process to be getting sample data for the json payload. The documentation has some, but you may need to run a sample event through your test dashboard and capture the json via a generic webhook capture. I ended up using this type of puts debugging to help get the json for events:

# inside the StripeEvent.configure block: subscribe to every event and print it
events.all do |event|
  ## debugging
  puts "xxxdebugging all events"
  puts event.to_s
end

In my experience, we never received enough load to really stress out this gem (I’ve seen maybe 30 requests a minute), but if you plan to have a high webhook load, you may want to do some load testing.

Definitely a gem worth having if you are using stripe.


Always break rails migrations into smallest chunks possible, and other lessons learned

So this was a bit of a sticky wicket that I recently extracted myself from, and I wanted to make notes so I don’t make the same mistake again. I was adding a new table that related two existing tables, and wrote the following migration:

class CreateTfcListingPeople < ActiveRecord::Migration
  def change
    create_table :tfc_listing_people do |t|
      t.integer :listing_id, index: true
      t.string :person_id, limit: 22, index: true

      t.timestamps null: false
    end

    add_foreign_key :tfc_listing_people, :people
    add_foreign_key :tfc_listing_people, :listings

  end
end

However, I didn’t notice that the people.id column was defined as `id` varchar(22) COLLATE utf8_unicode_ci NOT NULL, while the person_id column created by the migration picked up the database default collation (utf8_general_ci).

This led to the following error popping up in one of the non production environments:

2018-02-27T17:10:05.277434+00:00 app[web.1]: App 132 stdout: ActionView::Template::Error (Mysql2::Error: Illegal mix of collations (utf8_unicode_ci,IMPLICIT) and (utf8_general_ci,IMPLICIT) for operation '=': SELECT COUNT(*) FROM `people` INNER JOIN `tfc_listing_people` ON `people`.`id` = `tfc_listing_people`.`person_id` WHERE `tfc_listing_people`.`listing_id` = 42):

I was able to fix this with the following alter statement (from this SO post): ALTER TABLE `tfc_listing_people` CHANGE `person_id` `person_id` VARCHAR( 22 ) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL.

But in other environments, there was no runtime error. There was, however, a partially failed migration, masked by some other test failures and by process failures around a team handoff. The create_table statement had succeeded, but the add_foreign_key :tfc_listing_people, :people statement had failed.

I re-ran the failing statement a few times from the rails console (pointer on how to do that): ActiveRecord::Migration.add_foreign_key :tfc_listing_people, :people. Then, via this SO answer, I was able to find the latest foreign key error message:

2018-03-06 13:23:29 0x2b1565330700 Error in foreign key constraint of table sharetribe_production/#sql-2c93_4a44d:
 FOREIGN KEY (person_id)  REFERENCES people (id): Cannot find an index in the referenced table where the
referenced columns appear as the first columns, or column types in the table and the referenced table do not match for constraint. Note that the internal storage type of ENUM and SET changed in tables created with >= InnoDB-4.1.12, and such columns in old tables cannot be referenced by such columns in new tables.
Please refer to http://dev.mysql.com/doc/refman/5.7/en/innodb-foreign-key-constraints.html for correct foreign key definition.
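For reference, MySQL keeps that text in the InnoDB status output. On a MySQL/InnoDB database, something like this from a rails console will pull it up:

# prints the InnoDB status, which includes the LATEST FOREIGN KEY ERROR section
puts ActiveRecord::Base.connection.execute("SHOW ENGINE INNODB STATUS").to_a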

So, again, just running the alter statement to change the collation of the tfc_listing_people table worked fine. However, while I could (and did) handcraft the fix on both staging and production, I needed this change captured in a migration. I split the original migration into two. The first created the tfc_listing_people table, and the second looked like this:

class ModifyTfcListingPeople < ActiveRecord::Migration
  def up
    # match the collation of people.id before adding the foreign keys
    execute <<-SQL
      ALTER TABLE `tfc_listing_people` CHANGE `person_id` `person_id` VARCHAR( 22 ) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL
    SQL

    add_foreign_key :tfc_listing_people, :people
    add_foreign_key :tfc_listing_people, :listings
  end

  def down
    remove_foreign_key :tfc_listing_people, :people
    remove_foreign_key :tfc_listing_people, :listings
  end
end

Because I’d handcrafted the fixes on staging and production, I manually inserted a row for this migration into the schema_migrations table in those environments to mark it as run (see the sketch below). If I hadn’t combined two related but different migration actions in the first place, I might not have needed these manual gyrations.
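For the record, marking a migration as run by hand looks something like this from a rails console (the version string below is made up; use the timestamp from your migration’s filename):

ActiveRecord::Base.connection.execute(
  "INSERT INTO schema_migrations (version) VALUES ('20180301000000')"
)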

My lessons from this episode:

  • pay close attention to any errors and failed tests, no matter how innocuous; this is a variation of the “broken windows” theory
  • break migrations into small pieces, which are easier to debug and to migrate back and forth
  • knowing SQL and having an understanding of how database migrations work (they are cool, but they aren’t magic, and sometimes they leak) was crucial to debugging this issue

Look for the value of the param, not its existence

We are rewriting the front end of the application on which I work. Several of the forms which gather data that drives the application previously had checkboxes, which would set a value to true or false in the database.

Some of the checkboxes were ancillary, however, and didn’t result in a database value. Instead, they were used to calculate a different result depending on whether the checkbox was checked or not. That result would then be saved to the database. This was all tested via rspec controller tests.

In at least one case, the calculation depended on the existence of the parameter. Checkbox parameters are sent when they are checked and omitted when they are not.

Then we changed the forms to radio buttons.

Uh oh.

Instead of the parameter being sent when the box was checked and omitted when it was not, the parameter was now always sent, with a value of yes or no depending on which radio button was selected.

Uh oh.

The tests continued to pass, because they hadn’t been updated to the new values.

Uh oh.

So the calculation was incorrect. Which meant that the functionality that depended on the calculation was incorrect. Unfortunately, that functionality was related to billing. So I just spent the last few days backing that out, checking with clients to see what they actually intended to do, and in general fixing the issue.

The underlying code was an easy enough fix, but this story just goes to show that software is complicated. Lessons learned:

  • favor a test for an exact parameter value rather than parameter existence (see the sketch after this list)
  • when changing forms, consider writing tests with the new input values
  • integration tests that drive a browser would have caught this issue; they are much slower and need to be used judiciously, but are still a valuable automated test
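A contrived sketch of the difference (the parameter name is made up, not the actual application code):

# fragile: with checkboxes the param only shows up when checked, so presence worked;
# with radio buttons the param is always sent, so this is now always true
apply_fee = params[:apply_fee].present?

# better: check the value the form actually sends
apply_fee = params[:apply_fee] == 'yes'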



Speed up development by catching your mail locally

Have you ever been developing some kind of application that sends email? You need to test how the email looks, so you need access to an external SMTP server, and you have to configure your application to use it. You can certainly set up sendgrid or another MTA to send email from your local computer and then use a real email address as your target. However, you then need to be online to develop this portion of the application.

Another option I’ve found is the MailCatcher gem. This is a small ruby program that you configure as your SMTP endpoint. When your development environment sends mail, MailCatcher catches it, and you can then visit a URL on your local computer to view the received emails. As soon as MailCatcher shuts down, the emails are lost, however.
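Pointing a Rails development environment at it is a couple of lines; this sketch assumes MailCatcher’s default ports (SMTP on 1025, web UI at http://127.0.0.1:1080):

# config/environments/development.rb, inside the Rails.application.configure block
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = { address: '127.0.0.1', port: 1025 }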

Even though this is a ruby gem, you can use it with applications in other languages; as long as you can point the application at an SMTP server, you’re good (the readme has examples for Django and PHP).

One note about it being a gem: don’t put it in your Gemfile if you are building a rails app, because of possible dependency conflicts; install it standalone instead. This means that if you manage your ruby environments via rvm, you’ll need to re-install MailCatcher every time you change your ruby version.

Bonus: mailcatcher even has an API so you can use it in your integration test environment to verify that certain actions in your application caused certain emails to be sent.


Getting access to the very nice date functionality in non Rails Ruby

I am writing some small ruby scripts for a dashboard and need to do some date calculations, like finding the timestamp of the first of the previous month and the timestamp of the end of the previous month. Rails makes this so easy with (DateTime.now - 1.month).beginning_of_month. I looked around for a way to do it with straight up Ruby, but didn’t see a good solution.

Luckily, some of the nice parts of Rails have been broken out into the active support gem. You need to add it to your Gemfile or however else you are managing your gems, of course. Confusingly, the gem is activesupport and the require statement is require active_support/... (see the underscore?).

There’s an entire guide on how to pull in just the active support functionality you need. Unfortunately, I couldn’t make the targeted requires work (I was trying to pull in just the numeric and date extensions, but kept getting the error message undefined method `month' for 1:Integer (NoMethodError)).

Finally, I just pulled in active_support/time and everything worked.
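The end result was something like this (beginning_of_month and end_of_month both come along with active_support/time):

require 'active_support'
require 'active_support/time'

# first and last timestamps of the previous month
last_month = DateTime.now - 1.month
puts last_month.beginning_of_month
puts last_month.end_of_month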


Useful rails gem: rack mini profiler

Sometimes you need to do some profiling of your application to see where performance issues lie. I haven’t done an extensive survey of the options, but a friend recommended rack mini profiler. As the name suggests, this can be used with any rack based framework (so rails, hanami, etc).

There are a number of reasons to use this profiler. Learning to use a profiler (any profiler) and how to performance tune is a key part of being a software developer, as opposed to a programmer.

It can run in production. You can limit who can access the profiler; it sits unobtrusively in the upper left hand corner showing the page load time, and you can jump into more detail as needed.

It runs all the time. This lets you have an ambient view of the application (at least the parts that you are touching regularly).

When you are investigating a performance issue, you can easily dive into the views, controllers and SQL queries that your application is making.

It is simple to install: just add the gem to your Gemfile.
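Here’s a sketch of the install plus the production access restriction mentioned above; the mode and method names are from the rack-mini-profiler readme, while the admin? check is a placeholder for however your app identifies privileged users:

# Gemfile
gem 'rack-mini-profiler'

# config/initializers/mini_profiler.rb: require explicit authorization in production
Rack::MiniProfiler.config.authorization_mode = :whitelist if Rails.env.production?

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :authorize_mini_profiler

  private

  # show the profiler badge only to privileged users
  def authorize_mini_profiler
    Rack::MiniProfiler.authorize_request if current_user&.admin?
  end
end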

Well worth getting to know, if you are supporting a rails application.


Automating one off deployment tasks

When I am deploying a rails application, there are sometimes one off tasks that I need done at release time, but not before. I’ve handled such tasks in the past by:

  • adding a calendar entry (if I know when the release is happening)
  • adding a task or a story for the release and capturing the tasks there
  • writing a ‘release checklist’ document that I have to remember to check on release.

Depending on release frequency, application complexity and team size, the checklist may be small or it may have many tasks on it. Some tasks on a checklist can include:

  • sending a communication (an email or slack message) to the team detailing released features
  • restarting an external service to pick up configuration or code changes
  • notifying a customer that a bug they reported has been fixed
  • kicking off an external process via an API call now that a required dependency has been released

What these all have in common is that they:

  • are not regular occurrences; they don’t happen every deploy
  • affect entities (users or software) beyond the code and the database
  • may or may not require human interaction

after_party is a gem that helps with these tasks. It is similar in functionality to database migrations in that each after_party task runs once per environment (this is done by recording the task timestamp in the database). However, after_party tasks are arbitrary rake tasks, rather than the database manipulation DSL of migrations.

You add this call to your deployment scripts (the example is for heroku, feel free to replace heroku run with bundle exec):

heroku run rake after_party:run --app appname

This rake task will run on every deploy, but if an after_party task has already been run and successfully recorded in the database, it will not be run again.

You can generate these task files with: rails generate after_party:task my_task --description="desc"

Here’s what an after_party task looks like:

namespace :after_party do
  desc 'Deployment task: notify customer about bug fix'
  task notify_issue_111: :environment do
    puts "Running deploy task 'notify_issue_111'"

    TfcAdminNotesMailer.send_release_update_notification("customer X", "issue 111").deliver_now

    AfterParty::TaskRecord.create version: '20180113164945'
  end  # task :notify_issue_111
end  # namespace :after_party

This particular task sends a release update notification email to the customer service user reminding them that we should notify customer X that issue #111 has been resolved. I couldn’t figure out how to make a rake task send email directly, hence the ActionMailer. In general you will want to push all of your logic from the rake task to POROs or other testable objects.

I could have sent the message directly to the customer rather than to customer service. However, I was worried that if a rollback happened, the customer might be informed incorrectly about the state of the application. I also thought it’d be a nice touchpoint, since issues are typically reported to customer service initially. The big win is that ten of these can be added over a period of weeks, and I could have gone on vacation during the release, and the customer update reminders would still be sent.

All is not puppies and rainbows, however. This gem doesn’t appear to be maintained. The last release was over two years ago, though there are forks that have been updated more recently. It works fine with my rails4 app. The main alternative that I’m aware of is hijacking a database migration and calling a rake task or ruby code in the ‘up’ migration clause. That seems unintuitive to me. I wouldn’t expect a database migration to touch anything outside of, well, the database.

Another thing to be aware of is that there is no rollback/roll forward functionality with after_party. Once a task is run, it won’t get run again (unless you run the rake task manually or modify the task_records database table).
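For example, making after_party forget a task so it runs on the next deploy looks something like this from a rails console (using the version from the task above):

ActiveRecord::Base.connection.execute(
  "DELETE FROM task_records WHERE version = '20180113164945'"
)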

This gem will help you automate one off deployment tasks, and I hope you enjoy it.



The Rails Low Level Cache

I think the Rails low level cache is the bee’s knees. It lets you store complicated values easily, manage TTLs, and improve the performance of your application.

I’d be remiss if I didn’t state that you shouldn’t cache until you need to. Caches will introduce additional complexity into your application, make troubleshooting harder, and in general can be confusing. How do you know if you need to introduce a cache? Profile. Using rack-mini-profiler is a great quick and dirty way to profile your code.

The rails low level cache is fairly simple to configure (especially if you’re using a PaaS like Heroku–you can drop in a managed memcached service easily).

From there, you need to build a cache key.  This is a string, up to 250 characters in length, that uniquely identifies whatever you’re trying to cache.  Basically anything that would cause a change in the contents of the cached object should be included in this key.  However, since you are going to calculate this key every time the value is requested, you don’t want the cache key to be expensive to calculate, otherwise the value of the cache will be degraded.  Good things for the key: timestamps, max updated_at values for a collection, user ids or roles.
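For example, a key for a listing search page might combine the viewer’s role with the newest updated_at in the collection; the model and attribute names here are made up for illustration:

# cheap to compute; changes whenever any listing changes or the viewer's role differs
def listings_cache_key(user)
  "listings/#{user.role}/#{Listing.maximum(:updated_at).to_i}"
end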

Then, putting a value in the cache is as easy as:

def pull_from_cache
  cache_key = 'mykeyval'
  Rails.cache.fetch(cache_key, expires_in: 1.hour) do
    @service.build_expensive_object
  end
end

In the case above, the cached value will automatically expire in 1 hour.  If you don’t set the expiration time, then the cached value will eventually be removed via LRU.  How long that takes depends on the size of your cache and what else you are putting in there.

If you want to test that something is being put in the cache, you can run a unit test and see how many times build_expensive_object() is called.

#...
object.service = @service
# build_expensive_object should only be called once; the second call hits the cache
expect(@service).to receive(:build_expensive_object).once
object.pull_from_cache
object.pull_from_cache
#...

In a production troubleshooting situation, you may need to evict something from the cache.  You can do so from the rails console by finding the cache key and running Rails.cache.delete('key').

Using the low level cache is a great way to push read heavy hot spots of your rails application (database queries or other complicated calculations) into memory and make your application faster.


Useful Rails Gems: Pretender

I’m constantly amazed at how productive you can be with rails. It simply lets you work on typical webapp problems at a much higher level. At 8z, we had a web application and a customer support team. Occasionally the customer support person had to ‘impersonate’ a normal user to troubleshoot an issue. We built a piece of software that let them assume that role. (We called it ‘sudo‘, obviously.) It’s been a few years, but as I recall it was complicated and error prone, lived on a different domain and wasn’t fully functional.

I needed to add similar functionality to a rails web app, and was able to find a couple of gems that looked useful. I selected pretender, mostly on the basis of documentation and google search results placement. I followed the instructions, tweaked a few settings and was off to the races in about an hour.  (Note this isn’t a fair apples to apples comparison of the underlying technologies, due to the differences in available open source libraries between the mid 2000s and the late 2010s.)
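For the curious, the setup is small; roughly what it looks like, per the pretender readme (the controller actions and the admin-only restriction are sketched, not the actual app code):

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  # current_user returns the impersonated user; true_user returns the real one
  impersonates :user
end

# an admin-only controller action, for example
def impersonate
  user = User.find(params[:id])
  impersonate_user(user)
  redirect_to root_path
end

def stop_impersonating
  stop_impersonating_user
  redirect_to root_path
end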

Now, all this gem does is what it says it does. It lets a certain user or set of users in your application pretend to be another user. It doesn’t handle auditing or anything else you might want with an elevated privilege system.

But it does do what it does well.


