Review of Modular Rails

I am currently working on modifying an existing large Rails app, customizing some of the look and feel and extending functionality.

The app is under active development, and I wanted to be able to take advantage of bug fixes and improvements without impacting my customizations, or at least while minimizing that impact.

Being fairly new to Rails, I surveyed the landscape and thought that building my customizations as an engine would be a good way to go.  (I was wrong, because engines have a hard time reaching out and modifying the application they are part of; at least, that seems to be a non-standard use of engines from what I can find.)  The author of Modular Rails has some good blog posts about engines and modularity, so I bought his book.


Pros:

  • Good overview of how to extend the three major components of a Rails app: models, views, and controllers
  • Easy reading style
  • Leverages existing gems like deface
  • Mentions testing
  • Starts from first principles and later gives you a gem to speed up development
  • Not too long
  • Information on setting up your own gem server


Cons:

  • Focuses on ‘greenfield’ apps; no mention of integration with existing monoliths
  • Uses nested modules, unlike every other engine article out there
  • Assumes relatively advanced knowledge of Rails
  • A fair bit of fluff–lots of ‘mv’ commands
  • Extra charge for source code

All in all, I am glad I read this book.  It didn’t fit my needs, but it didn’t promise to, either.  I found it a good overview of the engine concept, even if the author did some things in a non-standard manner and was a bit verbose about Unix commands.

If you have done more Rails development than I have, it will be more useful, and it is a great way to think about building new freestanding applications.  I haven’t surveyed the entire Rails book landscape, but I haven’t found anything out there focusing on Rails engines that is better.

Using LinkedIn to Help Others Find Jobs

I like LinkedIn, I really do. If you can get past the recruiters randomly spamming you (and yes, I know they are LinkedIn’s primary source of revenue), the professional network is quite powerful.

As Gary Vaynerchuk and Liz Ryan say, you don’t have any excuses for not finding the right person to reach out to with your sales pitch or job interest.

But recently I was contacted by a former colleague who noticed that I had “liked” one of my other contacts’ job posting. He asked a few questions, applied and interviewed, and got the job. I love matching people up with jobs, but this was easy.

My plan is to search LinkedIn periodically and “like” all job listings posted by former colleagues or other trusted sources, and hope this experience happens again.

Blog for yourself

In my perusal of Twitter, I came across this piece by Dave Winer (the creator of RSS and an interesting, provocative blogger): What I learned from Om and Hossein.

The whole piece is worth a read (it’s short), but here’s the quote that resonated for me:

I write my blog not because I want to write a “good” blog post, or even one that’s read by a lot of people. And my own self is not scattered, it’s right here, and as long as I live it will continue to be here. And my online self doesn’t exist for the benefit of others, it’s here to help my real self develop his thinking and create a trail of ideas and feelings and experiences that I can look back on later.

Blogging for me has always been about the ability to engage with my ideas and experiences, and if others gain from it, the more the merrier.  Of course, it’s hard not to check stats and subscribers, etc, etc, but the real win for me comes when I’m searching for the answer to a question and my blog pops up.

Joining The Food Corridor

After I left 8z, I contracted for about a year and a half.  It was great fun, moving between projects, meeting a lot of new developers, and learning a lot of new things.  I worked on evaluating software products and processes, supporting machine learning systems, large workflow engines, and, most recently, backend systems to stop distracted driving (they’re hiring, btw).

But I saw an email from AngelList early this year about a company looking to build a marketplace for kitchen space.  (Aside: if you are interested in the labor market for startup professions, AngelList emails are great–they not only give you the company name and job description, but also typically include equity and salary–very useful information.)  I replied, the conversation started, and I did some research on the company.  They were pre-revenue, but the founder had been grinding it out for months and had an extensive background in the industry.  It was clear they weren’t a fly-by-night, “we just had an idea for an app and need someone to build it” operation.

After discussions, interviews and reference checks, it became clear that this was a fantastic opportunity to join an early stage startup as a technical co-founder. So, I’m thrilled to announce that I have joined The Food Corridor as CTO/Co-Founder.

Why does this opportunity excite me so?

  • By increasing the visibility and availability of shared kitchen space, it can grow the local food system, especially value-added producers, across the country
  • There’s a real need for innovative software and process solutions, and we can provide them
  • The founding team has the diverse set of skills needed to run a great company
  • I am looking forward to learning about the business side of a software company
  • It’s right at the intersection of two of my passions–food and technology

If you are interested in following along with the TFC journey, there’s a monthly newsletter that will be focused on shared kitchen topics.

As far as the blog, I expect to be heads down and building product, but will occasionally pop up and post.

Here’s to new adventures!

Step back and review the problem: SSL Edition

I spent the better part of a week recently getting an understanding of how SSL works and how to get client certificates, self-signed certificates, and Java all playing nicely.  However, in the end, taking a big step back and examining the problem led to an entirely different solution.

The problem in question is access to a secure system from a Heroku application.  The secure system is protected by an IP-based firewall, basic authentication over SSL, and client certificates signed by a non-standard certificate authority (the owner of the secure system).

We are using HttpClient from Apache for all HTTP communication, and there’s a nice example (but, depending on your version of the client, run loadKeyMaterial too, as that is what ends up sending the client cert).  When I ran it from my local machine, after setting the IP to be one of the static IPs, I ended up seeing a wide variety of errors.  Oh, there are many possible errors!  But finally, after making sure that I could import the private key into the store, was using the correct cipher/Java version combo, writing a stripped-down client that didn’t interface with any other parts of the system, learning about the switch, making sure the public certificates were all in the store, and confirming that I had IP-based access to the secure system, I was able to watch the system go through a number of the steps outlined here:

SSL Message flow

Image from the JSSE Guide
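
The client-certificate part of that flow comes down to handing the SSL machinery your key material.  Here’s a minimal stdlib sketch of wiring a client keystore into an SSLContext; Apache HttpClient’s loadKeyMaterial does the equivalent under the hood.  (The empty in-memory keystore here is a placeholder; in the real setup you’d load the PKCS12 file holding your client cert and private key.)

```java
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import java.security.KeyStore;

public class ClientCertContext {
    public static SSLContext build() throws Exception {
        // In the real case: ks.load(new FileInputStream("client.p12"), password)
        KeyStore ks = KeyStore.getInstance("PKCS12");
        ks.load(null, null); // empty in-memory store, for illustration only

        // The key managers are what actually present the client certificate
        // during the handshake.
        KeyManagerFactory kmf =
            KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, new char[0]);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null); // default trust managers
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(build().getProtocol());
    }
}
```

If the key managers are left out (or the wrong store is loaded), the handshake proceeds without a client certificate at all, which is one of the failure modes described below.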

But I kept seeing this exception: error: Received fatal alert: handshake_failure, and I couldn’t figure out how to get past it.  From searching, I found a number of issues that all manifest with this error message, and “guess and check” wasn’t working.  Interestingly, I saw this error only when running on my local machine with the allowed static IP.  When I ran on a different computer that was going through a proxy which provided a static IP, the client certificate wasn’t being presented at all.

After consulting with the other organization and talking with members of the team about this roadblock, I took a step back and validated the basics.  I read the JSSE Reference Guide.  I used the openssl tools to verify that the chain of certificates was valid.  (One trick here: you can add only one -untrusted switch, so if you have multiple intermediate certs between your CA and the final cert, you need to combine the PEM files into one file.)  I validated that the final certificate in the keystore matched the CSR and the private key.  It turns out I had the wrong key, and that was the source of the handshake issue.  Doh!
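
The wrong-key problem is detectable programmatically: the certificate’s public key and your private key must share the same RSA modulus.  A minimal sketch of that check, using a freshly generated key pair in place of the real cert and key (which is why it trivially matches here):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;

public class KeyMatch {
    // Returns true when the public and private halves share a modulus --
    // the same comparison `openssl x509 -noout -modulus` vs
    // `openssl rsa -noout -modulus` lets you do from the command line.
    public static boolean matches(RSAPublicKey pub, RSAPrivateKey priv) {
        return pub.getModulus().equals(priv.getModulus());
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();
        boolean ok = matches((RSAPublicKey) kp.getPublic(),
                             (RSAPrivateKey) kp.getPrivate());
        System.out.println("key matches cert: " + ok);
    }
}
```

Had I run this comparison against the keystore contents on day one, it would have flagged the mismatched key immediately.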

After I had done that, I took a look at the larger problem.  Heroku doesn’t guarantee IP addresses, not even a range.  There are add-on solutions (Proximo, QuotaGuard, Fixie) that do provide a static IP address, and that was our initial plan.  However, all of these are proxy-based solutions.  A quick search turns up the unpleasant reality that proxies can’t pass client certificates.  The post talks about a reverse proxy, but it applies to regular proxies as well.  From the post:

Yes, because a client can only send its certificate by using encrypted and
SIGNED connection, and only the client can sign the certifikate (sic) so server
can trust it. The proxy does not know the clients private key, otherwise the
connection would not be secure (or not in the way most people know that).

SSL is made up to avoid man-in-the-middle attack, and the reverse proxy IS
the man-in-the-middls. Either you trust it (and accept what it sends) or
don’t use it.

All my work on having the Java code send the client certificate was a waste.  Well, not a total waste, because now I understood the problem space in a way I hadn’t before, so I could perform far better searches.

I opened a support request with our proxy provider, but it was pretty clear from the internet, the support staff and the docs that this was a niche case. I don’t know if any static IP proxy providers support client certificates, but I wasn’t able to find one.

Instead, we were able to use an AWS Elastic IP and nginx to set up our own proxy.  Since we controlled it, we could install the client certificate and key on it, and have the Heroku instance connect to that proxy.  (Tips for those instructions–make sure you download the OpenSSL source, as nginx wants to compile it into the web server, and use at least version 1.9 of the community software.)
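
For reference, a minimal sketch of such an nginx proxy, with hypothetical hostnames and certificate paths (and omitting the inbound access control you would want in production); the proxy_ssl_certificate directives, available since nginx 1.7.0, are what present the client cert on the upstream leg:

```nginx
server {
    listen 443 ssl;
    server_name proxy.example.com;

    # The proxy's own server certificate, for the Heroku-to-proxy leg.
    ssl_certificate     /etc/nginx/certs/proxy.crt;
    ssl_certificate_key /etc/nginx/certs/proxy.key;

    location / {
        proxy_pass https://secure.example.com/;
        # The client certificate and key installed on the proxy itself,
        # presented to the secure system on the upstream leg.
        proxy_ssl_certificate     /etc/nginx/certs/client.crt;
        proxy_ssl_certificate_key /etc/nginx/certs/client.key;
    }
}
```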

So, I made some mistakes in this process, and in a personal retro, I thought it’d be worth talking about them.

  • I jumped into searching too quickly.  SSL with private certs is a complicated system with a lot of moving pieces, and it would have been worth my time to start with an overview.
  • While I was focusing on accessing the system from the java code, there were hints about the proxy issue. I didn’t consider the larger picture and was too focused on solving the immediate issue.

How to get a YouTube channel as a podcast for $15/month

Want to create a podcast from a YouTube channel, but don’t have access to the original video or don’t have the time to set up a podcast?

Here’s how to set this up, in 6 easy steps.

  1. Find the YouTube channel URL, something like:
  2. Note the part after channel/: UCsjtSWdw9t1XlBnyrV7mAOQ
  3. Download and install YouCast.  This is a Windows-only program, so if you don’t have access to Windows, or you want to have this accessible across the world without leaving your home PC on, I’d recommend signing up for Azure hosting.  You can have a decent server (an A0) for ~$15/month.  I just used a standard Windows 2008 server, but I did have to upgrade .NET.  This and this will be helpful for setup.
  4. Start up YouCast, and set the channel id to what you have above.  Select the audio output format, and generate the URL.  You’ll get a funky looking URL like http://hostname:22703/FeedService/GetUserFeed?userId=UCsjtSWdw9t1XlBnyrV7mAOQ&encoding=Audio&maxLength=0&isPopular=False
  5. Set YouCast up as a service.  I used NSSM.  Otherwise when your server reboots (as it occasionally will), you won’t have access to your podcast.
  6. Add the URL to your podcast catcher (I like Podcast Addict).

For bonus points, use a service like FeedBurner or RapidFeeds to capture stats about the podcast and make the URL nicer.

I did this for one of my favorite YouTube channels, the Startup Therapist.  (I’ve asked Jeff at least once to start a podcast, and I hope he does soon.)

I’m not sure exactly what the issue is, but even though the podcast RSS feed has all the episodes in it, I can only download three podcasts per channel at a time.  I’ve no idea why–whether it is a limit of Podcast Addict, YouTube, YouCast, or some combination.  But apart from that, this solution works nicely.




How to maintain motivation when blogging

Another year slipped by! They seem to come faster and faster, just as promised by all the old men in the comic strips I read when growing up.

I recently had a couple of conversations about blogging: how to start, why to do it, how to maintain it. I thought I’d capture some of my responses.

After over twelve years of blogging (that’s correct, in 2016 my blog is a teenager!), here are the three reasons that I keep at it.

  • Writing crystallizes the mind. Writing a piece, especially a deep technical piece, clarifies my understanding of the problem (it’s similar to writing an email to the world, in some ways). Sometimes it will turn up items I hadn’t considered, or other questions to search on. It’s easy to hold a fuzzy concept in my mind, but when written down, the holes in my knowledge become more evident.
  • Writing builds credibility. I have received a number of business inquiries from my writing. (I suspect there’d be more if my blog were more focused. The excellent “How to start blogging” course from John Sonmez is worth signing up for.  The number one thing to have a successful blog is subject matter focus. But I have a hard time limiting myself to a single topic. Maybe I’m building credibility as a generalist?) And I’ve had a few people interview me for positions and mention they found this blog. It’s easy to say “I know technology XXX” in an interview or consulting situation, but I have found it to be powerful and credible to say “Ah yes, I’ve seen technology XXX before. I wrote a post about it six months ago. Let me send that to you.”
  • Writing helps others. I have had friends mention that they were looking for solutions for something and stumbled across my blog. In fact, I’ve been looking for solutions to issues myself and stumbled onto a post from my blog, so even my future self thanks me for blogging.  I don’t have many comments (real ones, at least. The spam, oh, the spam), but the ones that are left often thank me for helping them out. And I know I have been helped tremendously by posts written by others, so writing pays this help forward.

Of course, these reasons apply to almost all writing–whether magazines, comments on social networks, Twitter, Medium, answers on Stack Overflow, or something else.  So why continue to write on “Dan Moore!”?  Well, I did try Medium recently, and am relatively active on Twitter, Hacker News and Stack Overflow, and slightly less active on other social sites like Reddit.  All these platforms are great, but my beef with all of them is the same–you are trading control for audience.  As long as I pay my hosting bill and keep my domain registered, my content will be ever-present.  In addition, my blog can weave all over the place as my available time and interests change.

If you blog, I’d love to hear your reasons for doing so.  If you don’t, I’d love to hear what is keeping you from it.

Year in review, aka what did I ship in 2015

What did I ship (or help ship) in 2015?

(I did this a few years ago, and then became an employee.  Though it is probably even more important to think about what you ship as an employee, it is easier to publicize when I am a contractor.)

  • Rewrote my farm share directory to support multiple states, fixed numerous bugs, and added a new feature to let folks add reviews.
  • Sent seven newsletters for said farm share directory.
  • Created an email course to educate consumers about farm shares.
  • Helped take a video-to-structured-data project from failure to success.  I was brought in as a senior engineer and helped with porting an admin app from one environment to another, reviewing and fixing a Python program which took video and generated images, managing datasets for training, writing Java microservices around C libraries, documenting processes, and coordinating with an overseas team as needed.
  • Set up Secor to pull logging from Kafka to S3, as well as setting up Java processes to log to Kafka.
  • Helped integrate Activiti into a custom workflow engine, and promoted a test-first culture on the team I worked with.
  • Dropped in and helped troubleshoot an e-commerce system with which I was totally unfamiliar during an emergency.
  • Learned enough Ruby on Rails and Stripe to add an online order form to an existing Heroku website.
  • Helped build a backend system that monitors phone and car locations to prevent texting and driving.  My role on this small team varied between devops, Java development, QA, code review, defining process, and documentation.
  • Installed and tuned an ELK stack used for business intelligence and developer debugging.
  • Took my wife’s writing and turned it into a book (best surprise ever).
  • Wrote 34 blog posts.

Of course, there were other personal milestones too, like camping with the kiddos, getting solar installed, road trips, and date nights with the wife.  All in all, a great year.  Here’s to 2016.

When you encounter a PropertyReferenceException using SpringData and MongoDb

If you are using Spring Data and MongoDB, you can use magic methods called “derived queries”, which make writing simple queries very easy.

However, you may run into a PropertyReferenceException: No property module found for <type> message, with an exception similar to the one below.

This means you have a typo in your derived query (a miscapitalized word, a misspelled word, etc.), so take a long look at that Repository interface.

[mongod output] 10:09:58.421 [main] WARN o.s.c.a.AnnotationConfigApplicationContext [] - Exception encountered during context initialization - cancelling refresh attempt 
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'tripDAO': Invocation of init method failed; nested exception is No property module found for type Trip!
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at$1.getObject( ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-context-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-context-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at org.springframework.context.annotation.AnnotationConfigApplicationContext.( [spring-context-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at com.katasi.decision_engine.AbstractMongoDbTest.createApplicationContext( [test-classes/:na] 
at org.apache.camel.testng.CamelSpringTestSupport.doPreSetup( [camel-testng-2.15.2.jar:2.15.2]
at com.katasi.decision_engine.processor.CreateTripTest.doPreSetup( [test-classes/:na]
at org.apache.camel.testng.CamelTestSupport.setUp( [camel-testng-2.15.2.jar:2.15.2]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_31]
at sun.reflect.NativeMethodAccessorImpl.invoke( ~[na:1.8.0_31]
at sun.reflect.DelegatingMethodAccessorImpl.invoke( ~[na:1.8.0_31]
at java.lang.reflect.Method.invoke( ~[na:1.8.0_31] 
at org.testng.internal.MethodInvocationHelper.invokeMethod( [testng-6.8.21.jar:na]
at org.testng.internal.Invoker.invokeConfigurationMethod( [testng-6.8.21.jar:na]
at org.testng.internal.Invoker.invokeConfigurations( [testng-6.8.21.jar:na]
at org.testng.internal.Invoker.invokeMethod( [testng-6.8.21.jar:na]
at org.testng.internal.Invoker.invokeTestMethod( [testng-6.8.21.jar:na]
at org.testng.internal.Invoker.invokeTestMethods( [testng-6.8.21.jar:na]
at org.testng.internal.TestMethodWorker.invokeTestMethods( [testng-6.8.21.jar:na]
at [testng-6.8.21.jar:na]
at org.testng.TestRunner.privateRun( [testng-6.8.21.jar:na]
at [testng-6.8.21.jar:na]
at org.testng.SuiteRunner.runTest( [testng-6.8.21.jar:na]
at org.testng.SuiteRunner.runSequentially( [testng-6.8.21.jar:na]
at org.testng.SuiteRunner.privateRun( [testng-6.8.21.jar:na]
at [testng-6.8.21.jar:na]
at org.testng.SuiteRunnerWorker.runSuite( [testng-6.8.21.jar:na]
at [testng-6.8.21.jar:na]
at org.testng.TestNG.runSuitesSequentially( [testng-6.8.21.jar:na]
at org.testng.TestNG.runSuitesLocally( [testng-6.8.21.jar:na]
at [testng-6.8.21.jar:na]
at [surefire-testng-2.12.4.jar:2.12.4]
at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.executeMulti( [surefire-testng-2.12.4.jar:2.12.4]
at org.apache.maven.surefire.testng.TestNGDirectoryTestSuite.execute( [surefire-testng-2.12.4.jar:2.12.4]
at org.apache.maven.surefire.testng.TestNGProvider.invoke( [surefire-testng-2.12.4.jar:2.12.4]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_31]
at sun.reflect.NativeMethodAccessorImpl.invoke( ~[na:1.8.0_31] 
at sun.reflect.DelegatingMethodAccessorImpl.invoke( ~[na:1.8.0_31]
at java.lang.reflect.Method.invoke( ~[na:1.8.0_31] 
at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray( [surefire-api-2.12.4.jar:2.12.4]
at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke( [surefire-booter-2.12.4.jar:2.12.4]
at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider( [surefire-booter-2.12.4.jar:2.12.4]
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess( [surefire-booter-2.12.4.jar:2.12.4]
at org.apache.maven.surefire.booter.ForkedBooter.main( [surefire-booter-2.12.4.jar:2.12.4]
Caused by: No property module found for type Trip!
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at$OrPart.( ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at$Predicate.buildTree( ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at$Predicate.( ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-mongodb-1.7.2.RELEASE.jar:na]
at$MongoQueryLookupStrategy.resolveQuery( ~[spring-data-mongodb-1.7.2.RELEASE.jar:na]
at$QueryExecutorMethodInterceptor.( ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-commons-1.10.2.RELEASE.jar:na]
at ~[spring-data-mongodb-1.7.2.RELEASE.jar:na]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
at ~[spring-beans-4.1.7.RELEASE.jar:4.1.7.RELEASE]
... 50 common frames omitted
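
To make the naming rule concrete, here’s a hypothetical sketch (the Trip class’s moduleName field and the method names are illustrative, not from the real app).  Spring Data parses the derived-query method name and resolves each segment against the domain type’s properties; a segment with no matching property is what triggers the PropertyReferenceException.  This toy version only handles the simplest single-property case, not camel-case splitting or nested paths:

```java
public class DerivedQueryNames {
    // Hypothetical domain class; the property is "moduleName", not "module".
    static class Trip {
        String moduleName;
    }

    // In the real code these would live on an interface extending
    // MongoRepository<Trip, String>:
    //   List<Trip> findByModule(String m);     // fails: Trip has no "module" property
    //   List<Trip> findByModuleName(String m); // works: matches the field above

    // Minimal check of the naming rule: strip the "findBy" prefix, lower-case
    // the first letter, and see whether the domain class declares that field.
    static boolean propertyExists(String methodName) {
        String prop = methodName.substring("findBy".length());
        prop = Character.toLowerCase(prop.charAt(0)) + prop.substring(1);
        try {
            Trip.class.getDeclaredField(prop);
            return true;
        } catch (NoSuchFieldException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("findByModule resolvable: " + propertyExists("findByModule"));
        System.out.println("findByModuleName resolvable: " + propertyExists("findByModuleName"));
    }
}
```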

Guide to Reindexing ElasticSearch data input with Logstash

I ran into an issue where I had set up Logstash to load numeric data as strings.  Then, later on, when we wanted to do visualizations with it, they were off.  So I needed to re-index all the data.

A total pain; I hope this guide helps.  (Here’s some additional Elasticsearch documentation: here and here.)

If you don’t care about your old data, just:

  • shut down Logstash
  • deploy the new Logstash filter (with mutates)
  • close all old indices
  • turn on Logstash
  • send some data through to Logstash
  • refresh fields in Kibana–you’ll lose popularity

Now, if you do care about your old data, well, that’s a different story. Here are the steps I took:

First, modify the new Logstash filter file to use mutate, and deploy it.  This takes care of the Logstash indexes going forward, but will cause some Kibana pain until you convert all the past indexes (because some indexes will have fields as strings and others as numbers).
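
For example, a mutate filter that coerces a hypothetical numfield to an integer going forward might look like this (numfield is a placeholder; list each field that was mis-typed as a string):

```
filter {
  mutate {
    convert => { "numfield" => "integer" }
  }
}
```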

Install jq, which will help you transform your data (jq is magic, I tell you).

Then, for each day/index you care about (logstash-2015.09.22 in this example), you want to follow these steps.

# get the current mapping
curl -XGET 'http://localhost:9200/logstash-2015.09.22/_mapping?pretty=1' > mapping

#back it up
cp mapping mapping.old

# edit mapping, change the types of the fields that are strings to long, float, or boolean.  I used vi

# create a new index with the new mapping 
curl -XPUT 'http://localhost:9200/logstash-2015.09.22-new/' -d @mapping

# find out how many rows there are.  If there are too many, you may want to use the scrolled search.  
# I handled indexes as big as 500k documents with the below approach
curl -XGET 'localhost:9200/logstash-2015.09.22/_count'

# if you are modifying an old index, no need to stop logstash, but if you are modifying an index with data currently going to it, you need to stop logstash at this step.

# change size below to be bigger than the count, and save the result to a
# scratch file.  (The scratch file names data and data.json are arbitrary.)
curl -XGET 'localhost:9200/logstash-2015.09.22/_search?size=250000' > data

# edit data, just get the array of docs without the metadata
sed 's/^[^[]*\[/[/' data | sed 's/..$//' > data.json

# run jq to build a bulk insert compatible json file
# make sure to correct the _index value in the line below
jq -f jq.file data.json | jq -c '{ index: { _index: "logstash-2015.09.22-new", _type: "logs" } }, .' > toinsert

# where jq.file is the file below

# post the toinsert file to the new index
curl -s -XPOST localhost:9200/_bulk --data-binary "@toinsert"; echo

# NOTE: depending on the size of the toinsert file, you may need to split it up into multiple files using head and tail.  
# Make sure you don't split the metadata and data line (that is, each file should have an even number of lines), 
# and that files are all less than 1GB in size.

# delete the old index
curl -XDELETE 'http://localhost:9200/logstash-2015.09.22'

# add a new alias with the old index's name and pointing to the new index
curl -XPOST localhost:9200/_aliases -d '
{
   "actions": [
       { "add": {
           "alias": "logstash-2015.09.22",
           "index": "logstash-2015.09.22-new"
       } }
   ]
}'

# restart logstash if you stopped it above.
sudo service logstash restart

# refresh fields in kibana--you'll lose popularity

Here’s the jq file which converts specified string fields to numeric and boolean fields.

# this is run with the jq tool for parsing and modifying json

# from
def translate_key(from;to):
  if type == "object" then . as $in
     | reduce keys[] as $key
         ( {};
       . + { (if $key == from then to else $key end)
             : $in[$key] | translate_key(from;to) } )
  elif type == "array" then map( translate_key(from;to) )
  else .
  end;

def turn_to_number(from):
  if type == "object" then . as $in
     | reduce keys[] as $key
         ( {};
       . + { ($key )
             : ( if $key == from then ($in[$key] | tonumber) else $in[$key] end ) } )
  else .
  end;

def turn_to_boolean(from):
  if type == "object" then . as $in
     | reduce keys[] as $key
         ( {};
       . + { ($key )
             : ( if $key == from then (if $in[$key] == "true" then true else false end ) else $in[$key] end ) } )
  else .
  end;

# for example, this converts any values with this field to numbers, and outputs the rest of the object unchanged
# run with jq -c -f  
.[]|._source| turn_to_number("numfield")

Rinse, wash, repeat.

© Moore Consulting, 2003-2015