
Getting started with sharetribe development–vagrant style

I have recently spent a fair bit of time working with Sharetribe, an open source, MIT licensed marketplace platform that also powers a hosted solution.

First, let me say that the software is the least significant piece of a marketplace (like AirBnB).  The least significant! (Check out the Sharetribe Academy for some great content about the other steps.)  

But it is still a necessary component.  If you can get by with the hosted solution to prove out your idea, I suggest you do so–$100/month is a lot cheaper than hours of software development. There may come a time when you want to customize the sharetribe interface beyond what javascript injection can do.  If this is the case, you need a developer.  And that developer needs an environment.  That’s what this post is really about.

The sharetribe github readme explains the installation process pretty well, but I find it tedious, so I created a quick start vagrant VM. This VM has a sharetribe installation ready to go.  I use vagrant 1.6.3 and VirtualBox 5–google around for instructions on how to get those up and running. The guest VM is Ubuntu 14.04. This VM uses rvm to manage ruby versions, but I couldn’t be bothered with nvm. It will install sharetribe 5.8.0 and all needed components.

Assuming you have vagrant and VirtualBox installed, download the Vagrantfile and put it in the directory where you want to work. Edit it and change any options you’d like. The options I changed were the port forwarding (I set it to 3003), networking options, and the amount of memory used (I allocate 4GB).
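
If it helps, here’s a minimal sketch of the kind of settings I mean, assuming the VirtualBox provider–the exact lines in the downloaded Vagrantfile may differ, so adjust rather than copy:

# Sketch of the relevant Vagrantfile settings (Ruby DSL); illustrative only.
Vagrant.configure("2") do |config|
  config.vm.box = "sharetribe-base-mooreds"

  # Forward the Rails server port (3000 in the guest) to 3003 on the host.
  config.vm.network "forwarded_port", guest: 3000, host: 3003

  config.vm.provider "virtualbox" do |vb|
    # Give the VM enough memory to run rails, mysql, and asset compilation.
    vb.memory = 4096
  end
end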

Then run vagrant box add http://www.mooreds.com/sharetribe/sharetribe-base-mooreds.box --name sharetribe-base-mooreds to download the file. It’s downloading a large file from my (small) server, so expect it to take a while (hours).

Then run vagrant up to boot the VM.

Once you can log in (the password is vagrant if you use vagrant putty, or you can use vagrant ssh), go to the sharetribe directory and do the following:

  • fork the sharetribe repo
  • update your git remote so that your origin is your forked repo (and not mine, because you won’t get write access to mine)
  • create a branch in your repo off of the 5.8.0 tag. There’s one startup script I tweaked a bit, but you can just ignore those changes.
  • update your mysql password: SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass');
  • start mailcatcher listening to all interfaces: mailcatcher --ip=0.0.0.0
  • start the rails/react server: foreman start -f Procfile.static
  • visit lvh.me:3003 and start your server
  • set up your local super user and first marketplace
  • edit the url you are redirected to so it has the correct external port (from the vagrant settings): for example, from http://testdev.lvh.me:3000/?auth=baEOj7kFrsw to http://testdev.lvh.me:3003/?auth=baEOj7kFrsw

This is running sharetribe 5.8.0, and I’m sure there will be follow-on releases. Here’s how to sync the releases coming from the sharetribe team with your current repo. I’ve taken the liberty of creating an upstream branch for you.

This doesn’t cover deploying the code anywhere–I’d recommend this gist. Make sure you read the comments! Or I can install a vanilla version of sharetribe to heroku for a flat fee–contact me for details.

Perils of ORM caching

So, I was working on a rails4 project today and added an after_create method to a model (call it model A) that checked on a related object (call it model B) to see its state, and if it met a certain criteria, did something to the model A object being created. The specifics don’t really matter, but I was using zeus to run my rspec tests.

This caused three tests to fail in entirely unrelated sections of the application.

What on earth was going on?

Well, first I used git bisect to determine the exact commit that caused the issue.  (As far as I’m concerned, the existence of git bisect confirms my belief in ‘commit early, commit often’.)

Then I dug in.  It appears that each of the tests was tweaking the model B object and testing some aspect of the change, usually through the model A object.  Before I added the after_create method, the model B object wasn’t loaded into the in-memory ActiveRecord object graph tied to the model A object when the test initially saved the model A object; instead it was loaded fresh from the database when the method under test executed.

After the after_create method was added, the model B object was loaded into the in-memory object graph tied to the model A object at creation time.  Then the test tweaked the model B object in the database, but didn’t reload the model A object, which still held a stale copy of the model B object.

A simple reload of model A (and its object graph) fixed it–as would repositioning when the test modified the model B object–but it was quite a subtle testing bug to track down.
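
To make the failure mode concrete, here’s a minimal sketch using hypothetical ModelA/ModelB classes and column names (not the actual project code):

# Hypothetical models illustrating the caching issue (not the real app's code).
class ModelB < ActiveRecord::Base
  has_one :model_a
end

class ModelA < ActiveRecord::Base
  belongs_to :model_b
  after_create :check_model_b

  def check_model_b
    # Reading the association here loads model B into A's in-memory graph.
    update_column(:flagged, true) if model_b && model_b.state == "ready"
  end
end

# In a test, that cached association then goes stale:
b = ModelB.create!(state: "new")
a = ModelA.create!(model_b: b)              # after_create reads and caches model_b

ModelB.find(b.id).update!(state: "ready")   # change the row via a separate query

a.model_b.state        # => "new"   (the cached, now-stale copy)
a.reload.model_b.state # => "ready" (reload clears the association cache)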

Adding a sitemap to sharetribe

I have been using the excellent Sharetribe framework to build a marketplace for food businesses and commercial kitchens for my new startup, The Food Corridor.  However, it didn’t have support for generating a sitemap.xml file for all the listings available.

How is someone going to find the right kitchen space via google if we don’t have a sitemap keeping google apprised of all the options?

This wouldn’t do.  So, I added the ability to generate a sitemap for all the listings in the marketplace.

First off, install the gem–I used sitemap_generator as it seemed to do what I needed–allow me to call out certain routes and add them to my sitemap.  Then you need to create a configuration file, at config/sitemap.rb.  Mine looks like:


# Build absolute URLs from the configured marketplace domain.
SitemapGenerator::Sitemap.default_host = "https://" + APP_CONFIG.domain

SitemapGenerator::Sitemap.create do
  # Include every listing that is still open and not deleted.
  Listing.where(deleted: false, open: true).find_each do |listing|
    add listing_path(listing), :lastmod => listing.updated_at
  end
end


Then I just ran bundle exec rake sitemap:refresh:no_ping and a sitemap.xml.gz was generated in my public directory.

If you are running on AWS or someplace else with a persistent filesystem, you can skip to the text starting with “Then, I scheduled”.

If you are running on a PAAS like Heroku, where you don’t get a persistent filesystem, you’ll want to push this generated file to a persistent place. I chose S3. Since sharetribe already has paperclip as a dependency, I used the instructions here and here, with a few modifications for sharetribe.

My rake task to upload the sitemap file was:


require 'aws'

namespace :sitemap do
  desc 'Upload the sitemap files to S3'
  task upload_to_s3: :environment do
    # Uses the aws-sdk v1 interface (require 'aws'), which sharetribe already
    # pulls in via paperclip.
    s3 = AWS::S3.new(
      access_key_id: ENV['aws_access_key_id'],
      secret_access_key: ENV['aws_secret_access_key']
    )
    bucket = s3.buckets[ENV['s3_bucket_name']]

    # The file that sitemap:refresh:no_ping just wrote to the (ephemeral) public directory.
    file = File.join(Rails.root, "public", "sitemap.xml.gz")
    path = "sitemap/sitemap.xml.gz"

    # Upload it to the bucket and make it publicly readable.
    object = bucket.objects[path]
    object.write(file: file)
    object.acl = :public_read
  end
end


I then run the sitemap:refresh:no_ping and upload_to_s3 tasks in the same heroku scheduled task: rake sitemap:refresh:no_ping sitemap:upload_to_s3. If you don’t do that (and instead run them as separate scheduled tasks), the upload task won’t have access to the file, because it will have been generated on a different dyno’s filesystem.

You also need to add a sitemap controller that redirects from yourdomain.com/sitemap.xml.gz to the S3 bucket (again, as outlined in the articles linked above).
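
If it helps, here’s a minimal sketch of what that redirect might look like; the route, controller name, and S3 URL pattern are illustrative, not the exact code from those articles:

# config/routes.rb -- add inside the existing routes block (illustrative)
get "/sitemap.xml.gz", to: "sitemaps#show"

# app/controllers/sitemaps_controller.rb (illustrative)
class SitemapsController < ApplicationController
  def show
    # Send crawlers to the copy of the sitemap we uploaded to S3.
    redirect_to "https://#{ENV['s3_bucket_name']}.s3.amazonaws.com/sitemap/sitemap.xml.gz",
                status: :moved_permanently
  end
end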

Then, I scheduled a daily refresh of the sitemap.xml file and submitted the file to relevant search engines.

Things I didn’t do:

  • handle more than 50k urls
  • support multiple communities (not really needed for me, but I bet if the folks behind sharetribe.com wanted to use this, they’d want such support).
  • add the sitemap.xml file to my robots.txt file, as outlined here.

The Trouble with Snapchat

I joined Snapchat a while ago. I found tremendous value in the snapstorms by Mark Suster. And some value in the chats from Justin Kan and Gary Vaynerchuk. I’m no Snapchat expert–never made a snap. Just followed people for their stories. But I was interested and was checking the app a couple times a day for a while.

Yet, now I deleted the app from my phone.

Why?

Because even though I was getting value from the media I was consuming, there were two major issues.

  • I couldn’t share a great snapchat. Other than suggesting “hey, why don’t you get on snapchat and follow this person because they are talking about lots of interesting things”, you can’t share the knowledge. I didn’t think that was very important until the fifth or sixth time I thought “geez, XXX would really enjoy this” and then realized I couldn’t share it with them and felt a twinge of annoyance. I miss having a universal resource locator that I can share as I please.
  • I couldn’t consume a snapchat when I wanted to. I often will email myself an article, or leave a tab open, or even post it to Twitter or Hacker News if I scan it and know I’d like to come back and read it more fully later. Even in my Twitter or Facebook feeds, I can scroll back years if I want to. Snapchat forces you to consume content on their schedule. And that gets frustrating.

I can see why both of these attributes are good for content creators–they force the consumer to engage more. More on that here, from msuster. But this consumer is saying goodbye to Snapchat. At least until they give me URLs.

Cash Flow

Contracting, like any other business, is all about cash flow. You want to make sure you have more money coming in the door than leaving the door.

A friend of mine once told me that the best advice he had received about running a one person business was that there were three components to the work:

  • getting work
  • doing the work
  • getting paid for the work

and if you didn’t enjoy all three and treat them equally, you’d be in a world of hurt.

I find this to be very true. Don’t consider contracting if you are only interested in the doing of the work (whether that be development, design, data manipulation, etc). You don’t have to be perfect at the other pieces (getting the work and getting paid for the work), but you ignore them at your peril.

Good ways to get the work:

  • Market yourself. I like blogging, but contributing to open source and speaking at user groups and conferences seem to work well too.
  • Always be networking and helping others.
  • Look for work before your contract ends.
  • Have a cash reserve so you don’t have to take the first gig that comes along.

Good ways to get paid for the work:

  • Sign a contract
  • Stop work if you aren’t getting paid
  • Be persistent–I have chased invoices for five months before getting paid (this included sending the responsible party a holiday gift)
  • Use an accounting system (I like FreshBooks but use something! I started with a spreadsheet).

If you’d like to learn more about contracting, I am speaking at the June Boulder Ruby Meetup. You can RSVP here.

Review of Modular Rails

I am currently working on modifying an existing large rails app.  I am customizing some of the look and feel and extending functionality.

The app is under current development and I wanted to be able to take advantage of bug fixes or improvements, without impacting my customizations.  Or at least minimizing that impact.

Being fairly new to Rails, I surveyed the landscape and thought that building my customizations as an engine would be a good way to go.  (I was wrong, because engines have a hard time reaching out and modifying the application that they are part of.  At least that seemed to be a non standard use of engines from what I could find.) The author of Modular Rails has some good blog posts about engines and modularity, so I bought his book.

Pluses:

  • Good overview of how to extend the three major components of a rails app: models, views, and controllers
  • Easy reading style
  • Leverages existing gems like deface
  • Mentions testing
  • Starts from first principles and then later gives you a gem to speed up development
  • Not too long
  • Information on setting up your own gems server

Minuses:

  • Focus on ‘greenfield’ apps.  No mention of integration with existing monoliths.
  • Uses nested modules, unlike every other engine article out there
  • Assumes relatively advanced knowledge of rails
  • Fair bit of fluff–lots of ‘mv’ commands
  • Extra charge for source code

All in all I am glad I read this book.  It didn’t fit my needs, but it didn’t promise that either.  I found it a good overview of the engine concept, even if he did do some things in a non standard manner and was a bit verbose about unix commands.

If you have done more Rails development, it will be more useful, and it is a great way to think about building new freestanding applications.  I haven’t surveyed the entire rails book landscape but I haven’t found anything out there focusing on Rails engines that is better.

Using LinkedIn to Help Others Find Jobs

I like LinkedIn, I really do. If you can get past the recruiters randomly spamming you (and yes, I know they are LinkedIn’s primary source of revenue), the professional network is quite powerful.

As Gary Vaynerchuk and Liz Ryan say, you don’t have any excuses for not finding the right person to reach out to with your sales pitch or job interest.

But recently I was contacted by a former colleague who noticed that I had “liked” one of my other contacts’ job posting. He asked a few questions, applied and interviewed, and got the job. I love matching people up with jobs, but this was easy.

My plan is to search LinkedIn periodically and “like” all job listings posted by former colleagues or other trusted sources, and hope this experience happens again.

Blog for yourself

In my perusal of Twitter, I came across this piece by Dave Winer (the creator of RSS and an interesting, provocative blogger): What I learned from Om and Hossein.

The whole piece is worth a read (it’s short), but here’s the quote that resonated for me:

I write my blog not because I want to write a “good” blog post, or even one that’s read by a lot of people. And my own self is not scattered, it’s right here, and as long as I live it will continue to be here. And my online self doesn’t exist for the benefit of others, it’s here to help my real self develop his thinking and create a trail of ideas and feelings and experiences that I can look back on later.

Blogging for me has always been about the ability to engage with my ideas and experiences, and if others gain from it, the more the merrier. Of course, it’s hard not to check stats and subscribers, etc, etc, but the real win for me comes from when I’m searching for the answer to a question and my blog pops up.

Joining The Food Corridor

After I left 8z, I contracted for about a year and a half. It was great fun, moving between projects, meeting a lot of new developers, and learning a lot of new things. I worked on evaluating software products and processes, supporting machine learning systems, large workflow engines, and, most recently, backend systems to stop distracted driving (they’re hiring, btw).

But I saw an email from Angellist early this year about a company looking to build a marketplace for kitchen space. (Aside: if you are interested in the labor market for startup professionals, Angellist emails are great–they not only give you the company name and job description, but also typically include equity and salary–very useful information.) I replied, the conversation started, and I did some research on the company. They were pre-revenue, but the founder had been grinding it out for months and had an extensive background in the industry. It was clear they weren’t a fly-by-night, “we just had an idea for an app and need someone to build it” operation.

After discussions, interviews and reference checks, it became clear that this was a fantastic opportunity to join an early stage startup as a technical co-founder. So, I’m thrilled to announce that I have joined The Food Corridor as CTO/Co-Founder.

Why does this opportunity excite me so?

  • By increasing visibility and availability of shared kitchen space, it can grow the local food system, especially value add producers, across the country
  • There’s a real need for some innovative software and process solutions that we can solve
  • The founding team has the diverse set of skills needed to run a great company
  • I am looking forward to learning about the business side of a software company
  • It’s right at the intersection of two of my passions–food and technology

If you are interested in following along with the TFC journey, there’s a monthly newsletter that will be focused on shared kitchen topics.

As far as the blog, I expect to be heads down and building product, but will occasionally pop up and post.

Here’s to new adventures!

Step back and review the problem: SSL Edition

I spent the better part of a week recently getting an understanding of how SSL works and how to get client certificates, self-signed certificates, and java all playing nicely.  However, in the end, taking a big step back and examining the problem led to an entirely different solution.

The problem in question is access to a secure system from a heroku application.  The secure system is protected by an IP based firewall, basic authentication over SSL, and client certificates that were signed by a non standard certificate authority (the owner of the secure system).

We are using HttpClient from Apache for all HTTP communication, and there’s a nice example (but, depending on your version of the client, run loadKeyMaterial too, as that is what ends up sending the client cert).  When I ran it from my local machine, after setting the IP to be one of the static IPs, I ended up seeing a wide variety of errors.  Oh, there are many possible errors!  But finally, after making sure that I could import the private key into the store, that I was using the correct cipher/java version combo, writing a stripped down client that didn’t interface with any other parts of the system, learning about the javax.net.debug=ssl switch, making sure the public certificates were all in the store, and confirming that I had IP based access to the secure system, I was able to watch the system go through a number of the steps outlined here:

(Image: SSL message flow, from the JSSE Guide)

But I kept seeing this exception: error: javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure. I couldn’t figure out how to get past that. From searching there were a number of issues that all manifested with this error message, and “guess and check” wasn’t working. Interestingly, I saw this error only when running on my local machine with the allowed static IP. When I ran on a different computer that was going through a proxy which provided a static IP, the client certificate wasn’t being presented at all.

After consulting with the other organization and talking with members of the team about this roadblock, I took a step back and validated the basics. I read the JSSE Reference guide. I used the openssl tools to verify that the chain of certificates was valid. (One trick here is that if you have intermediate certs in between your CA and the final cert, you can add only one -untrusted switch. So if you have multiple certs in between, you need to combine the PEM files into one file.) I validated that the final certificate in the keystore matched the CSR and the private key. Turns out I had the wrong key, and that was the source of the handshake issue. Doh!

After I had done that, I took a look at the larger problem. Heroku doesn’t guarantee IP addresses, not even a range. There are add on solutions (proximo, quotaguard, fixie) that do provide a static IP address, and that was our initial plan. However, all of these are proxy based solutions. A quick search turns up the unpleasant reality that proxies can’t pass client certificates. The post talks about a reverse proxy, but it applies as well to regular proxies. From the post:

Yes, because a client can only send its certificate by using encrypted and
SIGNED connection, and only the client can sign the certifikate (sic) so server
can trust it. The proxy does not know the clients private key, otherwise the
connection would not be secure (or not in the way most people know that).

SSL is made up to avoid man-in-the-middle attack, and the reverse proxy IS
the man-in-the-middls. Either you trust it (and accept what it sends) or
don’t use it.

All my work on having the java code create the client certificate was a waste. Well, not a total waste because now I understood the problem space in a way I hadn’t before, so I could perform far better searches.

I opened a support request with our proxy provider, but it was pretty clear from the internet, the support staff and the docs that this was a niche case. I don’t know if any static IP proxy providers support client certificates, but I wasn’t able to find one.

Instead, we were able to use AWS elastic IP and nginx to set up our own proxy. Since we controlled it, we could install the client certificate and key on it, and have the heroku instance connect to that proxy. (Tips for those instructions–make sure you download the openssl source, as nginx wants to compile it into the web server. And use at least version 1.9 of the community software.)

So, I made some mistakes in this process, and in a personal retro, I thought it’d be worth talking about them.

  • I jumped into searching too quickly. SSL and private certs make for a complicated system with a lot of moving pieces, and it would have been worth my time to start with an overview.
  • While I was focusing on accessing the system from the java code, there were hints about the proxy issue. I didn’t consider the larger picture and was too focused on solving the immediate issue.