Using WordPress as a CRUD Database, API Included

Based on this HN discussion, which I discussed a while back, I looked at how to set up WordPress as a CRUD database accessible via API.

It wasn’t hard. Steps:

  1. Install WordPress (I used EC2 and the CloudFormation sample template)
  2. Install the following plugins
  3. I also installed the following optional plugins
  4. I created a custom post type of ‘todo’ and added a couple of custom fields.
  5. I was able to get the todos by going to these URLs (apparently you can have the API live at wp-json, but that required some rejiggering of URL permalinks).
    • http://host/wordpress/?rest_route=/wp/v2/todo/8
    • http://host/wordpress/?rest_route=/wp/v2/todos

Here’s an example of the output:

{
  "id": 8,
  "date": "2018-03-05T02:38:26",
  "date_gmt": "2018-03-05T02:38:26",
  "guid": {
    "rendered": "http://host/wordpress/?post_type=todo&p=8"
  },
  "modified": "2018-03-05T02:40:01",
  "modified_gmt": "2018-03-05T02:40:01",
  "slug": "auto-draft",
  "status": "publish",
  "type": "todo",
  "link": "http://host/wordpress/todo/auto-draft/",
  "title": {
    "rendered": "Buy Milk"
  },
  "template": "",
  "acf": {
    "": false,
    "due_date": "20180308",
    "description": "please buy milk.",
    "who_owns_it": {
      "ID": "1",
      "user_firstname": "",
      "user_lastname": "",
      "nickname": "mooreds",
      "user_nicename": "mooreds",
      "display_name": "mooreds",
      "user_email": "...",
      "user_url": "",
      "user_registered": "2018-03-05 02:21:36",
      "user_description": "",
      "user_avatar": "..."
    },
    "done": false
  },
  "_links": {
    "self": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/todo/8"
      }
    ],
    "collection": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/todo"
      }
    ],
    "about": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/types/todo"
      }
    ],
    "wp:attachment": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/media?parent=8"
      }
    ],
    "curies": [
      {
        "name": "wp",
        "href": "https://api.w.org/{rel}",
        "templated": true
      }
    ]
  }
}

The custom post fields are all under the acf key, and you can see that the who_owns_it field was expanded into a full user object. If you are going to do this, make sure the normal title field is part of the custom post type; otherwise the WordPress UX for editing the custom posts won’t be much use.

Not perfectly RESTful, but a super simple way to set up an API that non-technical folks can use to create, update, or delete records and that you can consume in other systems.
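
As a sketch of consuming this from another system, here’s a minimal Ruby example (hypothetical; it uses the same ?rest_route style URL and post ID from above, and only reads data, since creating or updating posts requires authentication):

require 'net/http'
require 'json'

# fetch a single todo from the WordPress REST API (hypothetical host and ID)
uri = URI('http://host/wordpress/?rest_route=/wp/v2/todo/8')
response = Net::HTTP.get_response(uri)
todo = JSON.parse(response.body)

puts todo['title']['rendered']    # => "Buy Milk"
puts todo.dig('acf', 'due_date')  # => "20180308"
puts todo.dig('acf', 'done')      # => false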


Useful Tool: Intercom

Intercom has been extremely helpful in allowing targeted messages to help users gain knowledge of The Food Corridor application. (Thanks for the recommendation, OTL!) What I’ve found most useful about Intercom is that it allows non-technical users to build rulesets to target specific messages. This helps you help your users uncover new functionality, but only at the moment when the functionality is useful.

For example, if someone has signed up but not created a booking in TFC, we can send them a message a week after they sign up with a helpful link. This message can be via email or an in-app message, and if they respond to the in-app message, it notifies customer support just as if a chat was started any other way.

As a developer, all I had to do to get this data (which lives in our database) up to Intercom was to craft a JavaScript object. I added a method to the application_controller and cached the result for a while, because we ended up sending dozens of attributes up and they are slow changing. From then on, the message targeting is done entirely within Intercom, where a non-technical user can build rules based on this custom data plus any data that Intercom collects by default (last login, etc.).
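
Here’s a rough sketch of the shape of that code (the attribute names, cache key, and expiry are made up for illustration, and it assumes a current_user helper; the real attribute list is much longer). The controller method builds a hash of slow-changing custom attributes and caches it, and the view merges that hash into the settings object read by Intercom’s JavaScript snippet:

class ApplicationController < ActionController::Base
  helper_method :intercom_custom_attributes

  # hypothetical example of the slow-changing attributes we send to Intercom
  def intercom_custom_attributes
    return {} unless current_user

    Rails.cache.fetch("intercom-attrs/#{current_user.id}", expires_in: 12.hours) do
      {
        bookings_count: current_user.bookings.count,
        signed_up_at:   current_user.created_at.to_i,
        plan_name:      current_user.plan&.name
      }
    end
  end
end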

If you do want to target messages to specific web pages (only pop up a message on page XXX, but not page YYY), you need an understanding of regular expressions. Depending on how comfortable your non-technical users are and how complicated your site is, a developer may need to write the first regular expression and then have the other users extend it.
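
For example, a rule meant to fire on individual booking pages but not on a booking settings page might use a pattern like this (made-up URLs, shown in Ruby regex syntax, which may differ slightly from what Intercom accepts):

# matches https://app.example.com/bookings and /bookings/123,
# but not /bookings/settings
%r{\Ahttps://app\.example\.com/bookings(/\d+)?\z}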

I’m skipping over other pieces of Intercom, including a knowledge base and in-app synchronous chat. I think those are valuable as well, but the real win for me was allowing non-technical users to control in-app messaging with minimal software development investment.


How would you build a simple CRUD app in 2018?

Interesting discussion on HN about how you’d build a simple CRUD app in 2018. (CRUD stands for Create, Read, Update and Delete of records in a system.) As you’d expect, no shortage of opinions, with a lot of specifics. Also lots of recommendations to use what you know, which is always a good idea. CRUD apps, where you are simply gathering data via a web interface, run the gamut of complexity and usability, but it’s hard to believe how powerful having a centralized repository of data can be.

And the nice thing is that once you get the data into the backend of a CRUD app (which is likely a SQL-compatible database, but could be any other kind of datastore), you can combine that data with other systems, expose it via an API, or both.

Here was my answer to the question:

I’d use Rails today. But I can see the value in Django or Express or ASP.Net or Spring Boot.

Criteria I’d think about:

  • What is going to be easy for my organization to support (both operationally and with future code changes)?
  • What do I know/want to learn? If the CRUD app is complicated, then I want a tech I know well. If the CRUD app is simple, then I may want to experiment with a different technology (again, within the organizational support guardrails).


Look for the value of the param, not its existence

We are rewriting the front end of the application on which I work. Several of the forms which gather data that drives the application previously had checkboxes, which would set a value to true or false in the database.

Some of the checkboxes were ancillary, however, and didn’t result in a database value. Instead, they were used to calculate a different result depending on whether the checkbox was checked or not. That result would then be saved to the database. This was all tested via RSpec controller tests.

In at least one case, the calculation depended on the existence of the parameter. Checkbox parameters are sent when they are checked and omitted when they are not.

Then we changed the forms to radio buttons.

Uh oh.

Instead of sending the parameter when the box was checked and omitting it when it was not, the form now sent “yes” or “no” depending on which radio button was selected.

Uh oh.

The tests continued to pass, because they hadn’t been updated to the new values.

Uh oh.

So the calculation was incorrect. Which meant that the functionality that depended on the calculation was incorrect. Unfortunately, that functionality was related to billing. So I just spent the last few days backing that out, checking with clients to see what they actually intended to do, and in general fixing the issue.

The underlying code was an easy enough fix, but this story just goes to show that software is complicated. Lessons learned:

  • favor a test for an exact parameter value rather than parameter existence (see the sketch below)
  • when changing forms, consider writing tests with the new input values
  • integration tests that drive a browser would have caught this issue. These are much slower so need to be used judiciously, but are still a valuable automated test
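
To make the first lesson concrete, here’s a minimal sketch (a hypothetical done parameter, model, and spec, not the actual billing code):

# fragile: true whenever the parameter is present, which breaks once a
# radio button starts sending "no" instead of omitting the parameter
done = params[:done].present?

# better: test the exact value the form sends
done = params[:done] == "yes"

# and in the spec, exercise the values the new form actually sends
it "treats an explicit 'no' as not done" do
  post :create, params: { item: { done: "no" } }
  expect(assigns(:item).done).to be false
end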

 


Building a Bridge as Your Clients Walk Across It

There was an interesting article posted to Hacker News about the nuts and bolts of a SaaS product that you might not expect (article, discussion).  I commented, based on my experience, that the early days of a SaaS product are like building a bridge while your clients are walking across it.  You want the bridge to be far enough ahead of your clients that they won’t fall off it, but not so far that, if you or they want to go a different direction, you’ve wasted time and materials building a useless walkway section.

So, don’t build features your customers aren’t going to use. But do build features they are going to need. How do you know what the difference is?

  • you can ask them.  This is the only way to start unless you are a target user of your SaaS product (in which case, ask yourself).  Depending on the technical sophistication of your users, you may or may not get good requirements, but there’s no better way to understand their pain.  They will speak very confidently about their pain; however, they will also try to give you suggested solutions.  Don’t take those as gospel, as they may not have thought through the ramifications of said solutions.  Find them by looking where they congregate online (Facebook groups, forums, Reddit).  Targeted email may be OK if you have a relationship.
  • you can build a placeholder.  This is a great way to see if folks want the feature, if you have some folks using your app.  We built a placeholder for document management: “email us and we’ll upload your documents”.  After a few emails, we knew it would be worth it to build out some way for folks to self serve.
  • you can build an MVF (minimum viable feature).  A feature does not need to spring from your mind fully tested, polished, and automated.  Sprinkle in manual steps, use emails to people instead of automation, or release only a subset of a feature.  The goal here is again to see usage before you fully develop it.  Another benefit is that the MVF may be all that is needed.
  • you can wait until clients ask for it.  The value of this depends on when they need what they ask for.  If they need it when they ask for it, then it’s just another data point (“thanks for the request.  We’ve noted it in our roadmap”).  If they need it a week or a month after they ask for it, then you can actually build it for them.

It actually can be quite helpful to checkpoint feature usage every so often.  I’ve seen this done two different ways, though I’m sure there are more.  The first is to look at the data and see what features clients are using.  This is nice because it just takes developer time digging through your OLTP database.  Make sure you write down the results and the queries.  However, this won’t work until you have some users who’ve been using your system for some period of time.  The second is to schedule user interviews and watch your clients or prospects use your system.  This is time intensive, but can lead to many insights and gives you definite user empathy.

Now, this type of development doesn’t free you from having a strategy. You need to pop your head up every three months or so, revisit the strategy, and see if your business is working toward it. But if you are a completionist, then early-stage SaaS is not for you.


Smashing: A Quick Dashboard Solution

I’m putting together a business metrics dashboard for The Food Corridor (what is old is new again; I remember a project at XOR, my first job out of school, that was all about creating a dashboard). I could have just thrown together some Rails views, but I looked around and saw Smashing, which is a fork of Dashing, a dashboard project that came out of Shopify.

Smashing is a Sinatra app and is fairly simple to set up. It looks gorgeous, a lot better than anything I could hack together. I could install it on a free Heroku dyno. Even though it takes a bit of time to spin up, it is now running for free. Smashing has nice MVC separation: you have dashboards, which assemble widgets, and jobs, which push data to widgets on a schedule. Sending data looks something like this: send_event('val', { current: current }), where val is referenced in the widget.

You can create more than one dashboard (I did only one). They aren’t customizable by non-developers, but once the widgets are written, dashboards can be created by someone with a modicum of experience editing HTML.

Some tips:

  • Smashing stores its state in a file. If you are running on Heroku, the filesystem is ephemeral, so you have two options. You can store the state in an external data store like Redis (patch mentioned here; I didn’t try it), or you can rely on the systems you are polling for metrics to maintain the state. That’s the path I took.
  • The number widget has the ability to display the percentage change since the last update: send_event('val', { current: current, last: last }). Make sure the value you send is an integer; I sent a string like “100000” and it was treated as zero for purposes of the calculation.
  • If you are accessing any external systems, make sure you inject any secrets via environment variables.  For local development, I used dotenv.
  • You’ll want some kind of authentication system.
  • The widgets that come with Smashing aren’t complicated, but neither are they documented, so prepare to spend some time understanding what they expect.
  • I grouped jobs, which gather the data, by data source.  You can send multiple events per job, and I thought that made it clearer.  Connections to APIs or databases only needed to happen once as well.
  • The business metrics which I was displaying really only change on a monthly basis.  So I wanted to run the data gathering immediately, then again after a week or two.  Because of the ephemeral state, I expect the second run will never happen, but I wanted to be prepared for it.  I did so by creating a function and calling it once on job load and then in the scheduler.

Here’s pseudocode for the job that pulls data from Stripe:


require 'stripe'

# configure the Stripe client from an environment variable (dotenv locally)
Stripe.api_key = ENV['STRIPE_SECRET_KEY']

def stripe
  # pull data from Stripe... as an illustration (not our actual query), count recent charges
  current = Stripe::Charge.list(limit: 100).data.size
  # push the value to the 'stripeval' widget
  send_event('stripeval', { current: current })
end

# run once at job load so the widget isn't empty, then on the schedule
stripe

SCHEDULER.interval '1w' do
  stripe
end

Smashing is not a full-on technical metrics solution (like Scout or New Relic), but it can be useful for displaying limited data in a beautiful format with a minimum of development effort. If you’re looking for a dead simple dashboard, Smashing will work for you.


Interview with an early-stage SaaS founder

I had the chance to talk with my good friend and former colleague Corey Snipes about his SaaS project. He recently launched Meeting Star, a lightweight SaaS tool to help coordinate tech meetups. This interview has been lightly edited for clarity.

——-

Why did you come up with this product?

It began as a desire to fill my own need, for a lightweight and inexpensive place to manage small local tech events. It seemed like a fairly straightforward set of features, which wouldn’t take long to build and release as a product. (Don’t they always?) I was aware of several other tech meetup organizers who were looking for an alternative to meetup.com (often due to price, sometimes due to dislike of the feature set or UI). I did quite a bit of research last fall and didn’t find any suitable alternatives so I built it.

What do you hope to achieve with this app?

I have a few parallel interests here. I want a tool that’s useful to me as a meetup organizer. I want to leverage my experience building, marketing, and operating other software products — both my own, and for customers I’ve had over the years. I want to add a business line in my portfolio that provides value and makes people happy, while also being financially sustainable. I also run a separate, conference-related application and I anticipate some complementary lift between the two.

How much research did you do before plunging in and writing it?

Quite a bit. You can always do more, of course. I poked around online and found several lists, articles, and discussion threads about alternative platforms. I followed conversations of other meetup organizers discussing the relative merits of various methods. I made a list and tried seven or eight of what seemed like the top contender products. I was looking for a place to run my meetups, though, not specifically looking at competition. But in the end, everything was either trying to be a full-featured community management piece, or was such a terribly crafted alternative that I felt there were exactly zero real usable options for what I wanted to do.

Who is the product aimed at?

This particular [app] was born of my own needs, and I tend toward tech and entrepreneurship meetups. It’s well-suited to tech and software meetups. Those are the people in my network. Those are the meetups I attend and organize, and those are the users whose needs I can most easily identify and meet. Since it’s a reasonably lightweight application, it’s actually well-suited to many different kinds of groups, but software/tech/biz meetups are my focus.

What would make you consider this product a success?

Right now, success is getting ten paying, happy customers and getting things dialed in to their needs. I subscribe to Patrick McKenzie’s wisdom that the first ten customers are critical for turning your idea into something people want to use. And also, for proving it’s not a fluke. If you can get to ten, you can get to a hundred. And if you can get to a hundred, you can get to a thousand. For me, long-term success is a useful, sustainable product that has revenue to turn into improvements, runs smoothly, and maybe also puts a little money in my kids’ college fund.

——-

You can find out more about Corey, including why he’s moving to Cleveland, at his website.


If the images in your Rails image_tag calls don’t have a checksum…

This last week, I spent a lot of time learning about how Rails serves static files, how it interacts with a CDN like CloudFront, and how misconfigurations can really screw up your application.  I wanted to document this here so that if I run into these situations again, I can troubleshoot them more easily.

Problem #1: We did a large release, with a lot of moving pieces

We (The Food Corridor) recently engaged with a consulting company to do a refresh of the look and feel of our application.  They don’t just do UX and design, but also implementation, using overseas developers and QA.  I was excited to let said developers focus on look and feel (not my strength) but made a mistake in not setting up an entire environment for them.  Instead, I let them use our staging environment.  Things took longer than predicted (as they always do, and some of it was due to my availability).  I was doing some tidying up, changing the deploy process, etc., and ended up merging a lot of changes into our codebase.  This meant that the first release had a lot more risk than previous releases: there was some of the new look and feel as well as all my changes.

That made troubleshooting any issues that came up with the release difficult, because it wasn’t clear what caused them: was it deployment changes, some of my code, or some of the look and feel code?

Solution #1: Set up a ‘review’ environment for any long-running branches.  I haven’t gone the full ‘review app’ path yet, but based on the docs, it doesn’t seem too hard to set up.  For now we just have one review environment that can be shared.

Problem #2: Certain images appeared on staging but not on production.

This was one of the issues that caused some heartache during the first release.  There were some images (SVGs, to be exact) that were present on staging but not on production (from the browser, you’d get a 404).  But staging and production had the exact same codebase, including images.  We were doing a fresh deploy to production.  What was the issue?

I rolled back a lot of the changes I’d made to make sure it wasn’t a deployment issue (turning off pipelines, unsetting environment variables, etc.).  No love.  We were still seeing 404s.  Looking at the HTML, on production the image had a different name, without the hash at the end.  From Slack:

Interesting. on prod the envelope image is here: <img alt="Message" src="https://d1soonciftqo56.cloudfront.net/images/tfc/message.svg">

and on staging it is here: <img alt="Message" src="https://d1wspyydkkjqvw.cloudfront.net/assets/tfc/message-c9189c257a23964ea6b97b89416b25a4.svg">
… the svg isn’t being compiled on production for some reason

That led me to learn about the Heroku asset pipeline and in particular the way Rails 4 apps are treated.  I then dove into the compiled slug and saw that the message-c9189....svg image was present in the staging slug under public/assets, but that there was no message.svg file in production. There was, however, a message-text-89ab....svg file.

Solution #2: The staging environment had some old copies of the image files. The files had been renamed, but the image_tag calls still referenced those old files. We had to update them. I also updated the build process to run heroku repo:purge_cache (from the heroku-repo plugin) every time staging was built so that we wouldn’t have any of those old files lying around. (It’s fine to make a mistake, but try not to make the same mistake twice.)
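
For reference, here’s the behavior to check for (a sketch using the file names from above; on our setup a missing precompiled asset fell back to an undigested /images/ path rather than raising an error):

# in a Rails console on each environment
helper.image_path("tfc/message.svg")
# staging (file present, so it was compiled into the slug):
#   => "/assets/tfc/message-c9189c257a23964ea6b97b89416b25a4.svg"
# production (file renamed, so nothing was compiled):
#   => "/images/tfc/message.svg"   # no digest -- hence the 404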


Useful Tool: Delighted for NPS reports

So, I was looking to add a simple widget to collect net promoter scores (often called NPS).  I was astonished at the dearth of options for standalone NPS tracking.  I assume that most customer relationship software has an NPS tracker embedded in it, but we didn’t want to use anything other than a simple standalone widget.

Two options popped up: Delighted and Murm.  I did a quick spike to evaluate each of these tools.  At the time I reviewed them, I couldn’t get Murm to work, and they didn’t respond to my customer support request.  Delighted, on the other hand, did.  They gave me access to web display, which was a beta feature at the time, and worked with us on price.  It was trivial to install, though I’m still a bit unclear on how the form determines when to display.  Highly recommended.

The nice thing about having a NPS tracker on your website is you can get direct feedback from your users.  This has led to numerous useful conversations and feature requests, as someone who is using our software for the first time brings new clarity to confusing features or user interface.  Plus, it is a great number to track.


Heroku 503 errors

One day recently I woke up to a text from our monitoring service (part of minimal heroku operations tasks) saying our application was down.

Darn it.

Here’s the recreation of my steps in troubleshooting this issue:

  • Login, see that the app is indeed down.
  • Look at newrelic.
  • Look at the logs (papertrail!).
  • Look at the deployment history.
  • Note when the issue started–curses, not when I did a deploy.
  • Open a ticket with heroku (after doing some research). (Love their support.)
  • Double check that the database is good and hasn’t hiccuped.
  • Look at the logs more.
  • Add more dynos, see if that helps.
  • Google the error message.  Here are sample error messages:
    • <app-name> heroku/router: at=info method=POST path="<url>" host=app.thefoodcorridor.com request_id=8a17648f-2d84-46ea-abf2-5903be894a2c fwd="216.191.191.58" dyno=web.3 connect=1ms service=4ms status=503 bytes=477 protocol=https
    • <app-name> app/web.3: [ 2017-07-12 14:47:44.3688 65/7f652dffd700 age/Cor/Con/CheckoutSession.cpp:261 ]: [Client 3-281] Returning HTTP 503 due to: Request queue full (configured max. size: 100)
  • Notice that the issue is being stated right in the log file (the Passenger request queue is filling up).
  • Find some posts about the error message.  Here and here.
  • Start researching how to increase request queue size.
  • Take a walk to clear my head.
  • Think about what external services we call, as that seems to be what might cause the request queue to back up.
  • Read another post that says restarting passenger helped.
  • Restart all dynos.
  • Problem disappears.
  • Look at logs more closely.
  • Last dyno to be restarted was the only problematic dyno.
  • Add comment to ticket about this being the cause.
  • Heroku confirms that the issue may have been the dyno: “sometimes individual dynos will hang and cause errors with 503 responses”
  • Write note to customers about the issue explaining how access to app was affected.
  • Lower number of dynos.
  • Breathe a sigh of relief.

