Thoughts on managing a devtools forum

I’m one of the team members tasked with managing the FusionAuth community forum, where folks using FusionAuth who don’t have a paid support plan can find help and answers.

Here’s some advice for running such a forum. (I wrote previously about why you should use a forum rather than Slack/Discord/live chat.)
  • First, consider why you are going to run a forum. There are lots of great reasons: easing a support burden, helping with SEO, fostering community, getting product feedback. Get clear on what you are trying to build before committing, because it is a commitment.
  • Choose forum software carefully. Migration will be a pain. Common options include NodeBB (what we use), Discourse, and Vanilla Forums.
  • Seed the forum. This means gathering up questions as you see them pop up in other venues (support tickets, GitHub issues, customer calls). I did that religiously for a few months. I learned a lot about the product, and the forum posts meant that folks were helped even when the forum was new. I’d recommend posting the question and then responding in-thread with an answer.
  • Forums will bubble up commonly asked questions. This can tell you where your docs should be improved.
  • You must groom the forum. It won’t be set and forget. You have to pay attention to it, answer questions, respond to responses. A forum full of unanswered questions is worse than no forum at all. Trust me, developers will notice (we’ve had customers mention that they appreciated how active our forum was).
  • Because we sell support, we don’t answer questions immediately or have engineering staff answer them. There are also questions that we can’t answer, such as architecture recommendations. Immediate responses and answers requiring context and research are reserved for paying customers. This hurts my heart sometimes, but we are open about it. This may not be applicable in all cases.
  • Don’t be afraid to ban users. We ban anyone who spams, no questions asked. Delete the content and ban the user. We luckily haven’t had any abuse issues beyond spam.
  • Have a code of conduct. I grabbed GitHub’s (you can see ours here, and here’s GitHub’s), but have something. We didn’t in the early days, but it’s a good thing to have out of the gate.
  • Don’t expect a lot of community to grow out of it. At least, I haven’t had that experience; most people just want their questions answered. That may be because I’m extremely part time on it and haven’t fostered it, though. Slack/Discord is much more likely to build community, in my experience. But know what your users want: answers they can search for (Google) or a community to hang out in (Facebook)?
  • At a certain point, I had to enable a post queue, where a team member approves every new user’s first post. We were getting a lot of spam accounts, which would post gambling ads and then direct a ton of traffic (thousands of pageviews) to those ads. I don’t know what the spammers’ endgame was, but approving each new post has solved the issue. I’d definitely look for that feature.
In general I love forums, and so do devs, but they do take some work.

Slack failing to open with NS_ERROR_DOM_ABORT_ERR error

I use Firefox as my primary browser (version 84 as of today). As part of this, I stay logged into a number of Slack workspaces. These are some of my favorite communities and I go there to chat with folks, learn about interesting topics and hunt for neat links to resources I didn’t know about. Occasionally I’ll even ask a question or two.

Recently, I saw some weird behavior. The Slack channels weren’t updating. The notice at the bottom of the browser window was something like “Slack is trying to connect”.

Hmm, I thought, that’s weird. I tried reloading the page and couldn’t. That is, even hitting control-F5 didn’t change anything. However, I could open other websites just fine.

I tried quitting all my browser tabs and restarting. Made sure Firefox was up to date. Checked the Slack status page to see if Slack was down. Opened a new tab and typed the Slack channel URL into the address bar.

Nothing.

So, I popped open the dev tools and looked at the network tab. I saw this error when loading app.slack.com:

NS_ERROR_DOM_ABORT_ERR

That turned up this bug. From a scan, several other folks were having this issue, with Slack and other sites. This comment shared the solution:

Clearing storage for app.slack.com fixed the issue, and the Slack workspace loads correctly now.

All I had to do was clear the storage for app.slack.com and Slack started working again, magically. Even though I was warned that it might force me to log in again, I didn’t have to do so.

A quick look at xkit

I was prototyping a small app in xkit and wanted to document this useful tool. When I first saw it launch on Hacker News, I couldn’t quite understand what the purpose was. But now that I’ve spent a bit of time playing with it, I understand it a bit more.

Suppose you are writing a recipe management SaaS and realize that you want to integrate with some other services. Perhaps you want to be able to export the steps of a recipe to a Trello board, or to a Google doc, or to a PDF.

These are all services available on the internet with an API which will allow end users to give your application access to their accounts. This lets you publish to each user’s Google docs account or Trello board.

(I’m not as familiar with services offering PDF generation functionality, but a quick search turns up some options, including some that you can self host.)

There’s a fair bit of hoop jumping in terms of setting up API keys and OAuth consent screens, however.

And this is the problem that xkit solves. If they’ve already written the connector (here’s a list), it is quite simple to add the ability for a user to connect to the service. With no previous experience, I was able to connect to Trello in about an hour. The user experience of connecting the external SaaS application is really smooth and far better than something I could whip up quickly.

If they haven’t written a connector, I don’t believe you can write one yourself. For example, for that PDF service, you’d need to contact the xkit folks and ask them to add one.

This is different than, say, Zapier, because it’s operating at a different level. Zapier is excellent (and has been for years) at letting users connect their apps. But xkit lets you let your users connect apps, basically letting you build a mini Zapier (in terms of connectivity, not functionality).

You can also host your own app catalog if you want to. I didn’t get into this too much, though, so it’s unclear what the benefits of that are.

They provide a user data store out of the box, but also integrate with a number of other providers (including FusionAuth). This means you can leverage your existing auth solution and still get the easy integration with other third party APIs.

Their pricing seems reasonable, given what they take off your plate.

Nothing’s perfect, however. I found a few documentation bugs, which I let them know about (they host their docs on readme.com and I found the suggestion process delightful). When I tried to sign up, the service was down, but a quick Tweet exchange resolved the issue within 30 minutes.

It is bizarre to me, coming from an authentication focused company, that they don’t have a “forgot password” link on their login pages. The documentation is JavaScript heavy, with nary a mention of other languages, but that’s understandable as they’re just starting out. It’s also strangely video heavy, which I found a bit distracting; that, however, could just be my learning style.

All in all, if you are looking to integrate third party APIs which require OAuth interactions on the part of your users, you’d be well served to take a look at xkit.

Joining FusionAuth

I wanted to let y’all know that I’ve joined FusionAuth as a developer advocate. I’ll be working to help our customers succeed and promote the virtues of standards based user management systems. I get to write a lot of content and example applications against a full featured API.

I’ve built enough systems to know two things:

  • Users and their behavior are almost always a key part of any software application.
  • User management is difficult to get right, especially if you want to use secure best practices and standards such as OAuth.

FusionAuth wants to elevate everyone’s user identity management system. The community edition is free and will always be. (It’s important to note that it is free as in beer, not free as in speech, but almost all of the development happens in the open.) If you want to run FusionAuth on your own forever, that’s great! You get a secure user store that supports OAuth, SAML and two factor authentication, free forever. We’ll happily provide you “best effort” support in our forums and we’ve seen the community help each other out too (most notably in the creation of helm charts to run FusionAuth on Kubernetes).

If, on the other hand, you find value in FusionAuth and want guaranteed support, custom development, or hosting, we’re happy to sell that to you. The price is often a fraction of the other solutions out there. Another differentiator for FusionAuth is that you can host it wherever you want: in your data center, in your cloud, or on our cloud servers. Not every client needs that level of control, but many do.

I really love the business model of providing a ton of value to your end users and monetizing only a small percentage of them with unique needs. (I’ve been involved in this type of business before.) The business thrives and there’s a ton of consumer surplus generated.

I’m really excited about this opportunity. It’s a nimble company with a passionate team based in Denver. If you need a user identity management system built from the ground up for developer happiness, please check us out.

Moving a database driven side project to Netlify

I recently moved a side project to Netlify. Netlify, if you aren’t familiar with it, is a static file hosting service with a great workflow, free SSL certificates, and a built in CDN. I made the move because the side project was a database driven site, but didn’t really use other server side interactivity (no user generated content, etc). I haven’t invested much in the side project over the past couple of years, but it still gets tens of hits a day and is useful to its users. It’s a top hit on Google for certain keywords. I didn’t want to spend time updating the underlying application, but wanted it to be more secure. The site is only updated over a month or so every year. It is then read only for the next eleven months, as it provides information on a very seasonal service.

Moving to Netlify (or, frankly, any other static site provider) provides the following benefits:

  • extremely low cost (Netlify is free for the level of traffic I have).
  • faster browsing experience for the end user (due to being behind a CDN).
  • free SSL certificates, with no hassle of setting up letsencrypt myself.
  • indirection–the source site can now be anywhere. I could put it on an ec2 instance that I only boot up when I need to push a new build, or could even host it locally. This means that I don’t need to worry about updating the source application.

I could have chosen to do this with S3/CloudFront/AWS Certificate Manager/Lambda (or using technologies from some other cloud provider). But even though I’m familiar with all the steps to do this, it was a lot of work. And Netlify was free and had done all the grunt work of bringing all these technologies together (or similar ones; I’m not familiar with the tech behind their site, but they do write about it a bit).

What steps did I take to move a database driven directory site to Netlify? Enough that I wanted to document this for my future self.

  • Change the DNS for the site to have a lower TTL. During the transition from your site to Netlify, users may see versions served from either the old site or the new site, and you want to minimize that.
  • Remove all forms and other server side interactivity that your site has. You can replace them with javascript driven interactivity (like Disqus for comments). Netlify has some support for functions, but I didn’t explore that.
  • Set up Netlify using a default URL (they provide it, it is something like ‘foo-bar-123.netlify.com’). I deploy from Bitbucket. (I know some people hate on Bitbucket, and I don’t like all of their UI, but they are free for private repos, which is a win for me.) This let me understand how to deploy a simple one page site.
  • Download the site. I used wget: wget -mk http://mysite.com/. The switches set up mirroring and convert all links to be relative.
  • Move the site to a subdirectory (I used web) and configure Netlify to deploy the HTML from that subdirectory. This lets you have other scripts outside of the webroot that can help you build the site.
  • Check in the site with the downloaded content and push it up to Bitbucket.
  • Watch the site be deployed to the Netlify default URL.
  • Access the site via SSL: ‘https://foo-bar-123.netlify.com’ and look in the console for “mixed active content” errors. Fix those as applicable. (If this was for a client, rather than a side project, I’d solve all of these.) Note that if the source site is http only, you may need to hardcode https URLs.
  • See that “pretty” html pages like ‘https://foo-bar-123.netlify.com/about’ are rendered as text.
  • Ask on Stackoverflow about the problem.
  • End up solving it by generating a _headers file after downloading the site. Update Stackoverflow question with the answer.
  • Test again that the site at the default URL looks good.
  • Update the webserver config and DNS for the database driven site. I wanted it to answer to both the old addresses: mysite.com/www.mysite.com and a new address: generator.mysite.com
  • Update DNS to point mysite.com and www.mysite.com to the Netlify site. Instructions here.
  • Wait for DNS to propagate. When it does, check to make sure that SSL works.
  • Add a password for generator.mysite.com
  • Test the download process with the password (the wget command changes to wget --user=USER --password=PASS -mk http://generator.mysite.com/).
  • Write a script to do the download process.
#!/bin/sh
# mirror the password protected source site, converting links to relative
wget -mk --user=USER --password=PASS http://generator.mysite.com/
mv generator.mysite.com mysitecom
cd mysitecom
# find all extensionless files (the "pretty" URLs), listed relative to the webroot
find `pwd` -type f | grep -v \\. | sed "s#`pwd`/##" > list
# build a Netlify _headers file so those pages are served as HTML
for i in `cat list`; do echo "/$i" >> _headers; echo "  Content-Type: text/html" >> _headers; done
rm list
cd ..
# swap the freshly downloaded site into the webroot
rm -rf web
mv mysitecom web
git add web
git commit -m "pull in latest from generator.mysite.com"
# could do a git push here as well if you wanted
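
For reference, here’s roughly what the generated _headers file ends up looking like (hypothetical paths; Netlify’s format is a path on one line, followed by indented headers that apply to it):

/about
  Content-Type: text/html
/contact
  Content-Type: text/html

With that file in place, Netlify serves the extensionless “pretty” pages as HTML instead of plain text.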

Now, any time that I make changes to the site, I need to run this script and push to Netlify. I could put it in cron or a scheduled lambda if I thought I was going to make changes often. For now, I’m content to run it manually. One downside is that I self-host my analytics code and that site is not SSL enabled. So I’ll need to make a change if I want to continue to get statistics.

So, in the end I have a fast, secure site that is hosted outside my infrastructure and that is easy to update. This process, while not trivial, was easy enough that it has me thinking about where else I could use it (this blog, for instance). Highly recommend.

Greenfield webapp data storage decision tree

I saw a discussion about storing data in one of my Slack channels and saw a line too good to leave buried in Slack.

Here’s a data storage decision tree for 95% of applications. Do you have data to durably store?

1. Use an open source relational database

(The original poster specified PostgreSQL, but MySQL/MariaDB are viable alternatives in my mind. Each has different strengths.)

The modern, open source RDBMS is flexible and scalable (even web scale [warning, video has cursing]). It’s free from licensing fees (though you can pay for support). There are turnkey solutions for managing it in the cloud. There are plenty of developers who know how to use it (even more developers know SQL) and many DBAs and sysadmins who know how to tune it. You can scale it out by using read replicas and scale up by buying better hardware (or VMs). Every language has a library that talks to an RDBMS. The database will maintain data integrity. Many tools that non technical users favor can connect to them (even Excel, if you install the right ODBC driver).
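
As a rough sketch of just how turnkey this has become (assuming you have Docker installed; the container name, password, and table are hypothetical):

# start a throwaway PostgreSQL instance
docker run -d --name mydb -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres

# connect with psql and create a table with real integrity constraints
docker exec -it mydb psql -U postgres -c "CREATE TABLE users (id SERIAL PRIMARY KEY, email TEXT NOT NULL UNIQUE, created_at TIMESTAMPTZ NOT NULL DEFAULT now());"

That’s a complete, queryable, ACID compliant data store in two commands.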

There are plenty of other solutions out there (filesystems, NoSQL variants, XML databases, data lakes). They exist for a reason. For certain problems and at certain scales they are better solutions than the Swiss army knife of an RDBMS. But the default decision should always be an RDBMS, and the onus should be on the other solution to justify its presence. For 95% of problems, your default should be MySQL/MariaDB or Postgres.

Using WordPress as a CRUD Database, API Included

Based on this HN discussion, which I discussed a while back, I looked at how to set up WP as a CRUD database accessible via API.

It wasn’t hard. Steps:

  1. Install WordPress (I used ec2 and the CloudFormation sample template)
  2. Install the following plugins
  3. I also installed the following optional plugins
  4. I created a custom post type of ‘todo’ and added a couple of custom fields.
  5. I was able to get the todos by going to these URLs (apparently you can have the API live at wp-json, but that required some rejiggering of URL permalinks).
    • http://host/wordpress/?rest_route=/wp/v2/todo/8
    • http://host/wordpress/?rest_route=/wp/v2/todos

Here’s an example of the output:

{
  "id": 8,
  "date": "2018-03-05T02:38:26",
  "date_gmt": "2018-03-05T02:38:26",
  "guid": {
    "rendered": "http://host/wordpress/?post_type=todo&p=8"
  },
  "modified": "2018-03-05T02:40:01",
  "modified_gmt": "2018-03-05T02:40:01",
  "slug": "auto-draft",
  "status": "publish",
  "type": "todo",
  "link": "http://host/wordpress/todo/auto-draft/",
  "title": {
    "rendered": "Buy Milk"
  },
  "template": "",
  "acf": {
    "": false,
    "due_date": "20180308",
    "description": "please buy milk.",
    "who_owns_it": {
      "ID": "1",
      "user_firstname": "",
      "user_lastname": "",
      "nickname": "mooreds",
      "user_nicename": "mooreds",
      "display_name": "mooreds",
      "user_email": "...",
      "user_url": "",
      "user_registered": "2018-03-05 02:21:36",
      "user_description": "",
      "user_avatar": "..."
    },
    "done": false
  },
  "_links": {
    "self": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/todo/8"
      }
    ],
    "collection": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/todo"
      }
    ],
    "about": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/types/todo"
      }
    ],
    "wp:attachment": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/media?parent=8"
      }
    ],
    "curies": [
      {
        "name": "wp",
        "href": "https://api.w.org/{rel}",
        "templated": true
      }
    ]
  }
}

The custom post fields are all under the acf key, and you can see that there was an expansion of the who_owns_it field. If you are going to do this, make sure to have the normal title field be part of the custom post, otherwise the WP UX for editing the custom posts won’t be much use.

Not perfectly RESTful, but a super simple way to set up an API that non technical folks can use to create, update or delete records, and that you can consume in other systems.
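
If you want a sense of what the write side looks like, here’s a sketch using curl (hypothetical host and credentials; the WP REST API requires authentication for writes, so this assumes you’ve set up something like a basic auth plugin):

# create a new todo
curl -X POST "http://host/wordpress/?rest_route=/wp/v2/todo" \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{"title": "Buy milk", "status": "publish"}'

# update an existing todo
curl -X POST "http://host/wordpress/?rest_route=/wp/v2/todo/8" \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{"title": "Buy milk and eggs"}'

# delete a todo
curl -X DELETE "http://host/wordpress/?rest_route=/wp/v2/todo/8" -u admin:password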

Useful Tool: Intercom

Intercom has been extremely helpful in allowing targeted messages to help users gain knowledge of The Food Corridor application. (Thanks for the recommendation, OTL!) What I’ve found most useful about Intercom is that it allows non technical users to build rulesets to target specific messages. This helps you help your users uncover new functionality, but only at the moment when the functionality is useful.

For example, if someone has signed up but not created a booking in TFC, we can send them a message a week after they sign up with a helpful link. This message can be via email or an in-app message, and if they respond to the in-app message, it notifies customer support just as if a chat was started any other way.

As a developer, all I had to do to get this data (which lives in our database) up to Intercom was to craft a JavaScript object. I added a method in the application_controller and cached it for a while, because we ended up sending dozens of attributes up and they are slow changing. From then on, the message targeting is done entirely within Intercom, where the non technical user can build rules based on this custom data plus any data that Intercom collects by default (last login, etc).

If you do want to target messages to specific web pages (only pop up a message on page XXX, but not page YYY), you need an understanding of regular expressions. Depending on how comfortable your non technical users are and how complicated your site is, a developer may need to write the first regular expression and then have the other users extend it.

I’m skipping over other pieces of Intercom, including a knowledge base and in app synchronous chat. I think those are valuable as well, but the real win for me was allowing non technical users to control in app messaging with minimal software development investment.

How would you build a simple CRUD app in 2018?

Interesting discussion on HN about how you’d build a simple CRUD app in 2018. (CRUD stands for Create, Read, Update and Delete of records in a system.) As you’d expect, no shortage of opinions, with a lot of specifics. Also lots of recommendations to use what you know, which is always a good idea. CRUD apps, where you are simply gathering data via a web interface, run the gamut of complexity and usability, but it’s hard to believe how powerful having a centralized repository of data can be.

And the nice thing is that once you get the data into the backend of a CRUD app (which is likely a SQL compatible database, but could be any other kind of datastore) you can either combine that data with other systems, expose the data via an API, or both.

Here was my answer to the question:

I’d use Rails today. But I can see the value in Django or Express or ASP.Net or Spring Boot.

Criteria I’d think about:

What is going to be easy for my organization to support (both operationally and with future code changes).

What do I know/want to learn? If the CRUD app is complicated, then I want a tech I know well. If the CRUD app is simple, then I may want to experiment with a different technology (again, within the organizational support guardrails).

Look for the value of the param, not its existence

We are rewriting the front end of the application on which I work. Several of the forms which gather data that drives the application previously had checkboxes, which would set a value to true or false in the database.

Some of the checkboxes were ancillary, however, and didn’t result in a database value. Instead, they were used to calculate a different result depending on whether the checkbox was checked or not. That result would then be saved to the database. This was all tested via rspec controller tests.

In at least one case, the calculation depended on the existence of the parameter. Checkbox parameters are sent when they are checked and omitted when they are not.
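
To make that concrete, here’s roughly what the browser sends in each case, sketched with curl (hypothetical endpoint and parameter names):

# checkbox form: the parameter only shows up when the box is checked
curl -d "plan=basic" https://example.com/calculate            # unchecked: no flag param at all
curl -d "plan=basic&flag=on" https://example.com/calculate    # checked

# radio button form: the parameter is always sent, with a value
curl -d "plan=basic&flag=yes" https://example.com/calculate
curl -d "plan=basic&flag=no" https://example.com/calculate    # an existence check sees flag and wrongly treats this as checked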

Then we changed the forms to radio buttons.

Uh oh.

Instead of the parameter being present when the box was checked and absent when it was not, the parameter was now always sent, with a value of yes or no depending on which radio button was selected.

Uh oh.

The tests continued to pass, because they hadn’t been updated to the new values.

Uh oh.

So the calculation was incorrect. Which meant that the functionality that depended on the calculation was incorrect. Unfortunately, that functionality was related to billing. So I just spent the last few days backing that out, checking with clients to see what they actually intended to do, and in general fixing the issue.

The underlying code was an easy enough fix, but this story just goes to show that software is complicated. Lessons learned:

  • favor a test for an exact parameter value rather than parameter existence
  • when changing forms, consider writing tests with the new input values
  • integration tests that drive a browser would have caught this issue. These are much slower so need to be used judiciously, but are still a valuable automated test