Navigating new systems

Here are some tips and tricks I have for navigating new software systems, which can sometimes be like navigating a maze. If you’re truly unlucky, it’s a maze where you’re blindfolded and the walls are covered in randomly placed razors.

The first is to get a clear set of expectations. Will I own the system? Who owns it now? How long have they owned it? How often is it modified? When will it need to be modified again? Is it shaky or stable? Getting these questions answered helps me understand and refine further steps.

The next step is to gain access. Most modern systems have a lot of different pieces, so access can mean different things. Here are some kinds of access that may be worth seeking:

  • shell
  • version control
  • database
  • ftp
  • http
  • app level (admin, user)
  • documentation
  • project planning
  • different CI environments (prod, UA, staging)
  • build system
  • admin users
  • end users

After I have access, I like to look at the front end and the back end. By the front end, I mean the user interface. And by the back end I mean the data store. Just looking around and seeing what tables and pages an application has can help.

If the system in question is not entirely custom built, googling for the user guide for the stock version of the application can be helpful. Skimming that guide gives me a more high-level understanding and teaches me key nomenclature. Of course, if there is any local documentation, that’s helpful too, but I read it with a skeptical eye, as it doesn’t always keep pace with the system.

I also like to look at logfiles. This can help me determine something as simple as whether I’m on the correct server (if I reload the page and the access log doesn’t change, I am looking at the wrong log file or am on the wrong server). Even better if the system aggregates logs into something like an ELK stack or Papertrail.
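
For example, something like this (the path below is a guess; the access log location varies by web server and distro):

tail -f /var/log/nginx/access.log   # reload the page; if nothing new appears here, wrong log file or wrong server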

Setting up a local development environment can help. Again, this lets me gain an understanding of the big picture components, and also lets me poke at various parts of a system, possibly breaking them, without affecting other developers or, worse, customers.

Asking questions is really important, but this can be hard because the folks with the most know-how are often the busiest.

I also like to see what files or database tables change as I move through the system. With a modest-sized database, I do this by taking a database dump before and after taking some action, then diffing the two files, sometimes using sed to break the dump apart even further (replacing every comma with a comma and a newline, for example). If you are using mysqldump, you can target individual tables, and make sure not to use extended inserts, since cramming many rows onto one line makes diffing harder.
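
Here is a sketch of that dump-and-diff loop, assuming a MySQL database named appdb and a table named orders (both placeholder names):

mysqldump --skip-extended-insert appdb orders > before.sql   # one row per INSERT line diffs cleanly
# ... take the action in the application ...
mysqldump --skip-extended-insert appdb orders > after.sql
diff before.sql after.sql
# if a dump does use extended inserts, split the rows onto separate lines before diffing (GNU sed):
# sed 's/),(/),\n(/g' before.sql > before-split.sql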

For the filesystem, it’s even easier. I touch a marker file (ts), take the action, then run find . -newer ts -print. This shows me every file the system has written or modified more recently than ts.
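
In shell terms, run from the directory the application writes to:

touch ts                    # drop a timestamp marker file
# ... take the action in the application ...
find . -newer ts -print    # list every file modified after the marker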

Hopefully some of these tips will be helpful to you as you navigate your next new system.


My first day at Culture Foundry

I am excited to pen that title. I’ve joined Culture Foundry, a digital agency that connects the world through beautiful technology. There are a number of reasons I accepted a position with this firm, but a few that jump to mind are:

  • they are good people. This is important: a great job with a bad manager is no fun, while a great manager can help make a bad job better.
  • they are working on interesting technology problems, including API integrations and high traffic websites.
  • they are 100% remote. This flexibility is really important to me.
  • they work on stuff their clients value.
  • the team is big enough to take on larger projects, but small enough to be agile.

For now, I’m going to be drinking from the fire hose, trying to get up to speed on their systems, so the blogging may slow down a bit. But I’ll definitely be sharing things I learn from this new opportunity. Here’s to new adventures!


Useful gem: stripe_event

If you are going to use Stripe for payments, you need to set up webhooks. If you are using Rails, the easiest solution I’ve found is stripe_event. This gem mounts a configurable endpoint and takes care of all the authentication you need to receive webhooks. You then set up configuration in an initializer to subscribe to the events you care about. Which hooks you want depends on your application, but all the available events are listed here, and the Stripe support folks are happy to point you toward interesting ones if you approach them with a problem.

You can (and should) test the Stripe events by using fixtures and request tests. I found the most difficult part of that testing process to be getting sample data for the JSON payload. The documentation has some, but you may need to run a sample event through your test dashboard and capture the JSON via a generic webhook capture. I ended up using this sort of puts debugging to get the JSON for events:

# e.g. config/initializers/stripe_event.rb (the filename is just the usual convention)
StripeEvent.configure do |events|
  events.all do |event|
    ## debugging: dump every incoming webhook so the JSON can be captured for fixtures
    puts "xxxdebugging all events"
    puts event.to_s
  end
end

In my experience, we never received enough load to really stress out this gem (I’ve seen maybe 30 requests a minute), but if you plan to have a high webhook load, you may want to do some load testing.
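
If you do want to load test it, here’s a rough sketch using ApacheBench (the /stripe-events mount path and the fixture file name are assumptions; adjust them to your routes and data):

# POST a captured event fixture at the webhook endpoint: 1000 requests, 10 at a time
ab -n 1000 -c 10 -p charge_succeeded.json -T application/json http://localhost:3000/stripe-events/
# without a valid Stripe-Signature header these won't be processed as real events, so this mostly
# exercises routing and middleware; stub out signature verification if you want to go deeper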

Definitely a gem worth having if you are using Stripe.


Zen of Code Review Series

I stumbled (thanks John!) on this post about the right way to review code. This is a key skill for working in a team and one that is underappreciated. A solid code review makes sure the code changes can be understood.

If code can’t be understood, it can’t be changed. If code can’t be changed, business processes can’t be changed (code is, after all, just business process made digital). If business processes can’t be changed, the organization can’t adapt to new opportunities or threats. Have you ever seen a tree grow around a fence? The fence constrains the tree, but the tree keeps trying its best to work around the fence. Neither ends up serving its inner purpose. That’s what I think of when I see code that can’t be changed.

Here is a brief excerpt to give you a flavor of the post.

Your goal, then, is clear: question, probe, analyze, poke, and prod to make sure that you, the reviewer, could support the code presented to you for review. From an overall perspective, there are several questions to keep in mind as you begin your task…

It seems like this is a series, where the author discusses code reviews from a variety of perspectives. Recommended.


Big hammer or small hammer?

I was talking to a friend the other day about startup vs big company life and he used an analogy so good I’m going to steal it (and expand upon it).

If you think of the problem you are trying to solve as a rock, and the business you are in as a hammer with which to chisel or otherwise transform said rock, you can choose a brick hammer (a small hammer), a sledgehammer (a large, heavy hammer), or anything in between.

The smaller the hammer, the more effort you have to put into the swing. However, it’s fairly easy to pick up, to manipulate and to re-orient if you decide you need to approach the rock from a different viewpoint.

If you, on the other hand, choose the sledgehammer, then when you swing you are wielding a lot of force. It becomes easier to make progress on your initial approach, but if you need to switch up your emphasis, it’s going to take some time, because of the weight of the hammer.

The larger the business, the more leverage and power you have to attack a single problem. I’ve worked at large companies in the past, and I can tell you the size and scope of the problems they were able to work on, often in parallel, were amazing. However, a lot of time and effort went into coordinating those efforts, and there was a lot of bureaucracy and red tape when a process improvement was needed. (There was also dead weight at some of these companies.)

At the smaller companies and startups I’ve worked at, I didn’t have the bandwidth to take on multiple large projects. Doing more than one or two major projects was a recipe for distraction and impotence. However, when focusing on one effort, it was easy to try different approaches, work really hard, and be super flexible when incorporating feedback from customers and iterating.

There are strengths in both the small and big hammer approaches. The important thing is to choose what is a good fit for both the problem you are trying to solve and your working style (which may change over time).



“Do you know anyone with 18 years of React experience?”

I often give referrals to folks looking for help (typically with software development, but it could be any other area where I feel I know someone who could help another person or company out). It’s a great way to help out both parties.

However, a referral is not a referral is not a referral. Here are the seven types of referrals I have made.

  1. I have worked with this person/company directly, would do so again, and they’ve solved your problem before. I’d hire them if I could, you should.
  2. I have worked with this person/company directly, would do so again, and believe they can solve your problem. I’d hire them if I could, you should.
  3. I have interacted with this person/company in the real world in a non professional capacity and they seemed like “good people”, and might be able to help with your problem. Probably a good idea to talk to them.
  4. I have interacted with this person/company (online/offline) and they seem to know what they are doing. You may want to put them into your decision matrix.
  5. This person responded to a post I put up somewhere (maybe HN) and I like the fact they read it and responded. It’s worth looking at their resume.
  6. I have seen this person’s/company’s blog post or marketing material. They seem like they’d be worth a look.
  7. I do not know this person but they have been recommended by another person that falls into one of the above categories. May be worth a shot?

Obviously the closer to category one a referral is, the more I feel I can stake my reputation on a successful interaction. Lower down, it is less a referral and more of an informational service.

The important part in all this is being clear at what level you are interacting. You don’t want to hand someone a referral of level two and have them think it is a level five. Or vice versa.

 


Using WordPress as a CRUD Database, API Included

Based on this HN discussion, which I discussed a while back, I looked at how to set up WordPress as a CRUD database accessible via an API.

It wasn’t hard. Steps:

  1. Install WordPress (I used EC2 and the CloudFormation sample template)
  2. Install the following plugins
  3. I also installed the following optional plugins
  4. I created a custom post type of ‘todo’ and added a couple of custom fields.
  5. I was able to get the todos by going to these URLs (apparently you can have the API live at wp-json, but that required some rejiggering of URL permalinks).
    • http://host/wordpress/?rest_route=/wp/v2/todo/8
    • http://host/wordpress/?rest_route=/wp/v2/todos

Here’s an example of the output:

{
  "id": 8,
  "date": "2018-03-05T02:38:26",
  "date_gmt": "2018-03-05T02:38:26",
  "guid": {
    "rendered": "http://host/wordpress/?post_type=todo&p=8"
  },
  "modified": "2018-03-05T02:40:01",
  "modified_gmt": "2018-03-05T02:40:01",
  "slug": "auto-draft",
  "status": "publish",
  "type": "todo",
  "link": "http://host/wordpress/todo/auto-draft/",
  "title": {
    "rendered": "Buy Milk"
  },
  "template": "",
  "acf": {
    "": false,
    "due_date": "20180308",
    "description": "please buy milk.",
    "who_owns_it": {
      "ID": "1",
      "user_firstname": "",
      "user_lastname": "",
      "nickname": "mooreds",
      "user_nicename": "mooreds",
      "display_name": "mooreds",
      "user_email": "...",
      "user_url": "",
      "user_registered": "2018-03-05 02:21:36",
      "user_description": "",
      "user_avatar": "..."
    },
    "done": false
  },
  "_links": {
    "self": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/todo/8"
      }
    ],
    "collection": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/todo"
      }
    ],
    "about": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/types/todo"
      }
    ],
    "wp:attachment": [
      {
        "href": "http://host/wordpress/wp-json/wp/v2/media?parent=8"
      }
    ],
    "curies": [
      {
        "name": "wp",
        "href": "https://api.w.org/{rel}",
        "templated": true
      }
    ]
  }
}

The custom post fields are all under the ACF key, and you can see that the who_owns_it field was expanded. If you are going to do this, make sure the normal title field is part of the custom post type; otherwise the WordPress UX for editing the custom posts won’t be much use.

Not perfectly RESTful, but a super simple way to set up an API that non-technical folks can use to create, update, or delete records and that you can consume from other systems.
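
As a sketch of that consumption, here’s how another system might read the data with curl and jq, using the URLs from above (jq is optional):

# list todos, pulling out the rendered title and the custom (ACF) fields
curl -s 'http://host/wordpress/?rest_route=/wp/v2/todos' | jq '.[] | {title: .title.rendered, acf: .acf}'

# fetch a single todo by ID
curl -s 'http://host/wordpress/?rest_route=/wp/v2/todo/8' | jq '.acf.due_date'

# creating or updating records requires an authenticated request; how you authenticate
# depends on which auth plugin you install, so that part is omitted here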


“Choose boring technology”

This article on choosing boring technology is so good. It’s from 2015, so some of the tech it references is more mature now, but the thesis is that the technology underlying a business should be well known and boring to implement. It also discusses how important cohesive technology can be (I love the final footnote).

I think this is an interesting contrast to microservices, which essentially solve repeatedly for local optimizations and then rely on the API or message boundary to allow scaling. Microservices, in my experience, often get a bit hand-wavy when it comes to operationalization (“just put it on Kubernetes, it’ll be fine”).

I have definitely dealt with this at previous companies where I had to put the kibosh on new technologies being introduced (“nope, we’re going to do it in Java”). This meant that, as a small team, we had a standard deployment stack and process and could leverage our logging knowledge and debugging tools.

Here are some arguments I hear for using the latest and greatest technologies:

  • “Our developers will get bored and we’ll have a hard time recruiting”
    • This is a great reason to have regular hackfests and/or to support developers working on open source or side projects.
  • “New tech will help us do things faster”
    • It’s definitely worth evaluating new technology periodically and adding it to your stack (especially if it is outsourced, a la RDS, or boring, a la memcached). As the original post mentions, if you get substantial benefits from the new technology, then use it full bore and plan for a migration. Don’t end up in a state where you are half in/half out. Also consider tooling and processes to get things done faster–these may have quicker iteration times with lower operational risk.
  • “Choose the right tool for the job”
    • Most Turing-complete languages can be used for any purpose. And you shouldn’t be optimizing for the right tool for the job, but rather for the right tool for the job given your business.

Really, you should just go read the post. It’s good.

 


Easily visualizing location data with Google Fusion Tables

Sometimes you have a list of locations in a Google spreadsheet and want to visualize where those locations are on a map. Google Fusion Tables lets you do just that, for free, with no technical expertise needed.

To do so, you need to create a spreadsheet. I created a spreadsheet of the birthplaces of the Presidents of the USA. Here’s the spreadsheet. You want to make sure you have column headers and that your location information is all in one column. You can see that I concatenated birth city and state into one column, because you can only map one column (unless you have lat/lng, in which case you can use those two columns). You also can’t concatenate columns once the data has been pulled into Fusion Tables.

Then, create a new Google Fusion Table (under the ‘more’ menu). Choose a spreadsheet as your data source and then paste in the spreadsheet link.

Your location column may or may not have been given a data type of location. If not, use the ‘edit’ menu then ‘change columns’ to convert it to a location type.

Add a map using the red plus sign and select your column as the location. Wait for your column to be geocoded (if you have lat/lng you shouldn’t need to do this).

And there you are. Here’s the map I generated. You can see that if you click on the marker you will get information about each president, which is a nice bonus feature. You can also share this within your organization or with the wider world and do additional filtering.

Fusion Tables is great when you have structured data and just need a simple map representation.

Caveats:

  • Once your data is in Fusion Tables, you are extremely limited in what you can do with it (see the concatenation note above, for example). Do whatever data massaging you need in the spreadsheet. This also means that you probably want the spreadsheet to be your source of record.
  • There’s no way to update your data. So when the next president enters office, I will either have to create a whole new Fusion Table or delete all the rows and re-import.
  • Fusion Tables seems to no longer be under active development; at least I haven’t seen many feature changes over the past couple of years. It is out of beta. I think it’s fine to build ad hoc tooling on top of this service, but I’d avoid it for anything core to my business.

