I had a lot of fun working on a one-day ‘hackfest’ project with Amazon Alexa. I learned a lot about voice UX and Alexa implementation details. It’s an interesting platform, especially if you have broad brand recognition and can deliver valuable, high-level information in short chunks of text.
From my blog post on the Culture Foundry site:
The multi-step interaction is a bit clunky, but I think it’s a great way to avoid collisions between different skills. Basically, the user calls out an ‘invocation’ like ‘open color picker’. Interactions with Alexa after that are sent directly to that particular skill until an endpoint is reached in the interaction tree. Each of these interactions is triggered by a different voice command and is handled by something called an ‘intent’. Intents can have multiple triggering commands (‘what is my favorite color’ vs ‘what is my color’, for example). There’s also lightweight, session-level storage available while the invocation is active, which means you can easily pass data between intents without reaching out to more persistent data storage.
You can read the whole post over there.
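To make the invocation/intent/session flow above concrete, here’s a minimal sketch of an intent dispatcher in plain Python. The intent names (`MyColorIsIntent`, `WhatsMyColorIntent`) follow Amazon’s classic color-picker sample, and the JSON shapes follow the Alexa request/response envelope; the helper function names are my own, and a real skill would typically use the ASK SDK instead of handling raw JSON.

```python
def handle_request(event):
    """Dispatch an Alexa request payload to the right intent handler.

    `event` is the JSON body Alexa POSTs to the skill endpoint.
    Session attributes are read here and echoed back in the response,
    which is how data gets passed between intents within one invocation.
    """
    session_attrs = event.get("session", {}).get("attributes") or {}
    request = event["request"]

    if request["type"] == "LaunchRequest":
        # The user said the invocation phrase, e.g. "open color picker".
        return build_response("What's your favorite color?",
                              session_attrs, end_session=False)

    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "MyColorIsIntent":
            # Stash the slot value in session attributes so later
            # intents in the same invocation can read it.
            color = intent["slots"]["Color"]["value"]
            session_attrs["favoriteColor"] = color
            return build_response(f"I'll remember that you like {color}.",
                                  session_attrs, end_session=False)
        if intent["name"] == "WhatsMyColorIntent":
            color = session_attrs.get("favoriteColor")
            text = (f"Your favorite color is {color}." if color
                    else "I don't know your favorite color yet.")
            # An endpoint of the interaction tree: close the session.
            return build_response(text, session_attrs, end_session=True)

    return build_response("Sorry, I didn't get that.",
                          session_attrs, end_session=False)


def build_response(text, session_attrs, end_session):
    """Wrap speech text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "sessionAttributes": session_attrs,  # round-tripped by Alexa
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }
```

The key design point is that `sessionAttributes` returned in one response come back on the next request for the same session, so the skill stays stateless between turns without touching a database.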
I was playing around recently with some data (from the 
I am excited to go to
Slack is an amazing productivity tool when
I was troubleshooting a data issue in a production environment. It wasn’t Heroku, but rather a Rails environment hosted on AWS: Rails 4.2, Ruby 2.2.3.
I ran into someone at a meetup recently who’d built a SaaS product with a pretty decent MRR, enough to support one person. Which is a huge achievement!
For the past couple of months I’ve been doing a short segment at the beginning of the 