Kibana Visualizations that Change With Browser Reload

I ran into a weird problem with Kibana recently.  We are using the ELK stack to ingest and analyze some logs, and when the Kibana webapp was reloaded, it showed different results for certain visualizations, especially averages.  Not all of them, and the results were always close to the actual value, but when you see 4.6 one time and 4.35 two seconds later for the exact same metric on a system under light load, it doesn’t inspire confidence in your analytics system.

I dove into the issue.  Using Chrome DevTools, I noticed that the visualizations that were most squirrelly were loaded last.  That made me suspicious that some failure was causing missing data, which in turn changed the averages.  However, the browser API calls weren’t failing; they were succeeding.

I first looked in the Elasticsearch and Kibana configuration files to see if there were any easy timeout configuration values that I was missing, but I didn’t see any.

I then tried to narrow down the issue.  When it was originally noted, we had about 15 visualizations working on about a month’s worth of data.  After a fair bit of URL manipulation, I determined that the discrepancies appeared regularly when there were about 10 visualizations, or when I cut the data down to four hours’ worth.  This gave me more confidence in my theory that some kind of timeout or other resource constraint was the issue.  But where was the issue?

I then looked in the Elasticsearch logs.  We had a mapping issue, related to a scripted field and outlined here, which generated a lot of noise in the logs, but I did end up seeing an exception:


org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.search.action.SearchServiceTransportAction$23@3c26b1f5
        at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
        at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:79)
        at org.elasticsearch.search.action.SearchServiceTransportAction.execute(SearchServiceTransportAction.java:551)
        at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:228)
        at org.elasticsearch.action.search.type.TransportSearchCountAction$AsyncAction.sendExecuteFirstPhase(TransportSearchCountAction.java:71)
        at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:176)
        at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.start(TransportSearchTypeAction.java:158)
        at org.elasticsearch.action.search.type.TransportSearchCountAction.doExecute(TransportSearchCountAction.java:55)
        at org.elasticsearch.action.search.type.TransportSearchCountAction.doExecute(TransportSearchCountAction.java:45)
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
        at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:108)
        at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:43)
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
        at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:62)
        at org.elasticsearch.action.search.TransportMultiSearchAction.doExecute(TransportMultiSearchAction.java:39)
        at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
        at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:98)
        at org.elasticsearch.client.FilterClient.execute(FilterClient.java:66)
        at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.execute(BaseRestHandler.java:92)
        at org.elasticsearch.client.support.AbstractClient.multiSearch(AbstractClient.java:364)
        at org.elasticsearch.rest.action.search.RestMultiSearchAction.handleRequest(RestMultiSearchAction.java:66)
        at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:53)
        at org.elasticsearch.rest.RestController.executeHandler(RestController.java:225)
        at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:170)
        at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:121)
        at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:83)
        at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:329)
        at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:63)
        at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
        at org.elasticsearch.common.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
        at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
        at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
        at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
        at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
        at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
        at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
        at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
        at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
        at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
        at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
        at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
        at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)

That led me to this StackOverflow post, which in turn led me to run this command on my Elasticsearch instance:


$ curl -XGET localhost:9200/_cat/thread_pool?v
host            ip           bulk.active bulk.queue bulk.rejected index.active index.queue index.rejected search.active search.queue search.rejected
ip-10-253-44-49 10.253.44.49           0          0             0            0           0              0             0            0               0
ip-10-253-44-49 10.253.44.49           0          0             0            0           0              0             0            0           31589

As I ran that command repeatedly, I saw the search.rejected number getting larger and larger.  Clearly I had a misconfiguration or limit around my search thread pool.  After looking at CPU, memory, and I/O on the box, I could tell it wasn’t stressed, so I decided to increase the queue size for this pool.  (I thought briefly about modifying the search thread pool size, but this article warned me off.)
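
To watch the rejections accumulate without rerunning that command by hand, something like the following works (watch ships with most Linux distributions):

$ watch -n 5 "curl -s 'localhost:9200/_cat/thread_pool?v'"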

This GitHub issue helped me understand how to temporarily modify the thread pool settings so I could test the theory.
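
Judging by the stack trace, we were on the 1.x line of Elasticsearch, where thread pool settings could be updated on the fly via the cluster settings API.  A sketch of that kind of transient update (the queue size of 2000 is illustrative, not a recommendation):

$ curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "threadpool.search.queue_size" : 2000
    }
}'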

After making this configuration change, search.rejected went to zero, and the visualization aberrations disappeared.  I will modify the elasticsearch.yml file to make this persist across server restarts and re-provisions, but for now, the issue seems to be addressed.
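
For reference, the matching persistent setting in elasticsearch.yml should be a single line, something like this (same caveat on the number):

threadpool.search.queue_size: 2000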


Five rules for troubleshooting an unfamiliar system

Photo by Ken and Nyetta

A few weeks ago, I engaged with a client who had a real issue.  They sold a variety of goods via a website (if this were the 90s, they would have been called an ‘e-tailer’) and had been receiving intermittent double orders through their ecommerce system.  Some customers were charged twice for one order.  This led, as you can imagine, to very unhappy customers.  It had been happening for a while, and unfortunately, due to some external obstacles, internal staff were not available to investigate the issue–they had their hands full with an existing higher-priority project.

I was called in to see if I could solve the issue.  I had absolutely no familiarity with the system, but in less than ten hours I was able to find the issue and resolve it.  How I approached the situation can be summed up in five rules:

Number one: define the problem.  Ask questions, and capture the answers.  What is the exact undesired behavior?  When is the undesired behavior happening?  What seems to trigger it?  When did it start?  Were there any changes that happened recently?  Does the client have reproduction steps?

I gathered as much information as I could, but kept it high level.  I asked for architecture and system diagrams.  For the history of the application.  For access to all systems that could possibly be relevant (this will save you time in the future).  For locations of log files, source repositories, and configuration files.  For database credentials and credentials for third party systems like credit card processors.  It is important at this time to resist the temptation to dive in–at this point the job is to get a high level understanding so the next steps can be efficient.

You will get speculation about what the solution is when you are asking about the problem.  Feel free to capture that, but don’t be influenced by it.

Number two: find the finish line.  After getting a clear definition of the problem, I looked in the orders database to find out if the double orders were showing up there.  They were, which was a clue as to which part of the system was malfunctioning, but more importantly it let me see the effectiveness of any changes I was making.  It also let the client know the objective end goal, which can be important if this is a time-and-materials project, and it let me know the end state to which I was headed–important for morale.  (BTW, don’t do fixed bids for this type of project–overruns will be unpleasant, and there will be overruns.)

I was able to write a SQL script to find double orders over a given time frame.  I ended up writing a script which emailed the results of this query to me and the client nightly, as an easy way to track progress.  The results of this query were a quantifiable, objective measure of the problem.
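
The query itself was nothing fancy.  A sketch of the nightly report, assuming MySQL and mail(1) on the box, with a hypothetical schema (the real table and column names were different):

$ mysql orders_db -e "
    select customer_id, order_total, count(*) as num_orders
    from orders
    where created_at >= now() - interval 1 day
    group by customer_id, order_total
    having count(*) > 1;" | mail -s 'double order report' me@example.com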

Number three: start where you are familiar.  I could have dived in and looked at the codebase, but from my problem definition I knew that there had been no changes to the checkout portion of the codebase for years.  I was also unfamiliar with the particular software that managed the ecommerce site and could have wasted a lot of time getting up to speed on the control flow.  Instead, once I had the SQL query, I could find users who had been double charged and look at their sessions in the web server logs.  I’ve been looking at Apache HTTP server logs for over a decade and was very familiar with this piece of the system.

Number four: follow your nose.  I followed a few of the user sessions using grep and noticed some weirdness in the logs.  There were an awful lot of messages indicating that the server had been restarted, and all the double orders I looked at had completed 5-6 seconds after the minute changed.  (It’s hard to define weirdness explicitly, which is why it behooved me to start with a portion of the system I was experienced with–it made the weirdness more obvious.)  From there, I looked into why and how the server was being restarted regularly.  I ended up finding an errant cron job which was restarting the server often enough that the ecommerce system was getting confused and double booking orders–once before the restart and once after.  This was easily fixed by commenting out the cron job.
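
For the curious, following a user session was nothing fancier than grep against the access logs; something like this, with a hypothetical log location and session token:

$ grep 'sessionid=abc123' /var/log/apache2/access.log* | less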

Number five: know when to stop.  This ecommerce system obviously had a logic flaw–after all, restarting the web server shouldn’t cause an order to be entered twice, whether you restart it every hour or once a year.  I could have dug through the code to find that flaw.  But instead, I commented out the cron job, let the system run for a week or so, and waited for more double orders.  There were none, indicating that the site was low traffic enough that whatever flaw was present didn’t get exercised often, if at all.  I confirmed with the client that this met his expectations of completeness, and called it good.

Being thrown into a new system, especially when troubleshooting, is a difficult task.  I am thankful the client was relatively responsive to my questions, and that the pressure, while present, wasn’t intense.  These five rules should help you in any troubleshooting situation.


Twitversation: how much do you converse on Twitter?

Photo by eldh

You know what I said a few days ago?

I’d love to have stats on this to make myself more accountable, but I wasn’t able to find an easy way to show my Twitter usage (new tweets vs replies vs retweets)–does anyone know one?

Well, I didn’t find anything and thought it’d be fun to learn some of the Twitter API, a bit of Django, Bootstrap, and how to host something on Heroku.  So, I wrote an app, Twitversation, which gives you a rough approximation of how much you converse on Twitter, as opposed to broadcasting.  You can enter your Twitter username and it presents a breakdown graph and a numeric score (I’m 60 out of 100, whereas patio11 scores 78 and Gary V scores a hefty 83).

Twitversation only pulls the last 200 tweets, so it’s not canonical, but it should be enough to give you a flavor.  Sarah Allen has a post up about her score.

What’d I learn?  Among other things:

  • Heroku is super easy to get started on. And it’s free!  Perfect for your MVP.
  • Django has an unfortunate term for the C in the MVC (they call it a view).
  • You can create a pie graph using only CSS and HTML.
  • Side projects take longer than you think.
  • Picking a side project that doesn’t require any feeding is liberating.  Twitversation will keep running without any attention on my part, as opposed to my other side project.
  • Python’s dependency management is a bear for a newbie.  I didn’t have to do much with it on this project, because it had its own Vagrant VM, but I saw some of the complexity out of the corner of my eye.  Makes me long for the JVM and classpaths, and I never thought I’d say that.
  • Catchy names are hard to come up with.

Hope you enjoy!


400 error on heroku git:clone

Photo by jacobian

I’m working on an estimate for changes to a Heroku-hosted web application.

I was trying to run heroku git:clone --app appname after adding myself as a collaborator.  I was running this on a new Vagrant box running Ubuntu.

However, I kept getting this error message:

vagrant@precise64:~$ heroku git:clone --app appname
Cloning from app 'appname'...
Cloning into 'appname'...
error: The requested URL returned error: 400 while accessing https://git.heroku.com/appname.git/info/refs
fatal: HTTP request failed

And I couldn’t understand why.

After some fiddling, I determined that you first need to generate an SSH key:

vagrant@precise64:~$ ssh-keygen -t rsa
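
If the toolbelt doesn’t offer to upload the new key for you (heroku keys will show what is registered), you can add it explicitly:

vagrant@precise64:~$ heroku keys:add ~/.ssh/id_rsa.pub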

And then you can run:

vagrant@precise64:~$ heroku git:clone --app appname --ssh-git
Cloning from app 'appname'...
Cloning into 'appname'...
Warning: Permanently added the RSA host key for IP address 'ipaddress' to the list of known hosts.
Fetching repository, done.
remote: Counting objects: 902, done.
remote: Compressing objects: 100% (473/473), done.
remote: Total 902 (delta 433), reused 826 (delta 379)
Receiving objects: 100% (902/902), 28.06 MiB | 556 KiB/s, done.
Resolving deltas: 100% (433/433), done.

Hope this helps someone, somewhere.


Symfony2 Impressions

Photo by mitch98000

Recently, I had a short engagement for a client who had an existing application written in Symfony2.  I haven’t really touched the modern PHP frameworks (my most recent experience was CakePHP 1.2, which was last released almost 3 years ago).  It was a pleasant surprise.

What was awesome about Symfony?

  • It forces you to think in terms of components (called bundles).  Even the core functionality is a bundle.
  • There are many bundles out there to let you get up to speed quickly.
  • It uses Composer, a dependency management tool much like Maven or NPM, to manage packages.
  • The documentation is extensive and versioned.
  • There is a clear product roadmap, including long term support releases and explicit deprecation dates.
  • It has built-in integration testing functionality, which lets you test clicks and form submissions and search the DOM for expected results.
  • There is clear configuration support for different environments.
  • It uses an ORM which seems capable–I didn’t get to dive into this too much.

Of course, the proof is in the pudding, and I didn’t get a chance to live with this solution for more than a few weeks.  I don’t know how active the community is, though the google group seems relatively active.  I’m sure there are warts in Symfony2–searching for ‘symfony2 sucks’ turned up a few rants.  But, for a greenfield webapp project, I’d happily use Symfony2.


Don’t forget to deactivate inactive items in easyrec

I wrote before about easyrec, a recommendation system with an easy-to-integrate JavaScript API, but just recently realized that I was still showing inactive items as ‘recommended’.  This was because I was marking items inactive in my database, but not in my easyrec system.

Luckily, there’s an API call to mark items inactive.  You could of course manually log in and mark them as inactive, but using the API and a bit of SQL lets me run this check for all the items.  Right now I’m just doing this manually, but I will probably put it in a cron job to make sure all inactive items are marked so in easyrec.

Here’s the SQL (escaped so it doesn’t wrap):

select concat('wget "http://hostname/api/1.0/json/setitemactive?apikey=apikey&tenantid=tenantid&\
active=false&itemtype=ITEM&itemid=',id,'"; sleep 5;')
from itemtable where enabled = 0;

I have an item table that looks like this:

CREATE TABLE `itemtable` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
...
`enabled` tinyint(1) NOT NULL DEFAULT '1',
...

Where enabled is set to 0 for disabled items and 1 for enabled items. The id column is numeric and also happens to be the easyrec item id number, in a happy coincidence.

The end output is a series of wget and sleep commands that I can run in the shell.  I added the sleep commands because I’m on the demo host of easyrec and didn’t want to overwhelm their server with updates.
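
For a hypothetical item with id 123, one line of the generated output looks like this:

wget "http://hostname/api/1.0/json/setitemactive?apikey=apikey&tenantid=tenantid&active=false&itemtype=ITEM&itemid=123"; sleep 5;

When this moves to cron, piping the query results straight to a shell should do it (file and database names are hypothetical; -N suppresses the column header so only the generated commands reach the shell):

mysql -N mydb < /path/to/inactive_items.sql | sh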



GuideStar, or Find the Form 990

Photo by jurvetson

’Tis the season!  You may not give as much as Bill Gates (in fact, I bet you don’t).  But if you give money to any non-profits, you should be aware that the IRS requires every 501(c)(3) organization to file a Form 990.  (This post is not going to be useful to my readers in other countries, but I’d be interested to know if there are analogous services elsewhere.)

That Form 990 is required to be publicly available.  You can request a copy as well.

But you may be in the middle of making a donation, or you may not want to contact the organization to request a 990.  GuideStar is a non-profit information aggregator, and it has 990s for most organizations.  You do have to create a free account, but that’s it.  If you pay for a premium account, you have access to more years of 990s, but I’ve found that for the casual inquiry, what the free account provides is enough.

The 990 contains all kinds of interesting information.  Here’s the 2013 990 for the New York City Ballet.  In it, we can see that this organization:

  • has $66M of revenue and expenses
  • has millions of dollars of investment income
  • paid $3.4M in advertising
  • paid $20K for meetings
  • has a large number of directors, who all work for free
  • pays the ‘ballet master in chief’ $800K in salary

And plenty of other interesting information.

If you are thinking of giving to an organization, you can learn a lot about it from the Form 990s it files.


Follow people on Hacker News, get new posts in your inbox

Ever since Jeff Beard introduced me to Hacker News, it’s been one of my favorite sites.  You can check out my profile here.

Justin Jackson pointed me to a useful tool which lets you follow activity on Hacker News: HNWatcher.  It lets you set up searches for various terms, users, or other interesting topics on Hacker News and emails you the results regularly.

Right now, I simply follow a few users who say interesting things, but I could see it being as useful as Google Alerts for staying on top of certain topics.

If you’re an avid Hacker Newser, or are interested in what this demographic (startups, hackers, overindexed on Silicon Valley, overwhelmingly young and male) has to say, sign up for HNWatcher today.


Helping a friend gather data and reach prospects with gentle intros

coffee photo

Photo by My Aching Head

I had coffee with a friend the other day, and he shared a business idea. I thought it was an awesome idea–I certainly saw the need in the marketplace and believed he had the skillset and resources to execute on the idea.

He’s still in the exploratory phase, so I offered to send gentle intros to people in my network who I thought would benefit from his idea.  (The target market is anyone with a custom web application that makes money, or anyone who builds custom web applications and is looking for a way to provide ongoing support–if that is you, contact me if you would like to learn more.)  I asked him to write a small spiel that he’d feel comfortable with me sharing.  If you are thinking of doing this, make your friend write the spiel for you.  If they can’t write a spiel, chances are they won’t be good at follow-up, and your intros will be wasted.

Then, I went through my LinkedIn network and put contacts into categories:

  1. this person (or the company for which they work) might want to partner with my friend
  2. this person (or the company for which they work) is a possible client for my friend’s offering
  3. this person might know people who are in categories 1 or 2.
  4. this person (or the company for which they work) is not a good fit for what my friend is working on
  5. who is this person?

And then I sent soft-pitch emails to almost everyone in categories 1, 2, and 3.  The content varied based on which category someone was in, but for category 1, the email was something like:

I have a friend who owns a hosting company who is looking to talk to consulting companies about a possible new product he is thinking about offering.  Here is his spiel:

[…spiel from friend …]

I wasn’t sure if this kind of software maintenance was something that your company wanted to keep in-house, or if you would be interested in discussing it with him.  I wanted to check before I did intros.  Is this something you think is worth learning more about?

This way, my friends and contacts on LinkedIn don’t get spammed by someone they don’t know.  Instead, they get an informative email from me, asking if they want to learn more.  If they do (and about 10% did), I do a mutual introduction, and then the ball is in their court.  (Side note: here’s a great intro email etiquette guide.)

Why did I do this?  Well, there were a few reasons.

First and foremost, because I thought it would be a win-win for both sides.  My friend gets more data about his offering and how the market will react to it.  My contacts and friends on LinkedIn learn about a new product from a trusted source.

Second, I was able to do some social network housecleaning.  I was able to ‘unlink’ from all the people in category #5–it’s always nice to clean up your social graph.

Third, I reached out to people and had some interesting conversations.  Some folks I hadn’t talked to in years.  It’s good to reach out to people, and always better to do so with something of use to them, rather than a plea for work.

This was a fair bit of effort (a couple of hours).  I can’t imagine doing this monthly, but once a quarter seems reasonable, especially if I’m reaching out to a different segment of my network each time.  And I don’t have to do the whole process–spiel, LinkedIn review, soft pitch, intro–every time.  I actually like scanning news sites and simply sending interesting articles to old contacts: “Thought you might be interested in this <link> because of XXX and YYY”.  Those are super simple to send, and again, they provide value and raise your profile.

Next time you talk to a friend who has a great idea, who can execute on it, and who will follow up with anybody you introduce them to, consider reviewing your social graph for prospects.  Gentle intros can benefit all three of you.


Open Source, Consulting and Building SaaS Products

Photo by JD Hancock

I was browsing Hacker News the other day and ran across this article, lamenting how difficult it is to support a company with an open source project, and noting that, insofar as one can, consulting generates far more revenue than selling SaaS services like hosting.  For the record, I’ve never touched LocomotiveCMS.  From a brief glance, it looks nice.

While I feel for them, I think that they have alternatives:

  • Sell premium support.  Right now, it appears the only way to get premium support is to host with them, and it seems that many clients are more interested in self-hosted solutions.  Makes sense–if you are a Rails developer (the target market for this CMS), you already have a hosting solution.  But if premium support were offered separately, they could hire someone (possibly part time) less skilled than Didier, the primary developer, and have them take care of tier 1 support, while still offering a warm fuzzy feeling for harder problems, which would escalate to Didier.  Companies like to pay for that kind of service, even if they don’t always use it.  This strategy would also decrease the amount of revenue needed to hire someone to help Didier (customer service folks are less expensive than developers).
  • Sell an ebook (or a couple).  These are far easier to create and sell than a SaaS product.  (I use leanpub!)  It could be an ‘authoritative guide to LocomotiveCMS’ or just focus on one part.  Since Didier knows which questions he often answers for people who have paid him money, he’s probably got a very good idea of where the pain points are.
  • Someone suggested this in the comments, but a marketplace for plugins to LocomotiveCMS seems like a natural way to go.  Again, I don’t know that community, and marketplaces for CMSes can be hard to kick-start, but this is worth evaluating.
  • I’m sure there are others.  Here’s an exhaustive list of business models, courtesy of the AVC community; if I were them, I’d review it and see what was a fit.

In my comment on the HN post, I talk about how products often face a “round peg in an elliptical hole” problem.  I meant that products often solve 80% of the problem for 80% of the users.  They also require users to change their processes (more crystallization).  Typically there’s just enough offset that people feel cognitive drag.  (Of course, the same thing usually happens with custom solutions; you just don’t know that until you are done.  Doh!)

Especially in crowded markets, like CMSes, it is far, far easier to sell enough hours customizing a solution to make a living than it is to sell enough products to make a living.  Brennan Dunn covers this ground well.  Every consulting company I’ve ever seen or been a part of, and every consultant I’ve ever known (except the ones who were contracting for one client and really were employees with more flexibility), dreams of transitioning from non-scalable consulting by the hour to scalable product sales.  One friend even had a name for it–the “von MacIntyre machine”, which would make money while he slept.

But it’s hard.


