
What is Cordova CLI?

I’m planning to write a number of posts documenting my adventures through the new Cordova (aka PhoneGap, though there are differences) command line interface, known as Cordova CLI. I am in the midst of working on a Cordova application and have been using Cordova CLI heavily for a while now. My focus has been on the Android platform, though we plan to release to iOS as well, so most, if not all, of my examples will be Android based.

These posts only cover Cordova CLI 2.9. Cordova 3.0 is an overhaul that changes how the core Cordova app is assembled, and therefore has different constraints.

Here’s a list of what I plan to cover:

  • What is Cordova CLI
  • Alternatives to Cordova CLI
  • Installing And Using Cordova CLI
  • Placing Cordova CLI projects under version control
  • Upgrading projects managed with Cordova CLI
  • Hooks and Cordova CLI
  • Configuration management using Cordova CLI
  • Setting up CORS
  • Platform Specific CSS/Javascript with Cordova CLI
  • Platform specific configuration files with Cordova CLI
  • How to set up a new plugin for use with Cordova CLI
  • How to ‘plugman’ify an existing plugin
  • Releasing with Cordova CLI

Cordova CLI is different from the create scripts that have been part of Cordova/PhoneGap (hereafter called Cordova for brevity) for a while, though it does use those scripts under the covers. The create scripts only helped you create the project; afterwards you were on your own. Cordova CLI helps you create a project, manage CSS and JavaScript resources that differ per platform, develop and test it using both device emulators and an in-browser emulator called ‘ripple’, manage plugin dependencies, and build packaged apps suitable for installation on phones and emulators during development. It is a Node.js application that runs on Mac OS and Linux, and possibly on Windows (see this bug).

Cordova CLI lets you keep the shared html, css, and javascript code in one place, and deploy it to multiple platforms. This is one of the main strengths of Cordova development–one set of logic deployed to multiple mobile platforms.

Be aware that Cordova CLI is early beta software. This is bad because you’ll run into issues; this is good because the software is under active development, and I’ve found the developers to be quite responsive to bug reports. I have occasionally filed a bug one day, and had a fix to download the next.

There are also areas where Cordova CLI has significant deficiencies–luckily, the software is flexible enough that you can work around most of them (or, if you can contribute fixes, the developers are happy to accept help). So, if you plan to use Cordova CLI, make sure you bookmark the issue tracker. Consider joining that JIRA and submitting bugs, but at the least search it when you run into an issue.

In the next post, I’ll discuss alternatives to Cordova CLI.

Subscribe to my infrequent Cordova newsletter

A respectful hiring process

I have been on both sides of the hiring equation.  I’ve been the one dressing up slightly nicer than usual, google mapping directions to a strange place, and nervously arriving ten minutes early. I’ve also been the one doing phone screens, trying to fit interviewing into a packed schedule, providing resumes to other team members for review, and making the call on which of two candidates would serve my organization better. (Thankfully, I haven’t had to fire anyone yet.)

Hiring is not easy, for either party. That’s why I think a respectful hiring process is so important. What are key components of such a process?

From the employer’s perspective:

  • Be honest. Be as clear as you can about the job and the expectations around it. Compensation is hard to be totally clear about because there’s always a dance around it, but levels of comp can be pretty clearly stated in the job req (entry/junior/mid/senior)–which means you have to do the research on appropriate comp levels that fit your budget.
  • Have compassion. Remember what it is like to be on the other side of that call or table.
  • Keep track of all your applicants in a database (even just an issue tracker). This will let you make sure you don’t lose track of anyone (or their documents) and know where everyone is in the process. As a plus, I’ve also mined this database when other openings come up.
  • Set deadlines for yourself, and tell applicants about them. Far too often, once the process starts, communication comes in fits and starts. Setting deadlines forces you to communicate at expected times (even if the communication is just ‘we have to move the previously stated deadlines’, it is still welcome).
  • When an applicant isn’t a good fit (for whatever reason), or the position goes away, tell the applicant as soon as possible.

From the applicant’s perspective:

  • Only apply for jobs where you meet the requirements, or at least most of them. Applications that clearly meet only one or none of the requirements are a waste of everyone’s time.
  • Be honest.
  • In an interview, when you don’t know, say you don’t know, but also say how you’d try to figure it out.
  • Realize that this isn’t just an interview, it’s a chance to make a connection. I’ve connected people who didn’t quite fit my requirements with other employers, and asked people I’ve interviewed with for technical advice. Treat the interview process as one stroke in a broader picture, rather than a test to be passed.

I’m sure I missed something–any other suggestions to make the hiring process more humane?

Testing time dependent kettle transformations

Testing transformations that depend on the date will often be required when you only want to process new data, or if you want to treat events that happened in the past differently depending on how long ago they occurred.

I have handled the time dimension in one of two ways.

The first is to have a SQL statement that is pulled in via a ‘Get Variables’ step.  This statement is then executed.  For the production job, the statement simply pulls the current date from the database: select curdate() for MySQL.  For testing, the statement returns some known date: select str_to_date('2012-05-27', '%Y-%m-%d') for MySQL.

The benefit to this is that you can make this SQL call in your transformation, and everything stays tidily in there.  The disadvantage is that you’re making another database call and mostly just for testing purposes.

The second is just to have a variable that is set earlier in the job and passed into the transformation as a named parameter.  This date can be pulled from a file (for test), from the ‘Get System Info’ step, or from a database lookup (for production).  The benefit is that you aren’t necessarily making another database call, and it is more understandable.  I can’t think of any downside, so this is my recommended method.
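A minimal sketch of this second approach, in Python rather than Kettle (the helper name is my own invention; in Kettle itself the value would arrive as a named parameter):

```python
from datetime import date

def get_run_date(override=None):
    """Return the 'as of' date for a transformation run.

    Production passes nothing and gets today's date; a test passes a
    fixed ISO date string so results are reproducible.
    """
    if override is not None:
        return date.fromisoformat(override)
    return date.today()

print(get_run_date("2012-05-27"))  # 2012-05-27
```

The point is simply that the ‘current’ date is an input to the logic, not something computed inside it.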

After this setup is done, you can pivot your test data around the hardcoded test date.  For example, if your data should change state one year after insertion, you can set the date in your input data rows to 364, 365 and 366 days from your test date.  This kind of condition testing ensures that when the logic changes (you should change state two years after insertion), your test will fail, and you will know about the issue before your users do.
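For example, with a pinned test date and a (hypothetical) one-year state-change rule, the boundary rows look like this:

```python
from datetime import date, timedelta

TEST_DATE = date(2012, 5, 27)  # the pinned date the test runs against
STATE_CHANGE_DAYS = 365        # "change state one year after insertion"

def should_change_state(inserted, as_of):
    # illustrative stand-in for the transformation's date logic
    return (as_of - inserted) >= timedelta(days=STATE_CHANGE_DAYS)

# pivot input rows around the boundary: 364, 365, and 366 days old
rows = [TEST_DATE - timedelta(days=d) for d in (364, 365, 366)]
print([should_change_state(r, TEST_DATE) for r in rows])
# [False, True, True]
```

If the rule later becomes two years, the 365- and 366-day rows flip to False and the test fails, which is exactly what you want.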

This is content from my email newsletter about Pentaho Kettle Testing. To receive similar emails in your inbox, sign up below.

Sign up for my infrequent emails about Pentaho testing.

Switching wordpress themes…

After many moons (almost 7 years), I switched my theme to a more modern (though still stark) one.

Why?

  • The older theme was borked in a couple of ways that I couldn’t be bothered to investigate.
  • I wanted something that was more responsive and a better experience for mobile users (only 6% of my traffic in 2013 is mobile, and I’m hoping to increase that).
  • Sometimes it’s just time for a change.

So, I hope you enjoy the new theme.  Same dorky content, new dorky look!

Older versions of Sinon.js don’t work with jQuery 2.0

This is a quick hit, hopefully to help someone avoid spending the half day I just did.

The older versions of sinon.js, a helpful JavaScript testing tool which lets you mock up and stub out objects, do not work with jQuery 2.0.  Even though jQuery 2.0 is API compatible with the 1.x series, apparently some different stuff happens under the covers.  This was an issue for me because a few months ago I followed these instructions to set up our testing infrastructure, and used sinon.js version 1.4.2.  That worked fine with jQuery 1.8.2, but when I upgraded everything, tests where I mocked up server calls failed–the Backbone model’s parse method was never called.

The answer?  Use at least version 1.7.1 of sinon.js.

An open letter to Robert Reich about BDNT

Hi Robert,

I attended another BDNT on June 4, as I do every quarter or two.  You asked some questions tonight of the community that I think deserve a more measured response than I could muster yelling out in the auditorium.  Questions like: why do you come here?  What does the future of BDNT look like?  Jeez, won’t anyone volunteer to take video?  How can we leverage all the great people at BDNT during the time when we aren’t all in the same room?

First, I want to thank you, Robert, and all the many volunteers and sponsors of BDNT that make it possible.  I have been to a number of them, presented at two, and know some of the volunteers.  I can’t say I’ve met friends there, but it is a great place to go with existing friends to get pumped up about the Colorado tech scene, and new technology in general.

BDNT is, and has long been, a fantastic presentation venue and gathering place for the local tech community.  The focus has always been on building community and helping presenters (and their companies) get better (check out the second to last question on the ‘submit a presentation’ form).

I see two major BDNT constituencies: fly by nighters and regulars.  I’m a fly by nighter–I won’t attend when I get busy or BDNT falls off my radar, so I make it about 2-4 times a year.  Beyond speaking and attending, I have posted a few jobs on the job board, some reviews on my blog, some tweets, and exchanged some cards and some emails from people I’ve met there, but that’s been the limit of my involvement.

The quality and diversity of the presentations is BDNT’s biggest strength–the five minute format and enforced time limits (as well as the coaching) make presentations so tight.  And if a snoozer slides in, the audience only waits for five minutes.  Therefore, BDNT is a quality, time efficient event where I can check on the pulse of the tech community (is technology XXX going to be big?  how many jobs were mentioned for technology YYY?).

Because the presentations are so important, the biggest service BDNT could provide to us fly by nighters is to videotape the presentations.  I understand, Robert, that BDNT is a shoestring operation and that video takes time and money.  I don’t know exactly how to tackle that–two ideas that jump to mind: ask a local video production company for sponsorship (People Productions jumps to mind), or set up an iPad, share to YouTube, and provide cheaper, lower quality video.

As for the regulars, I don’t have the faintest idea of what they need.  Robert, you or the other volunteers probably do–they reach out to you with requests for features, help, etc.  So, I’ll have to rely on you to guide BDNT to serve their needs.

A caution: please don’t turn BDNT into another local, professional social network.  I already have too many ‘networks’.  I also fear that BDNT doesn’t have the mass to avoid being a ghost town.  (How many of those 10k members have only been to one meetup?  how many people who are not recruiters post to the message boards?)  We have all seen digital ghost towns before and they aren’t much fun to be around.  And I don’t want another place to keep a profile up to date–please ask to pull from LinkedIn and StackOverflow all you want, but please don’t make me fill out another skills list.  (I just joined the BDNT LinkedIn group (well, I applied for permission to join) because that’s the right place to do professional social networking.)

I will say that I’ve enjoyed the various experiments I’ve been a part of through BDNT (e.g., the twitter backplane, the non profit hack fest, the map of tech in Colorado).  Robert, if you want to experiment with a social network because of what the regulars or your gut is saying, do so!  Just don’t be surprised if us fly by nighters don’t really participate.  But whatever you do, please don’t stop experimenting.

It is worth asking how BDNT could be better, but, Robert, don’t forget that being ‘only’ the premier technology meetup in Colorado and a place where many many people come to check in on the tech community, present ideas, meet peers, and learn is quite an achievement.  Ask Ignite and Toastmasters about being ‘just’ a successful presentation organization–it is a success in this world of infinite opportunity and limited attention.

Bask in the glory of creating a successful community.

Finally, for everyone who wasn’t there, some fun facts from the June 4 2013 BDNT:

  • The unemployment rate for software engineers in the USA is 0.2%
  • The New Tech Meetup site code is available on github (no license I could see, however)
  • There was a really cool robot company (Taleus Robotics?  I couldn’t find a website for them) selling, for $299, the computer needed to drive robotics, which will expose servos and motors as Linux devices.

Gardening and software development

It’s the end of spring/early summer in the northern hemisphere, so it’s time to plan the vegetable garden again. I was putting some tomatoes in the other day and musing about the similarities between gardening and software development. To wit:

  • I have a lot of hesitancy about planting–especially perennials.  It feels so permanent, and I might screw things up, and maybe I should go back to the drawing board, or maybe just do it next weekend….  But just starting makes the problem so much easier–it loses its weight.  Your garden will never be perfect, but an imperfect garden is 100% better than no garden.  Similarly, when confronted with a new project or feature, half the battle is just starting.
  • You will have ample opportunity to make mistakes in both gardening and software development, so feel free to learn from them.  I don’t know where I heard it, but “it’s fine to make mistakes, just try not to make the same ones.”
  • Automate, automate, automate.  The more you can rely on machinery to free you from the drudge of gardening, the more you can rest assured that you will have a great crop.  Similarly, the more you can rely on automated testing and scripts, the more complex you can make systems, and the more freely you can change them.
  • Trying something different is fun.  I planted artichokes this year.  I also played around with easyrec.  I can’t speak for the artichokes yet, but exploring a new tool was interesting and fun.  Look up from your code once in a while and visit Hacker News (thanks to Jeff Beard for turning me on to that resource) to find something new to learn about.

I think that many software developers are obsessed with passive income, but I think that gardening is the original passive income stream–food grown for you while you are doing something else!

Testing with Pentaho Kettle – next steps

So, to review, we’ve taken a (very simple) ETL process and written the basic logic, constructed a test case harness around it, built a test suite harness around that test case, and added some logic and a new test case to the suite.  In normal development, you’d continue on, adding more and more test cases and then adding to your core logic to make those test cases pass.

This is the last in a series of blog posts on testing Pentaho Kettle ETL transformations. Past posts include:

Here are some other production-ready ETL testing framework enhancements.

  • use database tables instead of text files for your output steps (both regular and golden), if the main process will be writing to a database.
  • run the tests using kitchen instead of spoon, using ant or whatever build system is best for your operation
  • integrate with a continuous integration system like hudson or jenkins to be aware when changes break the system
  • mock up external resources like database tables and web services calls

If you are interested in setting up a test of your ETL processes, here are some tips:

  • use a file based repository, and version your kettle files.  Being XML, job and transformation files don’t handle diffs well, but a file based repository is still far easier to version than one stored in the database. You may want to try an XML aware diff tool to help with versioning difficulties.
  • let your testing infrastructure grow with your code–don’t try to write your entire harness in a big upfront effort.

By the way, testing isn’t cost free.  I went over some of the benefits in this post, but it’s worth examining the costs.  They include:

  • additional time to build the harness
  • hassle when you add fields to the output, because you have to go back and add them to all the test data as well
  • additional thought required to decide what to test
  • running the tests takes time (I have about 35 tests in one of my kettle projects and it can take about 10 minutes to run them all)

However, I still think, for any ETL project of decent size (more than one transformation) or that will be around for a while (any time long enough to evolve), an automated testing approach makes sense. 

Unless you can guarantee that business requirements won’t change (and I have news for you, you can’t!), testing can give you the ability to explore data changes and the confidence to make logic changes.

Happy testing!


Testing with Pentaho Kettle – adding new logic

We finally have a working test suite, so let’s break some code.  We have a new requirement that we greet users who are under the age of 30 with ‘howdy’ because that’s how the kids are saying ‘hello’ nowadays.

You just jumped into a series of blog posts on testing ETL transformations written with Pentaho Data Integration. Previous posts have covered:

The first thing we should do is write a test that exercises the logic we are trying to write.  We make a directory with a name descriptive of the behavior we are trying to test, and add a row to the tests.csv driver file pointing to the files in that directory. Here’s how the additional line will look:

agebasedgreeting,agebasedgreeting/input.txt,agebasedgreeting/expected.txt

And we will copy over the data files from the first test case we had (simplerun) and modify them to exhibit the expected behavior (a new greeting for users under 30). We don’t have to modify our input file, since it has people both under 30 and over 30 in it, but just to catch any crazy boundary conditions, we will add someone who is 30 and someone who is 31 (we already have Jane Doe, who is 29).
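A hypothetical Python version of the rule under test, with rows pivoted around the age-30 boundary (the names besides Jane Doe are invented for illustration):

```python
def greeting(name, age):
    # the new requirement: 'howdy' for anyone under 30, 'hello' otherwise
    return ("howdy, " if age < 30 else "hello, ") + name

# boundary rows: 29 (under), 30 and 31 (at and past the cutoff)
rows = [("Jane Doe", 29), ("Pat Boundary", 30), ("Lee Boundary", 31)]
print([greeting(n, a) for n, a in rows])
# ['howdy, Jane Doe', 'hello, Pat Boundary', 'hello, Lee Boundary']
```

The 30-year-old row is the one that catches an off-by-one (age <= 30 instead of age < 30) in the real transformation.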

Then we need to modify the expected output file to reflect the howdyification of the greeting. You can check out both source files on github.

Then we run the tests.

[Screenshot: the failing test run]

You can see the failure in the log file that kettle generates and in the build/results directory.  You can also see that we added a job entry to clean up the build directory so that when we run tests each time, we have a clean directory into which to write our output file.


Now that we have a failing test, we can modify the core logic to make the test pass. Writing the logic is an exercise left to the reader. (Or you could look at the github project :).

We re-run the tests to see if they pass, but it looks like simplerun fails before we can even test agebasedgreeting:

[Screenshot: the simplerun test failing]

We can do a diff of the expected and output files and see that, whoops, the simplerun testcase had some users that were under 30 and affected by the logic change.

This points out two features of this method of testing:

  1. Regression testing is built in.
  2. Because of the way we abort tests, TestSuiteRunner only runs until the first failure.
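In plain Python, the shape of that abort-on-first-failure loop is roughly the following (file names and row contents are illustrative; the real suite runs each transformation in Kettle and diffs files on disk):

```python
import csv
import io

def parse_driver(text):
    """Parse a tests.csv driver: case name, input file, expected (golden) file."""
    return [tuple(row) for row in csv.reader(io.StringIO(text))]

def run_suite(cases, outputs, read_expected):
    """Compare each case's output to its golden copy; stop at the first failure."""
    for name, _input_file, expected_file in cases:
        if outputs[name] != read_expected(expected_file):
            return name  # first failing case aborts the suite
    return None

driver = ("simplerun,simplerun/input.txt,simplerun/expected.txt\n"
          "agebasedgreeting,agebasedgreeting/input.txt,agebasedgreeting/expected.txt\n")
cases = parse_driver(driver)
golden = {"simplerun/expected.txt": "hello, Jane Doe\n",
          "agebasedgreeting/expected.txt": "howdy, Jane Doe\n"}
outputs = {"simplerun": "howdy, Jane Doe\n",
           "agebasedgreeting": "howdy, Jane Doe\n"}
print(run_suite(cases, outputs, golden.get))
# simplerun -- the suite never reaches agebasedgreeting
```

Here the stale simplerun golden file causes the early abort, just as in the run above; updating that golden copy lets the suite proceed.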

The easiest way to fix this issue is to inspect output.txt and verify that it is as expected for the simplerun test.  If so, we can simply copy it over to simplerun/expected.txt and use that file as the new golden file.

We also realize that we are passing in the hello field to the output.txt file and that doing so is no longer required.  So we can update the expected.txt in both directories to reflect that.  Running the tests again gives us success.

[Screenshot: all tests passing]

Now that we’ve added code, I’ll look at some next steps you can take if you are interested in further testing your ETL processes.
