Tuesday, December 11, 2012

npm and bundler - the Good Parts

I need two things from a language to take it seriously: a good testing framework and some kind of package management. I've dealt with testing in my previous blog posts; now I compare two package management tools that I like, npm and bundler.

:: NPM

Early this year I went fairly deep into node.js development. I wanted to understand how it works and where I could use it in the future. Getting node onto your local development environment is easy, but coding with async callbacks takes practice to master.

All I needed on OSX to get started with node was:

$: brew install node
When I wanted to use a package - let's say mocha.js - I just used npm.

$: npm install mocha

It installed everything into the project root under the node_modules directory, locally by default like this:

$: lltree
   some_nodejs_project
   |--node_modules
   |----mocha
   |------bin
   |------images
   |- ...
Sure, you can install packages globally, but you need to use the "-g" flag to do that. I did not want to pollute my node installation; I was happy with the default behavior of installing everything under node_modules.

You do pay a price for this behavior. When you execute "mocha" in the terminal, the executable is not found.

% mocha
zsh: command not found: mocha
You have to locate that file under the node_modules directory - it's in the "node_modules/mocha/bin" dir, by the way.
To get around this I just use a Makefile with a couple of tasks in it:

REPORTER = list

test: test-bdd

test-bdd:
 @./node_modules/mocha/bin/mocha \
  --reporter $(REPORTER) \
  --ui bdd \
  spec/*.js

test-doc:
 @./node_modules/mocha/bin/mocha \
  --reporter doc \
  --ui bdd \
  spec/*.js

This way I can easily run my tests by executing "make" in the terminal.

:: BUNDLER

I switched from rvm to rbenv early this year. Rbenv with bundler makes a very powerful combo, but when you install a gem, it is installed globally by default.
Not good. I want to keep my gems local to the current project, and I don't want to pollute my Ruby install with different versions of gems. What if I use Rails 3.2.9 in one project but have to use 3.1.1 in another? Sure, you could use rbenv-gemsets to get around this, but I had already started using node with npm and I wanted a similar experience.

The "--path" switch in bundler lets me specify which directory I want to install my gems into. When I start a new project I immediately create a Gemfile. It's very simple; all you need is this:

source "http://rubygems.org"
gem "light_service", "~> 0.0.6"
gem "rspec"

Then I install all the gems through bundler with this command:

$: bundle install --path vendor/bundle --binstubs

Bundler puts all my gems under the vendor/bundle directory and creates a bin directory with executables for the gems that produce such a file. When I run rspec for my project this is what I do:

$: bin/rspec spec

Either "bin/rspec" or "bundle exec rspec" works.

As you can see, neither npm nor bundler has the perfect solution. But each has facets that I like.

            default local install    easy access to executables
  npm       yes                      no
  bundler   no (needs --path)        yes (with --binstubs)

Can we sync the good parts? Could both have local install by default with easy access to the executables?

Update

Shortly after I published this post, my good friend Joe Fiorini pinged me on Twitter. Here is our conversation:

I did not know that npm creates a ".bin" directory under "node_modules" with symlinks pointing to the individual executable files. This way it's very easy to run these files:

$: node_modules/.bin/mocha -h

Thanks Joe for pointing this out!

Wednesday, November 21, 2012

Preparing For My Visit in Chicago

When I saw Peter's comment on my previous blog entry I realized I couldn't send a short response. It deserves an entire blog post, so here goes.

Two years ago I was very unhappy with my job. I used tools and languages that did not excite me and worked on projects I was not interested in. Reading Chad Fowler's book, The Passionate Programmer, did not help me much either. I reevaluated my life and realized I can't spend 8 hours a day doing something I am not passionate about.

I was willing to go part time, work only 3 days a week and use 2 days to visit other companies. My wife supported me as she saw how unhappy I was when I got home from work every day. It never happened: I was able to score a Ruby job and I did not have to go to extreme measures to find happiness.

Later on I planned taking 4 months off and not work at all. I wanted to dedicate my time off to learning, visiting companies in western Europe, working on open source software and spending some time with my family back in Europe. Since I am the only person in my family who gets a paycheck, the 4 months off did not fly so well with my significant other.

I planned on visiting Hashrocket for a few days in Jacksonville, FL early March, but unfortunately that did not happen.

After playing so much with the idea I felt I was ready. I was willing to take unpaid leave for 4 days just to visit companies this fall.

Two people helped me get in touch with the companies I visited there: Corey Haines and Michael "Doc" Norton. I worked together with Corey at a large insurance corporation; I think we met sometime in 2006. Doc led the studio side of LeanDog up until recently. So yes, I did know both of them. But not knowing them would not have stopped me: I was ready to reach out to the companies myself, though I figured having someone do the intro for me would help.

Not knowing Corey should not stop you; go ahead and ping the companies you're interested in visiting. If they reject the idea of your visit, I am sure the place is not worth checking out.

I have attended a couple of Coderetreats already. It's a fantastic way to get to know other developers and learn from them. The experience of visiting companies is different. The developers were up against real tasks and real deadlines and could not afford to throw their code away after every pomodoro session. Both Coderetreats and company visits are great, but the experience you get out of the latter is different, and I think that's the key here: you learn something else.

Friday, November 16, 2012

A (Mini) Programming Tour

The TL;DR version
If you can't go to conferences, try to visit companies. Even in your own home town. You get to know many people and learn a lot from fellow developers.


I spent the last week in Chicago visiting four different companies, shadowing, talking and pairing with fellow developers. A good friend of mine shared his downtown condo for a couple of days, which made this trip very affordable for me. Here are the companies I visited and a brief summary of what I saw there:

:: TrunkClub
I did not know much about Trunk Club up until a couple of months ago. I reached out to Corey Haines seeking companies I could visit and he suggested them. They have a small but very talented group of developers building their - mostly internal - apps in Ruby on Rails. I shadowed Corey Ehmke on the project he was working on: a recommendation engine using MongoDB and Ruby. Having seen him exercise the different algorithms, I was gently reminded that I should probably brush up on my statistical skills.
Have you seen a company that has its own beer tap and wine cellar? Well, Trunk Club is one of them! At the end of the day we enjoyed a variety of beers right in their offices. Now how cool is that?!

:: Hashrocket
I spent the next day at the Hashrocket Chicago office. They have a cute little space converted from a condo with a couple of bedrooms attached so people visiting from the Jacksonville, FL office can stay there. A large video screen is linked up with their home office where people stop by and say hello to the folks in Chicago.
I shadowed Matt Polito first, who remote paired with another rocketeer. They used tmux for sharing their terminal sessions. I had heard about tmux before, but I had not tried it yet. I noticed that developers even used it locally, when not pairing with anybody else, for the benefit of being able to suspend and resume sessions.
Interestingly, the guys at Hashrocket use a strategy-pattern-based solution to solve complex problems, which is very similar to what I described in my Refactoring Workflows to Chain of Actions blog post.
They also used Google Plus for video conferencing with multiple people. I am not a big fan of social media but I'll definitely check out Google Plus for this.

:: 8th Light
The company's new office is very close to Union Station, which makes it easy for the commuter employees to get there. Had I not checked their current address on their web site Google Maps would have sent me to their former office.
I spent the morning shadowing Colin Jones, who worked on a file uploader web app in Clojure. I noticed how much more readable speclj is compared to clojure.test - I am going to switch to it! He also wrote a multimethod implementation, something I had only read about before.
The web app used joodo as the underlying web framework which seems very clean to me, but the views were built using hiccup which can be a bit too cryptic for a developer who spent a long time in HTML land.
I wrapped up my day pairing with another engineer on some Backbone.js code, test-driving it with Jasmine.

:: Groupon
I only visited Groupon; I did not sit down and pair with anybody there. Our host, Michael "Doc" Norton, showed us around. Their office seems like a fun place and I have never seen so many 27" Cinema Displays in one room. Developers work in small groups and everybody can pretty much find the project they want to work on.

What's my takeaway from all this?
I met many talented developers. I learned how they work, what tools they use, and how they develop software. I will give joodo and Clojure a try and build a web app with them just to learn the language and the paradigm.
I know people in the US don't have a lot of vacation. But if you can do it, maybe just one day a year, visit other companies. The benefits are enormous!

I'd like to thank my current employer, Dimple Dough, for sponsoring and helping me with this trip!

Tuesday, October 2, 2012

The Organizations - Users - Roles Kata

We began rewriting one of our applications a couple of months ago. It's a fairly easy app; the only challenge we've had so far was replacing our permission-based authorization with something more sophisticated, saving setup time for our admins. In our legacy app the authorization is controlled through fine-grained permission sets. This allows us to do everything we need, but setting it up is a long and tedious process since it does not support role inheritance through organization structures. Thinking more about the business logic, I figured other people might like to think about this problem too. So here it is:

The Organizations - Users - Roles kata

We have three layers of organizations: root organization, organizations and child organizations.

There is only one root organization that we call "Root Org".
Organizations have one parent.
The parent of all organizations is the Root Org.
The organizations can have any number of child organizations, but the child orgs do not have children of their own (they are leaves).

There are three different roles in the system:

  • Admin
  • User
  • Denied

Roles are inherited through the organization hierarchy: an admin of an organization is an admin of all of its child organizations as well. For example - using the organization structure in the diagram above - if I have the admin role for Org 1, then I should have admin access to Child Org 1 and Child Org 2.

If a role is specified on a child org for a given user, that role takes precedence over the role inherited from the organization level.
When I have the "denied" role for Child Org 2, then I only have admin access to Org 1 and Child Org 1, and I don't even see Child Org 2.

Please consider writing code for the logic described above using tests to verify your logic. Simulate the data access code and try to keep the number of queries to a minimum.
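To make the rules concrete, here is a minimal Ruby sketch of the role-resolution logic. The class name, method names, and data shapes are all my own hypothetical choices, not part of the kata:

```ruby
# Resolves a user's effective role for an organization node.
# A role assigned directly on an org overrides the role inherited
# from its parent; no assignment anywhere up the chain means denied.
class RoleResolver
  # assignments: { [user_id, org_id] => :admin | :user | :denied }
  # parents:     { org_id => parent_org_id } (nil for the Root Org)
  def initialize(assignments, parents)
    @assignments = assignments
    @parents = parents
  end

  def role_for(user_id, org_id)
    current = org_id
    while current
      role = @assignments[[user_id, current]]
      return role if role
      current = @parents[current] # walk up the hierarchy
    end
    :denied
  end

  def visible?(user_id, org_id)
    role_for(user_id, org_id) != :denied
  end
end
```

With admin assigned on Org 1 and denied on Child Org 2, this returns :admin for Child Org 1 (inherited) and :denied for Child Org 2 (the override), matching the example above. Since the org tree and assignments can be loaded up front, resolving a role costs no additional queries.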

Saturday, September 8, 2012

My List

I was asked a couple of weeks ago to come up with a list of engineering guidelines at my current employer. Not something we want to enforce but an initial collection that all the team members could agree upon and would follow. Here is the list I provided:
  • Leave the code better than you found it
  • Simplicity rules (aka YAGNI)
  • Try to do TDD, or at least cover the code you write with unit tests
  • Develop automated acceptance tests
  • Strive for "simple" code

→ Leave the code better than you found it
Uncle Bob twisted the boy scout rule - "Leave this world a little better than you found it" - a bit in his book, Clean Code.
When you see a variable named "ou", change it to "organization_user" so that the next time you look at it you'll know what the variable represents. Or you can just add tests to a routine that is not covered with any kind of test. This is a great way to get started with open source software, by the way.

→ Simplicity rules aka YAGNI
Oh, it's so hard to resist creating the next gem, npm package or framework. But be lazy! When your customers want a feature, think about the least amount of functionality you need to build. Do that first and check usage. If you see that the feature is heavily used by your end users, invest in it: make it richer, maybe a bit more responsive. But first, think like you're participating in a startup weekend every day, trying to get your product out there as fast as you can so your idea is validated in the field.

→ Try to do TDD, or at least cover the code you write with unit tests
Avdi Grimm's brilliant tweet sums this up for me:
I know TDD is hard for newcomers. In fact, I've found very few people doing it at a highly skilled level. Most of the code I have inherited from other developers is without any tests.
If you don't do TDD, please cover your work with tests. Your test coverage will not be even close to the level of code written with TDD, but at least you'll help the next developer who has to maintain your code.

→ Develop automated acceptance tests
I've frequently been asked to define the difference between unit tests and acceptance tests. I found the best answer in The Cucumber Book:
"Unit tests ensure you build the thing right, while acceptance tests ensure you build the right thing".
I found that Gherkin helps us (BAs, QAs and Developers) describe a feature in a way that is easily understood by everyone. Automating them with cucumber is a bit more work, but it's well worth the effort.
I follow the GOOS model and stub out external dependencies in my unit tests. Since most of the code I write these days is in dynamic languages, having full-stack tests is priceless.
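To illustrate what I mean by stubbing an external dependency in a unit test, here is a tiny sketch with a hand-rolled stub and no test framework; the service and gateway names are made up for the example:

```ruby
# Hypothetical production code that depends on an external payment gateway.
class ChargesCustomer
  def initialize(gateway)
    @gateway = gateway
  end

  def execute(amount)
    @gateway.charge(amount) ? :charged : :declined
  end
end

# In a unit test the real gateway is swapped for a stub, so no network
# call is made and the test stays fast and deterministic.
class AlwaysApprovesGateway
  def charge(_amount)
    true
  end
end

result = ChargesCustomer.new(AlwaysApprovesGateway.new).execute(100)
```

The acceptance tests are the ones that exercise the real, full stack.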

→ Strive for "simple" code
I wrote about the benefits of simple code in my previous blog post. It's not only easy to understand but also easy to reuse and test. Have you ever spent time figuring out what a 60-line method does, obscured with nested conditionals and iterators? I have. It's frustrating to me and very expensive for my employer.

I am sure I am not stating anything new here. In fact, some of the items on this list might sound like a broken record already. But it's good to write them down and look at them once in a while.

What would be the items on your list?

Monday, July 30, 2012

Just make it small, please!

I have seen - and had to maintain - so much messed up, bad code in my career that it makes me wonder why I still work in this profession. In fact, I have rarely seen good, clean code. However, I can learn a ton going through open source code repos on Github.

The best definition I have found for clean code is by Michael Feathers captured in the book Clean Code: "Clean code always looks like it was written by someone who cares."

Do you really care about the code or the craft, when:

  • you put 2844 lines of code in the model?
  • you have 167 lines of code in one function?
  • you have several deep nested if statements in for-each loops?
  • you have 1354 lines of code in a js file that drives business logic?
  • you have no tests at all?

I attended the fantastic Simple Design and Testing Conference a while ago. One of the topics we discussed there was the most important principle we'd ask a developer to follow. DRY was the absolute winner.
We all know about and follow the DRY principle, but I question whether that's enough.

Quite frankly, I don't have the patience to analyze a 60-line function loaded with iterators and conditionals. Even the code's author cannot understand it 5 minutes after it was written.

My coding style has changed in the last 4-5 years. I tend to write code in a functional style: class objects with a single function that is no longer than 5-10 lines.
Here is one of those:

module Services; module Utils
  class DecryptsData

    class << self

      def execute(service_result)
        return service_result unless service_result.success?

        encrypted_data = service_result.fetch(:encrypted_data)
        decrypted_data = ::Encryption.decrypt(encrypted_data)
        data_key = service_result.fetch(:data_key)

        service_result[data_key] = decrypted_data
        
        service_result
      end

    end
  end
end; end

Look at this piece of code for a moment. Try to understand what I am doing here.

  1. A guard condition (line 7)
  2. Pulling the encrypted data from the context (line 9)
  3. Decrypting data (line 10)
  4. Pulling the key I save the decrypted data with (line 11)
  5. Saving the decrypted data in the context (line 13)

All I am doing is decrypting data. That's it. I am not querying the database, I am not validating data, I am not calling an external service, and I am not looping through items setting properties based on some kind of predicate.

People might just shove this into the controller. I won't. I think about software as a collection of functions woven together by organizer functions.
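Here is a sketch of that organizer idea with two made-up function objects; the names and the tiny context class are hypothetical, but the shape mirrors DecryptsData above:

```ruby
# A tiny context object standing in for the real service_result.
class ServiceResult < Hash
  def success?
    !self[:failed]
  end
end

# Two small function objects, each doing exactly one thing.
class UpcasesData
  def self.execute(service_result)
    return service_result unless service_result.success?
    service_result[:data] = service_result[:data].upcase
    service_result
  end
end

class ReversesData
  def self.execute(service_result)
    return service_result unless service_result.success?
    service_result[:data] = service_result[:data].reverse
    service_result
  end
end

# The organizer: its only responsibility is calling the functions in order.
class TransformsData
  ACTIONS = [UpcasesData, ReversesData]

  def self.execute(service_result)
    ACTIONS.reduce(service_result) { |result, action| action.execute(result) }
  end
end
```

Each function stays trivially testable on its own, and the organizer's test only has to verify the order of calls.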

The benefits are enormous:

  • One function
  • -> which is short
  • Easy to understand
  • -> which is easy to test

You could say that I am doing something in Ruby that resembles functional programming. I call functions on class objects, but the functions I am constructing are not immutable. And I new up an object and maintain state if I need to, but I try to avoid that for the sake of simplicity. I don't want anybody who maintains the code I write to spend a lot of time trying to understand what I am doing.

Is functional programming far off for me? I don't think so; I believe I am just taking the first steps in that direction.

Tuesday, June 19, 2012

Frequent Job Change?

I've been making a living writing software for the past 12 years (and I am still not in management yet :-). I've worked for small startups and Fortune 500 companies with more than 2,000 people just in IT.

Altogether I have worked for 8 different companies, which averages out to 1.5 years spent at each, the longest being 3 years at one place:

I interviewed with a small startup a couple of years ago and I still remember how disappointed the interviewer was when he learned about my job history. He asked me: "So can I expect you to work for me a year, or two if I am lucky?" What should I have said? We clearly were not a match.

Whenever I start a new job the most important thing for me is to be productive from day one. I am sure I won't understand all the business logic right away. It might take a few months to pick it up, but I strive to add value as soon as possible. If it's only by adding automated tests, it doesn't matter: I contributed.

I have clear goals of what I want to learn or practice in the future. But there were times when I picked up skills by chance.

I had to take courses on data warehousing and dimensional modeling when I worked for a large corporation. At first I was not very interested, but as I learned more about the topic I realized that you can't use your OLTP database for data mining and reporting: you need to lay out the data in a different format to make it efficient.

My views on progressive enhancement have changed thanks to the bright UX folks I had the pleasure to work with. I did not like the idea of making a web app "gracefully degrade" for the sake of a more solid architecture, but they pushed me to develop applications that way. I thought it was crazy: why would I build the same logic twice, with and without JavaScript? But that's nonsense. If you architect your application properly, building your web app without JavaScript and adding it afterwards is not all that hard. Besides, you don't have to write much more code.
Of course I wouldn't consider progressive enhancement if I was building a one-page web app, like the excellent Trello.

It's been only four months since I started working for my current employer and look what we have accomplished already:

  • We moved our source control repository to git and Github from Subversion, so it's easier for us to release code into different environments.
  • We began carving out business logic and putting it in a business logic gem, so we can easily share logic between different Rails apps.
  • We can drop, recreate and seed our development database in one, single script (yeah, I know, that did not work when I started), so our acceptance tests will have a set database state before they are executed.
  • We introduced Gherkin to the team, so we make sure we build the right thing.
  • We are automating acceptance tests using Cucumber with capybara-webkit so our regression cycle is shorter.
  • We set up a build process using Jenkins so we can run both unit- and acceptance tests after each push to Github.
  • We started building new features with progressive enhancement, so our application will gracefully degrade on devices that can't handle JavaScript.

Dear Employer: please don't be afraid of hiring those "job-hopping" individuals. Just make sure you are selecting the right person and let them produce value from the very beginning. They might have picked up skills you will need here and there. When they write tested and clean code, they can leave the job any day knowing that somebody else should be able to jump in and continue the work they've started.

Tuesday, May 8, 2012

Running Mocha Specs in the Browser

The way I've been writing JavaScript code might not appeal to other developers: I live my life in the terminal, writing and executing specs there, hitting the browser occasionally just to make sure everything I do works. Others use the terminal as little as possible. One thing we have in common: we test our JavaScript code with Mocha.

I still remember the cold, autumn day when I first downloaded Jasmine's standalone test-runner a couple of years ago. All I had to do was unzip the file, hook in my own JS source and spec files and I was in business. It even had simple JavaScript objects like Player and Song guiding me through the first steps.

Mocha - unfortunately - does not have a downloadable zip file to help you get started, and its browser-based tool is deeply buried under its own tests. Extracting it from there takes time and effort.
This post describes how you can get started with executing your Mocha specs in the browser.

I put all the code of this starter project in a Github repo: mocha-in-browser. You can either clone it or just download the project zip file.

I used the String Calculator Kata as an example. Open up the public/index.html file in your favorite browser and you should see this:

Just type 1,3 in the text box, hit the "Calculate" button, and the correct answer appears in green under the text box.

When you open up the spec/runner.html in the browser, Mocha is happily reporting the test output:

I did not finish the String Calculator Kata. Please work through it not only to sharpen your skills but to get familiar with JavaScript unit testing with Mocha as well.

You can drop the spec directory from this project into your working directory, reset your JavaScript source and spec files in the spec/runner.html file and start using it.

Both Mocha and Should.js are rapidly changing projects. I'd encourage you to download and use the latest source code of those projects as often as you can.
I created a Makefile to make this super simple for you. Just run "make" in the project root and you should have the latest version of those files brought to you by node.js and npm.

Monday, April 9, 2012

Refactoring Workflows to Chain of Actions

Everything we do in life is a series of actions. Just to get to work I need to wake up, eat something, brush my teeth and drive to work. Or when data is sent to the server, your code has to validate the user input, create a new object with the attributes, save the new data, and generate a JSON response if the request happens to be an Ajax request.

When I write code for a series of tasks I start out with a "coordinator" object that has several private methods. The main, public method orchestrates the calls to the private methods. The tests around this object start out pretty nice, but as the complexity grows I need to stub more and more external objects. The complexity of the tests soon becomes an indicator of the worsening design, and I need to start pulling out objects before the whole thing turns into an iceberg class.

The example I use in this writing is a simple one: The "Dude" wants to go to the park to enjoy the nice weather. He has to go through several steps to get there:

  1. Leave the house
  2. Close the door
  3. If he has a car - Jump in the car
  4. If he has a car - Drive to the park
  5. If he has a car - Park the car
  6. If he has NO car - Jump on the bike
  7. If he has NO car - Ride to the park
  8. If he has NO car - Park the bike
  9. Enter the park
  10. Take a walk

All my examples are in CoffeeScript; I use it for its brevity and concise format.

In my example the coordinator object is called "GoesToThePark". It interacts with the House, Car, Bicycle and Park models like this:

And all this described in CoffeeScript:

House = {
  leave: (dude) ->
    'leaves the house'
  closeTheDoor: (dude) ->
    'closes the door'
}
Car = {
  jumpIn: (dude) ->
    'jumps in the car'
  driveToThePark: (dude) ->
    'drives to the park'
  parkTheCar: (dude) ->
    'parks the car'
}
Bicycle = {
  jumpOn: (dude) ->
    'jumps on the bike'
  rideToThePark: (dude) ->
    'rides to the park'
  parkTheBike: (dude) ->
    'parks the bike'
}
Park = {
  enter: (dude) ->
    'enters the park'
}

class GoesToThePark
  constructor: ->
    @messages = []

  toEnjoyTheWeather: (dude)->
    @messages.push House.leave(dude)
    @messages.push House.closeTheDoor(dude)
    if dude.hasCar()
      @messages.push Car.jumpIn(dude)
      @messages.push Car.driveToThePark(dude)
      @messages.push Car.parkTheCar(dude)
    else
      @messages.push Bicycle.jumpOn(dude)
      @messages.push Bicycle.rideToThePark(dude)
      @messages.push Bicycle.parkTheBike(dude)
    @messages.push Park.enter(dude)

Please check out this gist to see the specs. I used mocha to test-drive my code.

It's all nice and sweet. Except we have that nasty "if statement" in the middle of the GoesToThePark#toEnjoyTheWeather method.

Whenever I see a conditional block in the middle of a function call I immediately assume the violation of the Single Responsibility Principle.
I tolerate guard conditions in methods, but that "if statement" must die.

I remembered, from my early Java and C# days, reading about the Chain of Responsibility design pattern. The little command objects are linked together in a linked list; the first one is called from the "coordinator" object, and each checks whether there is anything for it to do with the arguments. If there is, the action is executed, and at the end of the method call the next command in the chain is called.
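As a refresher, the classic shape of that pattern looks something like this in Ruby; the number-classifying handlers are made-up examples, not part of the park workflow:

```ruby
# Each handler checks whether it applies, acts if so, then always
# hands control to the next link in the chain.
class Handler
  attr_accessor :successor

  def handle(request, log)
    log << action(request) if applicable?(request)
    successor.handle(request, log) if successor
  end
end

class EvenHandler < Handler
  def applicable?(n); n.even?; end
  def action(n); "#{n} is even"; end
end

class PositiveHandler < Handler
  def applicable?(n); n.positive?; end
  def action(n); "#{n} is positive"; end
end

# Wire the chain: the caller only ever talks to the first link.
chain = EvenHandler.new
chain.successor = PositiveHandler.new
```

Calling chain.handle(4, log) appends both messages, while chain.handle(-3, log) appends none; each link decides for itself, so the caller needs no conditionals.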

I found this pattern especially helpful in workflows similar to the example described above. The coordinator object only knows about the action objects, and its only responsibility is to call the one and only method on each of them, in order. There is no conditional in the method any more; the actions are smart enough to figure out whether they have to deal with the object in the context or not.

I introduce the four new action objects:

  1. LeavesTheHouse - delegates calls to the House object
  2. DrivesToThePark - invokes the methods on the Car object if the dude has a car
  3. RidesToThePark - sends messages to the Bicycle object if the dude has no car
  4. EntersThePark - executes the enter method on the Park object
Only DrivesToThePark and RidesToThePark protect themselves with guard conditions; their execution depends on whether the Dude has a car or not. But those are simple return statements at the very beginning of the method.

...

LeavesTheHouse = {
  execute: (messages, dude) ->
    messages.push House.leave(dude)
    messages.push House.closeTheDoor(dude)
}

DrivesToThePark = {
  execute: (messages, dude) ->
    return unless dude.hasCar()

    messages.push Car.jumpIn(dude)
    messages.push Car.driveToThePark(dude)
    messages.push Car.parkTheCar(dude)
}

RidesToThePark = {
  execute: (messages, dude) ->
    return if dude.hasCar()

    messages.push Bicycle.jumpOn(dude)
    messages.push Bicycle.rideToThePark(dude)
    messages.push Bicycle.parkTheBike(dude)
}

EntersThePark = {
  execute: (messages, dude) ->
    messages.push Park.enter(dude)
}

class GoesToThePark
  constructor: ->
    @messages = []

  toEnjoyTheWeather: (dude)->
    for action in [LeavesTheHouse, DrivesToThePark, RidesToThePark, EntersThePark]
      do =>
        action.execute(@messages, dude)

...

You can review the entire file in this gist.

The beauty of this code lies in the toEnjoyTheWeather() method. It is simple and now it's super easy to test.


... 

  toEnjoyTheWeather: (dude)->
    for action in [LeavesTheHouse, DrivesToThePark, RidesToThePark, EntersThePark]
      do =>
        action.execute(@messages, dude)

...

In fact, I worked on Ruby code where the coordinator object called a dozen different objects through its private methods. Tests were brittle, and I had to stare at the code to figure out why something was failing after a simple change. My specs were a clear indication that the code needed serious refactoring. I changed the code using the pattern above and eliminated all the private methods - they became simple action objects - and testing became much simpler.

Here is what it takes to test the coordinator object's method with stubs:

should = require 'should'
sinon = require 'sinon'

...

describe 'GoesToThePark', ->

  it 'calls the actions in order', ->
    goesToThePark = new GoesToThePark
    messages = goesToThePark.messages
    dude = {}

    leavesTheHouseStub = sinon.stub(LeavesTheHouse, 'execute') \
                              .withArgs(messages, dude)
    drivesToTheParkStub = sinon.stub(DrivesToThePark, 'execute') \
                               .withArgs(messages, dude)
    ridesToTheParkStub = sinon.stub(RidesToThePark, 'execute') \
                              .withArgs(messages, dude)
    entersTheParkStub = sinon.stub(EntersThePark, 'execute') \
                             .withArgs(messages, dude)

    goesToThePark.toEnjoyTheWeather(dude)
    leavesTheHouseStub.called.should.be.true
    drivesToTheParkStub.called.should.be.true
    ridesToTheParkStub.called.should.be.true
    entersTheParkStub.called.should.be.true

I leave writing the specs with stubs for the example prior to the action objects as an exercise for you.

Listen to your tests; they tell you the story (and the quality) of your code. Don't be afraid of creating tiny classes or objects with only 6-10 lines of code. They are super easy to test and I consider them the building blocks of reliable and maintainable software.

Big thanks to websequencediagrams.com for their tool I used to create the sequence diagram in this blog post.

Monday, March 5, 2012

Job Change

After about a year of employment I decided to leave my employer.
We had great times together: I worked on their software rewrite, helped them move from 23 specs (most of them failing) to some 5,700 passing ones, and we started down the curvy path of automated acceptance tests. I worked with some amazing people there whom I'll miss.

I wasn't looking for a new job. No, but an opportunity came up and I did not want to miss it.

A good friend of mine - Dave Speck - joined a small startup in Independence, OH late last year. I talked to him a couple of times and it seemed I could do a lot of things there. A few meetings and some beers later I made up my mind: I joined Dimple Dough in the middle of February.

I have only worked there for a little over a week, but we have already done so much!
Here is one example:

We can provide basic translation for our international clients, but some of them want to customize it further. The only way they can make translation changes today is to send us what they want changed, and then one of our engineers has to do the updates in our database. Our customers would be happy to do it themselves if there were a tool they could use. Hence the translator idea was born. We had a vague idea of what it would look like, but we did not know exactly how the tool SHOULD WORK.

We started by prototyping the tool in pure HTML with CSS and JavaScript. The benefit of doing this is the low cost of change. Imagine how much less expensive it is to modify a raw prototype than a fully functioning product. There are no domain objects, data models, or data migrations to change when the client wants to tweak the preview version. It's just dummy HTML with very simple jQuery, which allows us to demonstrate it to the client, who can give us feedback well before development begins.

Once we knew our prototype was close to what we wanted to build and our customer was happy with it, we sat down with our business- and quality-focused team members. In this "three amigos" meeting (BA, QA and Developer) we wrote scenarios in Gherkin syntax using our prototype and other documentation the team had collected by then.

It made me smile to realize that after the first three or four scenarios we were discussing edge cases nobody had thought about before. The scenarios we came up with are short and not tightly coupled to the user interface; they explain how this new tool should behave.
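To give a flavor of what such a scenario reads like, here is a hypothetical one for the translator tool - the wording below is mine, not the team's actual feature file:

```gherkin
Feature: Client-managed translations
  As an international client
  I want to edit my site's translations myself
  So that I don't have to wait for an engineer to update them

  Scenario: Client overrides a translated phrase
    Given a client using the basic French translation
    When the client changes the translation of "Submit" to "Envoyer"
    Then the client's French site should display "Envoyer"
    And the basic translation should remain unchanged for other clients
```

Note how the scenario describes behavior, not screens or buttons.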

I had tried using Gherkin and Cucumber at my previous employer, but I don't think it really caught on there. After talking with @chzy (Jeff Morgan) on a cold December morning I understood why: we used Gherkin for automated system testing and not to discover functionality with BAs and QAs prior to development.

Monday, January 23, 2012

JavaScript Testing with Mocha

JavaScript is a neat and powerful language. Sure it has its flaws, but serious software can be developed with it. The way I prefer developing JavaScript applications is by test driving the code with some kind of testing tool. And I am not thinking about hitting the browser's refresh button. No, I mean executing the specs right in the terminal.

I recently started playing with Visionmedia's mocha testing framework. The code is well maintained and the authors respond fairly quickly to pull requests and issues.
I would recommend it as an alternative to Jasmine.

This blog post will show you the first couple of steps you need to take to test drive your JavaScript code with mocha in the CLI. All my instructions are for OS X, but setting it up should be very similar on Linux and (maybe) on Windows as well.

First of all, you need node.js to run JavaScript code in the terminal. You can download the source code and compile it yourself, but I'd recommend using Homebrew and let it do the job for you.

$: brew install node

At the time of this writing my current node version is 0.6.6. You can check your node.js version by running this command:

$: node -v
v0.6.6

Next you need node's package management tool (npm). Your version of node may include npm, I list this step here in case it does not. Installing it is super easy, just follow the instructions on their web site and in your terminal.

With these two steps you're ready to roll. Create the project directory, cd into it and start moving in. Create a "src" and a "test" directory. You need to install mocha and should.js as npm packages. Having sinon.js - an excellent spying framework - wouldn't hurt either. Create your spec and source file and you are ready to test drive your app with mocha.
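The manual steps above can be sketched as a few shell commands. The project name below is made up, and the npm install line assumes you are online:

```shell
# Create the project layout described above ("my_project" is a made-up name)
mkdir -p my_project/src my_project/test
cd my_project

# Install the testing packages locally (requires network access):
# npm install mocha should sinon
```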

I really wanted to help you - Dear Reader - so I created this shell script to make your life easier. Create a directory, cd into it and run the command below in your terminal:

   curl -L http://git.io/setup_mocha_project | sh

If everything goes OK, you will see this:

create the src directory...
create the test directory...
write the package.json file...
install npm packages...

create a sample spec file...
create a sample src file...
run the spec with mocha...
  .

  ✔ 1 tests complete (1ms)

run the spec with list reporter...

   Person should be able to say hello: 1ms

  ✔ 1 tests complete (2ms)

Let's add one more feature to our Person object. Open up the test/person_spec.js file - it was created by the shell script above - and add the "can say good night" spec:

var should = require('should');
var Person = require(__dirname + '/../src/person');

describe('Person', function() {
  it('should be able to say hello', function() {
    var Person = global.theApp.Person();
    var personInstance = new Person();
    var message = personInstance.sayHelloTo('adomokos');

    message.should.equal('Hello, adomokos!');
  });

  // Add this spec
  it('can say good night', function() {
    var Person = global.theApp.Person();
    var personInstance = new Person();
    var message = personInstance.sayGoodNight();

    message.should.equal('Good night!');
  });
});

Run the mocha specs with this command:

$: ./node_modules/mocha/bin/mocha

The error is obvious: the Person object does not yet have the method "sayGoodNight".

  ..

  ✖ 1 of 2 tests failed:

  1) Person can say good night:
     TypeError: Object [object Object] has no method 'sayGoodNight'

Let's fix it by adding the missing method to the Person object:

global.theApp = {};

global.theApp.Person = function() {

  var Person = function() {
    this.sayHelloTo = function(anotherPerson) {
      return 'Hello, ' + anotherPerson + '!';
    };

    // Add this method
    this.sayGoodNight = function() {
      return 'Good night!';
    };
  };

  return Person;

};

When I run the specs again, they all pass.

  ..

  ✔ 2 tests complete (2ms)

You can try other reporters as well. The "list" reporter will give you the documentation text:

$: ./node_modules/mocha/bin/mocha -R list

Try the landing reporter, I found its output unexpected but really cool!

$: ./node_modules/mocha/bin/mocha -R landing

The steps once more:

  1. Make sure you have node.js installed
  2. Check for npm as well
  3. Create your project directory and cd into it
  4. Run this script $: curl -L http://git.io/setup_mocha_project | sh
  5. Execute the specs with $: node_modules/mocha/bin/mocha

And I almost forgot: mocha will pick up CoffeeScript files as well.

Enjoy!

 

::: Update (01/24/2012):
I asked TJ Holowaychuk, the author of mocha, what he thought of my blog post. He recommended adding a "test" script to the package.json file, making it easier to run the specs. I made that change: running npm test in the terminal should now execute all your specs under the test directory.
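For reference, the change might look something like this in package.json - the name, version, and dependency version ranges below are placeholders, not the actual file from the script:

```json
{
  "name": "mocha-demo",
  "version": "0.0.1",
  "scripts": {
    "test": "./node_modules/mocha/bin/mocha"
  },
  "dependencies": {
    "mocha": "*",
    "should": "*",
    "sinon": "*"
  }
}
```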

Monday, January 2, 2012

The Tic-Tac-Toe Game

I've been pretty busy lately working on this Tic-Tac-Toe game. It all started as a project to learn CoffeeScript and Backbone.js, and it turned into a big journey into JavaScript and node.js.

I am test-driving the code with jasmine-node, a great node adapter for Jasmine BDD.
The computer's moves are quite predictable; I will work on that in the future.


(A playable version of the game was embedded here, with a Won/Lost/Tie scoreboard.)

Enjoy!