Sunday, November 1, 2015

Clojure API with Yesql, Migrations and More (Part 3.)

We created a database with scripts, added migrations, and talked to the database with the help of yesql in the previous posts. Please look at those first to get up to speed with this part.

In the final part of the series, we will serialize the data we pull from the database to JSON, and we will expose that data through an HTTP endpoint. We will also add logging to monitor the JDBC communication with the database.

It was about two years ago, when I attended a conference, that I sat down with a couple of friends one night for a chat. It was late in the evening, and after a couple of beers they asked me what I was up to. I told them I was learning Clojure. They wanted to see it in action, so we solved FizzBuzz together. They liked it, but one question lingered: "can you build a web app with Clojure?". Of course!
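(In case you're curious, a Clojure FizzBuzz looks something like this; I'm reconstructing from hazy memory, so treat it as a sketch:)

(defn fizzbuzz [n]
  (cond
    (zero? (mod n 15)) "FizzBuzz"
    (zero? (mod n 3))  "Fizz"
    (zero? (mod n 5))  "Buzz"
    :else n))

(map fizzbuzz (range 1 16))
;; => (1 2 "Fizz" 4 "Buzz" "Fizz" 7 8 "Fizz" "Buzz" 11 "Fizz" 13 14 "FizzBuzz")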
The app started out as a console application, but the requirements have changed: we need to expose the data as JSON via an HTTP interface. I like to look at web frameworks as a "delivery mechanism", and progressing the idea this way follows that.

Use this commit as a starting point for this blog post. Rebuild the database by running make build-db.

Serializing the Data as JSON

We will use the cheshire library to serialize the data to JSON. Let's modify the "project.clj" file this way (my changes are highlighted):

...
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [org.postgresql/postgresql "9.4-1201-jdbc41"]
                 [yesql "0.5.1"]
                 [cheshire "5.5.0"]]
...
The serialization should be taken care of by some kind of logic component. Let's write the test for this; place this content into your "test/kashmir/logic_test.clj" file:
(ns kashmir.logic-test
  (:require [clojure.test :refer :all]
            [kashmir.logic :refer :all]
            [cheshire.core :as json]))

(deftest find-member-by-id-test
  (testing "returns a JSON serialized member record"
      (let [member (first (json/parse-string (find-member 2) true))]
        (is (= "Paul" (:first_name member))))))
Let's add the function skeleton first, so we get test failures instead of Java exceptions. Put this in the "src/kashmir/logic.clj" file:
(ns kashmir.logic)

(defn find-member [id] nil)
Rebuild the database with the make build-db command. Running lein test should provide an output similar to this:
% lein test

lein test kashmir.data-test

lein test kashmir.logic-test

lein test :only kashmir.logic-test/find-member-by-id-test

FAIL in (find-member-by-id-test) (logic_test.clj:9)
returns a JSON serialized member record
expected: (= "Paul" (:first_name member))
  actual: (not (= "Paul" nil))

Ran 4 tests containing 4 assertions.
1 failures, 0 errors.
Tests failed.
Cheshire has two main functions: generate-string to serialize and parse-string to deserialize data. We have to serialize the data; please modify the "src/kashmir/logic.clj" file this way:
(ns kashmir.logic
  (:require [kashmir.data :as data]
            [cheshire.core :as json]))

(defn find-member [id]
  (json/generate-string (data/find-member id)))
Run your tests again, all 4 should pass now.
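If you want to poke at cheshire's two functions in the REPL, the round-trip looks like this (return values shown as comments):

(require '[cheshire.core :as json])

(json/generate-string {:first_name "Paul"})
;; => "{\"first_name\":\"Paul\"}"

(json/parse-string "{\"first_name\":\"Paul\"}" true)
;; => {:first_name "Paul"}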
As you think about it, the logic namespace is responsible for making sure the data component returns data, handling exceptions, and validating user input. This is the part of the app I'd test the most.
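A sketch of what such guarding could look like later; the not-found behavior is my assumption, as the post itself sticks to the happy path:

(defn find-member [id]
  (if-let [member (seq (data/find-member id))]
    (json/generate-string member)
    ;; assumption: serialize an error hash when no record is found
    (json/generate-string {:error "member not found" :id id})))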
(Commit point.)

Exposing the Data with Compojure

Compojure is our go-to tool when it comes to building a web interface without much ceremony. Let's add it to our "project.clj" file:

(defproject kashmir "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [org.postgresql/postgresql "9.4-1201-jdbc41"]
                 [yesql "0.5.1"]
                 [compojure "1.4.0"]
                 [ring/ring-defaults "0.1.5"]
                 [cheshire "5.5.0"]]
  :clj-sql-up {:database "jdbc:postgresql://kashmir_user:password@localhost:5432/kashmir"
               :deps [[org.postgresql/postgresql "9.4-1201-jdbc41"]]}
  :ring {:handler kashmir.handler/app}
  :plugins  [[clj-sql-up "0.3.7"]
             [lein-ring "0.9.7"]]
  :main ^:skip-aot kashmir.core
  :target-path "target/%s"
  :profiles {:uberjar {:aot :all}
             :dev {:dependencies [[javax.servlet/servlet-api "2.5"]
                                  [ring-mock "0.1.5"]]}})
We also need to add a "src/kashmir/handler.clj" file that will handle the different web requests:
(ns kashmir.handler
  (:require [compojure.core :refer :all]
            [compojure.route :as route]
            [ring.middleware.defaults :refer [wrap-defaults api-defaults]]
            [kashmir.logic :as logic]))

(defroutes api-routes
    (GET "/" [] "Hello World")
    (GET "/members/:id{[0-9]+}" [id]
         {:status 200
          :headers {"Content-Type" "application/json; charset=utf-8"}
          :body (logic/find-member (read-string id))})
    (route/not-found "Not Found"))

(def app
    (wrap-defaults api-routes api-defaults))
Fire up the server with the lein ring server-headless command. Open up a new terminal window, and request the member with ID 2 using the curl command: curl -i http://localhost:3000/members/2. You should see something like this:
% curl -i http://localhost:3000/members/2
HTTP/1.1 200 OK
Date: Thu, 15 Oct 2015 17:31:44 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 123
Server: Jetty(7.6.13.v20130916)

[{"id":2,"first_name":"Paul","last_name":"McCartney",
  "email":"pmccartney@beatles.com","created_at":"2015-10-15T16:50:03Z"}]%
The -i switch for curl will print out both the header and the body of the response.
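One side note: read-string is safe enough here because the {[0-9]+} pattern in the route guarantees the parameter contains only digits. If you'd rather not call read-string on request input at all, a Long/parseLong variant works just as well (my variation, not what the repo does):

(GET "/members/:id{[0-9]+}" [id]
     {:status 200
      :headers {"Content-Type" "application/json; charset=utf-8"}
      ;; Long/parseLong only ever yields a number, unlike read-string
      :body (logic/find-member (Long/parseLong id))})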
(Commit point.)

Using Ring Response

The way we are generating the response is too verbose: we are explicitly setting the status, the headers, and the body. There are Ring helpers we can take advantage of to make this a lot shorter.
Change the "src/kashmir/handler.clj" file content to this (highlighted rows will designate changes):

(ns kashmir.handler
  (:require [compojure.core :refer :all]
            [compojure.route :as route]
            [ring.middleware.defaults :refer [wrap-defaults api-defaults]]
            [ring.util.response :as rr]
            [kashmir.logic :as logic]))

(defroutes api-routes
    (GET "/" [] "Hello World")
    (GET "/members/:id{[0-9]+}" [id]
         (rr/response (logic/find-member (read-string id))))
    (route/not-found "Not Found"))

(def app
    (wrap-defaults api-routes api-defaults))
Fire up the server and run the curl request; everything should still work the same.
(Commit point.)

Stubbing out Data Access in Logic Tests

Hitting the database from the logic tests is feasible, but it won't buy you all that much. You can stub out the database call with Clojure's with-redefs function. You need to define a function that returns the value the data access function would return.

Modify the "test/kashmir/logic_test.clj" file this way:
(ns kashmir.logic-test
  (:require [clojure.test :refer :all]
            [kashmir.logic :refer :all]
            [kashmir.data :as data]
            [cheshire.core :as json]))

(deftest find-member-by-id-test
  (testing "returns a JSON serialized member record"
    (with-redefs [data/find-member (fn [id] [{:first_name "Paul"}])]
      (let [member (first (json/parse-string (find-member 2) true))]
        (is (= "Paul" (:first_name member)))))))

Now stop your Postgres database server and run this test. It should pass, since it's not hitting the database; it purely tests the hash serialization.
(Commit point.)

Adding JDBC Logging

Our solution works well as it is; however, we don't see what kind of SQL statements are executed against the database. Turning on logging in Postgres is one option, but monitoring JDBC within our application is preferable. We will use the log4jdbc library to log JDBC activity. This library uses the Simple Logging Facade for Java (SLF4J), so you need to add that jar file to the project.

Download the slf4j jar file and add it to the project's lib directory. Then modify the "project.clj" file this way:

                  [yesql "0.5.1"]
                  [compojure "1.4.0"]
                  [ring/ring-defaults "0.1.5"]
                  [cheshire "5.5.0"]]
                  [cheshire "5.5.0"]
                  [com.googlecode.log4jdbc/log4jdbc "1.2"]]
   :clj-sql-up {:database "jdbc:postgresql://kashmir_user:password@localhost:5432/kashmir"
                :deps [[org.postgresql/postgresql "9.4-1201-jdbc41"]]}
   :ring {:handler kashmir.handler/app}
   :resource-paths ["lib/slf4j-simple-1.7.12.jar"]
   :plugins  [[clj-sql-up "0.3.7"]
              [lein-ring "0.9.7"]]
   :main ^:skip-aot kashmir.core
You need to configure slf4j; you can do that by adding this content to the "resources/log4j.properties" file:
# the appender used for the JDBC API layer call logging above, sql only
log4j.appender.sql=org.apache.log4j.ConsoleAppender
log4j.appender.sql.Target=System.out
log4j.appender.sql.layout=org.apache.log4j.PatternLayout
log4j.appender.sql.layout.ConversionPattern= \u001b[0;31m (SQL)\u001b[m %d{yyyy-MM-dd HH:mm:ss.SSS} \u001b[0;32m %m \u001b[m %n

# ==============================================================================
# JDBC API layer call logging :
# INFO shows logging, DEBUG also shows where in code the jdbc calls were made,
# setting DEBUG to true might cause minor slow-down in some environments.
# If you experience too much slowness, use INFO instead.

log4jdbc.drivers=org.postgresql.Driver

# Log all JDBC calls except for ResultSet calls
log4j.logger.jdbc.audit=FATAL,sql
log4j.additivity.jdbc.audit=false

# Log only JDBC calls to ResultSet objects
log4j.logger.jdbc.resultset=FATAL,sql
log4j.additivity.jdbc.resultset=false

# Log only the SQL that is executed.
log4j.logger.jdbc.sqlonly=FATAL,sql
log4j.additivity.jdbc.sqlonly=false

# Log timing information about the SQL that is executed.
log4j.logger.jdbc.sqltiming=FATAL,sql
log4j.additivity.jdbc.sqltiming=false

# Log connection open/close events and connection number dump
log4j.logger.jdbc.connection=FATAL,sql
log4j.additivity.jdbc.connection=false
Finally, you need to modify the "src/kashmir/data.clj" file to use the log4jdbc driver for the Postgres connection:
   (:require [yesql.core :refer [defqueries]]
             [clojure.java.jdbc :as jdbc]))
 
 (def db-spec {:classname "net.sf.log4jdbc.DriverSpy"
               :subprotocol "log4jdbc:postgresql"
               :subname "//localhost:5432/kashmir"
               :user "kashmir_user"
               :password "password1"})
Now when you run the tests or hit the HTTP endpoint with cURL, you should see the JDBC logs in the terminal:
lein test kashmir.data-test
[main] INFO jdbc.connection - 1. Connection opened
[main] INFO jdbc.audit - 1. Connection.new Connection returned
[main] INFO jdbc.audit - 1. PreparedStatement.new PreparedStatement returned
[main] INFO jdbc.audit - 1. Connection.prepareStatement(SELECT *
FROM members
WHERE id = ?) returned net.sf.log4jdbc.PreparedStatementSpy@51dbed72
[main] INFO jdbc.audit - 1. PreparedStatement.setObject(1, 2) returned
[main] INFO jdbc.sqlonly - SELECT * FROM members WHERE id = 2
...
(Commit point.)

As you can see, the log can be verbose. The easiest way I found to turn off logging is changing the log4jdbc:postgresql subprotocol back to the original value: postgresql.
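For reference, reverting the logging means the db-spec in "src/kashmir/data.clj" goes back to the plain Postgres driver settings we started with:

(def db-spec {:classname "org.postgresql.Driver"
              :subprotocol "postgresql"
              :subname "//localhost:5432/kashmir"
              :user "kashmir_user"
              :password "password1"})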
(Commit point.)

This last step concludes the series. We set up a database build process, added migrations and seed data to it. We separated SQL from Clojure by using the yesql library. We added testing with mocking to make sure our code is working properly. We exposed the data as JSON through an HTTP endpoint and we added JDBC logging to the project to monitor the communication with the database.

I hope you will find this exercise helpful. Good luck building your database backed Clojure solution!

Saturday, October 31, 2015

Clojure API with Yesql, Migrations and More (Part 2.)

In the previous article, we started working on kashmir, a Clojure project that interacts with a database, and exposes the data through a JSON HTTP endpoint.
In this post we'll seed the database with some test data and add yesql as our DB communication tool, and at the end we'll cover testing.

Adding Seed Data

Use this commit as your starting point for this exercise. Rebuild your database by running make build-db to make sure you have no records in the tables. Create a new file in resources/seeds.sql and add the following content to it:

INSERT INTO bands(name) VALUES ('The Beatles');
INSERT INTO bands(name) VALUES ('The Doors');

INSERT INTO members(first_name, last_name, email)
VALUES ('John', 'Lennon', 'jlennon@beatles.com');
INSERT INTO members(first_name, last_name, email)
VALUES ('Paul', 'McCartney', 'pmccartney@beatles.com');
INSERT INTO members(first_name, last_name, email)
VALUES ('George', 'Harrison', 'gharrison@beatles.com');
INSERT INTO members(first_name, last_name, email)
VALUES ('Ringo', 'Starr', 'rstarr@beatles.com');

INSERT INTO bands_members(band_id, member_id) VALUES(1, 1);
INSERT INTO bands_members(band_id, member_id) VALUES(1, 2);
INSERT INTO bands_members(band_id, member_id) VALUES(1, 3);
INSERT INTO bands_members(band_id, member_id) VALUES(1, 4);
This will create 2 band and 4 member records. It will also associate the members of The Beatles with their band record. We will insert these records through Postgres' command line tool. Let's add this to our build-db target in our Makefile:
  ...
build-db:
 dropdb --if-exists --username $(USER) $(DBNAME) -h $(HOST) -p $(PORT)
 createdb --username $(USER) $(DBNAME) -h $(HOST) -p $(PORT)
 lein clj-sql-up migrate
 psql -U $(USER) -d $(DBNAME) --file resources/seeds.sql > /dev/null
We added > /dev/null to this line because we are not interested in seeing how many records got inserted into the tables. When you run make build-db you should have the seed data inserted into your database.
(Commit point.)

Talking to the Database with Yesql

The natural way to communicate with a database in Clojure is using java.jdbc. However, spaghetti SQL is hard to understand, and mixing Clojure code with SQL can turn into a mess very quickly. I found the fantastic yesql tool a few weeks ago, and it was just what I needed: an easy way to separate SQL from Clojure. Let's add yesql and the Postgres JDBC driver to the project by modifying the project.clj file this way:

(defproject kashmir "0.1.0-SNAPSHOT"
  ...
  :dependencies [[org.clojure/clojure "1.7.0"]
                 [org.postgresql/postgresql "9.4-1201-jdbc41"]
                 [yesql "0.5.1"]]
  ...)
Create a new directory called "sql" under "src/kashmir". Create a new SQL file in this directory called "data.sql". Add these two queries to it:
-- name: find-member-by-id
-- Find the member with the given ID(s).
SELECT *
FROM members
WHERE id = :id

-- name: count-members
-- Counts the number of members
SELECT count(*) AS count
FROM members
The lines in this SQL file that begin with -- name: have special significance. Yesql will create data access functions with the names you define there.
Add a new Clojure file under "src/kashmir" called "data.clj", this file will hold the data access functions. Add the following code to it:
(ns kashmir.data
  (:require [yesql.core :refer [defqueries]]
            [clojure.java.jdbc :as jdbc]))

(def db-spec {:classname "org.postgresql.Driver"
              :subprotocol "postgresql"
              :subname "//localhost:5432/kashmir"
              :user "kashmir_user"
              :password "password1"})

(defqueries "kashmir/sql/data.sql"
            {:connection db-spec})
I am a bit unhappy about duplicating the Postgres connection information here; I'll leave setting up the DB connection in the project.clj file as an exercise for you.
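If the duplication bothers you too, one possible direction is reading the settings from environment variables in both places. A minimal sketch, assuming made-up variable names that fall back to the values used in this post:

(def db-spec
  {:classname   "org.postgresql.Driver"
   :subprotocol "postgresql"
   ;; the KASHMIR_DB_* variable names are hypothetical
   :subname     (or (System/getenv "KASHMIR_DB_SUBNAME") "//localhost:5432/kashmir")
   :user        (or (System/getenv "KASHMIR_DB_USER") "kashmir_user")
   :password    (or (System/getenv "KASHMIR_DB_PASSWORD") "password1")})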
Fire up the REPL to see if this works (you can find my input highlighted below):
% lein repl
nREPL server started on port 55527 on host 127.0.0.1 - nrepl://127.0.0.1:55527
REPL-y 0.3.7, nREPL 0.2.10
Clojure 1.7.0
Java HotSpot(TM) 64-Bit Server VM 1.8.0_60-b27
    Docs: (doc function-name-here)
          (find-doc "part-of-name-here")
  Source: (source function-name-here)
 Javadoc: (javadoc java-object-or-class-here)
    Exit: Control+D or (exit) or (quit)
 Results: Stored in vars *1, *2, *3, an exception in *e

kashmir.core=> (require '[kashmir.data :refer :all])
nil
kashmir.core=> (count-members)
({:count 4})
kashmir.core=> (find-member-by-id {:id 2})
({:id 2, :first_name "Paul", :last_name "McCartney", :email "pmccartney@beatles.com", :created_at #inst "2015-10-14T19:59:48.905474000-00:00"})
kashmir.core=>
Fantastic! We can talk to the database through yesql based on the SQL scripts defined in "src/kashmir/sql/data.sql" file.
(Commit point.)

Adding Tests

Although our application does not have much logic just yet, I'd like to show you how you could start writing automated tests. Create a new test file under "test/kashmir/data_test.clj". Add the following code to it:

(ns kashmir.data-test
  (:require [clojure.test :refer :all]
            [kashmir.data :refer :all]))

(deftest count-members-test
  (testing "there are 4 members"
    (is (= 4 (-> (count-members) first :count)))))
Remove the failing test from the "test/kashmir/core_test.clj" file:
(ns kashmir.core-test
  (:require [clojure.test :refer :all]
            [kashmir.core :refer :all]))
Run the tests by invoking lein test and you should see the one and only test passing:
% lein test

lein test kashmir.data-test

Ran 1 tests containing 1 assertions.
0 failures, 0 errors.
(Commit point.)

Finding a Member

Yesql needs a hash even when a record is looked up by an ID. This is how you invoke the yesql generated function: (find-member-by-id {:id 2}). We should keep the data access interface unaware of this implementation detail. Let's find a member by an ID this way: (find-member 2). Write the test for this in test/kashmir/data_test.clj:

...

(deftest find-member-by-id-test
  (testing "finds PM with id 2"
    (is (= "Paul" (-> (find-member 2) first :first_name)))))
This is the code implementation of it in "src/kashmir/data.clj":
...

(defn find-member [id]
  (find-member-by-id {:id id}))
Both of the tests should pass now.
(Commit point.)

Adding a Member

Reading data with yesql is simple, but adding records and testing that over and over can be more challenging. The database has to be reset to its original state after each test run. You have two options here:

  • truncate all the tables after each test,
  • roll back the INSERT transactions.
Truncating all the tables is a blog post in itself. Unfortunately, the Clojure community has not yet created an equivalent of the DatabaseCleaner project we love so much in the Ruby world. Let's use the roll-back feature of the INSERT transaction in our test examples.

When you create a member, you need to associate that member with a band. In fact, a member cannot be added to the database without a band. A hash with all the member data plus the band name will be the arguments to this create function.
Let's write the test first in the "test/kashmir/data_test.clj" file:

...
(deftest create-member-test
  (testing "adds a member to the DB"
    (let [member {:first_name "Jim" :last_name "Morrison" :email "jmorrison@doors.com"}]
      (is (= 1 (create-member! member "The Doors"))))))
Let's write the simplest code that could possibly work. First, we need to add the INSERT SQL statements to "src/kashmir/sql/data.sql". This is what they look like:
...

-- name: find-band-by-name
-- Finds a band record based on the provided name
SELECT *
FROM bands
WHERE name = :name

-- name: create-member-raw!
-- Adds a new member with the bands_members association
WITH inserted AS (
  INSERT INTO members (first_name, last_name, email)
  VALUES (:first_name, :last_name, :email)
  RETURNING id
)
INSERT INTO bands_members (member_id, band_id)
SELECT inserted.id, :band_id FROM inserted
As I was writing this blog post, I researched how I could insert records into different tables with one SQL statement. Using a stored procedure or a function would be an easy choice, but that's too heavy for what we need. I found this blog post by Rob Conery. He shows how CTEs (Common Table Expressions) can be used to insert a record and reuse it in a subsequent insert. That's what you see in the second SQL command. By using this solution the Clojure code stays small, as the database write functionality is delegated to PostgreSQL.
This is what the data logic will look like in the "src/kashmir/data.clj" file:
...

(defn create-member!
  ([member band-name]
    (let [band-id (-> (find-band-by-name {:name band-name})
                       first
                       :id)]
        (create-member-raw! (conj member {:band_id band-id})))))
The "-raw" postfix was used for the function that gets generated by yesql. We could have created an alias, but I liked this kind of naming-convention.
When you run the tests, they won't error out, but one of them will fail: the members table now has more than 4 records. Of course, the database was not restored to its default state. Let's take care of that! We will insert the record, but we will roll back the transaction once the test is complete, leaving the database in its original, default state.
Add/modify the highlighted lines in your "src/kashmir/data.clj" file:
...

(defn create-member!
  ([member band-name]
    (jdbc/with-db-transaction [tx db-spec]
      (create-member! member band-name tx)))
  ([member band-name tx]
    (let [band-id (-> (find-band-by-name {:name band-name})
                       first
                       :id)]
        (create-member-raw! (conj member {:band_id band-id})
                            {:connection tx}))))
And finally, initialize and roll back the transaction from the test. Change the highlighted lines in "test/kashmir/data_test.clj" this way:
(ns kashmir.data-test
  (:require [clojure.test :refer :all]
            [kashmir.data :refer :all]
            [clojure.java.jdbc :as jdbc]))
  ...

(deftest create-member-test
  (jdbc/with-db-transaction [tx db-spec]
    (jdbc/db-set-rollback-only! tx)
      (testing "adds a member to the DB"
        (let [member {:first_name "Jim" :last_name "Morrison" :email "jmorrison@doors.com"}]
          (is (= 1 (create-member! member "The Doors" tx)))))))
Rebuild your database and run your tests. You should see the "0 failures, 0 errors." message. Run them many times; the tests should always pass.
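If you end up with many write tests, a clojure.test fixture can centralize the roll-back bookkeeping. A sketch of the idea, assuming the test namespace shown above; the dynamic *tx* var is my own invention, not something the repo uses:

(def ^:dynamic *tx* nil)

;; wraps each test in a transaction that is always rolled back
(defn rollback-fixture [test-fn]
  (jdbc/with-db-transaction [tx db-spec]
    (jdbc/db-set-rollback-only! tx)
    (binding [*tx* tx]
      (test-fn))))

(use-fixtures :each rollback-fixture)

Tests would then pass *tx* to functions like create-member! instead of opening their own transaction.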
(Commit point.)

One Last Refactoring

I am unhappy with the create-member! function. The way we are looking up the band by its name is inelegant; I feel we could do better. Since we have one band record per name, when we call find-band-by-name we should get back one single hash, not a lazy-seq with a hash in it. Let's refactor to that! First, we'll rename the yesql-generated function to find-band-by-name-raw in the "src/kashmir/sql/data.sql" file:

...

-- name: find-band-by-name-raw
-- Finds a band record based on the provided name
SELECT *
FROM bands
WHERE name = :name
Let's refactor the actual code like this in "src/kashmir/data.clj":
...

(defn find-band-by-name [name]
  (first (find-band-by-name-raw {:name name})))

(defn create-member!
  ([member band-name]
    (jdbc/with-db-transaction [tx db-spec]
      (create-member! member band-name tx)))
  ([member band-name tx]
    (let [band-id (:id (find-band-by-name band-name))]
        (create-member-raw! (conj member {:band_id band-id})
                            {:connection tx}))))
I rebuilt the db, ran the tests and everything passed.
(Commit point.)

You could say this is only the "happy path": what if the band name is incorrect and no band is found? This will blow up somewhere. Absolutely! You need to add exception handling and error checking. I wanted to keep my examples simple, so others coming to Clojure can benefit from the simplified code.
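If you wanted to guard against that, a minimal sketch of the transaction-taking arity could look like this; the ex-info behavior is my choice, not part of the repo:

(defn create-member!
  ([member band-name tx]
    (if-let [band (find-band-by-name band-name)]
      (create-member-raw! (conj member {:band_id (:id band)})
                          {:connection tx})
      ;; assumption: raise with context instead of inserting a nil band_id
      (throw (ex-info "band not found" {:band-name band-name})))))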

This last refactoring concludes the second part of the series. In the final part we will add logging to JDBC to monitor how yesql communicates with the database. We will also expose the data as JSON through an HTTP endpoint.

Clojure API with Yesql, Migrations and More (Part 1.)

I've found endless books and articles explaining the core ideas and building blocks of the Clojure programming language. They show you how to use the different data structures, and they have good examples for little tasks, like reading and parsing a CSV file, but books or articles that walk you through building a comprehensive solution are hard to find.
I always liked writings that showed me how to build an app. I would learn many aspects of a language, get familiar with tools, and most of all, I would build something that could serve as foundation for my future projects.
I am planning to do just that with this series of blog posts. I'd like to show you how to:

  • Set up your database environment with scripts
  • Manage database changes through migrations
  • Test the different components
  • Stub out function calls you don't need for testing
  • Add logging to monitor the database communication
  • And all this in Clojure!
By the end of these blog posts you will be able to use Clojure libraries to expose database records as JSON through HTTP endpoints.

I have the following presumptions:

  • You have Clojure installed (I have version 1.7.0 at the time of writing)
  • We'll use PostgreSQL (mine is 9.4.4)
  • I am using OSX (10.10.5)
The name of the app is "kashmir", and you can find the final solution in this public repo. I will link specific commit points to the blog posts, so you can join this tutorial at any point you want. Let's dive in!

The Data Model

The data model is simple, it has only 3 tables. The members table lists the various band members, the bands table lists all the bands those members belong to, and the bands_members table is used to map the members to their bands.
(An entity-relationship diagram of the three tables illustrated this in the original post.)

Creating the Database User

I use the excellent pgcli tool as my command line interface for Postgres. It has code completion and table name suggestions; it's psql on steroids. If you don't have it, grab it through homebrew. Create a DB user called "kashmir_user" and allow this user to create DBs. This is how you do it in the command line, with all the inputs highlighted:

% pgcli postgres
Version: 0.19.1
Chat: https://gitter.im/dbcli/pgcli
Mail: https://groups.google.com/forum/#!forum/pgcli
Home: http://pgcli.com
postgres> CREATE USER kashmir_user PASSWORD 'password1';
CREATE ROLE
Command Time: 0.000s
Format Time: 0.000s
postgres> ALTER USER kashmir_user CREATEDB;
ALTER ROLE
Command Time: 0.000s
Format Time: 0.000s
postgres>

Initializing the Project

Generate the new project skeleton by running the lein new app kashmir command in the terminal. You should have a skeleton app project that looks like this. When you run lein run, you see "Hello, World!", and when you run the tests you see one failure:

% lein test

lein test kashmir.core-test

lein test :only kashmir.core-test/a-test

FAIL in (a-test) (core_test.clj:7)
FIXME, I fail.
expected: (= 0 1)
  actual: (not (= 0 1))

Ran 1 tests containing 1 assertions.
1 failures, 0 errors.
Tests failed.

Creating the Database

Database drop and create operations should be scripted. You can use rake db:drop and rake db:create in Rails; we should have something similar here. You can use the Postgres command line tools to create and drop databases with the createdb and dropdb commands. The --if-exists switch helps when you're running it for the first time: the command won't error out if the database does not exist.
The easiest way to create executable tasks is with a Makefile. Create a new file called Makefile in your project root and add this to it:

DBNAME=kashmir
USER=kashmir_user
PORT=5432
HOST=localhost

PGPASSWORD=password1

# Builds the DB by dropping and recreating it
build-db:
 dropdb --if-exists --username $(USER) $(DBNAME) -h $(HOST) -p $(PORT)
 createdb --username $(USER) $(DBNAME) -h $(HOST) -p $(PORT)
We set up variables in the Makefile so it will be easy to change these values later; this also adheres to good DRY principles.
Run the Make target by typing the command make build-db in the terminal. You can run this as many times as you want, it will drop and recreate the empty database for you.
(Commit point.)

Running Migrations

The best way to implement changes in a database is through reversible migrations. I usually use the great clj-sql-up migration tool for that. Let's add it to the project.clj file:

(defproject kashmir "0.1.0-SNAPSHOT"
  ...
  :clj-sql-up {:database "jdbc:postgresql://kashmir_user:password@localhost:5432/kashmir"
               :deps [[org.postgresql/postgresql "9.4-1201-jdbc4"]]}
  :plugins  [[clj-sql-up "0.3.7"]])
Run the command lein clj-sql-up create create-members to generate your first migration. This should create a new file in the "migrations" directory. Open up that file and add your migration SQL to it:
(defn up []
    ["CREATE TABLE members(id SERIAL PRIMARY KEY,
                           first_name varchar(50) NOT NULL,
                           last_name varchar(50) NOT NULL,
                           email varchar(50) NOT NULL,
                           created_at timestamp NOT NULL default CURRENT_TIMESTAMP)"
     "CREATE INDEX idx_members_id ON members(id)"
     "CREATE UNIQUE INDEX idx_email_unique ON members(email)"])

(defn down []
  ["DROP TABLE members"])
Test your SQL by running lein clj-sql-up migrate in the terminal. I would recommend looking at the database to make sure the first table, "members", got created properly. Open up pgcli and run \dt from the pgcli prompt. You should see two tables listed there:
  • clj_sql_migrations
  • members
The table "clj_sql_migrations" is used to track the actual version of your database, it's the metadata for clj-sql-up to run the migrations. Let's add the "bands" and "bands_members" tables as well, create a new migration file with the clj-sql-up generator: lein clj-sql-up create create-bands. Open up the migrations/*-create-bands.clj file and add this SQL:
(defn up []
    ["CREATE TABLE bands(id SERIAL PRIMARY KEY,
                         name varchar(50) NOT NULL,
                         created_at timestamp NOT NULL default CURRENT_TIMESTAMP)"
     "CREATE INDEX index_bands_id ON bands(id)"
     "CREATE TABLE bands_members(id SERIAL PRIMARY KEY,
                                 band_id INTEGER REFERENCES bands (id),
                                 member_id INTEGER REFERENCES members (id),
                                 created_at timestamp NOT NULL default CURRENT_TIMESTAMP)"])

(defn down []
  ;; drop bands_members first, it references the other two tables
  ["DROP TABLE bands_members"
   "DROP TABLE bands"])
We should be running the migrations when we drop and rebuild the database. Change your Makefile's "build-db" target like this:
  ...
# Builds the DB by dropping and recreating it
build-db:
  dropdb --if-exists --username $(USER) $(DBNAME) -h $(HOST) -p $(PORT)
  createdb --username $(USER) $(DBNAME) -h $(HOST) -p $(PORT)
  lein clj-sql-up migrate
Now when you run make build-db in your terminal, you should see the database recreated by dropping it first, creating it and running the migration at the end.
(Commit point.)

In the next post we'll add seed data to the database for testing purposes, we will also use the excellent Yesql library to communicate with the database.

Friday, October 2, 2015

The $18 Web Design

My first employer in Minneapolis charged tens of thousands of dollars for a brochureware website that did nothing more than promote a small business in the early 2000s. It became harder and harder to acquire new business, and the price point eventually dropped significantly, but still, it was a pretty good business to be in.

Then in August 2011 Twitter open sourced bootstrap, which made the entire web look the same. It was a huge leap in the right direction, but every other website or internal app looked very much alike.

More and more UI engineers cranked out "customized" bootstrap apps. Engineers familiar with bootstrap were able to modify and tweak the design. I even built my own resume on one of those bootstrap designs.

A couple of weeks ago a good friend of mine pinged me asking for help with his app prototype. He had even picked out the template himself. He sent it over to me as a zip file; I extracted it and my jaw dropped. The 566 MB of content I found in it was amazing.

It had:

  • 4 layouts
  • 5 dashboard mockups with sample graphs
  • 7 different graphs
  • an email template with all messages, view and write email templates
  • metrics dashboard with 6 different data representations
  • all sorts of UI widgets like zoomable maps
  • form widgets with wizards, file uploads
  • 19 user profile views
  • 2 login, forgot password and different error pages
  • a code editor, a timeline view, tree and chat view
  • different UI elements like panels, buttons, tabs and badges
  • 4 data table templates
  • full e-commerce design for products, orders, order detail and payment forms
  • 3 galleries
  • 2 menu options
  • a public site design, this way your company can have a marketing site aligned with the app
  • 5 full and starter templates (angular.js, ASP.NET MVC, meteor.js, static HTML 5 template, Rails 4 templates)
  • 3 CSS pre-processors (SASS, SCSS, LESS)
  • all the templates in PSD in case you want to further tweak it yourself

With a template like that you can do pretty much everything you want. An e-commerce app? Sure! A data analytics application? Absolutely. I've tried using my UI chops before, but none of my attempts came remotely close to what a template like this can offer.

And the best part? This template does not cost $1000. Not even $500. You can have it all for 18 dollars. Yes, for the cost of your lunch you can put your ideas in motion.

Sunday, August 23, 2015

Pay It Forward

I worked in "the Enterprise" long time ago where just the IT headcount reached well above 2000. I was in the Microsoft space then, but I had always admired those Java folks who could just use Spring for dependency management and AOP, and could deploy their apps to a Unix or Linux server. I wanted to use Spring.NET for easier unit testing through dependency injection, however, I had to:

  • fill out a form
  • wait three-four weeks to get scheduled to present my case to the committee
  • wait for the committee's decision
  • start using the open source tool a few weeks later

I did not want to wait two months to be more productive, I wanted to use that tool right away, right at that moment.

A different - and I should say more progressive - software company was a bit more relaxed. We only had to check the open source license for the tool or framework we wanted to use, and if it was the most permissive MIT license, we did not even have to ask.

I finally found the freedom I had always wanted in the startup world. There is no committee I have to go to for permission. If the developers are on board with it, I glance at its license, and if it's permissive, we don't think twice about using it.

We built our app entirely on open source software. Our code editors, the database server, the programming languages, the server operating system, the web framework, and our app's API layer all use open source software. We did not pay a single penny for them.

However, it takes serious effort to build and maintain a code base. Developers work on these projects after work and during the weekend, not expecting any compensation in exchange. As the creator of LightService, I realize what it takes to maintain a library.

I set a rule for myself:

If I use a particular open source software extensively, I make every effort to contribute back to the project.

It does not have to be a huge change. Reviewing documentation or adding missing tests is always great and appreciated by the project's maintainers.

Some projects - especially the ones under heavy development and massive changes - are easier to contribute to. One example of this is the great jsonapi-resources gem, where I helped rename certain methods with deprecation warnings. It took a while to submit that pull request, but I felt so much better using it, as that project is the foundation of our API layer.

I am sure you are using open source software one way or another. Consider this rule and apply it yourself.

Tuesday, June 30, 2015

Engineering Core Values

I worked for several startups over the years, but none of them had core values. Only well-established companies have core values - or so the myth goes - and even there, they might be only a cute decoration on the wall, nothing more. Nobody knows about them, nobody lives by them.

At Hireology, we know our company's core values by heart. Every single leadership team meeting on Monday starts out with reciting our company's core values. Here they are:

  1. Pathological Optimism
  2. Create Wow Moments
  3. No A$$holes
  4. Eager to Improve
  5. Own the Result

A company's core values might not describe the Engineering Team's core values when it comes to writing software. I felt our team needed something more specific to guide our decisions. Here is the letter I wrote about a year ago, when I announced our Engineering Core Values.

Team,

The company core values define the culture of our company, but they do not describe our engineering values.

The goal of the Engineering Core Values is to keep our team focused on what makes a product great from our perspective. When you write or review code, when you evaluate a change, try to consider these core values. If we follow these three simple guidelines, I am confident our application will be an outstanding one, and we will succeed.

Here they are, our Engineering Core Values:

  1. Performance is paramount
  2. We collect data
  3. Trusted code
1. Performance is paramount
How did Facebook become the leader of social networking sites? The answer is simple: speed. While MySpace got more and more popular, it couldn't handle its traffic. Users became increasingly frustrated by seeing the "fail whale". In the early days of Facebook, Mark Zuckerberg did not allow a code change to go into production if the request processing time took longer than 2 seconds (from the book The Facebook Effect). I'd like to follow that example! We should be looking at our application monitoring software and analyzing what we see there. If a page takes longer than 2 seconds to process, we have to work on it during the next Engineering Monday.

2. We collect data
We believe in the value of capturing data. An event never happened if we have no data about it. As our business grows, data will be increasingly important to us. Capturing all the changes would be overkill, but our data analytics engine will collect more and more data as we grow it. Our Rails app's database will always be an online transaction processing (OLTP) database; it will never store historical data for analytical purposes. The data analytics engine will do that.

3. Trusted code
When we took over the app from contractors, the application had no tests at all. Zero! Look how far we have come!! Today we have more than 1600 specs and 89 automated scenarios! Whenever you check in code, make sure the code is something you trust. What does trusted code mean? You feel confident about changing a routine you wrote two weeks ago. Specs and acceptance tests surround your code, so you know your change will not have unwanted and unexpected ripple effects. You trust that code, knowing your change will not break two other things later in QA or in Production.

Thank you, and please keep these Engineering Core Values in mind.

Attila

I printed our Engineering Core Values announcement and put it on the wall outside of my office, where all our employees can see it. We need to live by them; they can't be just an ornament on that wall.

Friday, May 1, 2015

(Software Engineering) Meeting Best Practices

The TL;DR Version:

Software engineering teams should have two types of engineering meetings:
  1. A forward-looking one that explores new technologies
  2. A self-checking one that discusses current issues and challenges

Hold each type biweekly, alternating so that one of them takes place every single week. Use active listening techniques to encourage equal and engaged participation. Have them in the morning; afternoons should be reserved for writing code and getting work done.

(Software Engineering) Meeting Best Practices

It was a Monday in February of 2010, around 11 am, when I received a ping in Campfire to discuss where we would go to have lunch and hold our weekly Tech Talk meeting. We tried to combine lunch with our engineering conversations on Mondays, as that was the day when everybody was in the office. We decided to go to a restaurant, which did not work for us very well. We just couldn't have an engaging conversation when someone was fighting with the fries and another person heard only half of what the speaker was saying, thanks to the loud music at the restaurant.

A year later, when I worked for another company, we had no technical conversations at all. I initiated brown bag lunches combined with watching technical talks, but that was pretty much it. We never really had a recurring event to discuss code or best practices.

I was the first engineer at my current employer. We could have started whatever made the most sense for our team, however, we did not really need a technical meeting until our team grew. We could just stay longer on the call after our morning standup and discuss topics right there. We did not need to schedule and break for another meeting.
In retrospect, I waited too long to start any kind of engineering meetings. It was one of our senior engineers who started Lightning Talks, and that meeting turned into our "forward-looking" meeting, where we explore tools, languages, and frameworks.

At the beginning of this year, I also started a "self-checking" architecture meeting. Engineers can (and are encouraged to) sign up with topics they want to discuss. We usually have one larger topic that someone presents, and one or two minor discussions around smaller subjects if we can fit them into one hour. Our team is remote, so the presenter shares his/her screen during the Zoom session to show the slides.

We set an order of the participants at the beginning of the meeting, and we go around the room following that order. This way everybody has a chance to speak up, ask questions, and voice an opinion. If someone does not have questions or comments, that person still has to state that he/she has nothing to add. I found this technique works really well for us, as the discussion is not hijacked by opinionated and very vocal team members.

We have our "self-checking" meeting on Tuesday one week, and our "forward-looking" meeting on Thursday the next week. This way there is a large enough gap between these two meeting.

One day my schedule was scattered with meetings with 30-60 minute breaks in between them. I had chunks of 20 to 45 minutes to get any work done. That works for sending an email or killing small chores, but it's just not enough time to get in the zone, understand a problem, and solve it. I need about 3-4 hours of uninterrupted working time to get "wired-in" and be effective.
Therefore, I asked our product and leadership team to support my idea: let's condense all meetings for engineers in the morning, and leave their afternoons meeting-free. This way the engineers will get more done, and the business will benefit from it.

I hope you will find these techniques helpful in making your engineering team more effective.

Tuesday, April 14, 2015

The Long and Winding Road to US Citizenship

Today I became a US citizen.

A naturalized one. Which means I am eligible to vote, run for office, serve on a jury. However, I'll never be a US President. But hey, I could be the Governor of California.

It was a cold, snowy day in early January of 1998, when I set foot on US soil for the first time. I landed in Minneapolis, where my brother lived at that time. The previous week I was a university student with an easy-going outlook on the future, the next week I was an exchange student from Hungary, working at a greenhouse in the north suburbs of Minneapolis.

I traveled a lot that year, thinking I might never be back in this country. I flew back home in March 1999 and continued my studies at the university, which I had paused for a year to improve my English and to see the world.

However, I visited Seattle and Vancouver, BC in the fall of 1999, just six months after I left the US. I remember landing in Seattle, standing in the middle of the airport terminal thinking "I am home".

I decided to pursue a Ph.D. degree after graduation. In retrospect, I was just buying time, trying to find a way to come back and live in the US.

My brother arranged a job interview with his former manager, who had a web design shop in Minneapolis. This small business owner thought that if I was half as smart as my brother, he was going to get the better end of the bargain. I got hired on the spot, but I had to get an exchange student visa to make my employment legal. I found one, and in May 2001 I moved to the United States for good.

The first year went by fairly fast, but my exchange student visa had an expiration date. I switched to a work visa which allowed me to stay and work in the country for up to 6 years.

I went from one company to the other, going through the traveling journeyman phase of my professional career. We bought a house in May 2005, and three weeks after moving in I received a phone call from a headhunter who was trying to find software engineers for a large, Fortune 500 company in the Cleveland, Ohio area. I decided to go through the hiring process, and the next thing I knew, we were selling our house a mere 3 months after purchasing it and moving to Ohio.

I was put on the fast track with my permanent residency application, which was sponsored by my employer. I received my green card after 18 months. I was happy when I opened the mailbox and found the letter from USCIS notifying me that my status had been adjusted to permanent resident.

I had to wait 5 years before I became eligible for US citizenship. In fact, I could have become one in 2012, but moving from Cleveland to Chicago was a big enough challenge for us at that time.

Last year, when we came back from Europe and entered the country, we had to go through US immigration. Our children are US citizens and we had green cards, but we still had to wait in line with "the visitors" to enter the country we called home. That was the moment we decided to do something about it. We filed our paperwork, prepared for our civics test, went through the interview, and today we recited the oath to become part of this Nation.

Sunday, March 8, 2015

Education

On the days when I go to the office, I have about a 25-minute walk from the train station. I had listened to music for a long time, but a few weeks ago I switched over to podcasts. I have heard great things about "This Developer's Life", and after listening to the first couple of episodes, I was hooked. About a week ago I listened to the episode on "Education". I have two children, so this topic is a very important one for me. I care a great deal about how my children are educated, what they like to learn, and what they are interested in.

The podcast discusses the question: "do you need formal education to be a good software engineer?" I am not going to give away the conclusion here; please listen to that episode if you have time. However, as I was listening to it, my mind started cruising.

I worked for one of the largest insurance companies in the US as a software engineer. I had a coworker there who was roughly 15 years older than me. He was burnt out, the 9-5 kinda' guy. We were expecting our first child at that time and I received one of life's big lessons from him. He said: "Attila, when your child goes to high school, you should discourage them from learning software engineering. All those jobs will be outsourced, they will be better off being a plumber, than a software engineer." I don't blame him for saying this. Indian contractors were imported to do QA and other tasks for us. They worked from dawn 'till dusk without a break, causing resentment among my fellow software engineers.

But I disagreed with him. I will do everything I possibly can to encourage my children to be software engineers. My dream is that they will love data and math. "Data is the new oil," said the European Consumer Commissioner, according to the book Predictive Analytics. This profession has a bright future, and the possibilities in their lives are endless. I read this somewhere: "No humans will be needed to collect tolls on the highway, but software engineers will have to write code to keep the system running."


Look at the job market after the great recession at the end of 2008. Corporations' profits are through the roof and Wall Street is flying high, while the number of active workers remained at or close to what it was at the height of the downturn. Companies achieved this with an increased level of automation. They had to tighten their belts, but money was pumped into automation. Who did they need to drive that growth? Software engineers. And now, when the market is doing better, corporations have learned to live lean, being very efficient with fewer people through automation.

Hardware is cheap and getting cheaper. There is only one component needed to fuel growth based on automation: software engineers. This is a great time to be one of them, so tell your children to start coding!

Thursday, January 15, 2015

The Case For and Against Cucumber

The TL;DR version

Cucumber has 3 benefits:

  1. Feature Discovery
  2. Automated Acceptance Testing
  3. (Executable) Documentation
In order to use Cucumber successfully within your organization, you need to take advantage of at least 2 of these benefits.

The Case For and Against Cucumber

Last week I gave a talk on Cucumber at CodeMash. I was glad to see the roughly 40 people who came to hear me, despite my talk being scheduled as one of the last sessions of the conference.

I ended my talk with this very personal story. I had worked in the Microsoft .NET space for 8 years, but I wanted to do something else. I was fascinated by the Ruby community, the innovation, the sharing I had seen among its members. I lived in Cleveland, OH, and there were only a handful of companies working with Ruby at that time.

My ticket to the Ruby world was my familiarity with Cucumber. My good friend - Joe Fiorini - asked me if I'd be interested in joining their company as a QA Engineer, helping them automate QA with Cucumber. I was eager to say yes and joined them shortly after.

I wrote the first couple of features and showed them how to write Gherkin. Our test suite happily grew during the first few months of my employment. However, as more and more engineers joined, the case against Cucumber grew. Some of the engineers said they were not against acceptance testing, but that those acceptance tests should be written in RSpec, not in Cucumber. To them, Cucumber seemed an unnecessary extra layer they did not need.

I felt sad and disappointed. Why were my fellow engineers not seeing the value of Cucumber? What did I do wrong? Should I have spent more time explaining the value of executable documentation? I felt helpless. I asked Jeff "Cheezy" Morgan - who knows a lot more about the value and application of Cucumber at various organizations - to have breakfast with me and one of the engineers.

We met with Cheezy a few weeks later. I told him: "Cheezy, I think Cucumber is a fantastic tool, it expresses business logic like nothing else. Our company should use it. Please, be the judge here, what are we doing wrong?" Cheezy had one question: "Who is reading your Gherkin?" I said: "Who? It's us, the engineers, and maybe our QA folks." He said: "You should not use Cucumber, you would be better off with just RSpec. Cucumber is a tool for discovering requirements." "Huh?!"

I went back to work feeling a bit disappointed. I used Cucumber for acceptance testing, I did not want to hear about any other tools to do that.

It took me a few months to realize that Cheezy was right. I blindly used Cucumber for its expressiveness, and not for its value as a feature discovery tool.

Fast forward a few years to today, and I wonder why Cucumber and Gherkin are useful to us at Hireology. The answer is clear now: the entire Product, QA, and Engineering team values and leverages Cucumber for feature discovery. Product will try writing a couple of scenarios when they brainstorm on a new feature. Those scenarios will be fine-tuned and extended with new ones during our 3 Amigos Meeting (a meeting to flesh out feature requirements with Product, QA, and Engineering). We just happen to automate those specifications during the development process.

I love how, with the help of Cucumber and Gherkin, we start thinking about edge cases well before development begins. What if the external system is not responding? Where will the user be redirected after a successful form submission? The benefit of doing this kind of planning is a more accurate estimation. Estimating the engineering effort of a feature is hard, but if you know what you need to build, then at least you can take a decent stab at it; it won't be a complete swag.

We successfully use Cucumber for (1.) feature discovery and for (2.) automated acceptance testing. Now on to its third benefit: documentation.

Our Cucumber (Gherkin) scenarios live together with our code base. Reading them is still hard, and they are not accessible to everyone at our company. I'd like to make all our features accessible to everybody, from our CEO to all our sales folks. "How does feature X work?" "I don't know, go through the feature document by clicking on this hyperlink."

Have you tried reading up on RSpec's mocking and stubbing functionality? In case you have, I am sure you have visited the Relish app. Take a look at the page that describes a basic piece of RSpec mocking functionality. Does it look familiar? Well, there is Given/When/Then text in there. The most important question: is it useful? Can you learn the tool just by reading through it? That text is coming from RSpec's own source code. The RSpec developers packaged up their Cucumber scenarios and presented them in an elegant, nicely formatted, searchable app. The Relish app is the prime example of executable documentation.

Publishing our more than 200 scenarios is my next goal. We use Cucumber for feature discovery and automated acceptance testing; we should use it for documentation as well.

Thursday, January 1, 2015

Fast TDD in Clojure with Fireplace.vim

I've been looking at Clojure for the past 18 months. I prefer learning and practicing by writing a failing test first, but unfortunately, the feedback loop of TDD in Clojure is slow. How slow? Well, it's between 4 and 10 seconds, depending on the size of the project. I am still new to the language: I want to tweak my code a tiny bit and see if that change broke my tests. I am used to Ruby's almost immediate test execution, and the long wait for running the tests in Clojure makes it less effective.

In Ruby Land, I am used to running a large number of tests (958 examples in our application) in about 3.8 seconds. In a brand new Clojure project, it takes about 4 seconds to run the only failing test. This is no surprise: Clojure code has to be compiled to Java byte code, and the compilation takes time.

I bumped into Ben Orenstein's great "Tips for Clojure Beginners" blog post a few weeks ago. It's a must read if you're new to Clojure. Vim is my preferred editor, and he wrote about a vim plugin by Tim Pope, called fireplace.vim. I remember looking at it briefly, but for some reason, I did not give it a try at that time.

A few days later I hacked on some code in Clojure again, and it reached a point where I threw my hands in the air and declared: "enough is enough!" I caught myself checking out Twitter and other websites as I had to wait about 10 seconds to run the tests after a simple change. I went through this blog post, where the author talks about using fireplace.vim for test execution. I gave it a try, and there is no turning back!

I installed fireplace.vim with pathogen. I opened another tab in my terminal, navigated to the root directory of my Clojure project, fired up lein repl there, and noted the port number.

In this case, 53844 was the port number for the nREPL server. I connected to that from my vim session in the other terminal tab by typing the vim command :Connect.

Fireplace gently asked which nREPL server I wanted to connect to. I chose (the obvious) option one; it used localhost, and I had to provide the port number from the other tab, which was 53844.

I submitted this option, and I was connected to the nREPL in the other tab. Fireplace lets me run the tests in the currently selected pane by using the :RunTests command. I did that, and much to my surprise the tests executed almost instantaneously. I did it once more (or maybe 5 times) just for the heck of it! This is what I found in the quickfix list:

I made the test pass, the output was terse. I guess there isn't much to say when all my expectations are passing. I included an animated gif here to show you what it feels like running the tests. Super sweet, don't you think!?

When I change a Clojure file in a different buffer (other than the buffer where my tests are), I need to Require! those files again. I get around this by writing all my functions right above the tests in the same buffer, and moving them to their final place when I feel confident about them.

There is an easier way to connect to a REPL by using fireplace.vim's :Piggieback! command. Please read the docs of this great vim plugin, that's how you can learn all the other features (like macroexpand) I have not described in this blog post.

My personal shortcut to run the tests is ,r. Setting it up with vim was easy:
:nmap ,r :RunTests<CR>. With this change, I had the same joy in Clojure as I've had with Ruby and RSpec for years. Bye-bye checking out while I am test driving my code in Clojure!

Update on 01/31/2015

I've been using this keybinding with fireplace in vim recently: :nmap ,r :Require! <bar> Eval (clojure.test/run-tests)<CR>. It picks up any changes I make in the source and the test files as I require packages before every test run. I'd recommend giving this a try.