Wednesday, May 31, 2017

Bit Shifting for a Shard ID in Ruby

As our database grew, we had to take a serious look at how we could split it up by clients, as most of them wanted their data separated from the others anyway. A few months ago I found a great article from Pinterest that describes how they sharded their MySQL database.

A sharded entity needs a UUID to uniquely identify the record across all shards. Most programming languages can generate a UUID easily; what amazed me, however, was that Pinterest generated its own unique ids by encapsulating three distinct numbers in one. Wait, what??! Read that article, it's definitely worth your time.

While Pinterest encapsulated three numbers into one, we only needed two: a client_id and an entity_id. Since our client_id would be a much smaller number than our entity_id, we wanted to reserve more bits for the latter.

It turns out Ruby has some friendly tools for dealing with binary operations. Let's look at them!

What is the binary representation of the integer number 123?

123-in-binary

Out of the 7 bits, you'll see that the 3rd one (from the right) is turned off and all the others are turned on, giving us 64 + 32 + 16 + 8 + 2 + 1 = 123. How can we get this binary representation in Ruby? It's super easy: just use the to_s(2) method.

pry(main)> 123.to_s(2)
=> "1111011"

This is the exact same string representation as the one in the image above, where the third bit is turned off and represented with a zero.

I'd like to keep the client_id on the left side and reserve bits for the other number on the right side. For the sake of simplicity, I will keep the numbers small. Let's add 5 bits to the right-hand side of these bits by using the bitwise left shift operator (<<).

pry(main)> (123 << 5).to_s(2)
=> "111101100000"

The original number, 123, is still represented on the left side, but 5 "0"s were added to the right-hand side. You get the numeric representation of this by leaving out the to_s(2) call at the end:

pry(main)> 123 << 5
=> 3936

This number can be converted back to binary:

pry(main)> 3936.to_s(2)
=> "111101100000"

On the left side I have the binary representation of 123, but how about those bits on the right side? What are those representing? Right now, those bits are all turned off, they will give you 0 ("00000".to_i(2) => 0). How can I store the number 3 on the right side? The bits should look like this:

3-on-right-side

The bitwise OR operator ("|") will turn the two rightmost bits on:

pry(main)> (123 << 5 | 3).to_s(2)
=> "111101100011"

Again, leaving out the to_s(2) will provide the number representation:

pry(main)> (123 << 5 | 3)
=> 3939

The storing part will work, but how can we get our two numbers back from this one combined number? Well, we have to split the bits and convert the binary representation to an integer.

Five bits were used on the right side to store our second number. We need to chop those off to get the number stored on the left side. The bitwise right shift (>>) will do just that:

pry(main)> (3939 >> 5).to_s(2)
=> "1111011"

The string "1111011" is our original 123 in a binary string format. We can convert that to an integer by using the to_i(2) String method:

pry(main)> (3939 >> 5).to_s(2).to_i(2)
=> 123

I right shifted the original number, 3939, converted it to a binary string and converted that to an Integer.

There is a more efficient way to do this: using a bitwise AND ("&") with the max value the bits can represent, as in (3939 >> 5) & 0b1111111 => 123. That's what the Pinterest article used, but I found the Ruby conversion methods a bit friendlier to those of us who are not dealing with binary data on a daily basis.

We have the number on the left side, but what about the number on the right side? When we convert the number representation (3939) to a binary string, we know that the five characters on the right side represent the bits of our other number. The String#last(x) method (added by ActiveSupport; plain Ruby strings don't have it) will do just that:

pry(main)> (3939 >> 0).to_s(2).last(5)
=> "00011"

Converting this binary String to Integer should be similar to what we've done before:

pry(main)> (3939 >> 0).to_s(2).last(5).to_i(2)
=> 3

Using the bitwise AND with the max number the bits can store will do this conversion in one step: (3939 >> 0) & 0b11111 => 3. As a side note, the binary mask can be written as a hexadecimal value: (3939 >> 0) & 0x1F => 3. This is a lot shorter than a series of ones and zeros.

There is a limit to how large these numbers can be, as you have a limited number of bits to store them. The max number can be determined by flipping all the available bits on. For a 7-bit number it's 64 + 32 + 16 + 8 + 4 + 2 + 1 = 127, or in general 2**x - 1, where x is the number of bits. In our case it is 2**7 - 1 = 127.

We ended up using a 64-bit Integer for our shard_id, which is a BIGINT in MySQL. We store the client_id in 22 bits, giving us a maximum of 2**22 - 1 = 4_194_303, and the entity_id in 40 bits, with a 2**40 - 1 = 1_099_511_627_775 max value. The remaining two bits are worth gold: we can use them to expand one of the numbers or to store a third (albeit small) number.
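The whole scheme can be sketched in a few lines of Ruby. This is a minimal illustration of the 22/40-bit split described above; the method names are mine, not from our codebase:

```ruby
ENTITY_BITS = 40
ENTITY_MASK = 2**ENTITY_BITS - 1 # the 40 rightmost bits turned on

# Pack the client_id into the high bits, the entity_id into the low 40 bits.
def compose_shard_id(client_id, entity_id)
  (client_id << ENTITY_BITS) | entity_id
end

# Reverse it: shift off the entity bits to get the client_id,
# mask off the client bits to get the entity_id.
def decompose_shard_id(shard_id)
  [shard_id >> ENTITY_BITS, shard_id & ENTITY_MASK]
end

shard_id = compose_shard_id(123, 3)
decompose_shard_id(shard_id) # => [123, 3]
```

The same left shift, bitwise OR, right shift and bitwise AND steps from the pry session above, just wrapped in two functions.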

Tuesday, April 25, 2017

Fireside Chat

When I saw a retweet from Jason Fried about available tickets to a fireside chat with him at Basecamp, I jumped on it. I figured if I could kill two birds with one stone (meeting him in person and seeing their offices), it was a no-brainer. Company culture was the topic of the conversation, led by Aimee Groth, who visited Chicago to publicize her new book about Zappos' culture, Kingdom of Happiness.

Basecamp HQ

Basecamp HQ is as cool as you think it is. Very few desks, a couple of meeting rooms. With its large windows and limited furnishing, it reminded me more of a train terminal than a real office. The office is centered around an auditorium, which is an effective PR and educational platform for the company.

I enjoyed looking at the walls covered with postcards from employees all over the world, but I especially liked David's H-1B approval notice from the USCIS from 2005. I laughed out loud when I noticed it, as I had to go through similar hassle myself, but mine is safely guarded with my documents at home.

Basecamp works on a six-week schedule. Whatever the team can get done in six weeks, they will deliver. The scope can change, but the six-week schedule is hard set. This timeframe helps them deliver functionality, and since the company works remotely, it has worked out well for them.

They don't have managers who only manage people or projects; the teams are led by team leads. These team leads are developers as well, shipping code on a daily basis. Jason and his team realized that managers who do not code grow away from the work. According to him, "professional (full time) managers forget to do the work".
At one point they tried rotating team leads, but that did not work out, as continuity was lost. I could see that: "I see this problem, but I won't deal with it; I'll leave it for the next person who takes over." Basecamp is looking for people who are self-managed; however, Jason emphasized multiple times that "people like to be led". It's important to "rally the folks by clear goals and purpose".

Jason also talked about Jeff Bezos' investment in the company, which bought a small ownership stake in Basecamp. They did not need the money to survive; David and Jason felt that having a person like Mr. Bezos involved was mutually beneficial to both parties. "Who would not like to have Jeff Bezos as an advisor in his or her company?!" They have not talked to Jeff Bezos for a while, but if they wanted to, they could just reach out to his secretary, set up a meeting, and Jeff would fly to Chicago for a meeting or dinner with them.

The best advice from Bezos - and according to Jason, this was worth the entire dividend they have paid for his investment - was: "invest in the things in your business, that won't change". Don't chase the shiny new things, stick to what will not change. For them, it's Basecamp. The company had 4-5 products that they sold a couple of years ago to focus on their main product, which is Basecamp.

Jason went into detail about why the other products (like Highrise, Backpack, and Campfire) were divested. Maintaining the web, Android, and iOS versions of their products resulted in 15 different projects. That led to insufficient focus on each platform for each product with the employees they had at the time. They could, of course, have hired more developers, but they intentionally wanted to stay small. They did not want to get richer or be the next billionaires; they were happy with what they had. This sounds strange, almost naive, in the era of bloated startups bleeding money while chasing to be the next Facebook.

I enjoyed the Q&A at the very end. Some interesting questions came up about the startup community in Chicago, about VCs in general. Jason kindly offered to stay as long as everybody's questions were answered. Really a courteous offer, considering it was after 8 pm on a Friday night.

Oh, yes, and one more thing: Basecamp has 130,000 paying customers. It's a remarkable achievement by a company that has never taken VC money, was profitable from the get-go, and created an exciting app in the "not-so-exciting" domain of project management.

Tuesday, March 28, 2017

Containers

As I was exploring how to make Golang even faster on AWS Lambda, I found a project that promised sub-millisecond execution time compared to my (already pretty good) ~60 milliseconds. It used a Python executor that ran the Go code in-process, in contrast to my attempt, where I had to spawn a new process and execute the lambda code there. Very clever; no wonder that solution did not pay the 60-millisecond penalty for running the code. However, in order to build the sample code for this AWS Lambda, I had to use Docker.

I had heard about Docker years ago and understood what it's used for at a very high level; however, I had never really given it a serious try. I figured it was time. Boy, was I in for a pleasant surprise!

The AWS Lambda Go project from Eawsy used Docker to containerize the build environment on my laptop. What does that mean? Imagine having a build server running on your computer in seconds, where the versions of the Go compiler and the Python environment are set by the author of the Dockerfile. I'd use this little build engine that takes in my code, runs its magic, and a zip file comes out that I can run on Lambda. What?!

I wrote all these different tutorials about running MRI Ruby on AWS Lambda or interacting with a Postgres DB with Clojure, and I had to describe all the prerequisites in plain text: "you have to have Postgres running, and Clojure, and MRI Ruby". I provided all the different Makefile scripts to follow the examples. However, with Docker, in the future I could just provide a Dockerfile that sets up the environment.

I believe containers are big and will be even bigger very soon.

I see more and more applications where the code describes the behavior and the container descriptor describes the environment.


They live side by side, clearly stating what virtual components the software needs to execute. Engineers can run the software with those containers locally, and the software can be deployed to the cloud with those images pre-built, with tight control over its execution context.

There are many resources to learn Docker. I started with reading the Docker in Action book and went further by reading the Docker in Practice book.

I created a Docker templates repository, where I collected ideas for different recipes. Do I need a Ruby worker with Redis and Postgres backend? I'll just run docker-compose up with a docker-compose.yml file and I have an environment where everything from the OS to the versions of Redis and Postgres is predefined. If it works on my machine, it will work on yours, too.
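As an illustration, a minimal compose file for that Ruby worker setup might look something like this (a hypothetical sketch, not the exact file from my repository; image tags and the worker.rb entry point are assumptions):

```yaml
version: '2'
services:
  worker:
    image: ruby:2.4          # pin the Ruby version for everyone
    command: ruby worker.rb  # hypothetical entry point
    working_dir: /app
    volumes:
      - .:/app               # mount the project into the container
    depends_on:
      - redis
      - postgres
  redis:
    image: redis:3.2         # pin the Redis version
  postgres:
    image: postgres:9.6      # pin the Postgres version
    environment:
      POSTGRES_PASSWORD: secret
```

Everything the worker needs is declared next to the code, which is exactly the point: the environment is part of the repository.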

There are many things I like about Docker compared to Vagrant or other virtual machine solutions. The biggest one for me is how few resources Docker containers need. A Vagrant image would reserve 2 of your 4 cores and 8 GB of memory, while a Docker container only takes from the host as much as it needs. If that's 32 MB, that's it; if it's 1 GB, it will take that much.

Docker is the future, and you will see more and more code repos with a Dockerfile in it.

Wednesday, February 8, 2017

Golang

The first time I heard about Golang was a few years back, when the great guys at Brad's Deals, our next-door office neighbor, organized and hosted the local Go meetup. Then the io.js and Node.js war broke out, and TJ Holowaychuk shifted from Node.js to Golang, announcing the move in an open letter to the community.
I did not think much of the language, as its reputation was far from the beauty of a real functional language.

Fast forward a couple of years and I was giving Ruby a serious try on AWS Lambda. Ruby works there; however, it needs plenty of memory and about 3000 ms (3 seconds) to do anything. We have to invoke some of these functions millions of times a month, and when we calculated the cost, the bill got fairly large quickly.

I created a simple AWS Lambda function with Ruby, just to print the words "Hello, World!", with 128 MB of memory. It took 5339 ms to execute.

Ruby Hello World on AWS Lambda

Then one night I wrote a tiny Go program:

package main

import "fmt"

func main() {
  fmt.Println("Hello, World!")
}

Since I am working on OS X, I cross-compiled it to Linux with the command GOOS=linux GOARCH=amd64 go build github.com/adomokos/hello, packaged it up with a Node.js executor, and ran it. I couldn't believe my eyes: it took only 68 ms to get the string "Hello, World!" back. 68 ms! And it was on a 128 MB memory instance. It was beautiful!

Go Hello World on AWS Lambda

Ruby would need four times the memory and it would still execute ~10 times slower than Go. That was the moment when I got hooked.

Go is a simple language. I am not saying it's easy to learn; that's subjective, as it depends on your background and experience. And it's far from the beauty of Haskell or Clojure. However, the team I am working with would have no trouble switching between Go and Ruby multiple times a day.

What kind of a language today does not have map or reduce functions?! Especially when functions are first-class citizens in the language. It turns out, I can write my own map function if I need to:

package collections

import (
  "github.com/stretchr/testify/assert"
  "strconv"
  "testing"
)

func fmap(f func(int) string, numbers []int) []string {
  items := make([]string, len(numbers))

  for i, item := range numbers {
    items[i] = f(item)
  }

  return items
}

func TestFMap(t *testing.T) {
  numbers := []int{1, 2, 3}
  result := fmap(func(item int) string { return strconv.Itoa(item) }, numbers)
  assert.Equal(t, []string{"1", "2", "3"}, result)
}

Writing map with recursion would be more elegant, but it's not as performant as using a slice with defined length that does not have to grow during the operation.

History

Go was created by some very smart people at Google, and I wanted to understand their decision to keep the language this pure.
Google has a large amount of code in C and C++; however, those languages are far from modern concepts like parallel execution and web programming, to name a few. They were created in the '70s and '80s, well before the era of multi-core processors and the Internet. Compiling a massive C++ codebase can easily take hours, and while they were waiting for compilation, the idea of a fast-compiling, simple, modern language was born. Go does not aim to be shiny and nice; no, its designers kept it:

  • to be simple and easy to learn
  • to compile fast
  • to run fast
  • to make parallel processing easy

Google hires a massive number of fresh CS graduates each year with some C++ and Java programming experience; these engineers can feel right at home with Go, where the syntax and concepts are similar to those languages.

Tooling

Go comes with many built-in tools, like code formatting and benchmarking, to name a few. In fact, I set up vim-go, which leverages many of those tools for me. I can run and test code with only a couple of keystrokes.

Let's see how performant the function I wrote above is. But before I do that, I'll introduce another version where the slice's length is not predetermined at the beginning of the operation; this way it has to auto-scale internally.

func fmapAutoScale(f func(int) string, numbers []int) []string {
  // Initialize a nil slice; it will auto-scale as we append to it
  var items []string

  for _, item := range numbers {
    items = append(items, f(item))
  }

  return items
}

The function does the same as fmap; a similar test should verify the logic.

I added two benchmark tests to cover these functions:

// Run benchmark with this command
// go test -v fmap_test.go -run="none" -benchtime="3s" -bench="BenchmarkFmap"
// -benchmem
func BenchmarkFmap(b *testing.B) {
  b.ResetTimer()

  numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
  for i := 0; i < b.N; i++ {
    fmap(func(item int) string { return strconv.Itoa(item)  }, numbers)
  }
}

func BenchmarkFmapAutoScale(b *testing.B) {
  b.ResetTimer()

  numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
  for i := 0; i < b.N; i++ {
    fmapAutoScale(func(item int) string { return strconv.Itoa(item)  },
    numbers)
  }
}

When I ran the benchmark tests, this is the result I received:

 % go test -v fmap_test.go -run="none" -benchtime="3s" -bench="BenchmarkFmap" -benchmem
 BenchmarkFmap-4            10000000   485 ns/op   172 B/op   11 allocs/op
 BenchmarkFmapAutoScale-4    5000000   851 ns/op   508 B/op   15 allocs/op
 PASS
 ok   command-line-arguments  10.476s

The first function, where I set the slice to the exact size, is more performant than the second one, where I just initialize the slice and let it auto-scale. The ns/op column displays the execution time per operation in nanoseconds. The B/op column shows the bytes allocated per operation. The last column shows how many memory allocations occur per operation. The difference here is small, but you can see how this kind of measurement becomes very useful as you try writing performant code.

Popularity

Go is getting popular. In fact, very popular. It was TIOBE's "Language of the Year", gaining 2.16% in one year. I am sure you'll be seeing more and more articles about Go. Check it out if you haven't done so yet, as the chance of finding a project or job that uses Go is increasing every day.