Bloggity Blog

We love our work so much, we want to tell the world about it.

At last week's Dayton Ruby meetup, Chris Chernesky spoke about how he uses Tmux for his everyday Rails development. It was a great talk, and ever since the meetup, Tmux has been all the buzz at the Littlelines office.

I've been using Tmux for a few years and it has become an essential tool in my workflow. Tmux's window and session management make it a no-brainer for those who live in the terminal. In this article, I'll explain a few tmux commands and tools that I use daily to help me work more effectively across Rails projects.

Setting up Tmux

Let's start off on the right foot. Since we're going to spend most of our day inside tmux, we might as well configure it to best suit our needs. Here are just a few tmux configurations that are essential for me.

# Remap the tmux prefix from Control-b to Control-a
unbind C-b
set -g prefix C-a

# Improve colors
set -g default-terminal 'screen-256color'

# Navigate around panes easily using vim-like keybindings
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R

# Remove administrative debris (session name, hostname, time) in status bar
set -g status-left ''
set -g status-right ''

The biggest configuration change I made is remapping the default Tmux prefix from Control-b to Control-a. Since I've also remapped my caps lock key to the control key, this is much more convenient for me.

Now let's take a look at a few aliases I set up to help navigate tmux on the command line. With these three aliases we can easily list the running tmux sessions, attach to an existing session, or kill a session.

alias tml="tmux list-sessions"
alias tma="tmux -2 attach -t"
alias tmk="tmux kill-session -t"

Killer Rails Project Management with Tmuxinator

Working on a typical Rails project may involve several commands and long-running processes. For example, we may need to run the web server (e.g. bundle exec rails server) or run Sidekiq for our background workers. Tmux is great for this scenario because we can harness the power of its windows and panes to organize these processes on a single screen.

To make our lives even easier, Tmuxinator makes managing Tmux sessions a snap with simple YAML files. The best part: it's a Ruby gem, so all we have to do is run gem install tmuxinator and we're ready to go. Here is an example of a configuration file I set up for one of my Rails projects.

name: website
root: ~/code/website
windows:
  - editor:
      layout: main-vertical
      panes:
        - emacs
        - guard
  - tests: bundle exec rake test
  - console: bundle exec rails c
  - workers: bundle exec foreman start
  - logs: tail -f log/development.log
  - server: mosh deployer@production-server.com

With this configuration, I just run mux website and Tmuxinator will cd into the project's root directory, create a new (or join an existing) Tmux session, create six windows, kick off the emacs and guard panes, run the tests, and start the background workers. Just one command and we're ready to go to work.

Switch Projects and Windows with Ease

Now that we have Tmux set up and Tmuxinator helping us with session management, we can start moving between our various projects. We can do this in a couple of ways: from the command line or within Tmux sessions.

From the command line:

Getting around tmux sessions on the command line is pretty easy. Using the aliases we set up earlier, we can use tml to list the current sessions and tma to join an existing one. We can also use the mux [project] command that Tmuxinator provides to either create a new session or attach to it if it's already running.

Tmux on the command line

Within Tmux sessions:

Once we're in a tmux session, getting around windows is easy too. I usually hit Control-a [window number] to jump to a window or Control-a w to pop up a list of windows to choose from. But what if I need to switch to another project within Tmux?

Thankfully, we can hit Control-a s to pop up a list of running sessions to choose from. This is a great way to navigate all your projects without leaving Tmux.


Summary

What I really like about this setup is that not only can I navigate projects easily, but I can set up each project just once, saving the need to create windows and panes every time. And the kicker: I can hop into one project, make a change, and hop right back to where I left off with just a couple of keystrokes.

Do you have any Tmux tips you'd like to share? Please leave us a comment below.


Littlelines Roadtrip (Summer Edition)

Written by Matt Sears

The Littlelines crew will be taking the show on the road this week. We always like to take the opportunity to stretch our legs and get out of the city for a few days to meet folks in the industry. This time around, we're loading up the car and sticking to the great Midwest.

Steel City Ruby - Pittsburgh, PA

Steel City Ruby Logo

Our first stop will be Pittsburgh for the Steel City Ruby conference. Steel City is entering its third year, and the speaker lineup looks like it's going to be another great one. We once again had the pleasure of designing the conference T-shirts, and we think they look really great.

Weapons of Mass Creation - Cleveland, OH

WMCFest Logo

Next up, we're heading north to Cleveland for Weapons of Mass Creation. This is the premier art, design, and music event in the country. This is its fifth year running, and it has grown into a 2,000+ attendee event. Check out the amazing speaker lineup; we can't wait to listen to all the great talks!

If you see us, stop and say hi, we'd love to meet you! And hey, you never know - we might have goodies to hand out.


Welcome Dustin Armstrong

Written by Matt Sears

Dustin Image

It is my great pleasure to welcome Dustin Armstrong to our development team! Dustin brings his clean, responsive front-end coding skills to the team, with design talent to boot. He is a 2009 graduate of the School of Advertising Art and has been writing solid code for the past five years. Dustin's experience in design and development will help our clients' products be a glowing success. You can follow Dustin on Twitter and on GitHub.

Welcome aboard Dustin!


A Survival Guide for Legacy Rails Apps

Written by Matt Sears

Hello, my name is Matt and I love working with legacy Rails apps. Ruby on Rails is now over 10 years old, which means there are a lot of (old) Rails applications running out there. At Littlelines, we've worked on hundreds of Rails projects. Most of them we built from the ground up, but we often have the opportunity to work with legacy Rails apps, some as old as 2006! More often than not, we discover these apps include many common mistakes made back in the day, and it's our job to fix them. Over eight years of writing Rails apps, I've picked up a few tips and tricks that can help us get through the agony of legacy code.

1. Take Stock of the Situation

It is impossible to understand the present or prepare for the future unless we have some knowledge of the past. So first things first: let's assess where we stand in terms of test coverage and code quality. If we're lucky enough to take on a project that has tests, the first thing I do is install SimpleCov to measure how well the app is tested. Once I have the SimpleCov report, I take a screenshot and save it. Then we can use Code Climate to get an overall grade on quality and security. Code Climate will analyze the app and report on all the hotspots and security violations in the code. Finally, I write that grade down and take a screenshot as well.
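If it's helpful, here is a minimal sketch of the SimpleCov setup, assuming a Rails app with a Minitest suite; the 'rails' profile is SimpleCov's built-in Rails configuration and the file paths follow standard Rails conventions.

# Gemfile
group :test do
  gem 'simplecov', require: false
end

# test/test_helper.rb -- start SimpleCov before the application code loads
require 'simplecov'
SimpleCov.start 'rails'

ENV['RAILS_ENV'] ||= 'test'
require File.expand_path('../../config/environment', __FILE__)
require 'rails/test_help'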

Now that we have metrics to work with, we can do some Opportunistic Refactoring. In other words, always leave code behind in a better state than you found it. The fun part is seeing how far we can improve the code's test coverage and quality; we can challenge ourselves to take the grade from an F to an A.

2. Sharpen the Saw

If I had eight hours to chop down a tree, I'd spend six sharpening my axe. – Abraham Lincoln

As developers, we spend most of our day buried in our text editor. Luckily for us, most text editors allow us to customize and fine-tune them to work more effectively for us. Even better, some editors like Vim and Emacs allow you to create custom functions to help automate repetitive and complex tasks. These can be incredibly useful when working with legacy code. One of my favorite functions converts Ruby 1.8's old hash syntax to 1.9's new syntax.

Ruby Hash Converter
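To make the transformation concrete, here is the kind of before-and-after the function produces; the hash contents are made up purely for illustration.

# Ruby 1.8-style hash syntax (what the function looks for)
author = { :name => "Matt", :editor => "Emacs" }

# Ruby 1.9-style hash syntax (what it rewrites it to)
author = { name: "Matt", editor: "Emacs" }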

Some text editors like Vim and Emacs ship with built-in macro support. Macros allow us to record keystrokes and play them back, which makes them great tools for automating simpler tasks with less effort. The nice thing about macros is that we don't need to write a function; we just hit record and play it back.

Let's take a look at an example. Say we are upgrading a Rails application and we discover that it's using the old style of validating ActiveRecord fields. We can create an ad hoc macro to convert the first line of the validation code to the new validation syntax and play it back for the next three lines.

Emacs Macro and Playback
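For illustration, here is roughly the conversion that macro performs; the attribute names are hypothetical, but the target is the standard Rails 3+ validates syntax.

# Before: one old-style validation macro per field
validates_presence_of :first_name
validates_presence_of :last_name
validates_presence_of :email
validates_presence_of :phone

# After: the consolidated syntax the macro converts each line to
validates :first_name, presence: true
validates :last_name,  presence: true
validates :email,      presence: true
validates :phone,      presence: true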

3. Learn Something New

As soon as we stop trying new things, we stop learning. We can always be better Rubyists, so as a general rule I try to learn something new when starting a new project. It doesn't have to be anything huge. In fact, it's usually something small that fits with the project. It can be anything from replacing our view templates with a new template engine like Slim to something as small as using Ruby 2.0's new keyword arguments. Most likely our legacy app is using some old and unmaintained gems, which is a great opportunity to see what we can replace them with. Head on over to The Ruby Toolbox or Awesome Ruby to see the latest and greatest libraries available.
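As a tiny example of that last one, Ruby 2.0's keyword arguments let us swap an options hash for named parameters with defaults; the deploy method below is made up just to show the syntax.

# Ruby 2.0 keyword argument with a default value
def deploy(app, environment: "staging")
  "Deploying #{app} to #{environment}"
end

deploy("website")                            # => "Deploying website to staging"
deploy("website", environment: "production") # => "Deploying website to production"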

To give you an example, a few years ago I was tasked with upgrading an old Rails 1.2 app to the latest and greatest version. During the upgrade process, I discovered a lot of embedded SQL, and the project owner had complained about how slow searching had been on the site. So I thought it would be a good opportunity to learn more about how to do full-text searching in Rails, and that led me to discover the great Sunspot gem. With Sunspot, I was able to eliminate all the embedded SQL and make the search perform much faster at the same time.
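To give a rough idea of what that switch looks like, here is a minimal sketch using the sunspot_rails gem; the Post model and its fields are hypothetical, but the searchable block and fulltext query are Sunspot's standard API.

class Post < ActiveRecord::Base
  searchable do
    text :title, :body   # fields indexed for full-text search
  end
end

# Replaces the hand-written SQL with a Solr-backed full-text query
Post.search { fulltext "legacy rails" }.results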

4. Have a Plan

As a rule, for every new project I make a list of things I'd like to accomplish by the end of the project. It's usually a small list that contains some very high-level goals. In most cases, the goals coincide with making a better Rails app and making me a better developer in the end. For example, here is a list I made on my last project.

  1. Upgrade application to Rails 4 and Ruby 2.
  2. Raise Code Climate GPA to 4.0.
  3. Increase test code coverage by 20%.
  4. Watch RubyTapas episode on Null Object and apply it.
  5. Try out Byebug gem and see how it stacks up to Pry.
  6. Write a new Emacs Lisp function that converts erb to haml across multiple buffers.

5. Keep Pushing

As many of you may know, working with legacy projects can be boring and frustrating. But we can take steps to make it a little more fun and learn something new in the process. Even when the code is horrendous and we're cursing whoever wrote this pile of #@$%, we can still learn something new and challenge ourselves. And it isn't always new tricks or tools we learn; past mistakes teach us, too. If you commit to taking these steps, you'll improve your skills at a much faster rate, and you'll find yourself stepping out of your normal routine and applying new solutions that will ultimately make you a better developer.

How about you? If you have any tips for working on legacy Rails apps, please add a comment below.


Elixir vs Ruby Showdown - Phoenix vs Rails

Written by Chris McCord

Phoenix vs Rails

This is the second post in our Elixir vs Ruby Showdown series. In this latest installment, we're exploring the performance of the Phoenix and Rails web frameworks when put up against the same task. Before we get into code samples and benchmark results, let's answer a few common questions about these kinds of tests:

tl;dr: Phoenix showed 10.63x the throughput of Rails when performing the same task, with a fraction of the CPU load

FAQ

Isn't this apples to oranges?

No. These tests are a direct comparison of our favorite aspects of Ruby and Rails with Elixir and Phoenix. Elixir promises to provide the things we love most about Ruby: productivity, metaprogramming, elegant APIs, and DSLs, but much faster and with a battle-tested concurrency and distribution model. The goal of this post is to explore how Elixir can match or exceed our favorite aspects of Ruby without sacrificing elegant APIs or the productive nature of the web frameworks we use.

Are benchmarks meaningful?

Benchmarks are only as meaningful as the work you do upfront to make your results as reliable as possible for the programs being tested. Even then, benchmarks only provide a "good idea" of performance. Moral of the story: never trust benchmarks, always measure yourself.

What are we comparing?

Elixir Phoenix Framework

  • Phoenix 0.3.1
  • Cowboy webserver (single Elixir node)
  • Erlang 17.1

Ruby on Rails

  • Rails 4.0.4
  • Puma webserver (4 workers, 1 per CPU core)
  • MRI Ruby 2.1.0

We're measuring the throughput of an "equivalent" Phoenix and Rails app where specific tasks have been as isolated as possible to best compare features and performance. Here's what we are measuring:

  1. Match a request from the webserver and route it to a controller action, merging any named parameters from the route
  2. In the controller action, render a view based on the request Accept header, contained within a rendered parent layout
  3. Within the view, render a collection of partial views from data provided by the controller
  4. Views are rendered with a pure language templating engine (ERB, EEx)
  5. Return the response to the client

That's it. We're testing a standard route-matching and view-rendering stack that goes beyond a Hello World example. Both apps render a layout, view, and collection of partials to test real-world throughput of a general web framework task. No view caching was used, and request logging was disabled in both apps to prevent IO overhead. The wrk benchmarking tool was used for all tests, both against localhost and remotely against Heroku dynos to rule out wrk overhead on localhost. Enough talk, let's take a look at some code.

Routers

Phoenix

defmodule Benchmarker.Router do
  use Phoenix.Router
  alias Benchmarker.Controllers

  get "/:title", Controllers.Pages, :index, as: :page
end

Rails

Benchmarker::Application.routes.draw do
  root to: "pages#index"
  get "/:title", to: "pages#index", as: :page
end

Controllers

Phoenix (request parameters can be pattern-matched directly in the second argument)

defmodule Benchmarker.Controllers.Pages do
  use Phoenix.Controller

  def index(conn, %{"title" => title}) do
    render conn, "index", title: title, members: [
      %{name: "Chris McCord"},
      %{name: "Matt Sears"},
      %{name: "David Stump"},
      %{name: "Ricardo Thompson"}
    ]
  end
end

Rails

class PagesController < ApplicationController

  def index
    @title = params[:title]
    @members = [
      {name: "Chris McCord"},
      {name: "Matt Sears"},
      {name: "David Stump"},
      {name: "Ricardo Thompson"}
    ]
    render "index"
  end
end

Views

Phoenix (EEx)

...
    <h4>Team Members</h4>
    <ul>
      <%= for member <- @members do %>
        <li>
          <%= render "bio.html", member: member %>
        </li>
      <% end %>
    </ul>
...
<b>Name:</b> <%= @member.name %>

Rails (ERB)

...
    <h4>Team Members</h4>
    <ul>
      <% for member in @members do %>
        <li>
          <%= render partial: "bio.html", locals: {member: member} %>
        </li>
      <% end %>
    </ul>
...
<b>Name:</b> <%= member[:name] %>

Localhost Results

Phoenix showed 10.63x the throughput, with a much more consistent standard deviation of latency. Elixir's concurrency model really shines in these results. A single Elixir node is able to use all the CPU and memory resources it requires, while our Puma webserver must start a Rails process for each of our CPU cores to achieve concurrency.

Phoenix:
  req/s: 12,120.00
  Stdev: 3.35ms
  Max latency: 43.30ms

Rails:
  req/s: 1,140.53
  Stdev: 18.96ms
  Max latency: 159.43ms

Phoenix

$ mix do deps.get, compile
$ MIX_ENV=prod mix compile.protocols
$ MIX_ENV=prod elixir -pa _build/prod/consolidated -S mix phoenix.start
Running Elixir.Benchmarker.Router with Cowboy on port 4000

$ wrk -t4 -c100 -d30S --timeout 2000 "http://127.0.0.1:4000/showdown"
Running 10s test @ http://127.0.0.1:4000/showdown
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     8.31ms    3.53ms  43.30ms   79.38%
    Req/Sec     3.11k   376.89     4.73k    79.83%
  121202 requests in 10.00s, 254.29MB read
Requests/sec:  12120.94
Transfer/sec:     25.43MB

Rails

$ bundle
$ RACK_ENV=production bundle exec puma -w 4
[13057] Puma starting in cluster mode...
[13057] * Version 2.8.2 (ruby 2.1.0-p0), codename: Sir Edmund Percival Hillary
[13057] * Min threads: 0, max threads: 16
[13057] * Environment: production
[13057] * Process workers: 4
[13057] * Phased restart available
[13185] * Listening on tcp://0.0.0.0:9292

$ wrk -t4 -c100 -d30S --timeout 2000 "http://127.0.0.1:9292/showdown"
Running 10s test @ http://127.0.0.1:9292/showdown
  4 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    21.67ms   18.96ms 159.43ms   85.53%
    Req/Sec   449.74    413.36     1.10k    63.82%
  11414 requests in 10.01s, 25.50MB read
Requests/sec:   1140.53
Transfer/sec:      2.55MB

Heroku Results (1 Dyno)

Phoenix showed 8.94x the throughput, again with a much more consistent standard deviation of latency and with 3.74x less CPU load. We ran out of available socket connections when trying to push the Phoenix dyno harder to match the CPU load seen by the Rails dyno. It's possible the Phoenix app could have had even more throughput available if our client network links had higher capacity. The standard deviation is particularly important here against a remote host: the Rails app struggled to maintain consistent response times, hitting 8+ second latency as a result. In real-world terms, a Phoenix app should respond much more consistently under load than a Rails app.

Phoenix:
  req/s: 2,691.03
  Stdev: 139.92ms
  Max latency: 1.39s

Rails:
  req/s: 301.36
  Stdev: 2.06s
  Max latency: 8.36s

Phoenix (Cold)

$ ./wrk -t12 -c800 -d30S --timeout 2000 "http://tranquil-brushlands-6459.herokuapp.com/showdown"
Running 30s test @ http://tranquil-brushlands-6459.herokuapp.com/showdown
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   317.15ms  139.55ms 970.43ms   81.12%
    Req/Sec   231.43     66.07   382.00     63.92%
  83240 requests in 30.00s, 174.65MB read
  Socket errors: connect 0, read 1, write 0, timeout 0
Requests/sec:   2774.59
Transfer/sec:      5.82MB

Phoenix (Warm)

$ ./wrk -t12 -c800 -d180S --timeout 2000 "http://tranquil-brushlands-6459.herokuapp.com/showdown"
Running 3m test @ http://tranquil-brushlands-6459.herokuapp.com/showdown
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   318.52ms  139.92ms   1.39s    82.03%
    Req/Sec   224.42     57.23   368.00     68.50%
  484444 requests in 3.00m, 0.99GB read
  Socket errors: connect 0, read 9, write 0, timeout 0
Requests/sec:   2691.03
Transfer/sec:      5.65MB

Load

load_avg_1m=2.78

sample#memory_total=34.69MB
sample#memory_rss=33.57MB 
sample#memory_cache=0.09MB
sample#memory_swap=1.03MB
sample#memory_pgpgin=204996pages sample#memory_pgpgout=196379pages

Rails (Cold)

$ ./wrk -t12 -c800 -d30S --timeout 2000 "http://dry-ocean-9525.herokuapp.com/showdown"
Running 30s test @ http://dry-ocean-9525.herokuapp.com/showdown
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.85s     1.33s    5.75s    65.73%
    Req/Sec    22.68      7.18    61.00     69.71%
  8276 requests in 30.03s, 18.70MB read
Requests/sec:    275.64
Transfer/sec:    637.86KB

Rails (Warm)

$ ./wrk -t12 -c800 -d180S --timeout 2000 "http://dry-ocean-9525.herokuapp.com/showdown"
Running 3m test @ http://dry-ocean-9525.herokuapp.com/showdown
  12 threads and 800 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.07s     2.06s    8.36s    70.39%
    Req/Sec    24.65      9.97    63.00     67.10%
  54256 requests in 3.00m, 122.50MB read
  Socket errors: connect 0, read 1, write 0, timeout 0
Requests/sec:    301.36
Transfer/sec:    696.77KB

Load

sample#load_avg_1m=10.40

sample#memory_total=235.37MB
sample#memory_rss=235.35MB
sample#memory_cache=0.02MB
sample#memory_swap=0.00MB
sample#memory_pgpgin=66703pages
sample#memory_pgpgout=6449pages

Summary

Elixir provides the joy and productivity of Ruby with the concurrency and fault-tolerance of Erlang. We've shown we can have the best of both worlds with Elixir and I encourage you to get involved with Phoenix. There's much work to do for Phoenix to match the robust ecosystem of Rails, but we're just getting started and have very big plans this year.

Both applications are available on GitHub if you want to recreate the benchmarks. We would love to see results on different hardware, particularly hardware that can put greater load on the Phoenix app.

Shoutout to Jason Stiebs for his help getting the Heroku applications set up and remotely benchmarked!