07 Jun 2014

Golang or Go Home: Getting Started with Go

I’ve recently gotten a little hooked on Golang. Strangely enough, I think it finally started to click for me as a result of one Haskeller and one friend, Christopher Stingl.

Christopher was kind enough to link me to this wonderful Introduction To Golang. And as a result of the Haskell reading that I’ve done lately, Golang started to make sense.

For a little background, I’m coming from a Ruby development standpoint with no prior formal training in compiled statically typed languages.

First Impressions

package sack

import (
  "fmt"
  "github.com/codegangsta/cli"
)

// shellInit prints a snippet of shell aliases that wrap the sack binary.
func shellInit(c *cli.Context) {
  sh := `
    sack=$(which sack)

    alias S="${sack} -s"
    alias F="${sack} -e"
    `

  fmt.Println(sh)
}

// shellEval prints the eval one-liner that loads the output of `sack init`
// into the current shell.
func shellEval(c *cli.Context) {
  sh := "eval \"$(sack init)\""
  fmt.Println(sh)
}

I’ll come clean, first impressions weren’t good. I picked up the language a couple months ago and tried to implement a simple line parser. It didn’t go so well because I got hung up on the statically typed bits of the language. Essentially: ‘what do you mean I have to know what’s going into my function and what’s returning from it? I don’t even know what types are available’.

But after reading through that 5 Week Intro to Golang and doing some Haskell homework, it clicked. And I liked it.

So let me back up and explain the static typing for those readers who don’t have any background in such languages. In Go, you define functions (the rough equivalent of a method in Ruby) as having specific types of inputs and outputs. So you might define something as follows:

func Version() string {
  return "0.3.0"
}

This function takes 0 arguments and returns one value of type string.

Or this function:

// executeCmd assumes agCmd (the name of the silver searcher binary), agSearch,
// and grepSearch are defined elsewhere in the package, and that os/exec is imported.
func executeCmd(term string, path string, flags string) []string {

  var lines []string
  // Prefer the silver searcher when it's on the PATH, otherwise fall back to grep.
  _, err := exec.LookPath(agCmd)
  if err == nil {
    lines = agSearch(term, path, flags)
  } else {
    lines = grepSearch(term, path, flags)
  }

  return lines
}

This one takes three strings as arguments and returns a slice of strings.

It’s initially cumbersome during the learning phase, but incredibly helpful. I’ve found myself wishing for a similar safety net in the language I use for my day job, which is Ruby.

And the best part is that the compiler won’t build the program unless all your type ducks are in a row! That removes the need for roughly 30% of the tests that I see on a daily basis. In fact, many of the bugs that I’ve seen in production would be avoided in a strongly, statically typed language.

Advantages

  • Static typing safety
  • Cross compiled binaries (see the sketch just after this list)
  • Super easy deployment (because the whole Go runtime is wrapped up in that one binary)
  • Fast runtimes
  • Encourages less use of mutable state than Ruby (though not as strictly as a pure FP language)
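
As a concrete example, cross compiling is mostly a matter of setting two environment variables before go build. Here’s a minimal sketch, assuming your Go toolchain supports the target platform (the output file names are my own placeholders):

# Build a 64-bit Linux binary of sack from an OS X (or any other) workstation.
GOOS=linux GOARCH=amd64 go build -o sack-linux-amd64

# Same checkout, Windows target.
GOOS=windows GOARCH=amd64 go build -o sack-windows-amd64.exe

Copy the resulting binary to the target machine and it just runs; there’s no interpreter or gem install step in between.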

Outcome

After playing around with that tutorial, I couldn’t resist reimplementing a command line tool in Golang. It’s a tool that started as a shell script, which I reimplemented in Ruby, and have now reimplemented in Golang. It’s called sack and you should use it as the glue that connects your silver-searcher or grep results to your $EDITOR. Project link.

The other outcome was a project I worked on during our 10% time: a Golang web application that serves up a JSON feed of Whois information. It amazed me that, two weeks into learning Golang, I was able to put up something useful over the course of two days.

I’m looking forward to experimenting more with Golang, possibly by setting up a Goship server at work. It also affirms for me that I’m missing out on some wonderful programming paradigms in the Ruby world. I expect to see more Functional Programming in my future, whether that’s in the form of Erlang/Elixir/Clojure or a more mathematically pure language like Haskell.

07 Jun 2014

Swimming With the Big Kids

As developers, we’re familiar with the basic software tools that grease the wheels of the programming world. Version control systems, deployment tools, notification hooks, CI servers: they all make our jobs easier.

But how can we work better as an organization when our startup exceeds 2, 4, 8, 16, 32, 64, 128 engineers?

There are three essential tools/patterns that need to be implemented for a development team to be successful.

  1. Project Management Software
  2. Git workflows and commit message standards
  3. Cultivating your developers’ creativity

Project Management Software

If you’re like me, you hate long running email threads that attempt to pin down the acceptance criteria and context of projects or stories.

Pull the email escape hatch and move into the 21st century!

A good start is having a unified interface for tasks, i.e. a Project Tracker. This gives the technical and non-technical folks a single point of interface for all the knowledge of “what’s happening” and “what timelines are we operating under?”. It’s incredibly important that the software is convenient to use and updated in real time.

Though I don’t get any referral bonuses, my favorite thus far is Pivotal Tracker and if you’re an Emacs user check out Pivotal-Tracker-mode or my fork of that project. I prefer using the Emacs interface rather than their web interface when working in a large organization because it’s faster.

So now you have a single channel of communication and record keeping system for projects, YAY! Rambling email threads will (mostly) fade into your company’s past!

Git workflows and commit message standards

Next, come up with a style guide for Git commits and version control practices. Maybe you use ‘git-flow’. Maybe you come up with your own variant of an established workflow. But discuss it among the engaged members on the team.

I strongly recommend running all code changes past other members of the team. A second pair of eyes is invaluable, as is the unique context/domain knowledge that a team member brings to the table. If you’re using a web based interface for Git such as Github, consider using “Pull Requests”; they’ve become standard practice in the Github-using community. If your team uses an alternate Git-based solution, consider submitting all code as patches through the Dev Leads. Doing so will catch some issues before they make it into production.

As for git commit messages, educate your developers on why they’re important.

Take the following commits:

Fixed thing that was wrong

or

[ZPH/KM][#3489634] BUG/Updated Jquery selector for dashboard login box

Dashboard login box functionality was broken (SHA a98adf91) due to a change in `dashboard.slim`.

Changed selector to $('.login_box') to match slim file.

Can you imagine which is easier to reference 6 months in the future? Say, when you’re investigating why the dashboard login is broken? Or maybe you’re a Dev lead looking for the commits related to a specific project story. If you’re on a team where stories are completed in pairs, it’s a good idea to include the initials of both parties who worked on the commits. Then it’s simple to answer the question of: “Didn’t Zander and Kerri do something with our dashboard last week?”.

If you have developers writing descriptive commits, finding answers is as easy as git log -S dashboard, which lists every commit whose diff adds or removes the word dashboard.
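
To make that concrete, here are a couple of hedged examples of the kind of digging descriptive commits enable (the initials and search term come from the sample commit above):

# Every commit whose diff added or removed the word "dashboard":
git log -S dashboard --oneline

# Commits whose messages mention both the pairing initials and the dashboard:
git log --grep='ZPH/KM' --grep='dashboard' --all-match --oneline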

Care and Feeding of Developers

Last and certainly not least is related to the care and feeding of software developers. We might not all be beautiful and unique snowflakes (we in the Ruby community do largely use Apple computers after all) but we require regular care to stay sharp and engaged.

10% Time

Set aside dedicated time to have developers work on more creative projects. Why bother?

  • Developers are valuable and losing them is expensive.
  • Happy programmers are productive programmers.
  • Who else could be better situated to see inherent flaws in existing software/tools/workflows in the organization? Let them scratch their own itch.
  • Doing the same tasks day in and day out is a sure-fire recipe for dulling even the brightest minds. So knock it off!

Build in time every two weeks that’s dedicated to creative projects envisioned by the coders.

Where I’m currently working, we do this every 10th business day and it’s called our 10% time. On those days, we come up with interesting projects that advance the company’s interests without the usual pressure around failure. These are great times to implement something in a new technology or set up those Git hook integrations with Pivotal Tracker. Fix things that are slowing you down or bothering you on a regular basis. What else could you work on?

  • An idempotent way to setup new developer workstations (script based please).
  • A unified script for firing up the application stack on a local machine (or stopping, reloading, updating, etc).
  • Team chat integrations like Campfire + Hubot, HipChat + Github Merge notification.
  • Dashboard for your income generating parts of software. Track those leads in realtime!
  • Build tools to simplify your Quality Assurance teams’ lives. Or build tools so Product can be more efficient.

10% time is the perfect occasion to fix these nagging issues. The only requirement is that the developers generate most of the 10% ideas and get to vote with their feet (or digital feet) in choosing what tasks to work on. They’ll self-organize into teams where they can be most effective. Cultivate your developers and they’ll flourish! Also, you won’t have to engage recruiters because your retention will go through the roof. It’s a win-win situation.

Thoughts, comments or jeers are all encouraged! Come have a chat w/ me on Twitter @_ZPH.

05 Jun 2014

Get rid of deprecation warning in OpenSSL::Digest::Digest

AWS-S3 complaining at you in Ruby 2.1.1?

Looks like they’re getting rid of OpenSSL::Digest::Digest and that behavior should be in OpenSSL::Digest instead.

Nix it here: aws-s3-0.6.3/lib/aws/s3/authentication.rb:71.
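
If you just want to find the offending line in your installed copy of the gem, something like this should do it (a sketch; the gem version and paths will vary by setup):

# List the gem's files and locate the authentication code.
gem contents aws-s3 | grep authentication.rb

# Confirm the deprecated constant is the culprit.
grep -n "OpenSSL::Digest::Digest" $(gem contents aws-s3 | grep authentication.rb)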

If I have a chance I’ll submit a PR on that (or maybe it’s fixed in a newer version).

Edit: The change is already present in the master copy of aws-s3 here

05 Jun 2014

No db:migrate? No problem

Missing your db:migrate tasks in Rails?

It might have been disabled during initial app work. Check config/application.rb and see if require "active_record/railtie" is commented out.
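
A quick way to confirm, assuming a standard Rails layout (a sketch; run from the app root):

# Do any db:* tasks show up at all?
bundle exec rake -T db

# Is the ActiveRecord railtie commented out?
grep -n "active_record/railtie" config/application.rb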

Version: Rails 4.x.

21 Mar 2014

Weather Github Downtime by Pushing to All the Places

Want to not care whether Github’s up or down at the moment?

Have faulty servers of your own? Or tend to anger people with bot armies?

Want to push to two Git Repos via a single command?

Want to do it easily via a simple .git edit?

My use case is pushing code that resides on Github as well as on Bitbucket. I want it available in both remote locations in case one is unavailable.

Here’s how you do it:

Add the two remotes as normal

git remote add origin GIT_LINK_TO_REPO

git remote add bitbucket GIT_LINK_TO_REPO

In your local repo, edit .git/config and find the entries for origin and bitbucket:

[remote "origin"]
url = git@github.com:zph/zph.git
fetch = +refs/heads/*:refs/remotes/origin/*
[remote "bitbucket"]
url = ssh://git@bitbucket.org/zph/zph.git
fetch = +refs/heads/*:refs/remotes/bitbucket/*

Add a new entry using that information and urls:

[remote "all"]
url = git@github.com:zph/zph.git
url = ssh://git@bitbucket.org/zph/zph.git

Now when pushing code: git push all
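
If you’d rather not edit .git/config by hand, the same setup can be built from the command line; a sketch that mirrors the urls above:

# Create the combined remote pointing at Github...
git remote add all git@github.com:zph/zph.git

# ...then append Bitbucket as a second URL on the same remote.
git remote set-url --add all ssh://git@bitbucket.org/zph/zph.git

# One command still pushes to both.
git push all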

Credit for this solution: http://stackoverflow.com/questions/849308/pull-push-from-multiple-remote-locations?lq=1

15 Mar 2014

Git Archaeology - Find the Secrets of those Who Came Before

“Those who don’t know their Git history are doomed to repeat its mistakes” - Yammy the Programming Kitten

Setup

You’re working for Cato Consulting.

And, you’ve been lent out to a new client on an unfamiliar project. It’s a codebase that you haven’t touched before. In the sense that you’ve never worked on it, you could kind of call it “Greenfield”… but from where you sit, the project looks like “Brownfield with withered vestiges of healthy code.” This isn’t going to be an easy two weeks.

So it’s a new day and you have a new story from the client about a text input box that’s no longer working. It should behave as a search field, but instead it does nothing. Sitting there pondering the solution, you realize that you, too, do nothing useful before coffee, and decide to increase your caffeine level.

Much better, there can be coding once the caffeine hits.

How Do You Approach the Problem

First, you need the text field. Perhaps it will have a unique id or class.

Bingo, you learn that the field has an id of “#super_search_box”.

Your other big clue in the story is that this feature was working well up until a few weeks ago. The client doesn’t have a schedule of exactly when it went bad, but that gives you a starting point. You can reasonably estimate that the code from 4 weeks ago was working.

Let’s dig into this Git Archaeology! We’ll dirty our hands but our spirits will be clean and we’ll sleep well at night knowing that our problem domains were well understood when implementing a fix.

Git Bisect

Our first useful incantation is git bisect. Git bisect helps when something used to work, has since stopped, and you need to find the commit that broke it.

The basic workflow looks like this:

  1. Start on the broken tip (here, branch master at the newest commit) and run git bisect start.
  2. Identify a good and a bad commit. For the good side, git checkout SHA_from_4_weeks_ago and manually check that the search box worked (or run the associated automated tests).
  3. If it’s good, mark it via git bisect good, otherwise git bisect bad.
  4. Then find an opposite example (i.e. if you found a bad one, find a good working commit) and tag that one via git bisect [good || bad].
  5. Then git bisect’s magical excavation will begin.

Git will do a binary search, and each step of the way, you will enter git bisect [bad || good] until git bisect identifies the earliest commit that broke that feature.

After that process is over, you’ll have the bad commit where a breaking change was introduced to that feature.
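
Condensed into a terminal session, the dig looks roughly like this (the SHA is a placeholder):

git bisect start
git bisect bad                          # the current HEAD on master is broken
git bisect good SHA_from_4_weeks_ago    # oldest commit known to be working

# git now checks out a commit roughly halfway between good and bad;
# exercise the search box (or run the tests) and report the verdict:
git bisect good    # or: git bisect bad

# ...repeat until git announces the first bad commit,
# then clean up and return to where you started:
git bisect reset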

Next, it’s time to understand why the code author would do such a dastardly thing!

Run a git show to see the full content of that commit, both the message and the code differences.

(Note that you can review which commits you’ve marked as good/bad by running git bisect log.)

Project Tracker

If there are more than 2 people on the project, hopefully there’s a central project management tool like Pivotal Tracker or Jira. Ever hopeful, you expect to see a story number or issue number listed in the git commit. Let’s see what it looks like:

[CTL] [#3443450] Switched JQuery selector for search box

Changes selector from '#super_search_box' to '.snazzy_search'.

diff monolithic_everything.coffee
- '#super_search_box'
+ '.snazzy_search'

Not a very helpful message, but at least it contains a story id. Looking that up in the Project Tracker shows that this was a chore to refactor some code, and apparently it wasn’t done with sufficient care or any automated behavioral tests. Bummer :(.

Advanced Git Searching

Since we’re left underwhelmed by the currently available info, it would be nice to know when the markup that relates to ‘.snazzy_search’ was modified:

git log -S 'snazzy_search'

This searches the project history for every commit whose diff adds or removes the string snazzy_search.

Knowing when the markup was changed, along with the exact commit that introduced the regression, you’re set to implement a fix. With a bit of digging, the underlying domain becomes clear and the odds of introducing your own regression go down.

Go Forth And Dig

Check out my .gitconfig and my .zsh.d/git.zsh files here for some helpful shortcuts for everyday git behaviors.

Also, look into git blame via Fugitive.vim or Magit (on Emacs). Git blame’s a great way to find out if the developer who introduced the changes is still around and can give you a bit of context on the changes.
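
If you’re outside an editor, plain git blame works just as well. A quick sketch, using the file named in the commit above (your path may differ):

# Who last touched each line of the offending file?
git blame monolithic_everything.coffee

# Or narrow the annotation to a specific line range:
git blame -L 1,25 monolithic_everything.coffee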

If you’re just getting started as a Git Archeologist and want some help with getting started, or if you’re an experienced Git Excavator and want to bounce ideas off me, ping me on Twitter at @_ZPH.

For further reading, check out these two great blog posts: http://ruturaj.net/git-bisect-tutorial/ and http://mislav.uniqpath.com/2014/02/hidden-documentation/

11 Mar 2014

Conference Advice: How to Meet Devs and Influence Ppl

Want to know the secret about tech conferences? It’s not about the talks. It’s not about the venue.

It’s about all the people who are there!

While attending @RbOnAles, I was asked by @eliserius about how I knew so many people there. This article is a summary of that discussion.

First off, go to a couple conferences each year. It’s best if your employer will pay all or part of the way. But if they won’t, it’s your responsibility to make it to the conferences. They’re an investment in your career’s future.

Now that you’re attending 3+ regional tech conferences each year, what steps can you take to make the most of it? Well, remember that you’re attending to meet the other attendees. By virtue of them also attending the conference, they either work at decent companies or they prioritize conference attendance.

So meet these folks! It’s not hard*. (It’s terrifying at first, but push past that & make it happen). I’m an introverted person and know a fair number of Ruby conference attendees, but those first few minutes of awkwardly interacting are just that: uncomfortable. That’s fine, move the conversation into tech stuff. Chat about Service Oriented Architecture, argue about Dependency Injection, take a stand and say that Ruby isn’t a dying language.

Remember, these folks who are at the conference also want to meet people.

What specific steps do I recommend for breaking the ice or stacking your social deck at the conference?

  • Don’t attend conferences with workmates. Or if you’re at conference, avoid them. There are 363 other days to bs with ’em. You’ve only got 24 hrs * 3 days for the conference… make the most of it and meet new people.
  • Set a goal of meeting X new people at conference. Let’s consider “meeting” to be defined as exchanging names and at least one memorable detail about the other party. Maybe they juggle, write assembly, breed horses. Folks all have a story, ask “What brought you to the conference”. Or “What tech stuff are you playing with right now?”. Or start a heated debate about which $EDITOR to use. If you’re stumped about approaching someone, just be candid, “I saw you standing there & realized we hadn’t met yet: I’m Zander”. Most of the time that’s enough to kick off a conversation, though you might choose to use your own name in the prior quotation.
  • Take your meals with a different new group each time. With a 2 day conference, that gives you 4 opportunities to meet groups of 4+ people each time. Some conferences actively organize this kind of activity. Big shoutout to @steelcityruby for doing this stuff :). If your conference isn’t doing this, post on twitter that you’re organizing a group for Italian/Thai/Indian/Bar food and see who bites.
  • While out having food with others, buy a meal or drink for some folks you don’t know well. Do it for the nice factor; the icing on the cake is that you’ll seem even more awesome than you already are.
  • Bring something to share to the conference and then offer it up to people. Could be a cardgame, boardgame, bourbon, soda water, LAN party… just get yourself out there.
  • Volunteer to carpool from airport to conference. Or from major city to tiny town where conference exists.
  • Learn 2 love Twitter. Start or revive your account. Twitter is the lifeblood of many Ruby conferences. Start up Tweetdeck or Tweetbot and add a column dedicated to the conference hashtag. That way you’ll be aware of the pulse of things.
  • Post Twitter messages with hashtag. For example at this last conference, I had the pleasure of doing breakfast with my favorite Aussie (@ryanbigg) by virtue of an early morning Tweet about meeting for breakfast. It’ll also force you to meet people who you might not otherwise get to meet.
  • Stay at the conference hotel & possibly split a room w/ someone you don’t know. Put a call out among friends to find someone to split room. It’ll be awesome. At least it has been for me thus far. Staying at conference hotel itself is wonderful because you’ll be in the middle of the action, late night philosophizing on OOP vs. Functional, etc.
  • Stick around for the evening events each day. Also stay for the workshops that often take place after the conference ends. At Ruby On Ales, it was a MiniTest workshop by @blowmage. It was awesome and I got to meet a few more people before leaving.
  • Still stumped for how to approach people at conference? See if you can recognize anyone at conference from Twitter or Github profiles. Then go up & thank them for the Open Source work they do. It’ll give them chills :). (This advice brought to you by @sarahmei who you should say hi to when you see her at a conference).
  • Be a speaker: you literally have to talk to people. At end of talk, invite them to stop you and chat, because you’re shy and want to meet people. Ask for help, it’s cool :).
  • Final tip: I’m terrified at the beginning of each conversation. Once I warm up, it’s groovy, but until then it’s rough. Get that momentum going, force yourself out of the comfort zone, and see what happens. Soon enough, those conferences will feel like reunions filled with friends.

Also, I’m terrified of starting conversations… so be my guest and come up & tell me you read this post. That’ll break the ice!

PS - I have a new friend from @RbOnAles who’s looking for a remote role (or in SF) either as QA or Jr. Ruby Developer. DM me on Twitter if you know of options.

17 Feb 2014

Pathogen.vim without the Submodules: Use Infect

Over the weekend I finally admitted to myself that I hate submodules.

But they’re a keystone to one of my primary development tools: Vim. In order to use Vim, I use the Pathogen.vim plugin by @tpope. In order to use Pathogen, you normally use submodules in the .vim/bundle/ folder.

But submodules are the work of the devil.

I tried out the Vundle plugin and was seeing much longer load times for Vim.

So I asked around on Twitter and @jwieringa advised me to check out ‘infect’ by @crsexton.

And infect is awesome! It works with Pathogen to give it a declarative style for plugins. An example is:

"=bundle tpope/vim-pathogen
"=bundle tpope/vim-sensible
source ~/.vim/bundle/vim-pathogen/autoload/pathogen.vim
execute pathogen#incubate()

"=bundle mileszs/ack.vim
"=bundle vim-scripts/AutoTag
"=bundle kien/ctrlp.vim
"=bundle Raimondi/delimitMate
"=bundle sethbc/fuzzyfinder_textmate
"=bundle tpope/gem-ctags
"=bundle gregsexton/gitv
"=bundle sjl/gundo.vim
"=bundle tpope/vim-vinegar
"=bundle jnwhiteh/vim-golang
"=bundle wting/rust.vim

call pathogen#helptags()
set nocompatible      " We're running Vim, not Vi!
syntax on             " Enable syntax highlighting
filetype on           " Enable filetype detection
filetype indent on    " Enable filetype-specific indenting
filetype plugin on    " Enable filetype-specific plugins

Use it. Love it. Don’t look back!

If you want faster downloads with ‘infect’ try this unofficial fork of the standalone: https://github.com/zph/zph/blob/master/home/bin/infect. When I have time, I’ll work w/ @crsexton to get this added to ‘infect’.

15 Feb 2014

Finding myself in BSD

Blame smartOS not reading my SATA controller.

Blame Linux for not having first class support for ZFS (where my data is held).

Blame @canadiancreed for giving me a way out of the quandary.

The backstory is that I moved all of my backups to two ZFS pools a few years ago. I was running ZFSonLinux and it generally worked… except when I rebooted the server and had to force import the pools.
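
For the curious, the post-reboot ritual looked something like this (the pool name here is a placeholder):

# See which pools the system can find but hasn't imported yet.
zpool import

# Force the import even though the pool wasn't cleanly exported.
zpool import -f tank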

Fast forward to that server being replaced by a workstation (i7 Haswell, 240GB Sata3 SSD, and 24GB RAM). I tried smartOS by Joyent and ran into issues with my SATA controller not being recognized. It’s a shame given how awesome the smartOS vm administration is. I mean vmadm and imgadm are light years ahead of Docker.

Since smartOS refused to recognize 3 of my 8 drives, I ran back to Linux. The good news: Linux recognized the SATA controller. The bad news: Linux couldn’t import the pools. Frankly, Linux had a hell of a time building the ZFS on Linux kernel modules. I managed to piece together a few clues from the ZFSonLinux Issues on Github. In fact, thanks to @dajhorn for supplying answers on that Github issue, which allowed me to build the kernel modules.

This sounds promising, doesn’t it? It’s not; it was a horrorshow. Linux refused to import one of the two zpools. The mission critical financial records pool imported nicely. The other, semi-critical zpool was shot and refused to import under Linux.

I spent a whole evening banging my head against this issue. If it hadn’t been for Chris Reed I wouldn’t have hit upon the solution.

His recommendation was to try FreeBSD… so I did. And it recognized the SATA controller and also imported the zpools cleanly! So after a dry run w/ FreeBSD, I installed PC-BSD, which is a desktop variant based on FreeBSD. Think of it as the Ubuntu of the BSD world. And hell yeah, it’s all working :).

So far, I’m really liking it. Replace ‘aptitude’ with ‘pkg’ and it’s pretty similar. Except, PC-BSD is working where Linux & ZFS were a hassle.

15 Feb 2014

Solving Issues with RVM on BSD

RVM installation went poorly on FreeBSD.

The ca-certificates weren’t up to date according to the install script.

Really, the ca-certificates weren’t in the right location for RVM’s curl install script.

These commands as root fixed it:

mkdir -p /usr/local/opt/curl-ca-bundle/share
ln -s /usr/local/share/certs/ca-root-nss.crt /usr/local/opt/curl-ca-bundle/share/ca-bundle.crt