A good summary and collection of Rails security/anonymizing practices:
When I build my own company, I’ll think back to working at a bootstrapped startup and the lessons I learned from our founder.
My business will embody these principles:
Honorable mention:
When I’m building features, finding customers, or planning technical architecture, I’ll think of her and say: WWMD.
Small startups are a cesspool of ideas, experiences, stress, and joy, and I learned a ton during my phases in smaller startups. Today, I want to highlight lessons learned from working at a small, bootstrapped-ish startup. I’ve also been reading Ray Dalio’s book Principles.
The startup was ~50 people when I left, and I was there for 3 years. I joined two years into their journey (~25 people total, with 6 in engineering). Money was always tight, and yet we were moving double-digit millions in revenue when I left.
Beyond learning from my time there, I learned from the 2 years before me… from the example set by our solo founder.
She had a year or two of professional experience in web development, focused on design and frontend, and had no business experience… until she made her own experience! She became obsessed with a business problem she encountered casually, and from then on she lived, breathed, and fought for her business.
She built a truly minimum viable product… and was a shining example of what you can do with radical simplicity. Her business started with a Google Form. She leveraged amazing productivity and value from off-the-shelf free tools. Turns out free tiers can be pushed REALLY FAR with the right urgency and necessity! If Airtable had existed at the time, I think that would have been our database.
She learned what she needed to know about databases, servers, and analytics, but they were a means to an end: making the business work. She didn’t get lost in la-la land, falling in love with the tech. They were tools she became proficient with, but they didn’t rule her. My time with her honed my business approach: deliver the most value we could with the available time and energy. We radically optimized for the biggest bang for the buck, because there weren’t many bucks to go around.
What drew me to the company was her obsession, passion, and success. Her fervor was infectious, and I saw my own star rising with hers. I learned to see the world a bit more like she did, with fewer rules and boundaries. I bring that with me to my own engineering leadership and my personal values in software engineering.
Recommended reading: Blue Tape List
During a work hackathon, our project involved using Docker for deployment and dependency management.
The Dockerfile was inherited from an underlying open source project and was OK when used for deployments, but very slow for local development work. Why, you might ask?
It used multi-stage builds: one stage for Node, one for Golang, and a final stage that collected the built artifacts from the prior stages. But the problem was…
The Dockerfile failed to follow a best practice: copy over the package manifests first. For Node these are package.json & yarn.lock. For Golang they’re go.mod and go.sum.
Instead of copying over these specific files up front, the Dockerfile copied the full project into the container and then performed a build.
Since the local source code changed frequently during development, all later steps in the Dockerfile were invalidated and ran without caching :(. Downloading all the Golang dependencies and compiling from scratch was onerous.
The fix: break the dependency installation phase apart from the local code phase. Package manifests are copied in first, then yarn install installs the Node dependencies. I had to get hacky to accomplish the same thing with Golang, but I’ll post my solution when I have a good moment.
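In Dockerfile terms, the manifest-first layering looks roughly like this. This is a sketch, not our actual file: the base image tags, directory paths, and build script names are all assumptions.

```dockerfile
# --- Node stage: copy manifests first so dependency layers cache ---
FROM node:14 AS js-builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Source changes only invalidate layers from here down
COPY . .
RUN yarn build

# --- Golang stage: same idea with go.mod / go.sum ---
FROM golang:1.15 AS go-builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o /app/server .

# --- Final stage: collect the built artifacts ---
FROM debian:buster-slim
COPY --from=go-builder /app/server /usr/local/bin/server
COPY --from=js-builder /app/dist /var/www/dist
CMD ["server"]
```

With this ordering, editing application source only re-runs the `COPY . .` layers and below; the dependency download layers stay cached until a manifest actually changes.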
Conceptually, the outcome was:
Dev build time for the Docker image is now near-instant (5 seconds) rather than 2900 seconds on a low-power laptop.
We also created a dedicated Dockerfile.dev that excluded the JS production build logic, which was accounting for 300+ seconds of build time. Instead, the JS was built with a development script that enabled hot module reloading.
Using YubiKeys everywhere is my jam… here’s how.
I did it by installing yubikey-agent:
brew install yubikey-agent
brew services start yubikey-agent
Then shell configuration in ~/.zshrc:
export SSH_AUTH_SOCK="/usr/local/var/run/yubikey-agent.sock"
yubikey-agent -setup
ssh-add -L

I’m trying out a new note taking system with the following goals:
It’s based on the concept of Zettelkasten and I’m stitching together my own system using:
My blog has been dormant, where have I been?
Busy! With my professional life and personal life. Big and good things on both fronts.
Since I last wrote:
I haven’t written about my engineering leadership experiences because it’s hard to sufficiently abstract them in the moment in a way that I can publicly write about. I’ll see if that changes in the future because I want to be able to share learnings, since my professional growth is so very important to me.
I recently put up two pull requests for a command-line tool written in Rust called Tome. Tome is a Rust binary that makes a folder of scripts reusable by one or more people, with good user ergonomics. My pull requests are https://github.com/toumorokoshi/tome/pull/4 and https://github.com/toumorokoshi/tome/pull/5.
With three days of Rust, my impressions are as follows:
If you see me and want good conversation starters:
In 2021, I’m looking forward to:
It’s 2021, I’m happy and have challenges to work on :).
Visit the link, then press Cmd-Option-J in Chrome to open DevTools.
Execute and capture the result of $('#playerContainer').data().config.video_url.
Take that link, go to https://cloudconvert.com/, open the “Select Files” dropdown, and enter the URL.
Or, use a commandline tool:
Install pre-requisites
pip install cloudconvert requests
Download script: https://gist.github.com/c83c3e91ee3f9df21686bb50b4fbf904
Make it executable: chmod +x twitter-gif
Run it: twitter-gif TWEET-LINK outputfilename-optional
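Under the hood, a script like this first needs the tweet ID out of the link. Here’s a minimal parsing sketch in shell; the example link and ID are made up, and the real twitter-gif script may do this differently:

```shell
# A tweet link looks like https://twitter.com/<user>/status/<id>,
# possibly with a trailing query string.
link="https://twitter.com/someuser/status/1344444444444444444"
id="${link##*/status/}"  # strip everything up to and including "/status/"
id="${id%%\?*}"          # drop any ?query suffix
echo "$id"
```

From there the script can hit the video URL and hand the MP4 off to CloudConvert for the GIF conversion.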
Last week, one of the junior engineers that I mentor ran into a strange environmental issue.
When he ran npm run karma, it would run for ~8 minutes and then suddenly spit out an out-of-memory error. He tried debugging it for a while himself and then reached out to me for assistance.
We ran through the normal set of troubleshooting steps:
rm -rf node_modules/ followed by npm install (this semi-frequently resolves issues when old dependencies are not cleared out). And when we tried running the offending command again, we suffered the error once more.
Which was when I reached into my bag of tricks and thought back on articles by @b0rk and @brendangregg. I remembered tutorials about using DTrace to track down system calls from particular process identifiers, and a related tool called dtruss that can attach to a PID and observe its system calls. For more info on dtruss, check it out at http://www.brendangregg.com/DTrace/dtruss or via vim $(which dtruss).
So I explained the barebones of what I knew about how dtruss operates, and we fired up dtruss npm run karma.
We had time to talk a bit about system calls and the meaning of the readout. After 2 minutes we noticed that the log kept flying by, but the same folder was being accessed. Over and over and over. We had a recursive dependency caused by an outdated library stored inside the project tree.
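For reference, dtruss can also attach to an already-running process and filter the firehose down to one syscall. This is a command sketch, not a runnable recipe: it needs root on macOS, and <PID> is a placeholder for whatever pgrep reports for the node process.

```shell
# Find the node process ID
pgrep -l node

# Attach to it, follow child processes (-f), and watch only open() calls (-t)
sudo dtruss -f -t open -p <PID>
```

Filtering to file-open syscalls makes a loop like ours (the same folder read over and over) jump out quickly.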
Thanks to dtruss, we spotted the issue, wiped out the offending folder, and tried again with success!
PS - While writing this article I learned that Brendan Gregg wrote dtruss. Many thanks, both for dtruss and for the articles about how to use these tools! I also owe thanks to Julia Evans, who exposed me to these tools through her blogging and zines :).