07 Apr 2023

Replacing inlined scripts with bundler inline

git smart-pull is a great tool for avoiding the messiness of git rebases when there are local changes.

I long ago inlined the full ruby gem into a single executable file to avoid the hassle of installing it in various ruby environments. It’s worked well!

Ruby 3.2.0 broke the underlying gem, and with it my script, in a tiny way. The fix has sat unmerged upstream for months, and bundler/inline now gives me a better solution than maintaining a spare inlined script: fork the project and point a wrapper script with bundler/inline at my own repo.

I’m applying patches from upstream PRs (e.g. hub merge https://github.com/geelen/git-smart/pull/25).

It’s an elegant solution that I’ll re-use for other ruby scripts in my development environment.
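A minimal sketch of that wrapper, assuming a placeholder fork URL (save the file as git-smart-pull on your PATH; the executable name comes from the git-smart gem):

```ruby
#!/usr/bin/env ruby
# Wrapper sketch: FORK_URL is a placeholder -- point it at your own fork.
require "bundler/inline"

FORK_URL = "https://github.com/your-user/git-smart"

def run_smart_pull
  # bundler/inline resolves and installs the gem from the fork on first run.
  gemfile do
    source "https://rubygems.org"
    gem "git-smart", git: FORK_URL
  end
  # git-smart ships its commands as gem executables.
  load Gem.bin_path("git-smart", "git-smart-pull")
end

# Run only when invoked under the wrapper's name, so requiring this file
# stays side-effect free.
run_smart_pull if File.basename($PROGRAM_NAME) == "git-smart-pull"
```

The gemfile block is re-evaluated on every invocation, but bundler only hits the network when the fork has new commits to fetch.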

06 Apr 2023

Automatically Warm EBS Volumes from Snapshots

We’re automating more of our cluster operations at work, and here’s the procedure for warming an EBS volume created from a snapshot, to avoid the degraded first-access performance while blocks are lazily loaded from S3.

How it works

Follow the instructions in the gist to install it in root’s crontab as a @reboot entry. It uses an exclusive flock and a completion file to ensure idempotency.
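A sketch of that job, assuming a placeholder device path and state directory (the gist’s script differs in detail, but the flock plus completion-file mechanics are the same):

```shell
#!/usr/bin/env bash
set -euo pipefail

warm_volume() {
  local device="$1"
  local state_dir="${2:-/var/run}"
  local done_file="$state_dir/ebs-warmed-$(basename "$device")"
  local lock_file="$state_dir/ebs-warm.lock"

  # Completion file makes re-runs a no-op (idempotency).
  [ -e "$done_file" ] && return 0

  (
    # Exclusive, non-blocking lock: concurrent runs bail out immediately.
    flock -xn 9 || exit 1
    # Reading every block forces EBS to hydrate the volume from S3.
    dd if="$device" of=/dev/null bs=1M
    touch "$done_file"
  ) 9>"$lock_file"
}

# e.g. in root's crontab: @reboot /usr/local/bin/warm-ebs.sh /dev/nvme1n1
if [ "$#" -ge 1 ]; then
  warm_volume "$@"
fi
```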

06 Apr 2023

3x Faster Mongodb Controlled Failovers

I recently modified our MongoDB failover protocol at work in a way that reduces the interruption from 14 seconds down to 3.5 seconds, by tuning election configuration ahead of controlled failovers. This was tested on a 3.4 cluster but should hold true through modern versions. Starting in 4.0.2 it’s less valuable for controlled failovers, but it’s still useful as a tuning setting for uncontrolled failovers.

How it works

The premise is to make the shard call a new election as fast as possible by reducing electionTimeoutMillis and heartbeatIntervalMillis.

Procedure:

// on the primary
cfg = rs.conf()
cfg.settings["electionTimeoutMillis"] = 1000
cfg.settings["heartbeatIntervalMillis"] = 100
rs.reconfig(cfg)

// wait 60 seconds for propagation
rs.stepDown()

// wait for 60 seconds for election to settle
// connect to primary

cfg = rs.conf()
cfg.settings["electionTimeoutMillis"] = 10000
cfg.settings["heartbeatIntervalMillis"] = 1000
rs.reconfig(cfg)

These settings are also worth tuning if you’re on a high-quality, low-latency network. Every time mongo insists on waiting 10 seconds before allowing an election, even while receiving failing heartbeats, you’re missing out on a faster uncontrolled failover.

PS - While browsing the docs I found this ^_^ which is non-intuitive, since I would expect no writes to the affected shard but no impact to other shards. Presumably it’s a typo and cluster means replica set.

During the election process, the cluster cannot accept write operations until it elects the new primary.

28 Jan 2023

Use GEM_HOME for bundler inline

Ruby’s bundler/inline installs gems to the --system destination and does not respect BUNDLE_PATH (as I would have expected it to).

The errors manifest as permission failures, with bundler requesting root access to install gems.

Digging around in GitHub issues, this is intended behavior: https://github.com/rubygems/bundler/pull/7154

Solution:

export GEM_HOME=vendor/bundle
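In practice that looks like the following; my_inline_script.rb is a placeholder for any script using bundler/inline:

```shell
# Point RubyGems at a project-local directory so bundler/inline installs
# there instead of the system path -- no root required.
export GEM_HOME="$PWD/vendor/bundle"
mkdir -p "$GEM_HOME"
# ruby my_inline_script.rb   # gems now land under vendor/bundle
```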

22 Nov 2022

Cost Optimizations

It’s 2022 and the macroeconomic environment is severely correcting from the easy days of cheap capital.

In this environment it’s wise on a personal and business level to consider expenditures and ensure they’re providing value. I’ve been through this with personal subscriptions and with family financial planning.

Today I spent a day off work helping a friend and colleague from a former startup optimize his company’s tech infrastructure spend. His business is a web application providing a valuable service at low traffic levels (< 10 RPS), running on Heroku.

Within the first 30 minutes of him screensharing and describing the business behavior of the app, I was able to recommend $200/mo (25%) savings on his plan.

With another 2 hours, I had a recommendation for saving a further $450/mo, which is great for an indie lifestyle business. Together that will save him $7,800 per year and reduce his Heroku bill from ~$800 down to $150 per month.

While debugging that, I discovered how he can also trim 20 seconds (i.e. 75%) off of a critical feature, which should translate to increased conversion.

It was a fun problem to solve and fits well into my interest in either increasing business through technology or increasing efficiency and profit through technology. I’ve done similar projects on database and server spend which yields amazing results at scale (7 figures per year), on backend and frontend performance in small and medium size companies, and take pleasure in using my expertise at the intersection of technology and business.

I’ll test the waters to see if there’s demand for consulting to help companies outsource these efforts and reduce their AWS / Heroku / NewRelic / Datadog / etc bills.

The timing is right for this adventure.

05 Nov 2022

Hugo to 0.105.0

I’ve upgraded Hugo, ditched webpack for esbuild and removed most javascript from blog 🍋. The best part is that it now does syntax highlighting on the server side and builds in < 500ms 🐎.

While reading the docs, I learned that Hugo allows for custom code-block renderers, such as the mermaid diagram in the code block below that’s rendered into an image. Unsure if this will have practical use on the blog, but it’s a solution in search of a problem ;).

stateDiagram-v2
    [*] --> Still
    Still --> [*]
    Still --> Moving
    Moving --> Still
    Moving --> Crash
    Crash --> [*]

02 Nov 2022

Export OSX Settings using `defaults` tool

Make your OSX setup reproducible by exporting settings with the defaults CLI tool:

$ defaults find com.apple.driver.AppleBluetoothMultitouch.trackpad

Found 24 keys in domain 'com.apple.driver.AppleBluetoothMultitouch.trackpad': {
    Clicking = 1;
    DragLock = 0;
    Dragging = 0;
    TrackpadCornerSecondaryClick = 2;
    TrackpadFiveFingerPinchGesture = 2;
    TrackpadFourFingerHorizSwipeGesture = 2;
    TrackpadFourFingerPinchGesture = 2;
    TrackpadFourFingerVertSwipeGesture = 2;
    TrackpadHandResting = 1;
    TrackpadHorizScroll = 1;
    TrackpadMomentumScroll = 1;
    TrackpadPinch = 1;
    TrackpadRightClick = 1;
    TrackpadRotate = 1;
    TrackpadScroll = 1;
    TrackpadThreeFingerDrag = 1;
    TrackpadThreeFingerHorizSwipeGesture = 1;
    TrackpadThreeFingerTapGesture = 0;
    TrackpadThreeFingerVertSwipeGesture = 0;
    TrackpadTwoFingerDoubleTapGesture = 1;
    TrackpadTwoFingerFromRightEdgeSwipeGesture = 3;
    USBMouseStopsTrackpad = 0;
    UserPreferences = 1;
    version = 5;
}

Then convert them into items in the format seen here: https://github.com/zph/zph/blob/master/home/.osx to create your own custom configuration to apply during your next system setup.

Use defaults domains to list all of the preference domains.
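Those items boil down to defaults write lines. A hypothetical example using the Clicking key from the output above (guarded so it no-ops on non-macOS machines):

```shell
# Replay an exported setting during machine setup.
# The guard makes this a no-op where the `defaults` tool doesn't exist.
if command -v defaults >/dev/null 2>&1; then
  defaults write com.apple.driver.AppleBluetoothMultitouch.trackpad Clicking -bool true
  # Enumerate every preference domain to find more settings worth exporting:
  defaults domains | tr ',' '\n'
fi
```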

02 Nov 2022

Dune's Litany on Reliability

I must not fear outages.

Fear is the mind-killer.

Fear is the little-death that brings total obliteration.

I will face my outages.

I will permit it to pass over me and through me.

And when it has gone past I will turn the scorecard to see its path.

Where the outages have gone there will be 5x 9s. Only availability will remain.

02 Nov 2022

One hour on blog post and two hours fixing blog builds

:face_palm:

I spent one hour tonight writing a new blog post about storage, and then my blog, idle for 9 months:

  1. Refused to build from md to html
  2. Couldn’t compile node-gyp with dependency on EOL’d python2
  3. Debugging alpine build dependencies for node-gyp
  4. Had webpack v1 issues on M1 apple silicon
  5. I refused to upgrade webpack just to stay on it
  6. Which led to me switching to esbuild for js
  7. Then I needed a custom plugin for esbuild sass
  8. Finally I learned how to execute node in yarn’s package context

Entropy rules everything around me and this is after setting up CI, pinning versions, and using a static site generator for my blog.

I had to sleep on the issue and debug it with fresh eyes… all told it took 2.5 hrs of debugging, and that speed of tool rot makes me consider reducing the blog to an even more basic form for improved longevity.

01 Nov 2022

Storage Platforms for the Modern Era

One of my teams at work is the online storage team (love it!), so I’m focusing my efforts on improving the high availability of these systems and their behavior during incidents or degradation, i.e. all the MTs: MTTR, MTBF and MTTD.

I’ve been spending a lot of time considering reliability improvements and turning those into a series of architecture and storage design principles to follow.

For these systems, we’ve learned that our greatest leverage is in MTBF, which is predicated on operating in the messy real world of cloud computing with the expectation of hardware failures (gp2 offers a 99.8 to 99.9% SLA/yr/volume).

What’s a recipe for improving current system behaviors?

  • Partition critical and non-critical workloads
  • Use read replicas as heavily as consistency requirements allow
  • Choose the right cost/reliability threshold for your workloads (gp3 vs io2)
  • Remove cross-shard queries from your critical path
  • Run storage systems with sufficient headroom that you don’t have to performance tune in a panic
  • Ensure 1-2 years of architectural runway on systems in case you need to shift to a new storage platform (ie eat your veggies first)
  • Horizontally shard at the application layer by promoting hot or large tables out to a dedicated cluster (e.g. for Aurora MySQL)

With enough business success and data-intensive features, you’ll hit an inflection point where you must either adopt a new online storage technology or invest heavily in a sharding layer on top of your existing storage (e.g. Vitess on MySQL). Based on the evidence of adoption at other companies, a complex sharding solution is more expensive and less featureful than adopting a full storage platform that includes those features.

Ideal online storage system characteristics

Legend: ✅=yes, ☑️=partial, ⭕=no

Feature / Behavior Aurora MySQL 5.x MongoDB 3.x TiDB 6.x ScyllaDB 2022 FoundationDB 7.0
No downtime or errors due to node failure, maintenance or upgrades ☑️ 1 ☑️ 2 3 4
Workload priority management ☑️ 5 ☑️
Placement rules 6 7
Table partitioning ☑️
Full hardware utilization of writer/readers
Ability to transparently and safely rebalance data ☑️ 8
Linear horizontal scaling 9
Change Data Capture
Prioritize consistency or availability per workload ☑️
Good enough support for > 1 workload (k/v, sql, document store) ☑️
Low operational burden
Supports/allows hardware tiering in db/table ☑️
Safe non-downtime schema migration ☑️
OLAP workloads 10 ☑️
SQL-like syntax for portability and adoption ☑️
Licensing ☑️ 11 ☑️ 12
Source available

Legend

(Chart filled in using my significant experience with MongoDB (<= 3.x) and Aurora MySQL (5.x); knowledge of TiDB, ScyllaDB and FoundationDB comes from architecture, documentation, code, and articles)

Predictions for the next 10 years of online storage

  1. Tech companies will require higher availability so we’ll see a shift towards multi-writer systems with robust horizontal scalability that are inexpensive to operate on modest commodity hardware.
  2. Storage systems will converge on a foundational layer of consistent distributed K/V storage with different abstraction layers on top (TiDB/Tidis/TiKV, TigrisDB, FoundationDB) to simplify operations with a robust variety of features.

  1. Downtime during failover but good architecture for maintenance and upgrades ↩︎

  2. Downtime during failover but good architecture for maintenance and upgrades ↩︎

  3. Best ↩︎

  4. Best ↩︎

  5. In Enterprise Version ↩︎

  6. In Zones ↩︎

  7. Placement Rules ↩︎

  8. Architecture in 3.x has risky limitations and performance issues on high throughput collections ↩︎

  9. See ^8 ↩︎

  10. HTAP functionality through learner raft nodes on TiFlash with columnar storage ↩︎

  11. SSPL terms are source available but not OSI compliant, and carry theoretical legal risks in usage on par with AGPL, except less widely understood ↩︎

  12. AGPL ↩︎