I learned about difftastic today, which aims to show differences between files while being aware of the underlying programming language used in said files (if any).

Structured diff output with difftastic on a change in Mailhog

It’s basically magic when it works!

I generally like the built-in diff in the JetBrains suite of IDEs. The one I use these days is GoLand, but I believe they all support adding an external diff tool. Since difftastic is a console app, here’s what I had to do on my Mac:

  1. brew install difftastic # install the tool
  2. Install ttab using their brew instructions. This allows GoLand to launch a new tab in iTerm and run the difft command there. Otherwise, using the External Diff Tool in GoLand would appear to do absolutely nothing, since the tool’s console output isn’t displayed anywhere in the IDE.
  3. Configure the external diff tool using the instructions for GoLand.
    1. Program path: ttab
    2. Tool name: Difftastic (but can be anything you like)
    3. Argument pattern: -a iTerm2 difft %1 %2 %3
      1. The “-a iTerm2” flag ensures that iTerm is used instead of the default Terminal app.
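Putting it together, the External Diff Tools entry ends up looking like this (as far as I can tell, GoLand substitutes the paths of the files being compared for the %1 %2 %3 placeholders):

```
Tool name:        Difftastic
Program path:     ttab
Argument pattern: -a iTerm2 difft %1 %2 %3
```

So when the tool is invoked, GoLand effectively runs ttab -a iTerm2 difft <left-file> <right-file>, and ttab opens a new iTerm2 tab with difft comparing the two files.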

Now you can click this little button in the standard GoLand diff view to open up the structural diff if needed:

Screenshot of GoLand's diff viewer with a highlight around the external diff tool button

Ideally the diff would be integrated into GoLand, but I don’t mind it being an extra click away, since difftastic doesn’t work reliably in many situations (particularly large additions or refactorings).

Prometheus has this line in its docs for recording rules:

Recording and alerting rules exist in a rule group. Rules within a group are run sequentially at a regular interval, with the same evaluation time.

(from the Recording Rules documentation)

I read that a while ago, but at the time it wasn’t clear why it mattered. It seemed that groups were mostly intended to give a collection of recording rules a name. It became clear recently when I tried to set up a recording rule in one group that was using a metric produced by a recording rule in another group.

The expression for the first recording rule was something like this:

(
  sum(rate(http:requests[5m]))
  -
  sum(rate(http:low_latency[5m]))
)
/
(
  sum(rate(http:requests[5m]))
)
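For illustration, the rules file looked roughly like this. The group names are made up, and the exact expressions behind http:requests and http:low_latency are assumptions based on the metric mapping described further down:

```yaml
groups:
  # Group 1: pre-filtered copies of the raw metrics.
  - name: filtered-metrics
    rules:
      - record: http:requests
        expr: http_requests_seconds_count{somelabel="filter"}
      - record: http:low_latency
        expr: http_requests_seconds_bucket{somelabel="filter", le="1"}

  # Group 2: the ratio, built on the rules from group 1.
  # The two groups are evaluated on independent schedules.
  - name: slow-ratio
    rules:
      - record: http:slow_request_ratio
        expr: |
          (
            sum(rate(http:requests[5m]))
            -
            sum(rate(http:low_latency[5m]))
          )
          /
          sum(rate(http:requests[5m]))
```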

The result:

Using a recording rule from another group

It’s showing a ratio of “slow” requests as a value from 0 to 1. Compare that graph to one that’s based on the raw metric, and not the pre-calculated one:

Using the raw metric

The expression is:

(
  sum(rate(http_requests_seconds_count{somelabel="filter"}[5m]))
  -
  sum(rate(http_requests_seconds_bucket{somelabel="filter", le="1"}[5m]))
)
/
(
  sum(rate(http_requests_seconds_count{somelabel="filter"}[5m]))
)

The metrics used here correspond to the pre-calculated ones above. That is, http:requests is http_requests_seconds_count{somelabel="filter"}, and http:low_latency is http_requests_seconds_bucket{somelabel="filter", le="1"}. The graphs are similar, but the one using raw metrics doesn’t have the strange sharp spikes and drops.

I’m not sure exactly what’s going on here, but based on the explanation from the docs it’s probably a race between the evaluations of the two groups, resulting in an inconsistent number of samples used for http:requests and http:low_latency. Maybe one has one fewer sample than the other at the time the first group’s expression is evaluated, which I think could show up as spikes.

Whatever the cause, the solution is simple: if one recording rule uses metrics produced by another, make sure they’re in the same group.
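In rules-file terms, that means collapsing everything into a single group, something like this sketch (names are illustrative):

```yaml
groups:
  - name: http-rules
    rules:
      # Rules in one group are evaluated sequentially with the same
      # evaluation timestamp, and a rule can use the output of rules
      # listed before it in the same iteration.
      - record: http:requests
        expr: http_requests_seconds_count{somelabel="filter"}
      - record: http:low_latency
        expr: http_requests_seconds_bucket{somelabel="filter", le="1"}
      - record: http:slow_request_ratio
        expr: |
          (
            sum(rate(http:requests[5m]))
            -
            sum(rate(http:low_latency[5m]))
          )
          /
          sum(rate(http:requests[5m]))
```

Note that order matters within the group: the rule that consumes http:requests and http:low_latency comes after the rules that produce them.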

I was trying to add this site to Indieweb ring last night and found that it couldn’t validate the presence of the previous/next links, even though they were clearly in the footer of every page. I cleared the WordPress and Cloudflare caches without success.

Since Indieweb ring runs on Glitch, which is a large public service, I suspected that Cloudflare might be blocking its traffic. Sure enough, my HTTP access logs showed no requests from Glitch, and switching nameservers to my web host resulted in a successful check:

Indieweb ring's status checker log showing two failed checks and one successful one

This was happening even with the “Bot Fight” option turned off, “Security Level” set to “Essentially Off”, and “Browser Integrity Check” disabled.

[side note] I love that my autogenerated site identifier for Indieweb ring is a person worried about taking pictures and writing. Accurate.