Most commit messages are next to useless because they focus on WHAT was done instead of WHY.
This is exactly the wrong thing to focus on.
You can always reconstruct what changes a commit contains, but it's near impossible to unearth the reason it was done.
(thread)
You were almost certainly thinking "WHY is this like this?", not "What is a one-line summary of what happened in this commit?".
```
[one-line summary of changes]
Because:
- [relevant context]
- [why you decided to change things]
- [reason you're doing it now]
This commit:
- [does X]
- [does Y]
- [does Z]
```
First, this format captures context that will be near impossible to recover later. Trust me, this stuff is gold.
Second, if you train yourself to ask why you're making every change, you'll tend to make better changes.
The first time you see a commit message like the above instead of "refactor OrderWidget", you'll be a convert.
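For a concrete (and entirely hypothetical) illustration, here's what that "refactor OrderWidget" commit might look like when written against the template above. Every detail below is invented for the sake of the example:
```
Extract price calculation out of OrderWidget

Because:
- OrderWidget currently both renders the order and computes totals,
  so every pricing change forces a UI change (and vice versa)
- we need to reuse the same totals logic in an upcoming invoice view
- doing it now avoids copying the pricing rules a second time

This commit:
- moves the totals and discount logic into a new OrderPricing module
- makes OrderWidget call OrderPricing instead of computing inline
- adds tests around the extracted pricing rules
```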
https://t.co/8e9p3x0zb0
https://t.co/KrOvHJPMXg
https://t.co/rnWpApDrTx
https://t.co/R7tAV3b8rx
A brief analysis and comparison of the CSS for Twitter's PWA vs Twitter's legacy desktop website. The difference is dramatic and I'll touch on some reasons why.
Legacy site *downloads* ~630 KB CSS per theme and writing direction.
6,769 rules
9,252 selectors
16.7k declarations
3,370 unique declarations
44 media queries
36 unique colors
50 unique background colors
46 unique font sizes
39 unique z-indices
https://t.co/qyl4Bt1i5x
PWA *incrementally generates* ~30 KB CSS that handles all themes and writing directions.
735 rules
740 selectors
757 declarations
730 unique declarations
0 media queries
11 unique colors
32 unique background colors
15 unique font sizes
7 unique z-indices
https://t.co/w7oNG5KUkJ
The legacy site's CSS is what happens when hundreds of people directly write CSS over many years. Specificity wars, redundancy, a house of cards that can't be fixed. The result is extremely inefficient and error-prone styling that punishes users and developers.
The PWA's CSS is generated on-demand by a JS framework that manages styles and outputs "atomic CSS". The framework can enforce strict constraints and perform optimisations, which is why the CSS is so much smaller and safer. Style conflicts and unbounded CSS growth are avoided.
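As a rough sketch of the idea (class names and selectors are invented here for illustration; this is not Twitter's actual generated output), atomic CSS replaces repeated component-scoped rules with tiny single-purpose classes that every component shares:
```css
/* Hand-written, component-scoped CSS: the same declarations get repeated per component */
.tweet-card   { display: flex; padding: 8px; color: #14171a; }
.profile-card { display: flex; padding: 8px; color: #14171a; }

/* Atomic CSS: one small class per declaration, generated once and reused everywhere */
.flex { display: flex; }
.p-8  { padding: 8px; }
.text { color: var(--color-text); } /* theming via a variable instead of a second stylesheet */
```
Because each declaration exists only once, new components reuse existing classes rather than adding new rules, which is why the PWA's declaration count (757) stays so close to its unique-declaration count (730).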
The entire discussion around Facebook’s disclosures of what happened in 2016 is very frustrating. No exec stopped any investigations, but there were a lot of heated discussions about what to publish and when.
In the spring and summer of 2016, as reported by the Times, activity we traced to GRU was reported to the FBI. This was the standard model of interaction companies used for nation-state attacks against likely US targets.
In the spring of 2017, after a deep dive into the Fake News phenomenon, the security team wanted to publish an update that covered what we had learned. At this point, we didn’t have any advertising content or the big IRA cluster, but we did know about the GRU model.
This report went through dozens of edits as different equities were represented. I did not have any meetings with Sheryl on the paper, but I can’t speak to whether she was in the loop with my higher-ups.
In the end, the difficult question of attribution was settled by us pointing to the DNI report instead of saying Russia or GRU directly. In my pre-briefs with members of Congress, I made it clear that we believed this action was GRU.
The story doesn’t say you were told not to... it says you did so without approval and they tried to obfuscate what you found. Is that true?
— Sarah Frier (@sarahfrier) November 15, 2018
Machine translation can be a wonderful translation tool, but its uses are widely misunderstood.
Let's talk about Google Translate, its current state in the professional translation industry, and why robots are terrible at interpreting culture and context.
Straight to the point: machine translation (MT) is an incredibly helpful tool for translation! But just like any tool, there are specific times and places for it.
You wouldn't use a jackhammer to nail a painting to the wall.
Two factors are at play when determining how useful MT is: language pair and context.
Certain language pairs are better suited for MT. Typically, the more similar the grammar structure, the better the MT will be. Think Spanish <> Portuguese vs. Spanish <> Japanese.
No two MT engines are the same, though! Check out how human professionals ranked their choice of MT engine in a Phrase survey:
https://t.co/yiVPmHnjKv
When it comes to context, the first thing to look at is the type of text you want to translate. Typically, the more technical and straightforward the text, the better a machine will be at working on it.
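For example, a plainly worded instruction such as "Press and hold the power button for ten seconds" tends to come through MT largely intact, while a slogan built on wordplay or cultural references is likely to be rendered literally and lose its meaning.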