I was wondering if someone was going to ask. It's the most bizarre aspect of code reviews at Google.
And "Readability" doesn't mean you are good at a language, it means you are good at it in the way Google uses it. C++ readability is the poster child of this. Borgcron, not so much.
The bug is never the interesting part. The follow-on questions are where the data is. How did you find it? How did you fix it? What made it memorable? Did it change the way you code?
I use a variant, "What's the most memorable bug you've fixed?" - and I use it as an indicator of maturity to distinguish an L3 SWE from an L5+ SWE (Google levels).
First, there is the time-in-field aspect. Simply being in the field for a long time increases the amount of time you have to encounter a sleep-depriving bug.
It can show tenacity. How did they find it? What did they have to do to reproduce it? Was it in prod, test, or dev? etc.
It can show maturity. Why did it pass test? What tests were introduced to detect it? Was it a new class of bug that required new testing? Were you able to add lint rules to detect it? Did you ensure it was pushed properly to prod and do proper follow-up?
It can show autonomy. Did you update the testing procedures or just post a bug and hope the QA team fixed it? Did you meet with devops and share info on how to detect and mitigate it? Did you update the playbook at least?
So many possible places to dig in to get the "hire" when the default answer is "no hire". And if you cannot find any, then that's confirmation of the default answer.
Memegen is unbearably whiny now, but it doesn't feel like that's because Google hires whiners. It feels more like valid employee feedback, accurately capturing valid worker sentiment toward shitty, randomizing leadership.
And "Readability" doesn't mean you are good at a language, it means you are good at it in the way Google uses it. C++ readability is the poster child of this. Borgcron, not so much.