
On-the-job observations done by someone with way more experience.

The KPIs that get pushback are indicators of whatever you want them to be. They’re just as indicative of poor government planning or shitty parents.



> On-the-job observations done by someone with way more experience

This feels like it puts a potential hard cap on quality growth: it discourages shake-ups or experimentation that might improve education but wouldn’t please the old guard for one reason or another, and it discourages alternative class styles that the judge doesn’t approve of.

Both of those seem like potentially serious problems in education, given that its structure has, with few exceptions, been effectively stagnant over the last several hundred years. You may therefore be mistaking “evaluating success at implementing the widely accepted method” for an “evaluation of quality”.


The institution of widespread public education that looks anything like what we have now isn’t something I’d describe as having meaningfully existed for hundreds of years, let alone as having been stagnant that whole time.


I would. Most modern systems of education have their roots in 1748, and are either derived from or inspired by reforms to the Prussian educational system that guaranteed free and compulsory elementary education for the full populace between the ages of 5 and 14, taught by secular professional teachers.

https://en.wikipedia.org/wiki/Prussian_education_system

If you mean the university and college levels, they have interesting differences from the 18th century, but are recognizably similar in basically everything besides cost and the curriculum differences we'd expect from changes in societal needs, technical advancement, and shifting interests.


I wouldn’t date the current system prior to one-room schoolhouses becoming uncommon. That’s an enormous change.

If we’re going back farther than that, then I’m really having a hard time seeing where the stagnation comes in. I don’t agree with that even past that point (gifted education alone is only just now starting to develop into something halfway useful, and that’s a very recent change in just one small part of primary and secondary education), but if we’re going farther back, then… what?


Didn’t grades only start around 1900?

Letter grades are an 1897 invention and only at one college to start with: https://studysoup.com/blog/uncategorized/history-of-the-lett...


The structure you're seeing allows for plenty of freedom per classroom and per institution. It's been stagnant in exactly the same ways automotive design has been stagnant, and for largely the same reasons.

Your line of thinking makes sense, and your questions were more or less answered last century. I think this would be a useful conversation to have with a GPT...

> You therefore may be mistaking “evaluating the success at implementing the widely accepted method” for an “evaluation of quality”.

because that conjecture alone is an entire topic of study in several disciplines.


Oh good, I was afraid this was gonna go into cuckoo territory.

Yep, this is what works. All the attempts to turn it into a simple spreadsheet based on something you can easily measure from an office across the state have been so flawed they’re very nearly useless, if not actively harmful. We keep doing it anyway because the people making those calls either don’t understand the field, or do but don’t care because they want to implement bad ideas for political reasons (trend-following to avoid criticism is huge, for one thing).


On-the-job observations are very easy to subvert, though. It happened a few times at my high school in the Soviet Union. At least twice, the teacher prepped "good" students a few days in advance: I will ask you this question, and this is the answer you are supposed to give, so make sure to memorize it well. So the class is engaged and gives good answers to hard questions, except this is all a Potemkin village deal!


Any human assessment can be manipulated. However, I think a skilled and trained observer has a better chance of detecting that than any standardized, blind mechanism.


Practically speaking, your interview would require applicants to apply claimed knowledge to a novel context.

A simple method would be to identify interesting problems from recent PRs and ask them to walk you through their approach to discovery and solution. It's a problem they should be familiar with, but in a new shape and with different labels. Let's see what they come up with.


Same for the ubiquitous "story points per sprint", "number of hours sitting at the desk", or "lines of code written" measures of productivity for engineering teams: because they're easy to measure and provide some kind of number that can be reported, they get used, despite being completely useless and actively harmful.

Your point about people who don't understand the process using these measures because it suits their purposes is also relevant. Productivity measures can be a political tool.


So the equivalent for software engineering is peer reviews?


I'd say our equivalent might be a staff-level engineer sitting in on a sprint. Maybe less tense. You're being evaluated, but any resulting PIP is practical advice and comes with a fucking rubric.



