Not sure I agree. The problem with overfitting is fitting too closely to the data points at hand, but you might still be measuring the right thing, as discussed in other posts here.
The problem with Goodhart's law is, as I've always taken it, closer to the Lucas critique in economics than to the bias-variance trade-off in machine learning. Namely, when it comes to human behavior, structural relations that are very real and present in the training data may break down once you put pressure on them for control purposes.
When you use machine learning to, say, detect skin cancer, you might accidentally learn the markers put into the images to highlight the cancerous region rather than the properties of the skin itself - that's overfitting. But the skin cells themselves don't care - they won't alter their behavior whether you detect them correctly (and remove them) or not. If you use a model to find a relation between some input and some human behavioral output, humans may very well start to change their responses once you start making changes based on it. The entire relation breaks down, even if you've measured it correctly beforehand, because people, unlike particles, have their own interests.
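To make that concrete, here's a purely synthetic sketch (the numbers and the "marker" feature are invented for illustration, not taken from any real dataset): a classifier that latches onto an annotation artifact looks great on its training data and collapses once the artifact is gone - but nothing in the skin "responded" to the model.

```python
# Synthetic sketch of the "learning the marker" failure: a feature that is
# spuriously correlated with the label in training carries no signal once
# the marker is absent at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)

signal = y + rng.normal(0, 2.0, n)   # weak, genuine "skin property" signal
marker = y + rng.normal(0, 0.1, n)   # annotation marker: near-perfect proxy
X_train = np.column_stack([signal, marker])

clf = LogisticRegression().fit(X_train, y)

# At deployment the highlighting marker is gone (pure noise instead).
y_new = rng.integers(0, 2, n)
signal_new = y_new + rng.normal(0, 2.0, n)
marker_new = rng.normal(0, 0.1, n)   # no longer tied to the label
X_deploy = np.column_stack([signal_new, marker_new])

print("train accuracy: ", clf.score(X_train, y))
print("deploy accuracy:", clf.score(X_deploy, y_new))
```

The model is wrong about what matters, so its accuracy drops off a cliff - but the underlying relation between skin properties and cancer hasn't changed at all, which is exactly the contrast with the Goodhart case.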
Note that the data points you train on are part of the training objective. If you are using different data at test time than at training time, then you are measuring the wrong thing during training, just as if you had used a different loss function at training time.
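To spell that out in symbols (my notation, not anything from the post): what you actually minimize is an expectation over the training distribution,

$$
\mathcal{L}_{\text{train}}(\theta) = \mathbb{E}_{(x,y)\sim p_{\text{train}}}\!\left[\ell(f_\theta(x), y)\right],
\qquad
\mathcal{L}_{\text{test}}(\theta) = \mathbb{E}_{(x,y)\sim p_{\text{test}}}\!\left[\ell(f_\theta(x), y)\right],
$$

and these generally differ whenever $p_{\text{train}} \neq p_{\text{test}}$, even though the per-example loss $\ell$ is identical in both. The data distribution is as much a part of the objective as the loss function is.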
Also -- as you say, feedback loops and non-stationarity make everything more complex, and are ubiquitous in the real world! But in machine learning we also see overfitting phenomena in systems with feedback loops -- e.g. in reinforcement learning or robotics, where the system changes depending on the agent's behavior.
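Here's a toy, entirely made-up illustration of that kind of feedback (not from any real robotics setup): a two-armed bandit whose payoff erodes the more an arm is pulled. The value the agent "measured" early on stops describing the system precisely because the agent acted on it.

```python
# Toy feedback loop: the environment adapts to the agent's behavior, so the
# agent's learned value estimates lag behind the system they describe.
import numpy as np

rng = np.random.default_rng(0)
base_reward = np.array([1.0, 0.8])   # arm 0 initially looks better
pulls = np.zeros(2)
estimates = np.zeros(2)

for t in range(500):
    arm = int(np.argmax(estimates)) if t >= 2 else t % 2  # greedy after one pull each
    # Feedback: each pull erodes that arm's future reward.
    reward = base_reward[arm] - 0.01 * pulls[arm] + rng.normal(0, 0.05)
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]  # running mean

print("pull counts:        ", pulls)
print("estimated values:   ", np.round(estimates, 2))
print("current true rewards:", np.round(base_reward - 0.01 * pulls, 2))
```

The running-mean estimates stay optimistic relative to the current true rewards, because the estimates were built from a system the agent has since changed.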
Cool that you're responding here. Well, regarding robotics, I'm sure there are all sorts of problems when it comes to training models, but I'm not sure that Goodhart's law is one of them, unless you can give a concrete example. It's really geared towards social problems. Sure, some natural systems may also exhibit the kind of adaptive response that leads to the breakdown of structural relations (e.g. the cancer cells mentioned before may evolve to avoid detection by the AI), but that happens on completely different timescales.