> Planned economies don’t work great beyond small scopes.
This is a categorically false statement. The Soviets turned the Russian empire from an agricultural backwater with a minority literate populace, into an advanced industrialised state, scientific leader and economic superpower that was on par with the US for decades, a transformation that took place within a span of merely 20~30 years. Planned economies have been demonstrated to have extremely strong potential. Of course, a planned economy is only as good as its planning, and humans are fallible; we have yet to work out a solution to that particular issue.
The USSR was never economically or scientifically on par with the US. They managed to be relatively competitive in some endeavours by concentrating massive shares of their people and resources on them (industrialization, space, the military), often with brutal violence. The US was militarily competitive, often with more advanced equipment, at a fraction of the economic cost - and did it while maintaining one of the highest living standards in the world (and often got the benefit of military tech bleeding into the civilian sector, like computers).
Magnitogorsk, a massive Soviet city built around a steel mill, was essentially built with American expertise (this whole documentary is extremely fascinating on how sophisticated central planning got and how the USSR ground to a halt): https://youtu.be/h3gwyHNo7MI?t=1023
This is not to say that any planning is bad, but if you have a central state trying to control everything from how many belt buckles to make down to how far cab drivers should drive each year, you're going to end up with a bureaucratic nightmare. Centrally planning everything becomes a logistical nightmare, especially when trying to innovate at the same time. You can't plan around the output of innovation because the planners are often far removed from everything. A planner would probably try to "plan" how to breed a faster horse instead of building a car, for example.
I'm reminded of an interview I once saw with Gorbachev. He was talking about how he had just been promoted into the central committee, essentially the highest ring of the Soviet state. He had just made it to the top, and one of his first meetings was dealing with the issue of persistent shortages of women's panty hose. He was flabbergasted that he was at the top rung of a country that could blast people into space but couldn't deal with basic consumer goods availability.
Also, many countries have industrialized just as fast without central planning, particularly several Asian ones. True, they did centrally set goals and use various carrot-and-stick initiatives, but otherwise they let the market dictate most of the rest.
> The USSR was never economically or scientifically on par with the US
We can call it solidly #2 if you prefer, but going from a failed empire to #2 in the world is still a real achievement. To be clear, I was not making a statement on whether I think central planning is superior; I was merely contesting the claim that it cannot work at scale, which I find to be clearly untrue. Whether it's inferior or not, we have an impressive example indicating that success is at least possible. I would also expect the modern era to offer a better opportunity for central planning than the past did, if any nation wanted to give it another go, because significantly more well-informed decisions could be made with the degree of data and instant communication we have available today. That said, I certainly wouldn't be keen to advocate for it in my own country, because I don't much like the idea of giving the state absolute control in an era with a level of surveillance the KGB could not have dreamed of.
You can't even measure it cleanly. It was so isolated and its currency wasn't even convertible, but by most measures Japan and West Germany had larger economies with far, far, far better living standards. Go to per-capita level equivalents and you'd be hard pressed to find it higher than any western developed country. Even economic basket-case countries in South America often had better living standards.
North Korea is sending things into space. You can't measure a country on its isolated accomplishments, even if they're impressive.
Many Asian countries industrialised with what was essentially central planning. Not in the literal "one government decision maker" sense, but via a handful of extraordinarily large mega-corporations operating as central planners themselves.
The big five chaebol in South Korea, for example, orchestrate more than half the economic activity in the country, and that's down from what it was before the turn of the century.
Similarly, Japan was heavily industrialised under the zaibatsu, and they effectively ran the entire economy of Japan through the imperial era. It was only during the American occupation that the zaibatsu were broken up, and afterwards the keiretsu took their place as the dominant drivers and orchestrators of economic activity.
This isn't to say that central planning or extremely heavily integrated planning and operations are a good thing for an economy or remotely healthy in the long term, just that they were pretty prevalent in many major cases of rapid industrialization in Asia, regardless of whether they came in a socialist or capitalist flavor.
Virtually any country that achieves political stability and effective institutions experiences rapid development in the modern world with open knowledge and trade networks.
There is nothing special central planning achieves in that regard that a laissez-faire economy would not also achieve at that low level of development.
That's quite a misattribution of success. The Russian empire was politically stable throughout the industrial revolution era, and yet lagged behind other great powers substantially. The Soviet revolution, of course, ushered in a famously politically unstable era with regular, massive purges. Meanwhile, there are many relatively politically stable countries that never managed to become especially industrialised over a period of many decades even up to the modern day, for example Mexico.
There's also a difference between "any country can rapidly develop", and what the USSR did, reaching a superpower status only two countries in the world achieved. For example, the USSR produced 80,000 T-34 medium tanks to the US's 50,000 Sherman tanks and Germany's 8500 PzIV tanks, and it was superior to both. That is a ridiculous feat, and it happened in the middle of a massive invasion that forced the relocation of huge swathes of industry to boot. The USSR was also the first to most space achievements, and it was second to develop nuclear weapons. The USSR did not just catch up to "any industrialised nation", it surpassed them all completely other than the US.
> The Russian empire was politically stable throughout the industrial revolution era, and yet lagged behind other great powers substantially.
The Russian empire was (finally) developing industrially at the outbreak of WW1. Its industrialization was retarded by its hanging onto serfdom (including in practice after it was technically ended) far longer than the rest of Europe, which prevented people from moving into cities and working in factories.
> There's also a difference between "any country can rapidly develop", and what the USSR did, reaching a superpower status only two countries in the world achieved. For example, the USSR produced 80,000 T-34 medium tanks to the US's 50,000 Sherman tanks and Germany's 8500 PzIV tanks
The US sent over 400,000 trucks and jeeps to Russia (on top of building many more for itself and the other allies), built out a massive navy and merchant marine, built 300,000 planes of various types (almost as many as the rest of the Allies and the Axis combined), supplied massive amounts of food, energy, etc., and researched and built the atomic bomb (and didn't steal it). They did this while fighting a war on two fronts and maintaining a relatively good living standard (it's a fair argument to make that they weren't dealing with a direct invasion threat, though). They also had one of the best military supply chains in the world, one that persists to this day.
The superiority of the T-34 is overplayed. It was a decent tank that was good enough to build at scale, but the Sherman was more survivable and just as reliable.
The Soviet Union went to massive amounts of trouble to gloss over lend-lease aid for propaganda reasons. Russian blood absolutely won the war in Europe, but the USSR had massive amounts of help.
>industrialised over a period of many decades even up to the modern day, for example Mexico.
Pretty sure Mexico's GDP per capita was higher for quite a while, and their stagnation lay precisely in improper government interference that closed off the economy with protectionist policy rather than embracing free trade. Nor did they have inclusive institutions or really stable political situations.
The thing about the USSR, just like China, India and the USA, is that once economic growth sets in, their large populations compared to existing European states would obviously lead to much larger economies of scale and thus GDP growth. But of course, even given that large absolute growth, living standards never did converge with Western Europe's. That speaks more to how central planning stagnated things.
This is all false; I guess you've never been to the Soviet Union or russia (that country doesn't deserve a capital R). Central planning is dysfunctional at its core, ignoring the subtleties of smaller parts. It was also historically always done in eastern Europe hand in hand with corruption, nepotism and incompetence, where apparatchiks held most power due to going deepest in ass kissing and other rectal speleology hobbies, not because they were competent.
I come from one such country. After WWII, there was Austria and there was the eastern bloc to compare. Austria was severely damaged and had a much lower GDP than us. It took a mere 40 years of open market vs centrally planned economy to see absolutely massive differences when the borders reopened and people weren't shot anymore for trying to escape - we didn't have proper food in the shops ffs. Exotic fruits came a few times a year, rotten or unripe. Even stuff grown in our own country was often lacking completely. Any product, e.g. electronics or cars, was vastly subpar to western ones while massively more costly (and the design was often plain stolen from western companies).
Society as a whole made it because almost everybody had a big garden to complement everything basic missing in the shops. The little meat you could buy was of the worst quality, laden with amounts of toxic chemistry that wouldn't be acceptable in Bangladesh.
> The proximate cause of the famine was the infection of potato crops by blight (Phytophthora infestans)[14] throughout Europe during the 1840s.
Vs.
> While most scholars are in consensus that the main cause of the famine was largely man-made, it remains in dispute whether the Holodomor was intentional, whether it was directed at Ukrainians, and whether it constitutes a genocide, the point of contention being the absence of attested documents explicitly ordering the starvation of any area in the Soviet Union. Some historians conclude that the famine was deliberately engineered by Joseph Stalin to eliminate a Ukrainian independence movement. Others suggest that the famine was primarily the consequence of rapid Soviet industrialisation and collectivization of agriculture.
You could've read a bit more of the article. Proximate cause != ultimate cause.
> Initial limited but constructive government actions to alleviate famine distress were ended by a new Whig administration in London, which pursued a laissez-faire economic doctrine, but also because some assumed that the famine was divine judgement or that the Irish lacked moral character,[20]
I truly hate how this buzzword is misused with regard to the EU. Voluntarily delegating authority is not the same as losing sovereignty. If you can un-delegate the authority at your own prerogative, you have not lost sovereignty. If the UK, for example, had genuinely lost its sovereignty, it would not have been able to voluntarily withdraw from its participation in the EU.
I would rather say that the term “sovereignty” is multifaceted. We have the concept of popular sovereignty, which means that political power emanates from the people and all other sovereignty is delegated.
However, there is also a use of the term “sovereignty” in the sense of self-determination over one's own state structure and the ability to ward off external interference. When a state transfers certain sovereign rights to the EU, this is more than just delegation. In German constitutional law, for example, this means that the transfer of such rights to the EU has constitutional status.
If there is a lawsuit before the German Federal Constitutional Court (Bundesverfassungsgericht) that challenges an EU law or regulation, the court first examines whether the EU law in question regulates something that actually falls within the EU's area of responsibility or whether it is something over which Germany has reserved its sovereignty.
The most prominent example of such a ruling is the PSPP (Public Sector Purchase Programme) case from 2020, where the German Federal Constitutional Court ruled that another ruling from the Court of Justice of the European Union (CJEU) regarding the European Central Bank (ECB) program of purchasing government bonds was not binding in Germany because the CJEU had exceeded its judicial mandate and violated the sovereignty of the German Bundestag. The case was "solved" when the European Central Bank provided the Bundestag with additional documentation regarding the program and the Bundestag concluded that everything was in order.
In this decision the term "sovereignty" is explicitly used to outline the case: "In particular, these [complaints] concerned the prohibition of monetary financing of Member State budgets, the monetary policy mandate of the ECB, and a potential encroachment upon the Member States' competences and sovereignty in budget matters."
The decision later concludes:
"This standard of review [in the ruling of the CJEU] is by no means conducive to restricting the scope of the competences conferred upon the ECB, which are limited to monetary policy. Rather, it allows the ECB to gradually expand its competences on its own authority; at the very least, it largely or completely exempts such action on the part of the ECB from judicial review. Yet for safeguarding the principle of democracy und upholding the legal bases of the European Union, it is imperative that the division of competences be respected."
> The FSFE's mission, as I understand it, is to support and promote free software. But as far as I know, Twitter has never been a friend of free software, nor has it been supportive of other related values the article mentions, like 'privacy', 'transparency', 'autonomy', 'data protection', etc. It has always been a non-free, centralised network which cared about profit more than user rights, and engagement more than fostering civil discourse.
Indeed, and FSFE writes:
> The platform never aligned with our values
> a space we were never comfortable joining, yet one that was once important for reaching members of society who were not active in our preferred spaces for interaction
And then says in no unclear terms what changed:
> Since Elon Musk acquired the social network [...] the FSFE has been closely monitoring the developments of this proprietary platform
> Over time, it has become increasingly hostile, with misinformation, harassment, and hate speech more visible than ever.
> an algorithm that prioritises hatred, polarisation, and sensationalism, alongside growing privacy and data protection concerns, has led us to the decision to part ways with this platform.
You cherry-picked two words "direction and climate" from the article and criticised them for taking an ambiguous political stance, but there is nothing ambiguous about the actual announcement and they clarify their exact motivation for leaving multiple times.
The problem is that 'what changed' is hardly related to why they joined Twitter in the first place. Becoming 'increasingly hostile' and prioritising 'hatred, polarisation, and sensationalism' (more than before) doesn't really contradict or prevent you from 'reaching members of society who were not active in [y]our preferred spaces for interaction'. Like I wrote, X is still popular, there are still people you can communicate with about your mission. The original logical (and given) reason for being on X is still just as valid.
And I didn't criticise them for taking an ambiguous stance. On the contrary, I remarked they seem to be taking a rather unambiguous political stance (one opposed to that of X's new leadership). What I criticised was their not being upfront about this and instead giving explanations which don't really add up for me (for reasons restated above).
I quoted only short parts to avoid making my comment appear twice as long, but please let me know if you found the way I did so to be misleading in some way.
> The problem is that 'what changed' is hardly related to why they joined Twitter in the first place.
Does it have to be? The original calculus was "unpleasantness of using unfree software vs. benefit of reaching more people". The calculus has changed to "unpleasantness of using unfree software + unpleasantness of encountering hate speech vs. benefit of reaching more people". In other words, what used to be "1 + -1 = 0" has become "1 + -2 = -1" for the FSFE. As humans, they are free to consider other reasons than their primary mission alone when determining whether the platform is still one they find to be worthwhile to use.
> What I criticised was their not being upfront about this
I really don't get how your impression is that they are not upfront about this, and yes, I found your comment to have been quite misleading, having skimmed the comments before reading the article. The very first sentence in the article starts with "Since Elon Musk...". What part of this would you have liked them to be more upfront about?
Sort of? For an individual, there's obviously a ton of personal factors that play a role in decision-making. For an organisation with a stated mission, though, I should expect them to make their decisions based on what best aligns with said mission, or another set of priorities they're bound to follow. This is important for knowing if one should support the organisation and if their values are aligned. How can one trust an organisation which only ever claims to fight for Y, but then in practice randomly throws Z, W, and U into the mix, as they feel like it?
As I wrote, the content they criticise X for is the kind of content I recall them being much more indifferent about in the past, so seeing this come up as their main reason for leaving this platform, with no indication of any internal re-evaluation of priorities having happened, is rather out of the blue.
> The very first sentence in the article starts with "Since Elon Musk..."
… and goes on to tell us they have been monitoring it; found it increasingly hostile; that they originally joined to interact with people, promote free software and alternative networks; that the platform feeds hatred, polarisation and sensationalism and grows privacy concerns; and finally that they're leaving.
> What part of this would you have liked them to be more upfront about?
What they suddenly have a problem with and why. As I said, what they actually wrote doesn't add up to this for me. Hostile environment, misinformation, harassment? They didn't seem to care much or see it as hindering their mission before. Hatred, polarisation, sensationalism? Same thing, and it doesn't necessarily hinder their activity on the network. Data protection, privacy concerns? The network has always been non-free, for-profit and centralised. Interacting with people and promoting free software? You literally can still do that.
They say why they originally came, but those reasons are still valid today. They say what they dislike about their platform, but it's either irrelevant to their mission or they haven't disliked it so much before. So what they say does not explain their decision. It doesn't explain the logic behind it. Trying to use it as an explanation doesn't really make sense with their supposed mission.
I can only guess the actual logic is more like 'we have other values we care about more now, which the platform now goes against, and in our current political climate we want to more noticeably stand at the "right side" and gain favour with our primary audience over there'. This, for example, could be a sensible explanation. But they chose not to give one.
Apparently, controlling what people are allowed to say "in the name of good" aligns with the FSFE's values. I know enough history to know what that means.
As far as I can tell, there was no actual low-level optimization being done. In fact, it appears they did not even think to benchmark before committing to 130 GB of bloat.
> Further good news: the change in the file size will result in minimal changes to load times - seconds at most. “Wait a minute,” I hear you ask - “didn’t you just tell us all that you duplicate data because the loading times on HDDs could be 10 times worse?”. I am pleased to say that our worst case projections did not come to pass. These loading time projections were based on industry data - comparing the loading times between SSD and HDD users where data duplication was and was not used. In the worst cases, a 5x difference was reported between instances that used duplication and those that did not. We were being very conservative and doubled that projection again to account for unknown unknowns.
This reads to me as "we did a google search about HDD loading times and built our game's infrastructure around some random Reddit post without reasoning about or benchmarking our own codebase at any point, ever".
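To be concrete about what I mean by benchmarking: here's a rough sketch of the kind of first-pass measurement I'd expect before committing to duplication. It's Python with hypothetical file names and sizes (not their actual pipeline), and you'd want to flush the OS page cache between runs and point it at an actual HDD for the numbers to mean anything. It just compares one packed, contiguous read against many scattered small reads of the same data:

    # Minimal sketch: contiguous "duplicated" layout vs. scattered "deduplicated" layout.
    # Caveat: run on the target drive and clear the OS page cache (or reboot) between
    # the write and read phases, otherwise you're benchmarking RAM, not the disk.
    import os
    import random
    import time

    CHUNK = 4 * 1024 * 1024   # 4 MiB per simulated asset (hypothetical size)
    COUNT = 256               # ~1 GiB per layout, ~2 GiB of test data total

    def write_test_files(root: str) -> None:
        os.makedirs(root, exist_ok=True)
        payload = os.urandom(CHUNK)
        # One big packed file (the "duplicated, contiguous" layout)...
        with open(os.path.join(root, "packed.bin"), "wb") as f:
            for _ in range(COUNT):
                f.write(payload)
        # ...and many small files (the "deduplicated, scattered" layout).
        for i in range(COUNT):
            with open(os.path.join(root, f"asset_{i:04d}.bin"), "wb") as f:
                f.write(payload)

    def time_packed(root: str) -> float:
        start = time.perf_counter()
        with open(os.path.join(root, "packed.bin"), "rb") as f:
            while f.read(CHUNK):
                pass
        return time.perf_counter() - start

    def time_scattered(root: str) -> float:
        order = list(range(COUNT))
        random.shuffle(order)  # simulate non-sequential access across assets
        start = time.perf_counter()
        for i in order:
            with open(os.path.join(root, f"asset_{i:04d}.bin"), "rb") as f:
                f.read()
        return time.perf_counter() - start

    if __name__ == "__main__":
        root = "bench_data"  # point this at the HDD you actually care about
        write_test_files(root)
        print(f"packed:    {time_packed(root):.2f}s")
        print(f"scattered: {time_scattered(root):.2f}s")

Even something this crude, run on target hardware, would have told them more than an industry-average multiplier doubled "for unknown unknowns".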
LLMs will never get good enough that no one can tell the difference, because the technology is fundamentally incapable of it; nor will they ever completely disappear, because the technology has real use cases that can be run at a massive profit.
Since LLMs are here to stay, what we actually need is for humans to get better at recognising LLM slop, and to stop allowing our communication spaces to be rotted by slop articles and slop comments. It's weird that people find this concept objectionable. It was historically a given that if a spambot posted a copy-pasted message, the comment would be flagged and removed. Now the spambot comments are randomly generated, and we're okay with it because it appears vaguely-but-not-actually-human-like. That conversations are devolving into this is a failure of HN moderation for allowing spambots to proliferate unscathed, not of the users calling out the most blatantly obvious cases.
Do you think the original comment posted by quapster was "slop" equivalent to a copy-paste spam bot?
The only spam I see in this chain is the flagged post by electric_muse.
It's actually kind of ironic you bring up copy-paste spam bots. Because people fucking love to copy-paste "ai slop" on every comment and article that uses any punctuation rarer than a period.
> Do you think the original comment posted by quapster was "slop" equivalent to a copy-paste spam bot?
Yes: the original comment is unequivocally slop that genuinely gives me a headache to read.
It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell.
Humans don't needlessly use a colon in every single sentence they write: abusing punctuation like this is actually really fucking irritating.
Of course, it goes beyond the punctuation: there is zero substance to the actual output, either.
> What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals.
> Least privilege, token scoping, and proper isolation are friction in the sales process, so they get bolted on later, if at all.
This stupid pattern of LLMs listing off jargon like they're buzzwords does not add to the conversation. Perhaps the usage of jargon lulls people into a false sense of believing that what is being said is deeply meaningful and intelligent. It is not. It is rot for your brain.
"it's not just x, it's y" is an ai pattern and you just said:
>"It's not just "using any punctuation rarer than a period": it's the overuse and misuse of punctuation that serves as a tell."
So, I'm actually pretty sure you're just copy-pasting my comments into chatgpt to generate troll-slop replies, and I'd rather not converse with obvious ai slop.
Congratulations, you successfully picked up on a pattern when I was intentionally mimicking the tone of the original spambot content to point out how annoying it was. Why are you incapable of doing this with the original spambot comment?
This article is an advertisement for what appears to be a networking service, something which is not really made clear until near the end.
The article is self-serving in identifying the solutions ("do things related to the service we offer, and if that doesn't work, buy our service to help you do them better"), but it is a subject worth talking about, so I will offer my refutation of their analysis and solution.
The first point I'd like to make is that while the hiring market is shrinking, I believe it was long overdue and that the root cause is not "LLMs are takin' our jerbs", but rather the fact that for probably the better part of two decades, the software development field has been plagued by especially unproductive workers. There are a great deal of college graduates who entered the field because they were promised it was the easiest path to a highly lucrative career, who never once wrote a line of code outside of their coursework, who then entered a workforce that values credentialism over merit, who then dragged their teams down by knowing virtually nothing about programming. Productive software engineers are typically compensated within a range of at most a few hundred thousand dollars, but productive software engineers generally create millions in value for their companies, leading to a lot of excess income, some of which can be wasted on inefficient hiring practices without being felt. This was bound for a correction eventually, and LLMs just happened to be the excuse needed for layoffs and reduced hiring of unproductive employees[1].
Therefore, I believe the premise that you need to focus entirely on doing things an LLM can't -- networking with humans -- is deeply faulty. This implies that it is no longer possible to compete with LLMs on engineering merit, and I could not possibly disagree more. Rather than following their path forward, which emphasises only networking, my actual suggestion to prospective junior engineers is: build things. Gain experience on your own. Make a portfolio that will wow someone. Programming is a field that doesn't require apprenticeship. There is not a single other discipline that has as much learning material available as software development, and you can learn by doing, seeing the pain points that crop up in your own code and then finding solutions for them.
Yes, this entails programming as a hobby, doing countless hours of unpaid programming for neither school nor job. If you can't do that much, you will never develop the skills to be a genuinely good programmer -- that applied just as much before this supposed crisis, because the kind of junior engineer who never codes on their own time was not being given the mentorship to turn into a good engineer, but rather was given the guidance to turn them into a gear that was minimally useful and only capable of following rote instructions, often poorly. It is true that the path of the career-only programmer who goes through life without spending their own time doing coding is being closed off. But it was never sustainable anyways. If you don't love programming for its own sake, this field is not likely to reward you going forward. University courses do not teach nearly effectively enough to make even a hireable junior engineer, so you must take your education into your own hands.
[1] Of course, layoff processes are often handled just as incompetently as hiring processes, leading to some productive engineers getting caught in the crossfire of decisions that should mostly hurt unproductive engineers. I'm sympathetic to people who have struggled with this, but I do believe productive engineers still have a huge edge over unproductive engineers and are highly likely to find success despite the flaws in human resource management.
Hey there, I'm the developer of the app along with my wife, the author of the post. We quit our jobs over a year ago to work on a problem we care about, and helping people connect to their goals through people is what we landed on. That being said, we spend most of our time on the tech! And I think your advice is spot on, that a portfolio of projects really is THE MOST IMPORTANT THING. It's where I would tell people to start. But from there, connecting people to others who care about that portfolio is also important. I think a lot of technical people pay attention to the former and tend to ignore the latter. Which is me too! So rather than "this is the only true way", I hope it comes across as a potential piece of the puzzle to some people.
Thanks for giving it some thought and for your perspectives, they really help.
> This article is an advertisement for what appears to be a networking service, something which is not really made clear until near the end.
I have been seeing an uptick of articles on HN where someone identifies a problem, then amps it up a bit more and then tells you that they are the right ones to solve it for a fee.
These things should not be taken seriously and upvoted.
Full disclosure, I'm the author (although I didn't put the post up here on HN). Thanks for pointing out that I wasn't very clear in my CTA and maybe made it sound shady. That's not what I wanted to do, obviously.
It's just an app, not a service, that my husband and I built (and quit our jobs for) that has a generous free trial. (Technically, right now it's completely free because it's in early access, so if you never upgrade, you could use it for free forever.)
The CTA at the end was just in an effort to talk to more people (for free) and see how we can help and make our software better. I come from the DevOps world, and they always say you have to first know how to do something really well manually before you can automate it, and that's what we're trying to do by talking to people (for free).
The problem is that praying that someone stumbles upon your brilliant hobby projects and offers you a job is a terrible bet. Yes, you have to be good at software development, but being good at software development doesn't land you a job. Being good at software development and cutting through the noise gets you a job. Because even if all those laid-off people are incompetent, they're still applying for the same jobs you are, and it is very difficult to identify who's who.
So, from an individual's perspective, figuring out how to meet people who will help you sidestep the "unwashed masses" pile of applications is probably the next most important thing after technical competence (and yeah, ranking above technical excellence).
That's exactly what the portfolio is for. Having an actual body of work people can look at and within a couple of minutes of looking think "wow, this person will definitely be able to contribute something valuable to our project" will immediately set you apart from every applicant who has vague, unreliable credentials that are only extremely loosely correlated with competence, like university trivia. You do need to get as far as a human looking at your portfolio, which isn't a guarantee on any given application, but once you get that far your odds will skyrocket next to University Graduate #130128154 who may have happened to get human eyes on their application but has nothing else to set them apart.
In most countries, death by terrorist is at least an order of magnitude less likely than death by bee. Strangely, we do not seem to be on a campaign to lock all humans indoors to protect them from bees, nor have we declared a global war on beeism. These stats hold from before the modern surveillance regime, and so can hardly be credited to it. It's not actually a problem in particular need of urgent solving. Regular people are safe from terrorism, much safer than they are from most kinds of tragic accidents. What regular people are actually in danger of is losing all of their human rights to fearmongers, who constantly invoke terrorism to erode them further and further.
Bukele and Duterte did not rise out of an environment of terrorism, so I don't know why you thought it relevant to bring them up. I think it is really sad to see comments on HN of all places advocating that if we don't implement chat control we'll spiral into a lawless hellscape.
India saw 779 million dollars lost to cyber fraud in the first 5 months of 2025.
The degree of cyber fraud in India is beyond insane.
Also - funnily enough - Indian telecom companies are meant to be fined for every SIM card given out under false data. There is already meant to be a check that stops this.
Sincerely, you misunderstand what I am saying, or you didn't read until the end where I said that some level of terrorism is a price worth paying in my subjective judgment.
My point is that my subjective judgment counts for nothing, because the negative feedback loop that I described is a society-wide phenomenon beyond my control as an individual. Asking the majority of people to think the way you do about terrorism is somewhere between wishcasting and virtue signalling. It doesn't interrupt the causality behind the negative feedback loop, so it therefore fails to outline a path that can be trodden in the real world to achieve your desired vision of no surveillance.
I urge everyone to banish this mode of thinking which fixates on what "should" happen without first checking whether that desired end state is a possible world we can exist in once you factor in the second and third order effects beyond the control of any individual.
> Bukele and Duterte did not rise out of an environment of terrorism
Move your abstraction one level higher. They arose out of public safety concerns around murder and drugs and gangs. Those are not terrorism, but they fit under the same umbrella of public safety concerns that motivate regular people to demand authoritarian solutions.
I'm on the side of "some kings deserve credit", but I think:
>Much the same applies to moral advances, like other ideas they're produced by the zeitgeist rather than "made from whole cloth".
is a rather weak argument. Moral advances actually are "made from whole cloth". Morality is objective[1] and can be reasoned from first principles. For example, murder. Murder is not wrong because Yahweh says so. Murder is wrong because the murderer stands to gain virtually nothing, while the murdered loses everything. This discrepancy in gain vs. loss results in a massively net negative impact to society and is therefore objectively bad. However, there are other scenarios where killing someone results in a net positive (or at least less negative than the alternative) to society, for example self-defense against a criminal would-be murderer, and these cases we understand to not be murder.
People have been capable of complex reasoning for as long as we have history. Our predecessors had less information than us available to them, but they still had the same capacity for intelligence and there are plenty of examples of impressive reasoning performed by people thousands of years ago.
So talking about, say, slavery, particularly the exceptionally vile race-based slavery practiced by Americans... it did not take a zeitgeist to understand it was bad. Plenty of people were capable of reasoning about the absolute hypocrisy of the slave-owning founding fathers proclaiming all men born equal from the day America proclaimed its independence. The zeitgeist that ended slavery in America was enough people feeling compelled to take action rather than let the status quo be; even if you understand slavery is bad, it's easier to simply selfishly benefit from it, or even if you don't benefit from it, doing nothing is yet still awfully more appealing than fighting and dying in a civil war over it.
Under that lens, I will absolutely judge historical figures. The slave-owning founding fathers, for instance, are scum who should not be revered. They especially had the education and the experience of perceived tyranny, yet maintained and benefitted from a system they were perfectly capable of reasoning to be worse than the one they revolted against. In fact, they manufactured their own zeitgeist from scratch. If they had wanted to, they certainly could have made the abolition of slavery part of it.
[1] Stating "morality is objective" can come across as arrogant (it may be read as "my moral perspective is the objectively correct one"), so I want to elaborate a bit in a digression. Morality is objective, but not necessarily easy. There are many complex situations, reasoning is actually often quite challenging, and lack of information can confound attempts at reasoning. There are many cases where if you asked me if something was moral, my answer would be "I don't know" rather than baselessly asserting one way as objectively correct. However, many cases like the morality of race-based slavery are trivially easy to reason about, and we have a rich historical record of writing produced by people hundreds of years ago preserved showing they were capable of conducting this reasoning with the information available to them long before the zeitgeist that compelled action to end it.
I'm totally down with the arrogant "morality is objective" viewpoint. However, I don't think it can be reasoned from first principles: I think "from first principles" gives a bad smell to almost any reasoning. I see knowledge as web-like in structure, not hierarchical, and I see moral ideas as belonging to a separate realm where they're supported by other moral ideas. (Consider that "gain" entails values, which are moral.) Some of these ideas are basal urges, but that doesn't make them superior. So, I can't agree with holding historical figures at fault for their failure to arrive at a present-day state of morality by figuring everything out from first principles, because I do think it's something the culture does, gradually, as a group effort, with individuals considered "bad" only for failing to be up to speed by the standard of the time.
Incidentally, if they are to be blamed for failing to arrive at future morality by using the first-principle building blocks you suppose it to be made from, then so are we, and so are all future people, since morality is open-ended and there's always more to learn. We're all terribly guilty for not belonging to the infinitely far future, apparently.
Well, I suppose you can say "that whole society, in that place at that time, went down a morally wrong-headed path". I'm not very knowledgeable about Aztecs, for instance, but I believe they had some nasty traditions, as well as a cyclic world-view. Yet there must have been good Aztecs. (Even objectively, we have to consider things in context.)
If we were to leap 300 years into the future, I don't think it'd be very surprising what they look down on us for, presuming they've advanced in a logic-oriented direction and not, say, a relapse into purely religious doctrine, which is by no means a given to occur. We will certainly be condemned, at minimum, for our utterly inhumane treatment of animals, for our relentless exhaustion and destruction of natural resources, for our abuse of the scientific method to proclaim things as factual with studies that can't actually be replicated, and for much more.
Perhaps it would be surprising to some people who haven't thought much about it that we will likely be viewed poorly for wasting non-replenishable helium, necessary for advanced medical technology, on party balloons. But I don't think there is anything we do that we can't currently reason about being considered immoral for. I have absolutely zero doubt that George Washington would not be surprised to leap to 2025 and see someone condemning him for his slave ownership. There is nothing about living in the 1700s that would prevent him from reasoning that what he did was immoral, and indeed, many people in the 1700s did reason that.
Cultural adoption of morality moves significantly slower than reason about morality. This is because cultural adoption requires action. Humans will behave immorally even if they know their actions are immoral, for their own benefit. To counteract this requires coordinated group effort, which is an extremely slow process because, for example, convincing people that it's worth them risking death in a bloody war to stop other people from owning slaves, when they are not themselves ever at risk of being treated as a slave, is a very challenging task. That one participates in selfish, immoral actions for one's own benefit because one's society does not yet coerce one through collective threat of violence to behave morally does not absolve one of one's actions, which can already be reasoned through even if the collective will to enforce it does not yet exist.
Cultural adoption can also diverge from reason about morality completely, of course. This is because selfish people with power can use their power to enforce immoral values like absolute service to themselves, for their own benefit. If a society does not collectively overcome powerful individuals acting selfishly, then the culture's apparent morality will be warped in the service of what benefits a specific individual at the greater expense of society. However, even in such a state, people can and do reason about morality. Human history is a long, long tale of people defying immoral abuse of authority.
> If you study history, then you'll notice how precious few people were focused on making the lives of regular people better.
If you study modern politics, then you'll notice how precious few people are focused on making the lives of regular people better. I don't actually believe, if you were to do a deep dive on all of the kings of the past few hundred years and not just the most famous ones, that the ratio would be meaningfully worse. I do suspect fame will negatively correlate with "goodness", since people who do their job quietly are less notable than people who cause a commotion.
Other than the silly design, the website's cookie banner is actively malicious. It proclaims to be legally required and directly blames the President of the European Commission. If Posthog is being truthful about its cookie usage, the cookie banner is in fact not legally required. Consent banners are only required if you're trying to do individual user tracking or collecting personally identifying data; technical cookies like session storage do not require a banner. That they then chose to include a cookie banner anyways, with explicit blame, is an act of propaganda clearly intended to cause unnecessary consent banner fatigue and weaken support for the GDPR.
I don't have a cookie banner on _my_ website for exactly this reason, but I have to admit some people have asked me if it isn't suspicious that I don't. Perhaps that's what they're trying to avoid here? (That would be the positive reading.)
I think that's what Posthog might be trying but as per the above there may be a fine line between funny and annoying and/or between useful and useless.