I agree. The further I have progressed into my career the more I have been focused on the stability, maintainability and "supportability" of the products I work on. Going slower in order to progress faster in the long run. I feel like everyone is disregarding the importance of that at the moment and I feel quite sad about it.
The current models had lots and lots of hand written code to train on. Now stackoverflow is dead and github is getting filled with AI generated slop so one begins to wonder whether further training will start to show diminishing returns or perhaps even regressions. I am at least a little bit skeptical of any claim that AI will continue to improve at the rate it has thus far.
If you don't really understand how today's LLMs are made possible, it is easy to fall into the trap of thinking that perpetual progress is just a matter of time and compute.
In my opinion, if you care about the open web, then you should not be using a Blink (Chromium) based browser like Brave. The less control Google has, the better for the web.
The internet used to be controlled in large part by Microsoft. Then it wasn't. It does not have to remain controlled by Google in the future. Not using Chromium-based browsers is a first step.
Only slightly related, but on the topic of barcodes and security I'd like to recommend this excellent talk by Felix Lindner. It is quite a few years old, but I'd guess stuff like barcode scanners are not the most frequently updated things:
I think part of the problem is that there seems to be a widespread sentiment among software developers to prioritize our time above the user's. Ease of development trumps ease of use and features are made to be easy, or rather fast, to build rather than easy to use and performant.
As a young developer I was taught to take the extra time to make things better for the user. Even if it might mean spending two days on a feature rather than a few hours, the cumulative time saved would end up much more than the extra time I spent since users use your software more often than you write it and there are many users. Unfortunately this view is not widespread enough.
Physical products suffer the same fate. It makes no (capitalist) sense for a business to make a product that never needs to be replaced. In the US, it means we've seen the quality and guarantees of brands like Levi's and L.L. Bean decline greatly compared to the products that earned them their reputations.
I recently shifted my opinion on IQ tests a bit after watching a recent Veritasium video. He goes into the background/history/controversy of the test as well as some of the concrete impacts of the test and places where it's used. For example did you know the US military has an IQ minimum cutoff? And furthermore they have a second 'soft' cutoff, where only 20% of the military can have an IQ under a certain value. In the past they tried removing this second restriction, but had to reinstate it after seeing increases in casualties/indicators of reduced efficiency! So are IQ tests everything? No. But do they have no merit? Also no. It's somewhere in between.
Treating IQ tests as invalid is more politics than science. Among other things, rejecting the existence of cognitive inequality is necessary to justify systemic racism via the continued existence of Asian quotas (Affirmative Action). Since lots of people benefit from this racism, there's a huge interest in denial. In western countries, when there are a few billion people in Asia, and you let a tiny number in while gatekeeping them on the basis of education/wealth/skills, it isn't really all that much of a shock that they and their children are smarter than average. The only way this could NOT happen is if Asians were LESS intelligent than other groups on average.
IQ tests are hilariously predictive of success if you're doing a task which is similar to taking an IQ test, like academics. They strongly indicate certain mental disorders. Low IQ is a stronger predictor of outcomes than high IQ is. Maybe people take the difference between scoring an FSIQ of 110 vs 140 entirely too seriously, but the difference between somebody with 60 vs 90 is staggering.
IQ tests are weakly predictive of academic success, especially on the high end (1SD+). In general, IQ only predicts 8-25% of variance, even when looking in both directions. That's pretty bad; an average exam does a far better job.
Additionally, the IQ of second generation Asian immigrants will revert to the mean. Not only that, but the advantage decreases rapidly as they age, while the academic advantage grows. And the advantage to begin with is very small: the average Asian IQ is only about 2.5 points higher than for Whites, even looking at all generations together.
Given the impact of early childhood environment on IQ, and the huge disparity in academic effort across cultures, especially those that constitute Asian immigrants, it's pretty clear that the disparity in Asian achievement cannot be explained by an inherited intelligence advantage. All the data is much more consistent with a culture that just drives students to study far harder.
This actually makes the argument that affirmative action is harmful even stronger, and there is no need to fall back on terrible science to do it. IQ is considered not terribly useful because it isn't terribly useful, except in very rare cases for diagnosis. The current scientific consensus is consistent with an even stronger argument that AA unfairly discriminates against Asian students.
Not OP, but I understood that to mean any difference in IQ below average (100) has a high impact on success, while differences above 100 have relatively less impact.
This is quite a dismissive stance, and I understand the context behind it: IQ was devised to measure broad population academic performance for schoolkids and has big flaws in how it measures that.
But it still has merit as another psychological test battery you can do to determine areas in which you may struggle to process information.
My working memory sucks [compared to the standard for my age range and demographic]. I've had access to stuff like RBANS (Repeatable Battery for the Assessment of Neuropsychological Status), through psychologist friends working in memory clinics. IQ tests correlate that finding, and are much more readily available (ie. free and not locked behind institutional firewalls).
Sure, the most thorough IQ tests are paywalled, but as a concept it's readily available online, though tests will yield you huge variation in scores.
We can choose not to treat IQ as a tool to compare ourselves to other people, but rather as a tool to identify our own strengths and weaknesses within different areas of the test. Ignore the single score at the end of the test; think about what felt hard, and about your performance in the score breakdown.
I would love to see more (better designed, statistically rigorous) neuropsychological assessments become open and free to access. It would definitely have helped me, growing up as an undiagnosed AuDHD kid, to understand I really wasn't "a bright kid just making excuses for things I don't want to do".
That's the only insight IQ scores can give you. But each IQ test tests for something, and IQ being a bunk concept doesn't invalidate that.
Reading comprehension tests test end-to-end ability to process that test and those questions in that circumstance. "What comes next?" tests test your ability to understand and solve a particular set of puzzles: they're a decent proxy for pattern-recognition skills if you share cultural context with the test author and can handle the administrative overhead of that style of examination. And so on. It's nonsense to give yourself some overall score at the end (though this can make sense for populations), but that doesn't mean the tests are worthless.
If IQ was a bunk concept then the US military could save tens of billions of dollars a year by admitting people who don't meet the current threshold. Imagine the promotion you'd get for saving tens of billions a year, every year, in perpetuity.
I got 10/10 without having any idea who any of the people (except pg) were. The comments always made it possible to link to the companies of the posters somehow.
Interesting experiment to try to figure that out but I'm left wondering if I should be familiar with all those people and what they do.
You might've gotten a lucky roll haha, I'm currently adding more prominent community members (thanks, dang!) that should make it harder to guess. Thanks for playing though :)
I think you are sort of abusing the GitHub logo above the "Closed Source - for a sustainable business model" text. Most people associate it with open source; it doesn't even make any sense there!
Haha it's almost like the Stroop effect, where you print the word "red" in blue ink instead of red ink.
I think you're solving a real pain point but to be honest I watched the video a few times and have trouble understanding how it works/differs from a regular debugger.
Maybe I'm missing the point here, but I read these sections as "why customers should choose us", so this point does not make any sense to me, since it's worse (or at least not adding value) from a customer's view.
Like GP I was also confused and tried looking for a pricing page but failed.
Seems like there isn't a link to it from https://app.codecrafters.io/catalog which is the site you go to if you click the big CodeCrafters logo in the top left of the page.
There is a "Subscribe" button which takes you to https://app.codecrafters.io/pay, but I wasn't savvy enough to notice it or realize what it was.
Only after starting a course did I begin to suspect that I needed to subscribe since there were a bunch of locks all over. However, it was not clear that the locks actually did anything since I could still click on those links.
I guess I would have found out after completing the first step and not being able to progress. This appears to be by design, which strikes me as slightly dishonest.
Thank you for pointing this out. It's not by design. We actually revised our marketing pages recently and had a regression that removed the tooltip explaining what was free. We have a Linear task for fixing it, but we weren't expecting this HN post today, so it wasn't at the top of our list :)