Ontologically, this historical model understands the categories of "Man" and "Woman" just as well as a modern model does. The difference lies entirely in the attributes attached to those categories. The sexism is a faithful map of that era's statistical distribution.
You could RAG-feed this model the facts of WWII, and it would technically "know" about Hitler. But it wouldn't share the modern sentiment or gravity. In its latent space, the vector for "Hitler" has no semantic proximity to "Evil".
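As a concrete illustration of "semantic proximity": with a modern embedding model you can measure exactly this kind of association. A minimal sketch using the sentence-transformers library (the model name is real, but the relative scores are my assumption about typical modern training data, and no such historical model exists to compare against):

```python
# Sketch: measuring semantic proximity as cosine similarity of embeddings.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # trained on modern text

# Post-1945 text ties "Hitler" and "evil" together constantly, so a modern
# model embeds them nearby. A model trained only on pre-1933 text would
# have no such association to encode in the first place.
emb = model.encode(["Hitler", "evil", "chancellor"])

print(util.cos_sim(emb[0], emb[1]))  # modern model: relatively high
print(util.cos_sim(emb[0], emb[2]))  # a period model might rank this higher
```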
Schleswig-Holstein (pop. 3M) shows that Open Source in government is viable. We need an EU that shifts its focus from compliance frameworks to actually investing and building.
A key takeaway from this ruling is that "the systems contain copies of the original works." Does this mean that offering any open-weight model capable of reproducing copyrighted text snippets or lyrics will be prohibited?
That would be a big setback for AI development in the EU.
That's what the New York Times lawsuit is about: OpenAI reproducing complete NYT articles without paying for that reproduction. This is not an EU issue, but a general unresolved legal grey area for the whole AI market.
The Dunning–Kruger meme has basically turned into a way for people to laugh at others for being “too dumb to know they’re dumb.” But that misses the point completely. The whole “I know that I know nothing” idea applies to everyone. So when people use the meme to mock AI or AI users, it just shows they haven’t learned that same lesson themselves.
I haven't limited my AI use. In fact, it has increased. It still feels experimental, and I often use it even when it does not save time, simply to avoid effortful thinking. That concerns me.
I believe we are heading toward a world where AI offers easy mental shortcuts for nearly everything, similar to how cheap carbs became widespread in our diets. I do not yet know how I will deal with that. For now, I am just a kid in a candy store, enjoying the novelty.
Interesting - what are the kinds of tasks that make up 'effortful thinking'? Do you feel like you're now putting in "effort" towards other kinds of thinking / work?
I am curious if using AI has changed the fundamental ways in which you view "effort" and "value" from pursuing a piece of work.
Are there new kinds of challenges that come up when you're using these new AI tools?
I find the analogy to candy particularly interesting. The implicit comparison is that "too much of it is bad for you". Do you feel that you are putting on "cognitive weight" as a result of using AI?
The vision of unifying Copilot/Cursor/Roo/Cline/etc. under a common configuration and best-practice layer is compelling, and I like the direction toward reusable templates and memory.
That said, I think the project might be aiming a bit too directly at the highest level of complexity — the full integration of vibe-driven rules across assistants — without first grounding things in foundational concepts. Personally, I’d love to see a clearer breakdown of stages, such as:
1. Formal concepts of LLM-driven project management, akin to how we reason about conventional PM tools/processes.
2. Abstractions and interfaces to build structured rules/prompts (in the broad sense) that can be versioned, composed, and reasoned about (see the sketch below).
3. Configuration management to deploy/adapt those rules across specific environments (LLMs, IDEs, agents).
Laying that groundwork could make the ambitious cross-assistant unification feel more achievable and less brittle.
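To make point 2 less abstract, here's a rough sketch of what a composable, versionable rule abstraction could look like. All names and the structure are my own invention for illustration, not anything your project defines:

```python
# Illustrative sketch of a composable rule abstraction (all names hypothetical).
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    name: str
    version: str          # "versioned": pin and diff rules like any other artifact
    body: str             # the prompt/rule text itself
    requires: tuple = ()  # declared dependencies make rule sets checkable

def compose(*rules: Rule) -> str:
    """'Compose' = assemble independent rules into one prompt, checking dependencies."""
    names = {r.name for r in rules}
    for r in rules:
        missing = set(r.requires) - names
        if missing:
            raise ValueError(f"{r.name}@{r.version} is missing dependencies: {missing}")
    return "\n\n".join(f"# {r.name}@{r.version}\n{r.body}" for r in rules)

style = Rule("style-guide", "1.2.0", "Prefer small, pure functions.")
tests = Rule("unit-tests", "0.3.1", "Write a failing test first.", requires=("style-guide",))
print(compose(style, tests))
```

Here "composed" means assembling rules, "versioned" means pinning them, and "reasoned about" means you can statically check properties like missing dependencies before anything ever reaches an LLM.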
Still, kudos — I think we need more of this kind of experimentation and discussion in the open.
I think this suggestion of polishing the basic layer first and then building the higher-level layer is a good one. Since the potential development directions are so broad (a marketplace for rule sets, evaluation of rule sets, version control of rule sets, more useful tools, unit-test & deployment & PM rules), I haven't dared to over-design before getting clear feedback from users.
For project management, I have another exploratory project: a set of rules that turns Cursor/Cline/Windsurf into a project-manager assistant.
repo is here: https://github.com/botingw/pm-workflow-copilot-ide
Regarding your point 2: I understand versioning rules. Does "compose" mean assembling rules? What does "reasoned about" mean?
Regarding your point 3: what configuration management would you recommend adding?
I can really relate to this — in school, biology felt like dry memorization. It never clicked with me, and I wrote it off for years. If I could recommend one subtopic of biology to math and physics people, it would definitely be mycology!
It's like real-life Pokémon GO: field mycology has a "collect 'em all" vibe. You get out into nature, identify and catalog fungi — it scratches the same itch as exploring an open-world game.
Fungi are discrete, classifiable entities with tons of metadata: GPS location, substrate, time of year, morphology, spore prints, photos, microscopic features. Perfect for structured data nerds.
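As a sketch of how naturally a find maps to a record type (the field names are just my guess at a sensible schema, not any platform's actual format):

```python
# Hypothetical observation record; not any real platform's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Observation:
    species_guess: str        # working identification, often revised after review
    lat: float                # GPS location
    lon: float
    observed_on: date         # time of year matters a lot for fruiting bodies
    substrate: str            # e.g. "dead beech log"
    spore_print: str | None = None   # e.g. "rusty brown", None if not taken
    photos: list[str] = field(default_factory=list)

obs = Observation("Fomes fomentarius", 54.32, 10.12,
                  date(2024, 10, 5), substrate="dead beech log")
```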
Unlike many branches of biology, you don't need to go to the Amazon. You can walk into your backyard or a nearby forest and find species that are new records for your country, and sometimes even new to science.
Microscopes, macro lenses, chemicals, even DNA sequencing. There’s a hacker spirit in mycology.
Projects like iNaturalist, Mushroom Observer, and FungiMap are full of real scientific contributions from everyday people. The barrier to entry is low, the impact can be surprisingly high, and the community is genuinely welcoming. Many leading contributors — even those publishing in cutting-edge scientific journals — are passionate autodidacts rather than formally trained biologists.
High intra-species variance and subtle distinguishing features make it a perfect playground for machine learning, which is nowhere near "solved" here.
Cordyceps that zombify insects. Giant underground networks that share nutrients between trees. Bioluminescent mushrooms. Many weird stories.
Mycology is also becoming a computational frontier - projects like FungiNet use graph networks to map symbiotic relationships, and citizen science platforms generate massive datasets perfect for ML applications beyond just classification. The unsolved phylogenetic relationships and complex biochemical pathways of fungi represent some of the most interesting computational problems in modern biology.
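I don't know FungiNet's internals, so purely as a toy illustration of the graph framing (the species and edges here are plausible examples I picked, not FungiNet data):

```python
# Toy symbiosis graph; species/edges are illustrative examples, not FungiNet data.
import networkx as nx

G = nx.Graph()
# Ectomycorrhizal associations: fungus <-> tree partner
G.add_edge("Amanita muscaria", "Betula pendula", kind="ectomycorrhizal")
G.add_edge("Boletus edulis", "Picea abies", kind="ectomycorrhizal")
G.add_edge("Boletus edulis", "Betula pendula", kind="ectomycorrhizal")

# Which fungi share a given host tree? A one-line projection question.
print(list(G.neighbors("Betula pendula")))  # ['Amanita muscaria', 'Boletus edulis']
```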
Over the past 30 years, computers and software have dramatically transformed our world. Yet many sectors remain heavily influenced by their analog history. My understanding is that the HN community has always recognized that the future is already here, just not evenly distributed across work environments, administration, and general processes. Didn't many of us believe that numerous jobs could be replaced by a few lines of code if inputs and outputs were properly standardized? The fact that this hasn't happened or has occurred very slowly due to institutional inertia is another story altogether.
Whether software development will become a "bullshit job" or how the world will look in a few years remains unknown. But those who constantly praise their work as software developers while simultaneously acknowledging that other non-physical jobs and processes could be fundamentally overhauled are living in a cognitive bubble—something I wouldn't have expected in this community.
The Rust Belt's just one part of the story, not the whole US economy. Globalization actually created tons of wealth for the USA, but that money hasn't been spread around fairly. It's all piled up in coastal cities and with rich folks while factory towns got left behind.
Cutting off global trade wouldn't fix anything - it would tank the overall economy while only helping a few powerful players pulling the strings. The real problem isn't trade deals; it's that the USA never properly invested the profits back into the communities that got hit hardest.