If AI writes a for loop the same way you would... does it automatically mean the code is bad because you (or someone you approve of) didn't write it? What is the actual argument being made here? All code has trade-offs; does AI make a bad cost/benefit analysis? Hell yeah it does. Do humans make the same mistakes? I can tell you for certain they do, because at least half of my career was spent fixing those mistakes... before there was ever an LLM in sight. So again... what's the argument here? AI can produce more code, so more possibility for fuck-ups? Well, don't vibe code with "approve everything" then. Like, what are we even talking about? It's not the tool, it's the users, and as with any tool there's going to be misuse, especially with new and emerging ones lol
I don't know why you have to qualify your sentence with "think carefully before you respond"; it makes it seem like you're setting up some rhetorical trap... But I'll assume it's in good faith? Anyway...
I don't mind if a review is AI-assisted. I've always been a fan of the whole "human in the loop" concept in general. Maybe the AI helps them catch something they'd normally miss or gloss over. Everyone tends to have different priorities when reviewing PRs, and it's not like humans don't have lapses in judgement either (I'm not trying to anthropomorphise AI, but you know what I mean).
My stance is the same about writing code. I honestly don't mind if the code was written in `ed` on a Linux-powered toaster from 2005 with a 32x32 screen, or if it was written using Claude Code 9000.
At the end of the day, the person who's submitting the code (or signing off a review) is responsible for their actions.
So, in a roundabout way, to answer your question: I think AI as part of the review is fine. As impressive as their output can sometimes be, it can be both impressively good and impressively bad. So no, relying only on AI for review is not enough.
Correct me if I’m wrong, but if you use LD_PRELOAD, presumably it will not work for applications that circumvent libc, such as Go binaries (at least those with CGo disabled)?
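A quick way to see why (just a sketch, assuming a Go toolchain is available; `app` and `vpn_shim.so` are hypothetical names):

```shell
# A CGo-free Go build is statically linked, so there is no dynamic
# loader involved and LD_PRELOAD has nothing to hook into.
CGO_ENABLED=0 go build -o app .
ldd ./app                        # typically reports "not a dynamic executable"
LD_PRELOAD=./vpn_shim.so ./app   # runs fine, but the shim is never loaded
```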
Tor does this the right way on Linux: you make a separate network namespace whose only interface is the WireGuard adapter, and run the program inside of it. You want the kernel involved if you want any sort of guarantee:
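A rough sketch of that kind of setup with iproute2 (assumes root; the config path, addresses, and `some-program` are illustrative):

```shell
ip netns add vpn                      # fresh netns: contains only a loopback
ip link add wg0 type wireguard        # create the tunnel in the default netns
ip link set wg0 netns vpn             # move it in; it's now the ONLY way out
ip -n vpn addr add 10.0.0.2/32 dev wg0
ip netns exec vpn wg setconf wg0 /etc/wireguard/wg0.conf
ip -n vpn link set lo up
ip -n vpn link set wg0 up
ip -n vpn route add default dev wg0   # all traffic must go through the tunnel
ip netns exec vpn some-program        # the kernel enforces the isolation
```

If the tunnel goes down, programs in the namespace simply have no route out; there's nothing to leak through.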
How does this work in something like Kubernetes, where you have a sidecar container configuring the network for the main container without affecting others on the same host?
I think all containers in a pod share the same netns. You restrict the pod to only the WireGuard peer IP, and have a sidecar container (with NET_ADMIN) create an interface (tun/kernel wg) and update the routing tables for the netns. Then, I believe, the traffic from the other containers in the pod is tunneled.
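As a sketch, the sidecar's entrypoint could be something like this (hypothetical config path and addresses; requires the NET_ADMIN capability):

```shell
# Runs in the pod's shared network namespace, so the routes it installs
# apply to every container in the pod.
ip link add wg0 type wireguard
wg setconf wg0 /etc/wireguard/wg0.conf
ip addr add 10.0.0.2/32 dev wg0
ip link set wg0 up
ip route replace default dev wg0   # tunnel all pod egress traffic
```

Other pods on the node are untouched because each pod gets its own netns.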
Can you use user namespaces to create a network namespace with the VPN active and stick applications in that namespace?
From a quick search, https://blog.thea.codes/nordvpn-wireguard-namespaces/ seems to have at least the bones of a decent solution, though I've not had a chance to dig very far. A lot of results use root to set up the namespace, but I was pretty sure that shouldn't be needed on a recent kernel with user namespaces enabled.
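The unprivileged building block would be something like this (assumes the kernel allows unprivileged user namespaces; the catch is getting an outside-facing interface into the new netns, which is where most guides fall back to root):

```shell
# A user namespace grants CAP_NET_ADMIN over a brand-new network
# namespace without real root.
unshare --user --map-root-user --net bash
ip link            # inside: only a down loopback, no network at all
ip link set lo up  # "root" here can configure this netns freely
```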
I have no idea. I've never messed with it, but maybe something like eBPF to intercept network syscalls? Not sure if that's even a thing, especially without root access. Mostly I was just thinking the project page could use a disclaimer since, in Go, it is common to bypass libc. :shrug:
This seems like a very cool, useful project though!
I don't mean to derail discussion about a cool project, but it still seems to imply xterm.js is somehow "improper" emulation (though I might be misreading it).
Terminal emulators are all approximations of terminals, regardless of the programming language.
I tried doing that in the early 2010s. Even back then it didn't work (GitHub broke, for example). If you did it today, you'd likely be blocked by a lot of major websites for "lying" about your user agent. Cloudflare Turnstile will stop working, you'll get captcha'd to death, and so on.
Even tor-browser doesn't dare to modify the user agent string in any major way. It's almost impossible to lie about because trackers don't actually care about your user agent. They're identifying your device/OS through side channels (canvas, webgl, fonts, etc).
W.r.t. the Tor Browser, it's not that they don't dare to; it's that they don't want to. One of the goals of that browser is to not stick out too much, and changing the user agent would do just that, so they don't.
Then the ideal would be to normalize the user agent string to look identical on every platform. My point is: they can't do that. E.g., a Linux machine identifying itself as Windows would be spotted immediately. Instead, they have to reduce entropy by bucketing you according to your device/OS/arch.
I don't think there is a point there. In case of the Tor browser, they use the user agent to blend in, so they are not a good candidate to do anything about how stupid the user agent is.
It's the current heavyweights who could change it for the better: Google and Apple. If either introduced a major change in how they present the user agent, websites would be very quick to adapt (if they need to in the first place...), or else. Otherwise, no change will happen - and I think this will be the case, same as with the HTTP "Referer" header (a misspelling of "referrer").
Fun fact: non-browsers actually have much nicer user agent strings. I run an internet radio station, and there are a lot of clients like:
Linux UPnP/1.0 Sonos/85.0-64200 (ZPS1) Nullsoft Winamp3 version 3.0 (compatible)
> In case of the Tor browser, they use the user agent to blend in, so they are not a good candidate to do anything about how stupid the user agent is.
No. They don't use it to blend in. If they wanted to blend in they would be modifying every platform's user agent string to look like Windows x86_64 or something. They don't do that because there's no way they could possibly get away with it.
Instead, they're resigned to simply censoring the minor version number of the browser to reduce entropy.
> Fun fact: non-browsers actually have much nicer user agent strings. I run an internet radio station, and there are a lot of clients like
And those tools will get blocked by various CDNs for not having a browser user agent string, not having a browser-like TLS handshake, etc. This is why projects like curl-impersonate and golang's utls had to be created.
I don't think it's a strange comment. He's mostly right (and so are you, but I think you're talking past each other). There's nothing wrong with SRS, and I agree with you that it's basically like cheat codes for memorization, but there is a limit to what most people can do. i.e. most people do tend to drop off.
I remember reading some stats from WaniKani (Japanese SRS app) a while back...
WaniKani has 60 "levels" to learn 2000+ kanji. Each level takes about a week (there's no skipping ahead), so the material takes about a year of study to complete -- that's if you're going at breakneck pace, which most people aren't.
According to the numbers I saw on the WK forums, ~8% of users reach level 30 and less than 1% reach level 60... and that's just to learn as many kanji as a Japanese 9th grader. That's to say nothing of the grammar and the 20,000+ vocab words you'll need to SRS to truly learn the language, or the thousands of hours you'll have to spend speaking/listening/reading, immersing yourself in native content, etc.
People give up very easily. The language learning community often gives year estimates to reach "near-native level" in a language based on frequency of study. In reality, the process takes a lifetime. I don't know if people truly know what they're signing up for when they install those apps and begin studying. It's a lifelong commitment. It's just something you do now, every day.
You can stop at any time of course, and most people do (more than 99% of them apparently).
Learning a language as a hobby is tough. If you don't need the language to communicate and survive in your environment then you have essentially zero real motivation to learn it.
The problem with spaced repetition systems is that they don't supply that extra motivation. You're still just memorizing things in a vacuum. If you truly want to learn a language, you need to use it to communicate. That means making friends, travelling, reading books, and consuming other media in that language.
You can also be motivated because you like to consume media in the language (relevant for English and Japanese), because you think it will be useful for a job (English) or for travel (French, Spanish, etc.), or simply because you like the language.
If you don’t have any of these reasons to motivate you, the question arises of why you’re bothering in the first place.
I started learning Mandarin on Duolingo while dating a Chinese woman. After we broke up I continued with it just because I found it fun.
Now I have several Chinese friends and I'm learning Chinese cooking. I'm motivated to continue learning about Chinese food and Chinese culture, and the important role food plays within it.
The article specifically points out WaniKani as an example of a very bad implementation of spaced repetition (see the "FSRS in practice" heading, under the paragraph "for Japanese language learning specifically...").
This brings back memories. I haven't looked at it in a while, but I'm glad to see the fork[1] of my fork[2] from 12 years ago is still thriving. Looks like it's been mostly rewritten. Probably for the better.
He is not imposing anything on anyone. He has the same right to express his opinions as anybody else has, and his platform clearly allows everyone else to do the same - in contrast with what the previous executives of Twitter did.
That sounds a lot like copium to justify an authoritarian action by the state because it benefits you personally.