The US could retain much of that talent if it matched China's level of science funding and remained welcoming to foreign nationals. The US has been brain-draining the rest of the world for decades, with enormous benefit to us: we led in most fields and the flywheel kept spinning. Now we are cutting research spending and closing the door, while China continues to increase its science funding year over year. The scales are tipping, and talent will be drawn to the leading edge, wherever that is.
"Using new data which tracks US-trained STEM PhDs through 2024, we show that despite foreign nationals comprising nearly 50% of trainees, only 10% leave the US within five years of graduating, and only 25% within 15 years."
That sounds like a net benefit for the US. Foreign nationals come, the US sells them (overpriced) education, they do relatively low-paid but high-value PhD research, and then most of them stay and continue to contribute to US research endeavors and the economy. This is such an enviable position, and this administration wants to close the doors? This is the secret sauce. This is what has made America great.
You know, the mechanism of TMS is not mysterious. It requires no magnetoreception or "stochastic resonance"; it simply induces electrical currents to modulate neural activity. Its effects are consistent with the known laws of physics, the known properties of neurons, and decades of neuroscience research.
I think you're conflating one question with another. The "why" in question is why altering neural activity in that way produces clinical effects, not why TMS alters neural activity in the first place.
I appreciate that you feel this way, but the mechanisms behind exactly which neural circuits are activated by TMS are simply not yet fully understood.
From 2024:
> Transcranial magnetic stimulation (TMS) is a non-invasive, FDA-cleared treatment for neuropsychiatric disorders with broad potential for new applications, but the neural circuits that are engaged during TMS are still poorly understood.
Again, different question. We know, fundamentally, how TMS stimulates or suppresses neural activity, and it does not require magnetoreception. Look at it this way: we don't fully understand how SSRIs treat depression, but we do know their primary target and that their mechanism of action is mediated through that target.
I agree. It’s an inversion of the usual pattern: AI-generated “thoughts”, written up by a human.
I’m surprised this made it to the front page of HN. I think AI tools are making it easier to create increasingly plausible-sounding bullshit, and gradually overwhelming the defenses of this community.
Using it in a specialized subfield of neuroscience, Gemini 3 with thinking is a huge leap forward in terms of knowledge and intelligence, with minimal hallucinations. I take it that the majority of people on here are software engineers. If you're evaluating it on writing boilerplate code, you probably have to squint to see differences between the (excellent) raw models, whereas in more niche edge cases there is more daylight between them.
Exactly my experience as well. I started out loving it, but it almost moves too fast: building in functionality that I might want eventually but that isn't yet appropriate for where the project is in terms of testing, or that is just in completely the wrong place in the architecture. I try to give very direct and specific prompts, but it still has a tendency to overreach. Of course, it's likely that with more use I will learn how to rein it in.
I've experienced this a lot as well. I also just yesterday had an interesting argument with claude.
It put an expensive API call inside a useEffect hook. I wanted the call elsewhere, and it fought me on it pretty aggressively. Instead of removing the call, it started changing comments and function names to claim that the call was just loading already-fetched data from a cache (which was not true). I could not find a way to get it to remove that API call from the useEffect hook; it just wrote more and more motivated excuses in the surrounding comments. It would have been very funny if it weren't so expensive.
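For what it's worth, genuinely "loading already-fetched data from a cache" is a different, observable behavior from re-issuing the call, which is exactly why the renamed comments were a lie. Here is a minimal, framework-free sketch of the cache-first pattern the comments claimed was happening (the `expensiveApiCall` and `getUser` names are hypothetical, purely for illustration):

```typescript
// Counter so we can observe how many real requests are made.
let apiCallCount = 0;

type User = { id: string; name: string };

// Stand-in for the expensive (billable) API call.
function expensiveApiCall(id: string): User {
  apiCallCount++; // every invocation here is a real request
  return { id, name: `user-${id}` };
}

const userCache = new Map<string, User>();

// Cache-first lookup: only the first call per id hits the API.
// Subsequent calls truly "load already-fetched data from a cache".
function getUser(id: string): User {
  const hit = userCache.get(id);
  if (hit) return hit; // no API call made
  const fresh = expensiveApiCall(id);
  userCache.set(id, fresh);
  return fresh;
}
```

In a cache like this, repeated lookups never increment the request counter; a useEffect that calls the API on every mount would, no matter what its comments say.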
Geez, I'm not one of the people who think AI is going to wake up and wipe us out, but experiences like yours do give me pause. Right now the AI isn't in the driver's seat and can only assert itself through verbal expression, but I know it's only a matter of time. We already saw Cursor themselves get a taste of this. To be clear, I'm not suggesting the AI is sentient and malicious; I don't believe that at all. I think it's been trained/programmed/tuned to do this, though not intentionally, but the nature of these tools is they will surprise us.
> but the nature of these tools is they will surprise us
Models used to do this much, much more than they do now, so what it did doesn't surprise us.
The nature of these tools is to copy what we have already written. A model has seen many threads where developers argue and dig in. The labs try to train the AI not to do that, but sometimes it still happens, and then it just roleplays as the developer who refuses to listen to anything you say.
I almost fear more that we'll create Bender from Futurama than some superintelligent enlightened AGI. It'll probably happen after Grok AI gets snuck some beer into its core cluster or something absurd.
Earlier this week a Cursor AI support agent told a user they could only use Cursor on one machine at a time, causing the user to cancel their subscription.
Agreed. No matter what prompt I try, including asking Claude to promise not to implement code unless we agree on requirements and design, and to repeat that promise regularly, it jumps the gun and implements (actually hallucinates) solutions way too soon. I changed to Gemini as a result.
That’s where the concept and name come from. “Pomodoro” means “tomato” in Italian, and the author of the technique had a tomato-shaped kitchen timer. The image comes from its Wikipedia page.