Hacker News

"Seriously, what were these researchers thinking? This 'BrainGPT' thing is a disaster waiting to happen. Chin-Teng Lin and his team of potential civilization destroyers at the University of Technology Sydney might be patting themselves on the back for this, but did they stop to think about the real-world implications? We're talking about reading thoughts—this isn't sci-fi, it's real, and it's terrifying. Where's the line? Today it's translating thoughts for communication, tomorrow it could be involuntary mind-reading. This could end privacy as we know it. We need to slam the brakes on this, and fast. It's not just irresponsible; it's playing with fire, and we're all at risk of getting burned."

Like, accurate brain readers are right under DWIM guns in the pantheon of things thou mustn't build!



Exactly. Dangerous technology. Reminds me of dystopian sci-fi like Inception or Minority Report.

First thing that came to my mind was an airport check. “Oh, you want to enter this country? Just use this device for a few minutes, please”

How about courts and testimony?

This tech will be used against you faster than you realize. Later on, people will ask why we let it happen.


I'm optimistically going to assume that model training is per-brain, and can't cross over to other brains. Am I wrong? God I hope I'm not wrong.


> 4.4 Cross-Subject Performance
>
> Cross-subject performance is of vital importance for practical usage. We further provide a comparison with both baseline methods and a representative meta-learning (DA/DG) method, MAML [9], which is widely used in cross-subject problems in EEG classification.
>
> Table 2: Cross-subject average performance-drop comparison on 18 human subjects, where +MAML denotes training with MAML. The metric is the lower the better.
>
>   Calib   Method               Eye fixation −∆(%)↓        Raw EEG waves −∆(%)↓
>   Data                         B-2   B-4   R-P   R-F      B-2   B-4   R-P   R-F
>   ×       Baseline             3.38  2.08  2.14  2.80     7.94  5.38  6.02  5.89
>   ✓       Baseline+MAML [9]    2.51  1.43  1.08  1.23     6.86  4.22  4.08  4.79
>   ×       DeWave               2.35  1.25  1.16  1.17     6.24  3.88  3.94  4.28
>   ✓       DeWave+MAML [9]      2.08  1.25  1.16  1.17     6.24  3.88  3.94  4.28
>
> [Figure 4: The cross-subject performance variance without calibration]
>
> In Table 2, we compare with MAML by reporting the average performance-drop ratio between within-subject and cross-subject translation metrics on 18 human subjects, on both eye-fixation-sliced features and raw EEG waves. We compare DeWave with the baseline under both direct testing (without calibration data) and with MAML (with calibration data). The DeWave model shows superior performance in both settings. To further illustrate the performance variance across subjects, we train the model using only the data from subject YAG and test the metrics on all other subjects. The results are illustrated in Figure 4, where the radar chart shows that performance is stable across different subjects.
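For anyone puzzling over what −∆(%) in that table means: the text says it's "the average performance drop ratio between within-subject and cross-subject translation metrics" (BLEU-2/4, ROUGE precision/F). The sketch below shows one plausible reading of that metric; the formula and the numbers in it are my assumptions for illustration, not taken from the paper.

```python
# Hypothetical reconstruction of the "average performance drop ratio"
# (−∆%) metric from the excerpt: the relative drop from within-subject
# to cross-subject scores, averaged over the translation metrics.
# The exact formula is inferred from the table, not from the paper.

def drop_ratio_pct(within: float, cross: float) -> float:
    """Percentage drop when moving from within- to cross-subject eval."""
    return (within - cross) / within * 100.0

def average_drop(within_scores, cross_scores):
    """Mean drop ratio over paired metrics (e.g. BLEU-2/4, ROUGE-P/F)."""
    drops = [drop_ratio_pct(w, c) for w, c in zip(within_scores, cross_scores)]
    return sum(drops) / len(drops)

# Illustrative scores only (not from the paper):
within = [42.0, 30.5, 33.2, 30.1]  # within-subject BLEU-2, BLEU-4, ROUGE-P, ROUGE-F
cross = [41.0, 29.8, 32.5, 29.4]   # same metrics, evaluated cross-subject
print(round(average_drop(within, cross), 2))  # small drop => good generalization
```

Under this reading, a value like DeWave's 2.35 just means cross-subject scores sit about 2.35% below within-subject ones on average, which is why the commenters read it as "it crosses over".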

Looks like it crosses over. That's wild.


My intuition is that it's per-brain at least in the beginning, but with enough individual data won't you have a model that can generalize pretty well across similar cultures? Maybe more so for the sheep, just speculating... who knows!


What is the alternative? Hide the research papers in a cabinet and never talk about it? How long would it be before another team achieves the same result? Trying to keep it under wraps would only increase the chance of this technology being abused, but now unbeknownst to the general public.

Basically, are you proposing to ban some fields of research because the result can be abused? Anything can be abused. From the social care system to scientific breakthroughs. What the society should do is to control the abuse, not stop the progress. Not even because of ethics, where the opinions diverge, but because stopping the progress is virtually impossible.


Look up the history of biotechnology and the intentional way it has been treated (one might reasonably say suppressed) for some examples of how this has been managed previously. Yes, sometimes you can just decide, "we're not gonna research that today." When you start sitting down and building the thing that fits on the head, that's where you say "nope, we're doing that thing we shouldn't do, let's not do it."

There is actually a line. You can actually decide not to cross it.


The alternative was to never pursue and invent organization-dependent[1,2] technology in the first place. The dynamics of the macro-system of {human biology + technology + societal dynamics} are so predictable and deterministic that it's argued[3] if there were any entity that is intelligent, replicating and has a self-preservation instinct instead of humans (aliens, intelligent Von Neumann probes, doesn't matter) the path of technological progress which humanity is currently experiencing wouldn't change. That is, the increasing restrictions on the autonomy of individuals and invasion of privacy with the increasing convenience of life and a more efficient civilization.

Ted Kaczynski pretty much predicted the current state of affairs all the way back in the 1970s. [1]

Thankfully the world is not infinite so humankind cannot continue this situation for too long. The first Earth Overshoot Day was 31 December 1971, it was August 2 this year.[4] The effects of the nearing population collapse can be easily seen today in the increasing worldwide inflation, interest rates and hostility as the era of abundance comes to an end and resources get scarcer and scarcer. It's important to note that the technological prowess of humanity was only due to having access to basically unlimited energy for decades, not due to some perceived human ingenuity, which can save humankind from extinction-level threats. In fact, humans are pretty incapable of understanding world-scale events and processes and acting accordingly[5], which is another primary reason to not have left the simple non-technological world which the still non-evolved primate-like human brain could intuitively understand.

1: Refer to the manifesto "Industrial Revolution and Its Consequences".

2: Organization-dependent technology: Technology which requires organized effort, as opposed to small scale technology which a single person can produce himself with correct knowledge.

3: By Kaczynski, in the book Anti-Tech Revolution. Freely available online.

4: Biological overshoot occurs when the demands placed on an ecosystem by a species exceed the carrying capacity. Earth Overshoot Day is the day when humanity's demand on nature exceeds Earth's biocapacity. Humanity was able to continue its survival due to phantom carrying capacity.

5: Just take a look at the collective response of humanity to climate change.


Why not? There are perfectly legitimate uses for this kind of technology. This would be a godsend for those suffering from paralysis and nervous system disorders, allowing them to communicate with their loved ones.

Yes, the CIA, DARPA, et al. will be all over this (unsurprisingly, if not already), but this is a sacrifice worth making for this kind of technology.


How many people in the whole world are paralyzed or locked in? Ten thousand? Less?

How many people in the whole world are tinpot authoritarian despots just looking for an excuse who would just love to be able to look inside your mind?

Somehow, I imagine the first number is dramatically dwarfed by the second number.

This is a technology that, once it is invented, will find more and more and more and more uses.

We need to make sure you don't spill corporate secrets, so we will be mandating that all workers wear this while in the office.

Oh no, we've just had a leak, we're gonna have to ask that if you want to work here you must wear this brain buddy home! For the good of the company.

And so on.

I'm blind, but if you offered to cure my blindness with the side effect that nobody could ever hide under the cover of darkness (I dunno, electronic eyes of some kind? Go with the metaphor!) I would still not take it.


The other thing you people are missing is how technology compounds. You don't need to have people come in to the police station to have their thoughts reviewed when everyone is assigned an LLM at birth to watch over their thoughts in loving grace and maybe play a sound when they have the wrong one.


All this choice guarantees is new technology will always be used for bad things first. It holds no sway on whether someone will do something bad with technology, after all it's not just "good people" capable of advancing it. See the atomic bomb vs the atomic power plant.

What's important is how we prepare for and handle inevitable change. Hoping no negative change comes about if we just stay the same is a far worse game.


Thing is, it's not possible to stop it. Technology has advanced far enough, all the pieces are in place, so it's inevitable that someone will make this. What we should ask is rather how we can cope with its existence.


If you're referencing the AI safety discussion, there's obviously the fundamental difference between this and a technology with the potential of autonomous, exponential runaway.


What does “DWIM” mean in this context? My first thought is “do what I mean”, but I suspect that isn’t what you meant.


DWIM does in fact mean "do what I mean"; a DWIM gun is basically like the Imperius curse. Can't remember if I got it from @cstross or Vinge.


Ohh, ok, a gun that inflicts “do what I mean” on another person. Yeah, that would be pretty bad, wouldn’t it.

When I read the previous comment, I had been imagining a gun (in the sense of a weapon that shoots a bullet from a barrel) which "does what the user intends for it to do." I didn't see how that would differ much from a usual gun, unless it just had auto-aim, or perhaps acted according to the user's impulses rather than deliberate decisions, which could be bad, but doesn't seem like it would top the list of "things thou shalt not build."


Don't worry. It doesn't actually work lol



