Hacker News | squirrel's comments

I had been consulting for equity with one startup out of an accelerator, so it was natural to go paid once I went out on my own. For the next few clients, I approached investors I knew from that and other startups, and they referred me to portfolio companies that needed me. I wish I'd read Alan Weiss's Million Dollar Consulting at the beginning, though; I would have avoided many mistakes (like day-rate billing).

The article is well-written and makes cogent points about why we need "centaurs", human/computer hybrids who combine silicon- and carbon-based reasoning.

Interestingly, the text has a number of AI-like writing artifacts, e.g. frequent use of the pattern "The problem isn't X. The problem is Y." Unlike much of the typical slop I see, I read it to the end and found it insightful.

I think that's because the author worked with an AI exactly as he advocates, providing the deep thinking and leaving some of the routine exposition to the bot.


Nope. It was actually written entirely by Claude. https://boxobarks.leaflet.pub/3mj42airv3s2o#fingerprints-of-...

The framing of the essay around learning through "grunt work" is not deep, it's simply that this specific phrase appeared in two of the sources. Anything that looks like insight is plagiarised directly from the sources in some fashion. I've covered in my evidence the pivot phrases where direct summaries of the essay incorrectly appear to transition to the author's own ideas, but there are parts right through the essay that come from the sources. No deep thinking by the prompter required.


Thanks for writing that up. You convinced me that there was more Claude here than I'd thought, but I didn't see evidence that the author hadn't edited and supplemented, which is what I was suggesting. In fact, your last observation about correcting an erroneous date makes my point, not yours: Claude made a mistake, and the (human) author fixed it, thus improving the essay.

I certainly agree that the author should disclose the use of an AI, how much is human vs silicon, and clarify which ideas are his own and which are not. I've written to him to ask about that.


The author replied quickly and described his use of AI as very limited and just for grammar and wording. I believe him, based both on the text of the article itself and what he told me.


Well, I don't believe him. It is Claude all the way through. There are more markers than just those. I covered some in my more comprehensive review of it, but tbf that one is a bit of a mess.

I don't see how my point about the erroneous date makes your point. He POSTED IT with that date, and only changed it AFTER someone pointed it out, then BLOCKED them.

If he did write it by hand though, that means he is admitting to plagiarising the sources, and also to framing that makes absolutely no sense, like saying Schwartz didn't say something that he literally did say. So, great job I guess!

I fully believe that he used only a general prompt, and anything in the essay that seems specific, Claude has mined from the sources. I am going to try to reverse engineer the prompt.


Drone view is so high it's unusable. How about an inside-the-blimp view?

Rural areas are trivially easy. There may not be anything to do about that.

How about labelling famous locations like Times Square or the Louvre?

"Anomaly" is misspelt on the home page.


This is not true for business books like mine. It's vital to write a proposal first in that world; publishers want to influence the content (as in the OP article).

I think the same is true for tech books but I don't know as I haven't written one.

A novel or other fiction is the opposite; there you do have to write the whole thing first.


As I commented in another thread, there's no a priori reason to believe that the "average" glutamate receptor level is the "right" one. Isn't it possible that there are:

1. "Normal" people with a level of glutamate receptors at 10, say, on a scale I'm inventing for this example

2. "Autistic" (according to the DSM) people with a level of, say, 5, who are hindered by the effects of being at this level

3. "A little bit autistic" people at a level of, say, 8, who aren't hindered and don't meet the DSM criteria, but in fact actually benefit from the effects of being at this level

Some "normals" might then want to inhibit their glutamate receptors somewhat to get the benefits of being at an 8 or a 9 on my made-up scale.


There are actually four types of autism, according to new research (and seemingly corroborated by my personal experience, though that's just an anecdote): https://www.medrxiv.org/content/10.1101/2024.08.15.24312078v...


Perhaps. But remember that this is a very complex 3D structure with varying receptor densities. It's not "The Glutamate Level"; it's some neural-network areas with higher or lower excitability connected to other neural networks.

Just like with ADHD it's likely that medication will at best have limited effectiveness and many side effects.


Certainly, we're at the "bash it with a hammer" stage not ready for anything nuanced. I just wouldn't want to assume that the right outcome is "less autism"; I suspect most people could do with at least a little more!


Groups tend to benefit from neurodiversity (and diversity in general). I'm sceptical of the idea that there is a "right level of autism".


It seems you are assuming that because the majority of people have a certain quantity of glutamate receptors, they are the healthy ones and we should be trying to bring autistic people up to that level. Is that right?

Why not consider the opposite, that the most beneficial quantity of glutamate receptors could be somewhere below the typical amount? If that were true, then we could try to help others reduce their glutamate receptor level to become healthier and more successful (and a little more autistic).

If we found, say, an association between a lower level of neurological characteristic X and concert-level piano skill, then those who aspire to play that instrument at an elite level might try to decrease X. The fact that most of us are rubbish piano players would not be evidence that lower levels of X are harmful, but very much the opposite.


It is an interesting idea, but let's not assume autistic traits make you more talented at anything. There certainly are very highly intelligent people with autistic traits who are able to use hyper-focusing to work very hard and succeed in academia or at work. I doubt any rational person is looking for "a cure" for the Alan Turings and Albert Einsteins of this world. Nor even for a regular, albeit slightly odd, chap like myself, who likes reading books alone with his cat and studying math instead of seeing other people.

However, there are people with severe autism that makes it more or less impossible for them to communicate with other people or live independently. If these people could have their lives improved, it might make a huge difference to them and their families.


> All autistic participants in the study had average or above average cognitive abilities. McPartland and collaborators are also working together on developing other approaches to PET scans that will enable them to include individuals with intellectual disabilities in future studies.

Simply put, they didn't even touch the keeners, the nonverbal, the piss-in-your-pants, or the perpetual one-year-old autistics. They went after people who previously would have been called "Asperger's syndrome".

But everything cognitive seems to be called 'autism spectrum disorder' these days.


I am not sure what conclusion you would like us to draw from this. Presumably it is simpler to get people for this sort of study if you can, y'know, ask them. The next step would be to repeat the study with a larger group, eventually adding those who really, really need help. I doubt there's a Nobel waiting for someone for creating a drug that helps a chap who likes trains to look you in the eyes while he is speaking to you.


Of course they didn't. It would be unethical to perform non-medically-necessary PET scans on people who are unable to give informed consent due to the radiation exposure.


First, one PET scan is around 25 mSv, and 50 mSv is the yearly limit for radiation workers. Those limits are deliberately conservative, to allow for accidental overage; 100 mSv is the start of detectable cancer risk. So the risk from one scan is basically zero.
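Taking the figures in that comment at face value, here is a quick back-of-the-envelope check (the constants are the commenter's numbers, not authoritative dosimetry; actual PET doses vary by tracer and protocol):

```python
# Dose figures as stated in the comment above (not authoritative values).
PET_SCAN_MSV = 25.0            # approximate effective dose of one PET scan
WORKER_YEARLY_LIMIT_MSV = 50.0  # annual limit for radiation workers
DETECTABLE_RISK_MSV = 100.0     # rough threshold where cancer risk becomes detectable

scans_per_year = 1
annual_dose = scans_per_year * PET_SCAN_MSV

# One scan a year uses half the worker limit and a quarter of the
# detectable-risk threshold, which is the basis of the "basically zero" claim.
print(f"Annual dose: {annual_dose} mSv")
print(f"Fraction of worker limit: {annual_dose / WORKER_YEARLY_LIMIT_MSV:.0%}")
print(f"Fraction of detectable-risk threshold: {annual_dose / DETECTABLE_RISK_MSV:.0%}")
```

Note that two scans a year would already hit the stated worker limit, which matches the comment's caution about more than one scan per year.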

Secondly, someone has medical power of attorney over the non-functional autistics. And in reality, they are the ones most in need of (almost passive) study to help them. Us high-functioning autistics don't need anywhere near as much help. And we have no way to know whether Asperger's and traditional autism are even similar, other than that the spectrum brigade keeps adding more and more under 'autism'.

Simply put, if the guardian says yes to a single scan a year, I see no problem with it. More than one a year and we start getting into potential damage. Maybe in some pie-in-the-sky IRB what-if situation, sure. But one scan a year has no demonstrable damage.


I imagine it was a lot easier to get this version where the study participants can consent for themselves past an ethics panel. Now that there's a result suggesting something of value might be learned, there's a stronger argument for studies with greater ethical risk.


You're absolutely right that that assumption was implicit. My answer was written entirely within that framework. I'm not here to say what's right or wrong in determining something about people who lie outside the normal range in these things, or what normal means.

So what I wrote should be read with an implicit "if it is held to be a condition which deserves remediation or avoidance of its manifestation" attached.

Most medical conditions are couched in this sense, that a deficit or departure from the normal is a problem. In matters of brain chemistry it pays to be more nuanced.


Amazingly, no one seems to have actually checked that this picture was really "circulating on social media". I've been investigating for the past hour or so and can't locate a single public post or reference anywhere other than reposts of the BBC article.

Typically, postings that gain traction have many, many reposts, and though some may be deleted, there's a long tail of reverberation left behind. I can't find that at all here.

I wonder if the hoaxer just emailed it to Network Rail directly?


That would be a level of hoaxery I would actually applaud.



Doing it without your customer's agreement is indeed unhelpful! I'm sorry you're that frustrated by this experimentation.

That's why I advocate using feature flags and beta labels and supervised user research to get feedback. Do those methods work for you, so you can opt in?


I have no issue if it's opt-in on the user side. My peeve is when software changes on me without my knowledge and agreement.


Good point! I need to write more about the opt-in/out elements of rapid feedback. Thank you!


Thanks! That's not the positive use I have in mind here, where a manager has a real need and uses a demo to focus the work. Does that ever happen at your large corporate?

