RogerL's comments | Hacker News

7 trivial prompts and I'm at the 100% limit, using Sonnet, not Opus, this morning. Basically everyone at our company is reporting the same usage pattern. The support agent refuses to connect me to a human and terminated the conversation; I can't even get any other support, because when I click "get help" (in Claude Desktop) it just takes me back to the agent and that same conversation, where Fin refuses to respond any more.

And then on my personal account I had $150 in credits yesterday. This morning it is at $100, and no, I didn't use my personal account, just $50 gone.

Commenting here because this appears to be the only place that Anthropic responds. Sorry to the bored readers, but this is just terrible service.


It's an AI generated article; don't trust anything in it unless you verify it.

It was light gathering. The D5 they brought is a very old camera tech-wise, but it was ideal for the low-light photos of the eclipse. They also brought a Z9 for much higher-resolution photos.

I've purposely eaten insects while traveling. For me it is hard to get over the fact that they are not 'cleaned': you eat everything in their digestive tracts. I intellectually understand that it is safe, but my conditioning makes it hard to handle. Taste and texture can also be challenging once you get past grasshoppers and ants (for my palate, of course).


Yes, the correct sub for this is r/truespotify, and there are a dozen discussions of the problem there.


Claude does these things even when you have explicit instructions not to do them; this isn't about the user asking it to delete files.

Just today Claude decided to do a git restore on me, blowing away local changes, despite strict instructions to do nothing with git except use it to look at history and branches.

Why jump to the conclusion that the person is so incompetent with no evidence?


Because there's now a class of programmers who are very anti-AI when it comes to coding, because they think anybody who relies on it is a degenerate vibe coder with no idea what they are doing. You can see this in pretty much every single HN post w.r.t. AI and coding.


There is indeed a class of programmers who think AI over-reliance will make us worse. And there should be, because it's true.

https://www.mdpi.com/2075-4698/15/1/6

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4812513


Did you even read the abstracts of these papers?

The first one has four important phrases: “negative correlation,” “mediated by increased cognitive offloading,” “higher educational attainment was associated with better critical thinking skills, regardless of AI usage,” and “potential costs.”

The second paper has two: “students using GenAI tools score on average 6.71 (out of 100) points lower than non-users,” and “suggesting an effect whereby GenAI tool usage hinders learning.”

I ask you, sir: where exactly do you get "AI over-reliance will make us worse…because it's true" from TWO studies that go out of their way to make clear there is no causative link, only correlation; that point out significant mediators of the effect; that identify only potential costs; and that show only half a letter grade of difference, which, when you're dealing with students, could come down to all sorts of things? Not to mention we're dealing with one preprint and some truly confounding study design.

If you don’t understand research methods, please stop presenting papers as if they are empirical authorities on truth.

It diminishes trust in real academic work.


I was just about to say the same thing. This is bad code/documentation. Single-letter variable names are almost always wrong unless it's i for an index or such (and even then, would typing 'idx' kill you?). As parameter names they are even worse. Don't make me guess how to call your function, please.
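A hypothetical sketch of the difference, in Python (the function names and the circle SDF are my own invented example, not from the article under discussion):

```python
import math

# Hard to call correctly from the signature alone: what are p, r, k?
def sd_circle(p, r, k):
    return math.hypot(p[0], p[1]) - r * k

# Same function with descriptive names: no guessing at the call site.
def signed_distance_circle(point, radius, scale=1.0):
    """Signed distance from `point` to a circle of `radius`, optionally scaled.

    Negative inside the circle, zero on the boundary, positive outside.
    """
    x, y = point
    return math.hypot(x, y) - radius * scale
```

Both behave identically; only the second one documents itself.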


Or maybe terseness helps put your brain into pure algorithmic mode? After all, that's how mathematical notation works, and SDFs are pretty mathematical.


You assume they were talking about a single product. At my job there is an essentially endless amount of small tasks. We have many products and clients, and many internal needs, but we can't really justify the human capital. I might write 20 to 50 Python scripts in a week just to visualize the output of my code: dead boring stuff like making yet another matplotlib plot, simple stats, etc. Sometimes some simple animations. There is no monstrosity being built; this is not evidence of tacking on features or whatever you think must be happening. It's just a lot of work that doesn't justify paying a Bay Area principal engineer salary, in the face of a board that thinks the path to riches is laying off the people actually making things and turning the screws on those remaining, who struggle to keep up with the workload.
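The kind of throwaway visualization script in question is typically just a few lines; here's a hypothetical stand-in (the fake random data takes the place of real model output, and the filename is my own choice):

```python
# Hypothetical throwaway script: quick stats plus a saved plot for
# some model output (fake data here, standing in for a real CSV).
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend: just write a PNG, no display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
output = rng.normal(loc=1.0, scale=0.2, size=500)  # stand-in for model output

# Dead-simple stats, then a plot nobody will look at twice.
print(f"mean={output.mean():.3f}  std={output.std():.3f}  n={output.size}")

plt.plot(output)
plt.title("model output (stand-in data)")
plt.savefig("output.png", dpi=120)
```

Nothing here is hard; it's just volume.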

Work is finite, but there can be vastly more available than there are employees to do it for many reasons, not just my personal case.


I grew up in the 70s. The hand wringing then was calculators. No one was going to be able to do math anymore! And then wrist watches with calculators came out. Everyone is going to cheat on exams, oh no!

Everything turned out fine. It turns out you don't really need to be able to perform long division by hand. Sure, you should still understand the algorithm at some level, especially if you work in STEM, but otherwise, not so much.

There were losses. I recall my AP physics professor was one of the old-school types (retired from industry to teach). He could find the answer to essentially any problem to about 1-2 digits of precision in his head nearly instantly. Sometimes he'd have to reach for his slide rule for harder things, or to get a few more digits. Ain't no one that can do that now (for reasonable values of "no one"). And it is a loss, in that he could catch errors nearly instantly. Good skill to have. A better skill is to be able to set up a problem for finite element analysis, write kernels for operations, find an analytic solution using Mathematica (we don't need to do integrals by hand anymore, for the most part), unleash R to validate your statistics, and so on. The latter skills are more valuable than the former, and so we willingly pay the cost. Our ability to crank out integrals isn't what it was, but our ability to crank out better jet engines, efficient cars, and computer vision models has exploded. Worth the trade-off.

Recently watched an Alan Guth interview, and he made a throwaway comment, paraphrased: "I proved X in this book, well, Mathematica proved...". The point being that the proof was multiple pages per step, and while he could keep track of all the sub/superscripts and perform the Einstein sums on all the tensors correctly, why??? I'd rather he use his brain to think up new solutions to problems, not manipulate GR equations by hand.

I'm ignoring AGI/singularity type events, just opining about the current tooling.

Yeah, the transition will be bumpy. But we will learn the skills we need for the new tools, and the old skills just won't matter as much. When they do, yeah, it'll be a bit more painful, but so what? We gained so much efficiency we can afford the losses.


The article and the press release it was derived from say nothing about "more efficient", just smaller.

https://yasa.com/news/yasa-smashes-own-unofficial-power-dens...

