karianna's comments

Hard agree on the data waste: the noise-to-signal ratio is typically very high, and processing, shipping, and storing all of that data costs a ton.

The previous start-up I worked on (jClarity, exited to Microsoft) mitigated much of this with a model of collecting only the tiny amount of data that really mattered for a performance-bottleneck investigation into a ring buffer, and only processing, shipping, and storing that data if a bottleneck trigger fired (plus occasional baselines).

It allowed our product at the time (Illuminate) to run at massive scale without costing our customers an arm and a leg or impacting their existing infrastructure. We charged on the value of the product reducing MTTR, not on how much data was being chucked around.

There was the constant argument against this approach in favour of always-on observability, or "collect all data JIC", but with a good model (in our case something called the Java Performance Diagnostic Method) we never missed having the noise.
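
For anyone curious what that model looks like mechanically, here is a minimal, hypothetical sketch (not the actual Illuminate code; the names and semantics are purely illustrative): keep only the most recent samples in a fixed-size ring buffer, and drain them for shipping only when a bottleneck trigger (or an occasional baseline timer) fires.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    // Hypothetical sketch: hold only a small window of recent samples in
    // memory and ship them downstream only when a trigger fires.
    final class TriggeredRingBuffer<T> {
        private final Deque<T> buffer = new ArrayDeque<>();
        private final int capacity;

        TriggeredRingBuffer(int capacity) {
            this.capacity = capacity;
        }

        // Record a sample; the oldest sample is silently dropped once full.
        synchronized void record(T sample) {
            if (buffer.size() == capacity) {
                buffer.removeFirst();
            }
            buffer.addLast(sample);
        }

        // Called when a bottleneck trigger (or baseline timer) fires:
        // drain the current window and hand it off for processing/shipping.
        synchronized List<T> drainForShipping() {
            List<T> window = new ArrayList<>(buffer);
            buffer.clear();
            return window;
        }
    }

Everything outside those trigger-driven (or baseline) drains never leaves the process, which is where the cost saving comes from.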


In broad strokes, I think this is similar to Bitdrift (https://bitdrift.io) - though they’re focused on mobile observability.


And looks similar to Grepr [0].

0. https://www.grepr.ai/


That’s awesome


Usual caveats of "you should run load to see if it's truly wasted" apply, but we do know that Java defaults are not ideal out of the box, so we analysed a ton of workloads on Azure and came up with this: https://learn.microsoft.com/en-us/java/jaz - better defaults for the JVM in containers at runtime.


That doc is gold, thanks for linking.

Yeah, defaults are the enemy here. Most of the waste I'm seeing in the data comes from generic Spring Boot apps running with out-of-the-box settings, where the JVM assumes it owns the entire node.
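
A quick way to sanity-check what the JVM believes it has been given inside a container, using only standard JDK APIs (the class name here is just for illustration):

    public class JvmResourceCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // Max heap the JVM will grow to; influenced by -Xmx /
            // -XX:MaxRAMPercentage and, on container-aware JVMs, the
            // container memory limit.
            System.out.println("Max heap (MiB): " + rt.maxMemory() / (1024 * 1024));
            // CPUs the JVM thinks it can use; influenced by cgroup CPU
            // quotas and -XX:ActiveProcessorCount.
            System.out.println("Available processors: " + rt.availableProcessors());
        }
    }

If those numbers look like the whole node rather than the container limits, the defaults are almost certainly working against you.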


No source, useless outside of Azure.


You can just install it outside of Azure: https://learn.microsoft.com/en-us/java/jaz/overview#install-...

Note: the tool calls out to Microsoft, but the docs don't say what data is shared ("telemetry"); better make sure your firewalls are functioning before running this tool.

Not sure what the source license story is.


I wonder if statistically (hand-waving here, I'm so not an expert in this field) the SOTA models do as much or as little harm as their human counterparts in terms of providing safe and effective emotional support. Totally agree we should better understand the risks and trade-offs, but I wouldn't be super surprised if they are statistically no worse than us meat bags at this kind of stuff.


One difference is that if it were found that a psychiatrist or other professional had encouraged a patient's delusions or suicidal tendencies, then that person would likely lose his/her license and potentially face criminal penalties.

We know that humans should be able to consider the consequences of their actions and thus we hold them accountable (generally).

I'd be surprised if comparisons in the self-driving space have not been made: if waymo is better than the average driver, but still gets into an accident, who should be held accountable?

Though we also know that with big corporations, even clear negligence that leads to mass casualties does not often result in criminal penalties (e.g., Boeing).


> that person would likely lose his/her license and potentially face criminal penalties.

What if it were an unlicensed human encouraging someone else's delusions? I would think that's the real basis of comparison, because these LLMs are clearly not licensed therapists, and we can see from the real world how entire flat earth communities have formed from reinforcing each others' delusions.

Automation makes things easier and more efficient, and that includes making it easier and more efficient for people to dig their own rabbit holes. I don't see why LLM providers are to blame for someone's lack of epistemological hygiene.

Also, there are a lot of people who are lonely and for whatever reasons cannot get their social or emotional needs met in this modern age. Paying for an expensive psychiatrist isn't going to give them the friendship sensations they're craving. If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

> if waymo is better than the average driver, but still gets into an accident, who should be held accountable?

Waymo of course -- but Waymo also shouldn't be financially punished any harder than humans would be for equivalent honest mistakes. If Waymo truly is much safer than the average driver (which it certainly appears to be), then the amortized costs of its at-fault payouts should be way lower than the auto insurance costs of hiring out an equivalent number of human Uber drivers.


> I would think that's the real basis of comparison

It's not, because that's not the typical case. LLMs encourage people's delusions by default; it's just a question of how receptive you are to them. Anyone who's used ChatGPT has experienced it even if they didn't realize it. It starts with "that's a really thoughtful question that not many people think to ask" and "you're absolutely right [...]".

> If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Talk to ChatGPT and try to put yourself into the shoes of a hurtful person (e.g. what people would call "narcissistic") who's complaining about other people. Keep in mind that they almost always suffer from a distorted perception so they genuinely believe that they're great people.

They can misunderstand some innocent action as a personal slight, react aggressively, and ChatGPT would tell them they were absolutely right to get angry. They could do the most abusive things and as long as they genuinely believe that they're good people (as they almost always do), ChatGPT will reassure them that other people are the problem, not them.

It's hallucinations feeding into hallucinations.


> LLMs encourage people's delusions by default, it's just a question of how receptive you are to them

There are absolutely plenty of people who encourage others' flat earth delusions by default, it's just a question of how receptive you are to them.

> There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Again, that sounds like a people problem. Dictators infamously fall into this trap too.

Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike. If others are okay with having their egos stroked and their delusions encouraged and validated, that's their prerogative.


> If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

It's not a matter of liking or disliking something. It's a question of whether that thing is going to heal or destroy your psyche over time.

You're talking about personal responsibility while we're talking about public policy. If people are using LLMs as a substitute for their closest friends and therapist, will that help or hurt them? We need to know whether we should be strongly discouraging it before it becomes another public health disaster.


> We need to know whether we should be strongly discouraging it before it becomes another public health disaster.

That's fair! However, I think PSAs on the dangers of AI usage are very different in reach and scope from legally making LLM providers responsible for the AI usage of their users, which is what I understood jsrozner to be saying.


>Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

We're not holding LLMs to a higher standard than humans, we're holding them to a different standard than humans because - and it's getting exhausting having to keep pointing this out - LLMs are not humans. They're software.

And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem. We tend not to accept that kind of behavior in other people, because we understand the very real negative consequences of mass delusion and sociopathy. Why should we accept it from software?


> LLMs are not humans. They're software.

Sure, but the specific context of this conversation are the human roles (taxi driver, friend, etc.) that this software is replacing. Ergo, when judging software as a human replacement, it should be compared to how well humans fill those traditionally human roles.

> And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

Fair point.

> And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem.

Fair point again. Thanks for helping me gain a wider perspective.

However, I don't see it as inevitable that this becomes a serious large-scale problem. In my experience, current GPT 5.1 has already become a lot less cloyingly sycophantic than Claude is. If enough people hate sycophancy, it's quite possible that LLM providers are incentivized to continue improving on this front.

> We tend not to accept that kind of behavior in other people

Do we really? Maybe not third party bystanders reacting negatively to cult leaders, but the cult followers themselves certainly don't feel that way. If a person freely chooses to seek out and associate with another person, is anyone else supposed to be responsible for their adult decisions?


I'm being lazy (am an iStats user): has anyone here compared the two?


I swapped from stats to iStats a few years ago. I've found stats to be easier to customize and the UI is more uniform. That said, it doesn't have the weather widget, so I've kept iStats around solely for its weather functionality.


A weather widget is built-in to macOS now.

System Settings > Control Center > Weather


argh!! why oh why does mine not have this??? Sequoia 15.1.1


It comes with Sequoia 15.2.

I am still on 15.1.1 too as I have been waiting for a 15.2.1 for bug fixes, but they released 15.3 instead.


Of course it is.

Thanks. Now I'm wondering why it hasn't been pestering me to upgrade. I just went and looked, and now it sees the upgrade is available. I must have told it "later" and promptly forgot about it.


I assume you mean you went iStats -> stats


I switched from iStats to stats like 1 year ago. I found it more responsive (and free). I'm using Vetero for the weather functionality.


This particular patch was upstreamed to OpenJDK tip and will be part of Java 22 for all vendors. We have back-ported the patch to our builds of 11, 17 (and shortly 21), and offer those back-ports upstream. However, those back-ports won't always be accepted into the stable LTS branches, so sometimes they'll only sit in our builds for those versions (like this patch in our 17.0.8+ builds). We make that clear in our release notes etc and the code is always available on our GitHub under the GPLv2+CE license.


This is interesting. Does this mean builds from Amazon, MS, and Red Hat may have different features?


Yes, they do. Many people don't get the point across that Java, like C and C++, has plenty of implementations.

Regarding OpenJDK, here are some examples:

Amazon has built Lambda SnapStart into their implementation; Azul uses an LLVM-based JIT, their own pauseless GC, and extensions for value types (ObjectLayout); Red Hat ships Shenandoah.


Yeah, we're not going to do that :-).

The Microsoft Build of OpenJDK is very much a build of OpenJDK that Azure customers can get free support for and over time get some Azure specific performance benefits from. Those performance benefits will be worked on upstream as well (like this escape analysis patch was) but there will be times where upstream won't want to take a patch on a stable LTS branch for example. We'll continue to be very clear when we have a patch that is not in the upstream code and it'll always be available on our GitHub under the GPLv2+CE license.


It's a reunion for the worst of reasons, but it's a testament to crazy Bob how many people he made feel awesome along the way.


I had the pleasure of working with Bob for a couple of years in the Java space, formalising the thinking around Inversion of Control and dependency injection, which was eventually standardised as JSR 330. His clarity of thinking, clean architecture, and code are still exemplars that I send new Java engineers to go and learn from.

He was kind, and took his time to explain things in a way that the day-to-day developer could understand.

I'll miss you, crazy Bob. My heart goes out to your family.


Chris was also well known in the London Java Community and contributed to multiple events and conferences, bringing his expertise in a thoughtful and kind way to folks who were often 1-2 stacks above the runtimes and compilers space.

My conversations with him about JVM design, GraalVM and Ruby were some of the most memorable and fun times I’ve had running around the conference circuit.

Thank you Chris for unselfishly sharing so much with us and leaving the world a better place, you will be missed.

I’m off to go and hug loved ones now.


The folks who work on Adoptium include many OpenJDK committers (especially from Red Hat and Microsoft), so it does contribute back (but you'll see folks using their company addresses). I agree that can confuse folks.

