
That's not great, although at least a splash screen is purely cosmetic. More disappointing is that, even in 2026, the move to Wayland still comes with issues like this:

> Some popups, such as Search Everywhere and Recent Locations, may not be moved outside of the main frame.

and

> Some windows and dialogs, e.g. Project Structure and Alerts, may not be centered on the screen or keep their previous location. This is due to the window manager having total control over windows’ locations in Wayland, which it is not always possible to override on the application side.


Which means, in the end, one has to make peace with the idea that the Year of the Linux Desktop works better as a UNIX-like headless system running on Apple Virtualization or WSL VMs.

On one side we have X not getting updates, as key devs moved to Wayland; on the other side we have Wayland, which is reaching Hurd levels of development time, with issues like these despite being in development since 2008(!).

Then we have the whole set of issues with audio stacks, reverse-engineered 3D drivers (even though all GPU vendors have their own Linux distros for AI researchers), video hardware decoding that is still hit and miss, ...


Well, Linux is innovating, and innovating, and innovating, and ... looks more and more like it's being run by CADTs with no clear goals. The only goal seems to be to use as many shared libraries as possible, making even Windows 95, with its DLL hell, look pale in comparison.

I'm glad there's finally some progress in that direction. If they actually implement subpixel RGB anti-aliasing, it would definitely be worth considering as an alternative. It's been surprising to see so many people praise Zed when its text rendering (of all things) has been in such a state for so long.


Tbh though, is subpixel text rendering really all that important anymore, now that high-resolution monitors are common and low-DPI is the exception?


You should get outside your major metropolis and highly paid Western job once in a while. High-DPI monitors are the exception for most of the world.


It doesn't take being outside of the West for this to be relevant. Two places I currently frequent, A) the software development offices of a Fortune 500 company, and B) the entire office and general spaces (classrooms, computer labs, etc.) of a sizeable university, have 1080p monitors for >80% of their entire monitor deployment.


Even then... my vision is pretty bad, so earlier this year I upgraded to 45" 3440x1440 monitors, and even then I'm viewing at 125%, so subpixel font rendering helps a lot in terms of readability, even if I can't pick out the native pixels well.

They aren't high-DPI though, just big and still zoomed. On the plus side, it's a very similar experience to two 4:3 monitors glued together... side-by-side apps on half the screen is a pretty great experience... on the downside, RDP sessions suck; I may need to see if I can find a scaling RDP app.


Most people I know are on 1920x1080 LCDs. Over half of PC gamers seem to be on that resolution, for example: https://store.steampowered.com/hwsurvey


My gaming PC is also connected to a 1080p display, because tbh for gaming that's good enough, but I don't whine about application text quality on that setup, since it looks pretty bad with or without ClearType compared to a high-DPI display ;)


Yeah, I tried to give it a go on Fedora, but the terrible text rendering made it an insta-delete for me.


OK, but for what reason do we need sub-pixel RGB anti-aliasing here? Do we run a game engine for code?


Subpixel antialiasing of fonts has been a pretty standard feature for a few decades. Without it, text can look fuzzy on certain displays.
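
To make the "fuzzy" point concrete, here's a minimal sketch of the idea (my own toy, not Zed's or FreeType's actual code), assuming a horizontal RGB-stripe panel: sample glyph coverage at 3x horizontal resolution and let each sample drive one color channel, tripling the effective horizontal resolution.

    import numpy as np

    # Hypothetical helper, not from any real rasterizer: map coverage
    # sampled at 3x horizontal resolution onto R, G, B subpixels.

    def subpixel_downsample(coverage_3x: np.ndarray) -> np.ndarray:
        """coverage_3x: (H, 3*W) glyph coverage in [0, 1].
        Returns (H, W, 3) pixels for black text on white."""
        h, w3 = coverage_3x.shape
        assert w3 % 3 == 0, "width must be a multiple of 3"
        rgb = coverage_3x.reshape(h, w3 // 3, 3)  # samples -> R,G,B triples
        return 1.0 - rgb  # full coverage -> dark subpixel

    # A one-sample-wide vertical stem covering only the G subpixel
    # of the middle pixel:
    cov = np.zeros((4, 9))
    cov[:, 4] = 1.0
    print(subpixel_downsample(cov)[0])  # middle pixel [1., 0., 1.]: a color fringe

Real implementations follow this with a low-pass filter across neighboring subpixels to tame exactly that fringing, which is also why the technique breaks on panels with other subpixel layouts.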

> Do we run a game engine for code?

Zed literally does this; it renders its UI using a graphics library, just like a video game.


It's fun to see "GPU accelerated" and "like a game engine" when literally every application is rendered the same way, with the same APIs.


Last I checked I don't create a GL context to make a WPF app.


There is certainly a lot to be said for first-mover advantage, but TradeMe specifically (while still popular) is currently fighting a losing battle. Between rising listing fees, selling fees as a percentage of the product's value, and the influx of drop-shippers that flood categories with "local" products, its days are likely numbered. Personally, while 10 years ago everyone I knew was using it, I no longer know anybody that goes to TradeMe to sell everyday items. I'm not a fan of FB, but the fact that it's monetarily free to use makes all the difference for the average user.


I don't entirely disagree with some of your points; however, they are not what the recent discussion has been about. Plainly, a maintainer has unilaterally rejected the addition of Rust code to assist with DMA and requested that the code be duplicated in every driver (which also ignores the fact that the patch never added Rust code to kernel/dma to begin with). It strikes many as strange that the experimental addition of Rust-based drivers (greenlit by Linus originally) has come to a head in this way:

"The common ground is that I have absolutely no interest in helping to spread a multi-language code base. I absolutely support using Rust in new codebase, but I do not at all in Linux."


I think rejecting code in his area is his right as a maintainer of this area (to the extent this is the case, I haven't checked). Also the patch adds a file kernel/dma.rs so I am confused about your comment. I also happen to agree that maintaining multi-language code is a pain, and I understand that he does not want this imposed on him. This may make it harder for Rust kernel developers, but I cannot see how this is "sabotage of the project".


> (to the extent this is the case, I haven't checked)

It's not his area.

> Also the patch adds a file kernel/dma.rs so I am confused about your comment.

It adds rust/kernel/dma.rs, not kernel/dma. That is, it adds that file here: https://github.com/torvalds/linux/tree/master/rust/kernel not here: https://github.com/torvalds/linux/tree/master/kernel/dma


If it is not his area, he should have no power to stop it anyway, so it is even more strange to call it "sabotage". Or does any kernel developer have the right to NACK anything? This seems unlikely to me. So then it is just some disagreement.


You have to remember that Linux development isn't done like most open source projects, where there's one upstream tree everyone sends patches to. Linus pulls in whatever code he wants. A NACK means that Hellwig won't pull it into his tree, but that doesn't mean it can't go into someone else's and end up upstream anyway. The only reason he was even cc'd on the patch is that he's the relevant subsystem maintainer being wrapped, as a courtesy.


As I said, the idea that this is then "sabotage" is completely ridiculous and just shows how toxic this maintainer, who has now removed himself, was.


There are multiple comments that meet the exact definition of sabotage. If this:

"You might not like my answer, but I will do everything I can do to stop this."

is not intent to sabotage (even if it might not be successful as Linus could pull in the patch anyway), then what possibly could be?

Pointing out the ridiculousness of comments like this and suggesting the R4L folks push forward while ignoring them doesn't scream toxicity. Refusing to compromise with the R4L devs and calling the additions a 'cancer' has expectedly caused a stir.


I think you need to look up "sabotage" in the dictionary.


sabotage /ˈsabətɑː(d)ʒ/ verb

1. deliberately destroy, damage, or *obstruct* (something), especially for political or military advantage.

2. to intentionally prevent the success of a plan or action.

Definitions from Oxford, Collins, and Cambridge all fit the bill. Even dictionary.com has "any undermining of a cause."


Yes, and now compare that to what happened. You could differentiate between words and actions and what exactly was affected. For someone to "sabotage the Rust experiment in the kernel", you would need to demonstrate that that person did something that a) effectively damaged/obstructed the project (which, if GP is right that this person has no power to stop the merging of the patch anyway, is a dubious claim), etc.

Misrepresenting the voicing of opposition to some process as "sabotage" seems completely out of line for any kind of community project. If you define things so loosely, then every side in any disagreement could always accuse the other side of "sabotage". This reflects the sentiment of many Rust people that they are "on the right side of history", where everybody else is automatically wrong and even voicing objections and criticism is already "sabotaging" the true path.


If you don't think what Hellwig did obstructed the project, then I guess we fundamentally disagree. Again, the R4L folks could work around this by getting the code pulled in by Linus, but that doesn't change the fact that a senior maintainer has made it explicit that they will do everything in their power to stop this.

Code that wasn't his to reject was NACKed, causing a large amount of uncertainty about how to proceed with drivers that use DMA, and about the R4L project in general. At the absolute least, this is plain intent to sabotage (but IMO it is clearly more than intent at this stage). The core of what you are saying is that this has had, and will have, absolutely no impact on anything to do with future R4L progress. The explosion of discussions around this exact topic across various forums, with abundant disagreement from maintainers and R4L folks running counter to that idea, is irrelevant, I guess.

I'm not even a "rust person", and nobody has said anything about "being on the right side of history" except you. If that's how you see this discussion, then we're not going to get anywhere. I wish you well, and urge you in future to engage in good faith and consider that not everybody is some boogeyman "on the true path" evangelist.


A senior maintainer has a different opinion and expresses it. Your point seems to be that because he is not on board with the plan and expresses this, it is already "sabotage". Of course it makes things harder if not everybody agrees. But people disagreeing with your plan and saying so is not the same thing as "sabotage". Sorry, this is ridiculous.


There is a large difference between "I do not think this is a good idea" and "do not do this", in particular given the position Hellwig has in the kernel as a listed maintainer of the DMA mapping helpers.

No single technical reason was given besides a non-specific opinion on the "messiness" of multi-language projects.


This is a remarkably disappointing end to Hellwig's original NACK, yet entirely expected based on the recent treatment of R4L devs.


This is a great visualization. I wonder what techniques were used to create it, as neither this article nor the original source seems to highlight them.


One thing I've always wondered about these kinds of simulations is how they deal with numerical issues, since I assume they need to use both very small and very large numbers. Additionally, even in simple classical physics, integration errors can add up very quickly, so I wonder how this problem is avoided when working at these kinds of scales. Similar thoughts for things like galaxy simulations, simulations of planet-sized collisions, etc.
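
(A hedged aside on the classical-integration part of my own question: one standard answer I'm aware of is symplectic integrators, whose energy error stays bounded instead of accumulating. A toy of my own, not from any production code:)

    # Leapfrog / velocity-Verlet for a unit harmonic oscillator. Being
    # symplectic, its energy error oscillates but stays bounded over
    # long runs, unlike forward Euler's, which grows without bound.

    def leapfrog(x, v, dt, steps):
        a = -x  # F = -x for unit mass and unit spring constant
        for _ in range(steps):
            v += 0.5 * dt * a   # half kick
            x += dt * v         # drift
            a = -x
            v += 0.5 * dt * a   # half kick
        return x, v

    x, v = leapfrog(1.0, 0.0, dt=0.01, steps=1_000_000)
    energy = 0.5 * v * v + 0.5 * x * x
    print(abs(energy - 0.5))  # stays tiny even after a million steps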


There are techniques for arbitrary-precision math (look up "BigNum"); I assume they're used there for some of the things.
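
For what it's worth, a tiny illustration of what BigNum-style arithmetic buys you, using Python's built-in decimal module (just an example of the technique, no claim the visualization actually uses it):

    from decimal import Decimal, getcontext

    # At 50 significant digits, adding 1 to 10^30 is exact; in 64-bit
    # floats the 1 falls below the available precision and vanishes.
    getcontext().prec = 50

    big = Decimal(10) ** 30
    print(float(big + 1) - float(big))  # 0.0 -- lost in float64
    print((big + 1) - big)              # 1   -- preserved by Decimal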


That would probably be infeasibly slow.

There is a whole field of CS that deals with minimizing error when doing floating-point math. They probably just use decent algorithms/encodings.
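
One concrete example from that field, sketched in Python (illustrative only; actual simulation codes combine many such tricks with careful unit scaling): Kahan's compensated summation, which carries each addition's rounding error forward so it isn't silently dropped.

    # Kahan (compensated) summation: the correction term c recovers the
    # low-order bits that a plain running sum throws away every step.

    def kahan_sum(values):
        total = 0.0
        c = 0.0  # running compensation for lost low-order bits
        for x in values:
            y = x - c            # apply the correction from last step
            t = total + y        # big + small: y's low bits get rounded off
            c = (t - total) - y  # algebraically recover what was lost
            total = t
        return total

    xs = [1.0] + [1e-16] * 1_000_000
    print(sum(xs))        # naive: 1.0 -- every tiny term vanishes
    print(kahan_sum(xs))  # ~1.0000000001 -- the tiny terms survive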


These are good questions. I'll get to the core of them after a one-paragraph rant about the aggregator linked at the top, and three paragraphs about how this bit of "science communication" (s̶c̶a̶r̶e̶ sarcasm quotes) means no answers for you about this particular visualization. I did end up peppering in a couple relevant bits of information in them though. In case it's not clear, the ordering of paragraphs below is not the order in which they were written.

Preliminarily: phys.org is hot garbage. It mostly reproduces institutions' (universities, NASA) press releases with its own ads. Rarely there is original content of dubious quality. This is not the latter, it's just a NASA-written blurb. Fortunately the same content is at the original source https://science.nasa.gov/supermassive-black-holes/new-nasa-b... -- absent that I probably wouldn't have started an answer.

Unfortunately the NASA link and its link to youtube give too little information to say anything reliable about the numerical relativity (NR) "codes" (s̶c̶a̶r̶e̶ jargon quotes) for this particular visualization. Yes, it's pretty and cool; yes it will excite relativists as well as laypersons; however, how about tossing the former a little drop of technical information? The supercomputer used and how much time was spent on it is not really useful to know.

I can only guess that the visualizer, Jeremy Schnittman, would want to generate data using the tools with which he's most familiar. Digging around in his publication history <https://scholar.google.com/citations?user=MiUTIQwAAAAJ> I see that he uses lots of Monte Carlo methods, often averaging over many very slightly different simulations (which might individually be wrong). I can see that for his black hole related work (mostly studying X-rays flying about just outside black holes, but also other phenomena close to but outside the horizon, or a little before the most strongly relativistic parts of black hole mergers) he prefers his own (Monte Carlo) tool Pandurata, which is described in Schnittman & Krolik 2013 <https://iopscience.iop.org/article/10.1088/0004-637X/777/1/1...>.
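
As a hedged aside on what averaging over many very slightly different simulations buys you (a toy of my own, nothing to do with Pandurata): each individual Monte Carlo run can be noticeably wrong, but the ensemble mean converges like 1/sqrt(N).

    import random

    # Toy Monte Carlo ensemble: each "simulation" estimates pi by
    # sampling points in the unit square; individually noisy, but the
    # average over N runs shrinks the error like 1/sqrt(N).

    def one_run(samples=1000):
        hits = sum(random.random()**2 + random.random()**2 <= 1.0
                   for _ in range(samples))
        return 4.0 * hits / samples

    runs = [one_run() for _ in range(1000)]
    print(one_run())              # a single run: typically off in the 2nd decimal
    print(sum(runs) / len(runs))  # ensemble average: much closer to pi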

It is not at all obvious to me, especially given the scant information provided by NASA publicity, how he would safely apply these sorts of toolsets to the interior black hole metric. We aren't even told what the metric is, really; I assume Kerr with modest angular momentum, given his previous publications focus on astrophysically-reasonable Kerr black holes. There's another hint in what in the video looks like dimming at the receding limb (on the right) of the accretion disc. But this visualization could also be just Schwarzschild. Who knows? I look forward to his own professional writeup!

[Other NR tools are available, e.g. <https://nrpyplus.net/> and <https://grchrombo.org/movies/>]

Consequently I'll focus in on this part of your comment:

> these kinds of scales [or] galaxy simulations [...]

These are in the realm of numerical relativity and computational astrophysics, respectively. There is an overlap.

Although I had in mind a couple resources about your questions, they're mostly textbooks which aren't freely available. So I first visited Sebastiano Bernuzzi's always useful syllabus http://sbernuzzi.gitpages.tpi.uni-jena.de/nr/ (nr is for Numerical Relativity, solving Einstein's equations with computers) and picked out two useful freely-available resources to start with. They are both called "lecture notes" but are really mini textbooks. They both have excellent bibliographies.

Choptuik's 2006 "Numerical Analysis for Numerical Relativists" (PDF) http://laplace.physics.ubc.ca/People/matt/Teaching/06Mexico/... is awesome. It focuses on finite difference techniques, which dominate in numerical relativity, particularly where black holes are concerned.

Like many others, Bernuzzi's 2021 "3+1 Numerical Relativity" <http://sbernuzzi.gitpages.tpi.uni-jena.de/nr/notes/2021/main...> points to it in section 2.4, where you will find references to other and newer treatments of various numerical relativity methods. Other techniques get used too: finite element, for example. For galaxy stuff, you would want a resource on e.g. smoothed particle hydrodynamics or particle-particle/particle-mesh-Ewald.
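
To give a flavor of those finite-difference methods (a toy of my own, nothing to do with Schnittman's tooling): the 1D wave equation u_tt = u_xx with second-order centered differences in space and time, which is the baby version of what NR codes do to Einstein's equations.

    import numpy as np

    # 1D wave equation, centered differences in space and time (the
    # kind of scheme the Choptuik notes build up to). CFL number
    # dt/dx = 0.5 keeps the explicit scheme stable.

    n = 400
    dx = 1.0 / n
    dt = 0.5 * dx
    x = np.linspace(0.0, 1.0, n)

    u_prev = np.exp(-200.0 * (x - 0.5) ** 2)  # Gaussian pulse at t = -dt
    u = u_prev.copy()                          # zero initial velocity

    for _ in range(200):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u_prev, u = u, 2.0 * u - u_prev + dt**2 * lap  # leapfrog step

    print(u.max())  # the pulse has split into two ~half-amplitude waves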

ETA^2: wow, an actually useful physics SE q&a on that last bit (contrasting finite difference & finite element methods for black holes) from a little less than two years ago (direct link to imho good answer): <https://physics.stackexchange.com/a/725998>

ETA: this is more just a bookmark for me. Even though, like Choptuik above, it's 17 years old, <https://www.cita.utoronto.ca/~pfeiffer/talks/07Apr_Jacksonvi...> is a really great slide deck.

ETA^3: Bernuzzi "Introduction to Numerical Relativity" @ IHÉS winter 2024 https://www.youtube.com/watch?v=RcdntEBrcuM is probably pretty good.


Looks like it was due to unsafe processing of custom emoji: https://github.com/LemmyNet/lemmy-ui/pull/1897

