The bummer is that the transmission is only unidirectional, but a unicycle needs bidirectional control, with very little backlash. Every economical planetary gear hub involves over-running pawls, alas.
- Shader language is SPIR-V-compatible GLSL 4.x, which makes it fairly trivial to import existing GL shaders (one of my requirements was support for https://editor.isf.video shaders).
Cons:
- Was developed before Vulkan Dynamic Rendering was introduced, so the whole API is centered around the messy renderpass machinery, which, while powerful, is sometimes more tedious than necessary when your focus is desktop app development. However, Qt also has a huge focus on embedded, so it makes sense to keep the API this way.
- Most likely there are some unnecessary buffer copies here and there compared to doing things raw.
- Does not abstract many texture formats. For instance, there is still no support for YUV textures, e.g. VK_FORMAT_G8_B8_R8_3PLANE_420_UNORM and friends :'(
Sorry, I'll try to be clearer. QRhi docs[1] say "The Qt Rendering Hardware Interface is an abstraction for hardware accelerated graphics APIs, such as, OpenGL, OpenGL ES, Direct3D, Metal, and Vulkan." And PySide6 includes a (python) wrapper for QRhi[2]. Meanwhile, pygfx builds on wgpu-py[3], which builds on wgpu[4], which is "a cross-platform, safe, pure-rust graphics API. It runs natively on Vulkan, Metal, D3D12, and OpenGL".
So, from the standpoint of someone using PySide6, QRhi and pygfx seem to be alternative paths to doing GPU-enabled rendering, on the exact same range of GPU APIs.
Thus my question: How do they compare? How should I make an informed comparison between them?
> How should I make an informed comparison between them?
Pygfx provides higher-level rendering primitives. The more apples-to-apples comparison would be wgpu-py versus QRhi, both of which are middleware that abstract the underlying graphics API.
The natural question is: are you already using Qt? You say you are, so IMHO the pros and cons of the specific implementations don't matter unless you have some very specific exotic requirements. Stick with the solution that "just works" in the existing ecosystem and you can jump into implementing your specific business logic right away. The other option is getting lost in the weeds writing glue code to blit a wgpu-py render surface into your Qt GUI, then debugging that code across multiple different render backends.
Yeah, sounds like QRhi is about at the level of WebGPU/wgpu-py.
It sounds to me that Qt created their own abstraction over Vulkan and co, because wgpu did not exist yet.
I can't really compare them from a technical POV, because I'd have to read more into QRhi. But QRhi is obviously tied to / geared towards Qt, which has advantages as well as disadvantages.
Wgpu is more geared towards the web, so it likely pays more attention to e.g. safety. WebGPU is also based on a specification; there is a spec for the JS API as well as a spec for webgpu.h. There are actually two implementations (that I know of) of webgpu.h: wgpu-native (which runs WebGPU in Firefox) and Dawn (which runs WebGPU in Chrome).
Sourceforge still works, and is (now) reliable. The awful DevShare malware stuff that Sourceforge started (under new ownership) in 2012 was stopped (under different new ownership) in 2016.
The site is ugly and difficult to navigate. Go there and enter TCL in the search box and see how many clicks it takes to actually find the project. Then when you click the download button, a timer has to run out before the download will start.
Maybe it technically works, but it's a terrible UX.
Fwiw the GitHub page for that proposal isn't linking to what seems to be the most recent discussion *. I thought type-annotations-as-comments was a no-brainer, but it seems messy. The Sept 2023 discussion shows that it is complicated by the current variety of annotation schemes (TypeScript is not the only one) and by the parsing subtleties, which gave rise to unresolved questions about what the basic motivation even is. Bummer.
What a sweet film. Thank you for the link. This whole time when I heard about work on the Voyager mission I assumed there was a larger team, with fewer single points of failure.
I first learned about "leaky abstractions" from John Cook, who describes* IEEE 754 floats as a leaky abstraction of the reals. I think this is a good way of appreciating floating point for the large group of people whose experience is somewhere between numerical computing experts (who look at every arithmetic operation through the lens of numerical precision) and total beginners (who haven't yet recognized that there can't be a one-to-one correspondence between a point on the real number line and a "float").
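The leak is easy to see in a couple of lines: most decimal fractions have no exact binary representation, so identities that hold on the reals (exact equality, associativity) don't survive the trip through float arithmetic. A minimal Python illustration:

```python
import math

# 0.1 and 0.2 have no exact binary64 representation, so their
# rounded sum is not the double nearest to 0.3.
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# The practical workaround: compare with a tolerance instead of ==.
print(math.isclose(a, 0.3))  # True

# The leak also shows up as absorption at large magnitudes:
# near 1e16 the spacing between adjacent doubles exceeds 1,
# so adding 1 is lost to rounding.
print((1e16 + 1) - 1e16)     # 0.0
```

Nothing here is a bug in Python; it's the abstraction leaking, exactly as Cook describes.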