My understanding is that it's not a mesh, it's Gaussian Splatting. There are tools to convert splats into meshes, though.


Yeah, but isn't the expected outcome still to end up with actual 3D objects, not point clouds? Or have people started integrating point clouds into their 3D workflows already? Aside from stuff like volumes and the like, I think most of us are still stuck working with polygons for 3D.


No geometry in the conventional sense. I did a demo of rendering a Gaussian splat in React Three Fiber here; you can open the linked splat file (it's hosted on Hugging Face) if you want to see the data format. https://codesandbox.io/p/sandbox/3d-gaussian-splat-in-react-... I also have a YouTube video about creating that demo: https://www.youtube.com/watch?v=6tVcCTazmzo
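If you're curious what's inside such a file, here's a minimal Python sketch of parsing one, assuming the common antimatter15-style .splat layout (32 bytes per splat: float32 position and scale, uint8 RGBA color, uint8-quantized rotation quaternion). The file name is just a placeholder, and your file's layout may differ:

    import numpy as np

    # Assumed layout (antimatter15-style .splat), 32 bytes per splat:
    #   3 x float32 position, 3 x float32 scale,
    #   4 x uint8 RGBA color, 4 x uint8 quantized rotation quaternion
    raw = np.fromfile("scene.splat", dtype=np.uint8).reshape(-1, 32)

    positions = raw[:, 0:12].copy().view(np.float32)   # (N, 3)
    scales    = raw[:, 12:24].copy().view(np.float32)  # (N, 3)
    colors    = raw[:, 24:28]                          # (N, 4) RGBA in 0..255
    rotations = (raw[:, 28:32].astype(np.float32) - 128.0) / 128.0  # roughly [-1, 1]

    print(positions.shape, positions[0], colors[0])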


I use point clouds all the time in Rhino/LAStools/MeshLab.

I much prefer point clouds and NURBS over meshes.

Not everything is gamedev


> Not everything is gamedev

Agree, I'm not sure why you'd think that's the only use case for 3D, unless I misunderstand your argument here.

How would you handle visual effects with point clouds, for example? There are so many use cases for proper 3D, and all I can think of as a use case for point clouds is environments with static lighting, which seems like a really small part of what people generally consider "3D scenes".


> Visual effects

Maybe I missed the mark on “gamedev”, but 3D is larger than just “aesthetically pleasing 3D VFX” for its own sake

Often I’m trying to use something as a reference for a design where a 3D model isn’t the actual end goal, or I’m performing analytics on a 3D object (say in my case for a lot of GIS and simulation work)

The whole “mesh is the be-all and end-all of 3D modelling” attitude irks me: while yes, it's a really important way of representing an object (especially with real-time constraints), it doesn't do justice to the full landscape of techniques and uses for 3D.

It would be like 2D sprite artists from the gamedev world saying “what’s the point of all this vector art you illustrators are doing” or “what’s the point of all these wireframe designs you graphic designers are doing” - “these aren’t raster images!”

I suppose my snipe was trying to communicate the idea that 3D is larger than just a vehicle for entertainment production. It intersects many industries that may eschew polygons because real time rendering is irrelevant

3D tooling has uses beyond producing 3D scenes, just as Photoshop is used for more than touching up photographs

Edit: for anyone stuck in a rut with meshes come join the dark side with nurbs - it makes you think about modelling in a radically different way (unfortunate side effect is it makes working with meshes feel so so “dirty”)


> The whole “mesh is the be-all and end-all of 3D modelling”

No one said this. It seems like you're arguing against claims nobody made instead of dealing with the actual questions the person you replied to asked.

You can view point clouds and warp them around, but working with them and tracing rays against them is a different story.

Once you need something as a jumping-off point to start working with, point clouds are not going to work out anymore. People use polygons for a reason: they have flexible UVs, they can be traced easily, they can be worked with easily, and their data is direct, standard, and minimal.
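To illustrate the "traced easily" point: ray/triangle intersection is just a handful of dot and cross products. Here's a minimal sketch of the classic Möller–Trumbore test (the example triangle and ray are made up for illustration):

    import numpy as np

    def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        """Moller-Trumbore: distance t along the ray, or None on a miss."""
        e1, e2 = v1 - v0, v2 - v0
        h = np.cross(direction, e2)
        a = np.dot(e1, h)
        if abs(a) < eps:                 # ray parallel to the triangle plane
            return None
        f = 1.0 / a
        s = origin - v0
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:           # outside the first barycentric bound
            return None
        q = np.cross(s, e1)
        v = f * np.dot(direction, q)
        if v < 0.0 or u + v > 1.0:       # outside the second barycentric bound
            return None
        t = f * np.dot(e2, q)
        return t if t > eps else None    # hit only if in front of the origin

    # Made-up ray pointing down +z into a triangle lying in the z=0 plane.
    t = ray_triangle(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]),
                     np.array([-1.0, -1.0, 0.0]), np.array([1.0, -1.0, 0.0]),
                     np.array([0.0, 1.0, 0.0]))
    print(t)  # 1.0: the ray hits the triangle one unit away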


Games are the least of it; the vast majority of scientific applications to do with physics use meshes rather than point clouds.

This is because a point cloud does not represent a surface or a volume until the points are connected to form, well, a surface or a volume.

And physical problems are most often defined over surfaces or volumes. For instance, waves don't propagate over sparse sets of points, but within continuous domains.

However, for applications where geometric accuracy is needed, I don't think you'd want to use a method based on a minimal number of photographs anyway. For instance, the Lascaux cavern was mapped in 3D a decade ago based on "good old" algorithms (not machine learning) and instruments (more sophisticated than a phone camera). So these critiques are missing the point, in my opinion. These Gaussian Splatting methods are very impressive for the constraints they operate under!


You can convert splats into meshes using a simple marching cubes algorithm.

But the meshes produced are not easy to edit.
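For a sense of what that looks like in practice, here's a tiny sketch using scikit-image's marching_cubes. The density grid is a stand-in (a single Gaussian blob) for a volume you'd actually get by accumulating splat densities onto a grid:

    import numpy as np
    from skimage import measure

    # Stand-in density volume: one Gaussian blob on a 64^3 grid.
    # In practice you'd splat each Gaussian's density into the grid.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    density = np.exp(-8.0 * (x**2 + y**2 + z**2))

    # Extract the isosurface at a chosen density threshold.
    verts, faces, normals, values = measure.marching_cubes(density, level=0.5)
    print(verts.shape, faces.shape)  # (N, 3) vertices, (M, 3) triangle indices

The output is exactly the kind of mesh described above: a valid surface, but dense and unstructured, with no clean edge loops to edit.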


Generating good meshes sounds like a problem for a completely different machine learning algorithm to me.


Meshing was around long before machine learning came to prominence; there are plenty of methods to improve surface meshes already.


None of which work.

There is no good method to take a 3D scan and make a sensible mesh out of it. The resulting meshes tend to have far more vertices than necessary and lack structure.


I don't know what you mean by lacking structure, but perhaps you are not aware of all the tools that exist, because fixing surface meshes is a rather classic problem. Just type "surface remeshing" or "surface mesh optimization" into Google Scholar and you'll see thousands of results.

This is a separate problem from triangulation (turning point clouds into meshes) done with entirely different algorithms. It's likely the software you used for this assumes the user will then turn to other software to improve their surface mesh.

Even for operations that naturally occur in sequence, you will often find the software that carries out those steps is separated. For instance, turning CAD into a surface mesh is one piece of software, turning a surface mesh into a tetrahedral volume mesh is another (and if those are hexahedra, yet another), and then optimizing or adapting those meshes is done by yet another piece of software. And yet these steps are carried out each time an engineer goes from CAD to physical simulation. So it's entirely possible the triangulation software you used does not implement any kind of surface optimization and assumes the user will find something else to deal with that.


If you wanted to show someone a walkaround of the Sistine Chapel or David, would you be better off using triangles, PBR, and raycast lighting? You don't really gain anything from all that; you're doing a tremendous amount of computation just to recapture the particular lighting at an exact time. And if you want the same detail that a few good pictures capture (tens of millions of pixels), you need many billions of triangles onscreen.

With splats you can have incredibly high fidelity with identical lighting and detail built in already. If you want to make a game or a movie, don't use splats. If you want to recreate a static scene from pictures, splats work very well.


Splats augment 3D scenes, they don't replace them. I've seen them used for AR/VR, photogrammetry, and high-performance 3D. Going from splats to a 3D model would be a downgrade in terms of performance.


What’s the best use of splats that you’ve seen so far that I can try? AR/VR or regular 3D?


Meshes are editable. Are Gaussian splats?


What kind of edits do you mean? You can crop / combine splats easily in your browser with SuperSplat (not affiliated):

https://superspl.at/editor


Kinda, but not really in a meaningful way, at least not yet. There are some plugins for popular 3D software, but it's still early days.


Yea, someone can say, “Look, we have just created the first color computer and it displays images. Look at this first ever real life photo on this digital screen!” There will always be the people who ask, “Yeah, but does it run Photoshop?”


Expected by whom? Other researchers in this space? That's the audience for this work.


Not necessarily.

If you're using it to render video, you don't need to go into the mesh world.


Isn't https://svraster.github.io/ just superseding Gaussians? Voxels are not meshes either, but might they not prove even more useful for coming rendering engines?


Do LLMs always pick the most probable next word? I would have thought that would lead to the same output for every input. How does this square with the randomness you get from prompting the same thing over and over?


There is at least a parameter called temperature, which decides how much randomness to include in the output.

Setting it to 0 doesn't get you perfectly deterministic output though, per https://medium.com/google-cloud/is-a-zero-temperature-determ... , as you don't have perfect control over what approximations are made in your floating point operations.
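A minimal sketch of what the temperature knob does to next-token sampling (the logits here are made up; a real model emits one per vocabulary token):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Pick a token id from raw logits; temperature 0 means greedy argmax."""
        if temperature == 0:
            return int(np.argmax(logits))        # always the most probable token
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        probs = np.exp(scaled - scaled.max())    # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    logits = [2.0, 1.0, 0.1]               # made-up scores for a 3-token vocabulary
    print(sample_next_token(logits, 0))    # 0, every single time
    print(sample_next_token(logits, 1.0))  # usually 0, sometimes 1 or 2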


The most typical reason argmax (temperature 0) is non-deterministic is that your request is batched with other people's requests. The number and size of those requests affect the matrix sizes and thus the tiling decisions. Then you get a different floating point evaluation order and thus different results.

Nvidia gives some guarantees about deterministic results from their kernels, but those only apply when you have the exact same input data, which is not the case with in-flight batching.
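The root cause is simply that floating point addition isn't associative, so a different reduction order (which is what different tiling gives you) yields a slightly different answer. A toy illustration in Python:

    # The exact answer is 2.0; neither order reaches it, and they disagree.
    vals = [1e16, 1.0, -1e16, 1.0]
    print(sum(vals))          # 1.0 (the first 1.0 is absorbed by 1e16)
    print(sum(sorted(vals)))  # 0.0 (both 1.0s are absorbed by -1e16)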


It depends. If we use beam search we pick the most likely sequence of tokens rather than the most likely token at each point in time. This process is deterministic though.

We can also sample from the distribution, which introduces randomness. Basically, if word1 should be chosen 75% of the time and word2 25% of the time, it will do that.

The randomness you’re seeing can also be due to implementation details.

https://community.openai.com/t/a-question-on-determinism/818...
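To make the beam search point concrete, here's a toy sketch. The "model" is a stand-in returning a fixed 75%/25% next-token distribution; a real LLM would supply context-dependent log-probabilities:

    import math

    def beam_search(next_log_probs, beam_width=2, steps=3):
        """Keep the beam_width best partial sequences by total log-prob.
        Fully deterministic: the same model always yields the same output."""
        beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
        for _ in range(steps):
            candidates = [
                (seq + [tok], score + lp)
                for seq, score in beams
                for tok, lp in next_log_probs(seq).items()
            ]
            candidates.sort(key=lambda c: c[1], reverse=True)
            beams = candidates[:beam_width]
        return beams[0][0]

    # Stand-in "model": word1 75% / word2 25%, regardless of context.
    model = lambda seq: {"word1": math.log(0.75), "word2": math.log(0.25)}
    print(beam_search(model))  # ['word1', 'word1', 'word1'], every run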


I live in South Australia and I feel like public toilets are ubiquitous and still increasing in number. Mostly they are either accessible themselves or have a separate accessible toilet. The only exception to this is in areas with public social issues. My mum is a little bit particular about toilets, so she always struggles when overseas. We even have small self-cleaning toilets at the small playgrounds in the suburbs. It would be interesting to understand why we ended up like this.


I wonder if anvils will be the breakout product for this technology. It seems like it should be.


I know a guy at Acme Corp. who would pay top dollar to get this tech out to his customers.


Grand pianos.


I still do not understand how something so significantly more expensive and less ambitious will compete with Starlink, or even be viable. Will they ever make their money back?


They don't really have to compete against SpaceX, just against ViaSat, HughesNet and others.

The US government will use Starlink first, and as with all other private space initiatives it buys services from, it will want a second independent supplier to create 'competition', just like e.g. Boeing's Starliner.

If Kuiper is in second place, they'll get basically the same government contracts as SpaceX, and will be able to use those funds for basically whatever, including R&D on Blue Origin rockets, I imagine.


SpaceX and Starlink are amazing, but I wonder how much Kuiper’s access to the huge muscle that is AWS will bolster its chances of being #1? SpaceX definitely has the early-bird advantage, but Amazon knows products, and direct access to AWS could position Project Kuiper to dominate Starlink.


I doubt it will make much difference, especially since IIRC Starlink already has direct connections to Google's datacenters. Even if they can cut a few ms, that is unlikely to overcome the advantages of Starlink over Kuiper.


I did not know Starlink is using Google! Google definitely knows datacenters. Thanks for sharing.


I don't know how much collaboration is actually involved; I think it's basically just a peering agreement:

https://www.satellitetoday.com/broadband/2021/05/13/google-c...


It was more than peering. I don't know what it's like now, but Google ran the entire Starlink terrestrial network, including providing CGNAT. Starlink addresses showed up as Google Fi.

As I said I don't know the current state


Plus, Amazon certainly has dozens of ideas for generating revenue from this over and above subscription/bandwidth fees.


Interesting comment. Can you expand on it?


I have a feeling the long-term bet here is to tie it in with AWS in a significant way. End-user internet helps pay off the expensive R&D, but enterprise has to be where the money is.


The actual goal is probably just to transfer money directly from Amazon to Blue Origin. Also indirectly by way of ULA.


He is one of the founders of OpenAI, so I don't think he has missed out?


He has no stake in the organization whatsoever. They made a total break. Presumably because Elon is a moron.


And now, only a few years on, the FTTN is already slowly being replaced at great expense, making FTTN a colossal waste of money. Turnbull knew better, as other places on earth had already tried FTTN and removed it. The cost of putting power to all the nodes was excessive, and the ongoing power bills were expensive too; FTTP was passive at the nodes. The only argument for FTTN was that it was planned to be a quicker rollout, which didn't eventuate either. Total shit show.


Yeah it’s PS1 Gran Turismo feels for sure.


Start one! Everyone who mentions them should start one. Work super hard to get it started, deal with all the politics and then get voted out.


> I wholeheartedly agree, but how do you get the other side to listen?

The only way is to stop preaching and listen to them. Really listen. This is how Daryl Davis converted KKK members, according to his own accounts.

