Hacker News | codehero's comments

Sell me: I am a crusty old C++ programmer who started numerical and geometric programming on an SGI machine at the turn of the century. I have seen many academic and commercial 3D libraries come, go, or perform poorly (vcollide, RAPID, CGAL, and many proprietary ones).

It looks interesting, but what makes this library any better than what's come before? Why should I learn Rust to use it, or really for any computational geometry problem?


I'm not sure. I certainly would never use a third-party collision library because I think collisions are simple enough. On the larger question of whether Rust is suitable for game programming: I think so, but can't confirm it. There are a lot of abstractions that require a pointer dereference, which is somewhat likely to cost you performance, and those abstractions are somewhat harder to avoid than in C++, but not terribly so. Both languages give you the ability to do "data-oriented design" if you really want to. My preference for Rust stems from my preference for interfaces, or traits as they are called in Rust. If you want to use features like these, then Rust is a no-brainer. I have yet to see the true impact these abstractions have on performance, however.


What abstractions are you referring to? If you're using traits and writing your functions with generics, there shouldn't be any runtime overhead, other than the increased number of instructions from monomorphization, but this is no different than C++ templates. There are trait objects, but these come up somewhat less often than you would think, and they have the same performance as using abstract classes/polymorphism in C++. But maybe you're referring to something else?


I am referring to trait objects, and I use those pretty extensively (although I limit my usage because I am making games and want to avoid a pointer dereference). For libraries you might not need them as much, but I definitely find their convenience indispensable for application-side programming.


Personally, I've found that in most cases you can avoid using trait objects by just using generics, but it obviously depends on your use case. You especially need trait objects for things like heterogeneous arrays. Trait objects are pretty much the same as C++ classes with virtual methods, with one tweak in the representation. A C++ object is referred to by a single pointer to (vtable_ptr, class_fields). A trait object "&Foo" is a fat pointer (ptr_to_fields, vtable_ptr). So a trait object reference is two pointers, but the struct itself doesn't carry a vtable pointer stuck to its front. Method invocation dereferences the vtable, gets the function pointer, and calls it, passing the data pointer. In C++ you would dereference the object pointer to reach the vtable pointer, dereference the vtable, then call the function. If anything, I think the Rust method may be faster because it avoids the double dereference, but the difference is probably so small as to be unmeasurable in all but pathological uses.
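To make the two dispatch styles above concrete, here's a minimal sketch (the `Shape`, `Circle`, and `Square` names are made up for the example; it uses modern `dyn` syntax for trait objects):

```rust
// Contrast of static (generic) and dynamic (trait object) dispatch.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Shape for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// Static dispatch: monomorphized per concrete type, no vtable involved.
fn area_generic<T: Shape>(shape: &T) -> f64 {
    shape.area()
}

// Dynamic dispatch: `dyn Shape` is referred to by a fat pointer
// (data pointer, vtable pointer); each call goes through the vtable.
// This is what a heterogeneous array needs.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Circle { r: 1.0 }),
        Box::new(Square { s: 2.0 }),
    ];
    println!("{}", area_generic(&Square { s: 3.0 })); // 9
    println!("{}", total_area(&shapes)); // pi + 4
}
```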


Is this strictly for building shippable software or any time you need collision?

Maybe I am just a web programmer who is really bad at math and programming, but this seems like a crazy thing to rewrite for many projects. Even if it only takes a day or an afternoon, that is time you could have spent proving that your idea works.


...you just copy your code.


> I certainly would never use a third party collision library because I think collisions are simple enough

Ehhh, I haven't seen a really good, super robust solution for mesh vs. mesh. Deciding whether two meshes collide is simple enough, but robustly pulling out associated information like overlap volumes, penetration depths, etc., gets pretty hairy pretty quickly: you start running into problems similar to mesh booleans.


The reason is that mesh vs mesh essentially should never be used in video games. The accuracy in hitbox bounds and the ease of use (making your model mesh your collision mesh) you get from mesh collisions are completely and utterly overshadowed by the performance cost and the inaccuracy in determining the time of impact. At 60 fps it's really hard to tell the difference in collision between a typical dodecahedron and a sphere. Just use a sphere! (Or, more likely, some aggregate of AABBs, spheres, and capsules.)
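As an illustration of why primitives are so attractive, a sphere-vs-sphere test is only a few lines (a minimal sketch; the `Sphere` type and function name are made up for the example, not from any particular library):

```rust
// Minimal sphere-vs-sphere overlap test.
struct Sphere {
    center: [f64; 3],
    radius: f64,
}

// Two spheres overlap iff the squared distance between their centers
// is at most the square of the sum of their radii. Comparing squared
// values avoids a sqrt.
fn spheres_overlap(a: &Sphere, b: &Sphere) -> bool {
    let d2: f64 = a
        .center
        .iter()
        .zip(b.center.iter())
        .map(|(x, y)| (x - y) * (x - y))
        .sum();
    let r = a.radius + b.radius;
    d2 <= r * r
}

fn main() {
    let a = Sphere { center: [0.0, 0.0, 0.0], radius: 1.0 };
    let b = Sphere { center: [1.5, 0.0, 0.0], radius: 1.0 };
    let c = Sphere { center: [3.0, 0.0, 0.0], radius: 0.5 };
    println!("{}", spheres_overlap(&a, &b)); // true: 1.5 < 2.0
    println!("{}", spheres_overlap(&a, &c)); // false: 3.0 > 1.5
}
```

Compare that to mesh vs mesh, where even the broad-phase bookkeeping is more code than this entire test.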


Oh yea, for a game I would almost certainly decompose into some sort of union of easily handled convex shapes. There are definitely applications -- robotics, engineering, hell, even film -- where more accurate mesh-mesh is needed, however, and most solutions are super domain specific and ad hoc.


Haha, I keep forgetting that there are other applications for collision detection than just video games :)


> The reason why is that mesh vs mesh essentially should never be used in video games

Or, alternatively, the reason why mesh vs mesh is never used in video games is that it hasn't been solved well yet and we have to stick to simplified collision geometry and the resulting glitches we get in games?


Mesh vs mesh collisions will never be faster than primitive collisions. Also, simplified collision geometries do not produce glitches in the same way mesh vs mesh collisions do. At most you may have a collision between two primitives where the models don't actually seem to be touching. But this can easily be circumvented. In fact, I would argue that a lot of the physics glitches we've seen are probably the result of using mesh colliders when a simplified geometry would have sufficed.


Check the DOOM 3 source code, they have a pretty good mesh-vs-mesh solver.


I think it's the other way around: if you're using Rust, you now have a collision library you can use without having to go back to C++.


I call your bluff on CGAL. What's wrong with a geometry library that can handle exact arithmetic?


Ever used OpenSCAD? Try doing CSG operations with CGAL on complex STL surfaces. Then take those same surfaces into VTK and use its boolean operations: faster, but with significantly more errors.


Not sure what the conclusion is supposed to be here.


Yeah, your intuition is correct here. On an MSP430 (which is where I use it) the producer runs in interrupt context, but as a savvy redditor pointed out, I will need to enable/disable interrupts in the consumer function as well.


I'm running it on a system with 512 bytes of RAM and multiple queues, so saving 1 byte helps. I do not need extra space for synchronization; I can enable/disable interrupts without any extra RAM required.
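One common way to save that byte is to let power-of-two head/tail indices run free and wrap only on access, so all N slots are usable without a separate count byte or a wasted "full vs. empty" slot. A sketch of the idea (names are illustrative; on a real MSP430 the consumer side would additionally mask interrupts around `pop`, since `push` runs in interrupt context):

```rust
// Single-producer/single-consumer byte queue. The free-running u8
// indices (wrapped only on access) distinguish full from empty
// without a count byte and without sacrificing a slot.
const N: usize = 16; // must be a power of two, <= 256 for u8 indices

struct ByteQueue {
    buf: [u8; N],
    head: u8, // advanced by the producer (e.g. an ISR)
    tail: u8, // advanced by the consumer
}

impl ByteQueue {
    fn new() -> Self {
        ByteQueue { buf: [0; N], head: 0, tail: 0 }
    }
    fn len(&self) -> usize {
        self.head.wrapping_sub(self.tail) as usize
    }
    fn push(&mut self, byte: u8) -> bool {
        if self.len() == N {
            return false; // full: all N slots in use
        }
        self.buf[self.head as usize % N] = byte;
        self.head = self.head.wrapping_add(1);
        true
    }
    fn pop(&mut self) -> Option<u8> {
        if self.head == self.tail {
            return None; // empty
        }
        let byte = self.buf[self.tail as usize % N];
        self.tail = self.tail.wrapping_add(1);
        Some(byte)
    }
}

fn main() {
    let mut q = ByteQueue::new();
    for b in 0..N as u8 {
        assert!(q.push(b)); // every one of the N slots is usable
    }
    assert!(!q.push(99)); // rejects only when truly full
    assert_eq!(q.pop(), Some(0));
    println!("ok");
}
```

This works because N divides 256, so the u8 wraparound stays consistent with the modulo indexing.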


I see a war here between the endeavor to reprocess spent fuel as fully as possible and the industries that would rather build expensive burial chambers for it instead.


Building a kernel as a community is difficult and has no precedent to follow directly. The model-view-culture community should really take the failings of Linux as an opportunity to make a new kernel that does follow fairness and diversity principles.


No story about early Makerbot is complete without mentioning their commitment to open source, which this puff piece deftly avoids: the open source story has been conveniently dropped.

Here's an excerpt from a Bre Pettis interview in 2011:

*Does funding change the commitment to open source hardware?* The funding doesn’t change our commitment to being open source. Why would we change a winning strategy? Being open is the future of manufacturing, and we’re just at the beginning of the age of sharing. In the future, people will remember businesses that refused to share with their customers and wonder how they could be so backwards.

http://makezine.com/2011/10/06/makes-exclusive-interview-wit...


For those of you who don't know, he means their abandonment of open source, which left the 3D printing community that supported them in the beginning in an uproar.


Would a donation also stop small companies and 1 man operations from violating the GPL?


Linus' law may scale, but even assuming these eyeballs produce bug fixes, applying those fixes to the source tree does not scale as well.

I have taken time to put my eyeballs on bugs in spidev's ioctl() and TI's spi driver but my bug fixes are not in the tree.

Signing off, adhering to the coding standard, and earning enough respect from the established devs to get your fix accepted are the limiting factors.

I already invested significant amounts of time finding these bugs and fixing these issues; I don't have any more to spend to make Mark Brown or other kernel devs happy.

I don't even care about getting credit for my fixes, but it seems the kernel devs don't want to take my code to the next step and get it integrated.


USB is still problematic on Linux. In particular, the MUSB controller is completely unstable when "babble" is detected. You have to use the right kind of USB hub (or modify the cable) to keep power from backfeeding into the BBB. It seems your experience differs from mine.


We've had the same experience on BBBs re: powering USB devices. Using a powered D-Link hub helps resolve it (out of the 10+ brands we've tried).


Initially we did have some problems, mostly due to not being able to source enough current, but we haven't seen any issues since. Thanks for reminding me; I'll keep an eye out for that.


There are two troubling lines in his code (from https://github.com/espruino/EspruinoOrion/blob/gh-pages/seri...):

  var header = sampleRate; // 1 sec to charge/discharge the cap
  var bufferSize = samplesPerByte*data.length/*samples*/ + header*2;
This implies 2 seconds of overhead per message sent from the smartphone to the microcontroller: 1 second for the light to turn on and 1 second of clearing time before the next transmission. This explains why the lights toggle so slowly.

To top it off, he uses 2 stop bits. The signalling rate may be 9600 baud, but do not expect to transmit 9600 DATA bits a second! I think such claims need to be backed up with a demonstration of full loopback capability, without either side dropping a byte and with no pauses between serial bits.
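The framing overhead is easy to quantify. Assuming the usual 8N2 framing (1 start bit, 8 data bits, 2 stop bits), each byte costs 11 bit times on the wire:

```rust
// Effective data rate of an 8N2 serial link at 9600 baud.
fn main() {
    let baud = 9600.0_f64; // signalling rate: bit times per second
    let bits_per_frame = 1.0 + 8.0 + 2.0; // start + data + stop
    let frames_per_sec = baud / bits_per_frame;
    let data_bits_per_sec = frames_per_sec * 8.0;
    println!("{:.0} bytes/s, {:.0} data bits/s",
             frames_per_sec, data_bits_per_sec);
    // ~873 bytes/s and ~6982 data bits/s: well short of 9600.
}
```

So even with zero inter-byte gaps, the payload rate is at most about 73% of the baud rate.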


There is a demonstration of full loopback there, without the 1 sec delay. The delay is purely so you can send a command using a totally normal audio API that only uses the sound card when needed.

I could probably get away with 1 stop bit; I didn't try. To be honest, if the transfer speed matters so much, it might be better to use some other means of communication :)

