
> A picking texture is a very simple idea. As the name says, it’s used to handle picking in the game: when you click somewhere on the screen (e.g. to select a unit), I use this texture to know what you clicked on. Instead of colors, every object instance writes its EntityID to this texture. Then, when you click the mouse, you check what id is in the pixel under the mouse position.

Unrelated, but why? Querying a point in a basic quad tree takes microseconds, is there any benefit to overengineering a solved problem this way? What do you gain from this?
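For reference, here's roughly what a quadtree point query looks like. This is an illustrative sketch, not code from the game; all names are made up:

```python
# Minimal axis-aligned quadtree for point picking (illustrative sketch).

class QuadTree:
    MAX_ITEMS = 4  # split a node once it holds more than this many items

    def __init__(self, x, y, w, h):
        self.bounds = (x, y, w, h)   # top-left corner plus size
        self.items = []              # list of (rect, entity_id)
        self.children = None         # four sub-quadrants once split

    def insert(self, rect, entity_id):
        if self.children is not None:
            child = self._child_for(rect)
            if child is not None:
                child.insert(rect, entity_id)
                return
        self.items.append((rect, entity_id))
        if self.children is None and len(self.items) > self.MAX_ITEMS:
            self._split()

    def query_point(self, px, py):
        """Return ids of entities whose rects contain the point."""
        hits = [eid for (rx, ry, rw, rh), eid in self.items
                if rx <= px < rx + rw and ry <= py < ry + rh]
        if self.children is not None:
            for child in self.children:
                cx, cy, cw, ch = child.bounds
                if cx <= px < cx + cw and cy <= py < cy + ch:
                    hits.extend(child.query_point(px, py))
        return hits

    def _split(self):
        x, y, w, h = self.bounds
        hw, hh = w / 2, h / 2
        self.children = [QuadTree(x, y, hw, hh), QuadTree(x + hw, y, hw, hh),
                         QuadTree(x, y + hh, hw, hh), QuadTree(x + hw, y + hh, hw, hh)]
        old, self.items = self.items, []
        for rect, eid in old:
            self.insert(rect, eid)  # items spanning a boundary stay at this node

    def _child_for(self, rect):
        rx, ry, rw, rh = rect
        for child in self.children:
            cx, cy, cw, ch = child.bounds
            if cx <= rx and cy <= ry and rx + rw <= cx + cw and ry + rh <= cy + ch:
                return child
        return None

tree = QuadTree(0, 0, 100, 100)
tree.insert((10, 10, 5, 5), entity_id=1)
tree.insert((40, 40, 20, 20), entity_id=2)
print(tree.query_point(45, 45))  # [2]
```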



Well, it's significantly easier to implement than an octree. The game is actually 3D under the hood, projected at a careful angle to look like isometric 2D.

The reason the game is 3D is that handling partially visible things is way easier in 3D than with isometric textures layered in the right order.

Also, since I just grab a single pixel back from the GPU, there's essentially no overhead (either to construct the texture or to read the data back).


I assume for the picking system you're rendering each entity/block as a different color (internally) and getting the pixel color under the mouse cursor?


"Color" is pretty much just "integer" to the GPU. It doesn't care if the 32-bit value a shader is writing to its output buffer is representing RGBA or a memory pointer.


It matches what appears on the screen exactly, without any extra work to keep a pixel-accurate representation in a tree. It's also pretty low overhead with the way modern GPU rendering works.


What if you have a collision system where collision filters can exclude collisions based on some condition in such a way that their bounding boxes can overlap? For instance an arrow that pierces through a target to fly through it and onto another target? How do you accurately store the Entity ID information for multiple entities with a limited number of bits per pixel?


Entities that can't be picked don't write to the texture; entities that can be picked write their id to it. Whatever is closest to the camera is the id that stays there (same as a color pixel, but instead of the object's color, think of the object's id). So you are limited to one ID per pixel, but for me that works.
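A CPU-side sketch of what the GPU does here (illustrative only; in practice this happens in a fragment shader writing to an integer render target, and all names below are made up):

```python
# Entity-ID picking buffer: each pickable entity rasterizes its id with
# a depth test, so the closest entity "wins" the pixel, exactly like a
# color pixel would.

W, H = 8, 8
NO_ENTITY = 0
id_buffer = [[NO_ENTITY] * W for _ in range(H)]
depth_buffer = [[float("inf")] * W for _ in range(H)]

def draw_rect(entity_id, x, y, w, h, depth):
    """Rasterize an axis-aligned rect of this entity's id."""
    for py in range(y, min(y + h, H)):
        for px in range(x, min(x + w, W)):
            if depth < depth_buffer[py][px]:   # depth test: closer wins
                depth_buffer[py][px] = depth
                id_buffer[py][px] = entity_id

def pick(mouse_x, mouse_y):
    """Read back the single id stored under the cursor."""
    return id_buffer[mouse_y][mouse_x]

draw_rect(entity_id=7, x=1, y=1, w=4, h=4, depth=5.0)
draw_rect(entity_id=9, x=3, y=3, w=4, h=4, depth=2.0)  # closer, overlaps
print(pick(2, 2))  # 7
print(pick(4, 4))  # 9 (it overwrote 7 where the rects overlap)
```

Non-pickable entities simply never call the equivalent of `draw_rect`, so they can't shadow anything in the id buffer even if they occlude it on screen.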


Right, it's the same z-buffer problem of deciding what pixel color is visible, with a non-blending buffer update mode.

To be totally coherent, you have to draw the entity ID in the same order you would draw the visible color, in cases where entities could "tie" at the same depth.


A point in screen space is a line in world space after inverse camera projection, so this way you get the line-to-closest-geometry test in O(1), after the overhead of needing to render the lookup texture first.
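A minimal sketch of that unprojection, assuming a simple pinhole camera at the origin looking down -Z (the function and parameter names are illustrative, not from any particular engine):

```python
import math

def screen_point_to_ray(mouse_x, mouse_y, width, height, fov_y_deg=60.0):
    """Return (origin, direction) of the picking ray in camera space.

    The mouse position only pins down two coordinates, so the third
    degree of freedom becomes a line through the scene.
    """
    # Normalized device coords in [-1, 1]; screen y grows downward.
    ndc_x = 2.0 * mouse_x / width - 1.0
    ndc_y = 1.0 - 2.0 * mouse_y / height
    aspect = width / height
    tan_half = math.tan(math.radians(fov_y_deg) / 2.0)
    direction = (ndc_x * aspect * tan_half, ndc_y * tan_half, -1.0)
    length = math.sqrt(sum(c * c for c in direction))
    return (0.0, 0.0, 0.0), tuple(c / length for c in direction)

origin, direction = screen_point_to_ray(400, 300, 800, 600)
print(direction)  # center of the screen looks straight down -Z: (0.0, 0.0, -1.0)
```

The id texture effectively bakes the result of intersecting that ray with every pickable entity, one lookup per pixel, into a buffer you can read back in constant time.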



