(This discussion was moved here from the argon.js GitHub issue tracker, https://github.com/argonjs/argon/issues/47.)
Here’s the initial question I asked in that issue:
I’ve been thinking about what it will mean to have something like argon.js running on more capable devices, such as Google Tango or Microsoft HoloLens, or using realities that have much more structural information about the “world”.
In all of these cases, it occurs to me that we might want a standard interface to ask “What is the 3D point in the world under this x,y location on the display?” This is akin to what HoloLens programs use to figure out where to draw the cursor, or to pick points in the world.
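To make the question concrete, here is a sketch of what such a call might look like. All of the names (`hitTestWorld`, the camera shape) are mine, not an actual argon.js API, and the “world” is mocked as the ground plane y = 0 with the camera looking straight down; a real implementation would use the tracked device pose and the platform’s spatial map.

```javascript
// Hypothetical sketch of a uniform "what is under this pixel?" call.
// The world is mocked as the ground plane y = 0; the camera sits at
// (0, camera.height, 0) looking straight down -Y, with screen right
// mapping to +X and screen up mapping to -Z.
function hitTestWorld(screenX, screenY, viewport, camera) {
  // Screen pixel -> normalized device coordinates in [-1, 1].
  const ndcX = (screenX / viewport.width) * 2 - 1;
  const ndcY = 1 - (screenY / viewport.height) * 2; // screen y grows downward

  // View-ray direction for a simple pinhole camera.
  const tanHalf = Math.tan(camera.fovY / 2);
  const aspect = viewport.width / viewport.height;
  const dir = {
    x: ndcX * tanHalf * aspect,
    y: -1,
    z: -ndcY * tanHalf,
  };

  // Intersect with the plane y = 0: camera.height + t * dir.y = 0.
  const t = camera.height; // because dir.y === -1
  if (t <= 0) return null; // camera at or below the plane
  return {
    position: { x: t * dir.x, y: 0, z: t * dir.z },
    normal: { x: 0, y: 1, z: 0 }, // a real API would return the sensed surface normal
  };
}
```

Returning a position plus a normal is one plausible minimal answer to “what should the API return?”; the discussion below gets into whether that is enough.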
Aaron Mulder (https://github.com/ammulder) replied:
So I’ve used the three.js raycaster for this kind of thing – when the user moves the mouse over a three.js canvas, I identify whether any of “my” objects are under the cursor and highlight the selected one as appropriate.
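The three.js pattern Aaron describes looks roughly like this; the pure coordinate conversion is pulled out into a helper, and `highlight()` stands in for whatever app-specific selection logic applies.

```javascript
// The three.js picking pattern: convert the mouse position to
// normalized device coordinates, then let a Raycaster report which
// objects lie under the cursor.

// Pure helper: client (pixel) coordinates -> NDC in [-1, 1].
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1, // flip: screen y grows downward
  };
}

// Sketch of the event handler (assumes a three.js camera, renderer,
// and an array of pickable objects already exist in scope).
function onMouseMove(event, camera, pickables, renderer) {
  const rect = renderer.domElement.getBoundingClientRect();
  const ndc = toNDC(event.clientX - rect.left, event.clientY - rect.top,
                    rect.width, rect.height);
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);
  const hits = raycaster.intersectObjects(pickables, true);
  if (hits.length > 0) {
    // hits are sorted nearest-first; hits[0].point is the 3D intersection
    highlight(hits[0].object); // highlight() is app-specific
  }
}
```

The key limitation for this thread: `intersectObjects` only tests geometry the app itself put in the scene, which is exactly why picking against the sensed “world” needs a different API.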
I guess the question is, if you’re inspecting the “world”, what should the target be that the API returns? A point? A small disc? The entire “surface” under the cursor/point, however big that is?

I’d be really interested in finding and identifying large flat horizontal and vertical surfaces in the world, but I’m not sure what degree of smoothing would be needed to say “this slightly irregular shape our sensors detected is really just a flat tabletop, and here are its dimensions”. And is that all one API, or are there separate calls for things like “find the Z height at point X,Y on the display”, “find the object/surface at point X,Y on the display”, and “give me a list of all flat surfaces in view”? (I guess I should look at the HoloLens API and see what they offer.)
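As a rough illustration of the “is this slightly irregular patch really a flat tabletop?” question, a least-squares plane fit plus a residual check might look like the sketch below. This is entirely my illustration, not an API from HoloLens, Tango, or argon.js; it fits z = ax + by + c, which is fine for near-horizontal patches but degenerates for vertical ones.

```javascript
// Sketch: decide whether a patch of sensed 3D points is "really" a
// flat, roughly horizontal surface. Fits the plane z = ax + by + c by
// least squares, then checks the RMS of the vertical residuals.
function fitHorizontalPlane(points) {
  let Sx = 0, Sy = 0, Sz = 0, Sxx = 0, Syy = 0, Sxy = 0, Sxz = 0, Syz = 0;
  const n = points.length;
  for (const p of points) {
    Sx += p.x; Sy += p.y; Sz += p.z;
    Sxx += p.x * p.x; Syy += p.y * p.y; Sxy += p.x * p.y;
    Sxz += p.x * p.z; Syz += p.y * p.z;
  }
  // Solve the 3x3 normal equations with Cramer's rule:
  // | Sxx Sxy Sx | |a|   |Sxz|
  // | Sxy Syy Sy | |b| = |Syz|
  // | Sx  Sy  n  | |c|   |Sz |
  const det3 = (m) =>
    m[0] * (m[4] * m[8] - m[5] * m[7]) -
    m[1] * (m[3] * m[8] - m[5] * m[6]) +
    m[2] * (m[3] * m[7] - m[4] * m[6]);
  const D = det3([Sxx, Sxy, Sx, Sxy, Syy, Sy, Sx, Sy, n]);
  const a = det3([Sxz, Sxy, Sx, Syz, Syy, Sy, Sz, Sy, n]) / D;
  const b = det3([Sxx, Sxz, Sx, Sxy, Syz, Sy, Sx, Sz, n]) / D;
  const c = det3([Sxx, Sxy, Sxz, Sxy, Syy, Syz, Sx, Sy, Sz]) / D;

  // RMS vertical distance of the points from the fitted plane.
  let ss = 0;
  for (const p of points) {
    const r = p.z - (a * p.x + b * p.y + c);
    ss += r * r;
  }
  return { a, b, c, rms: Math.sqrt(ss / n) };
}

// One possible smoothing rule: call the patch "flat" when the RMS
// residual is below a sensor-noise threshold (in world units).
function isFlatSurface(points, tolerance = 0.01) {
  return fitHorizontalPlane(points).rms < tolerance;
}
```

The `tolerance` parameter is exactly the “degree of smoothing” question: it encodes how much sensor noise you are willing to flatten away before declaring the patch a tabletop.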
I thought I’d move the discussion here, since it’s part of a bigger issue that isn’t specific to argon.js.