So I downloaded the Argon4 browser on an iPad, printed the GVU Brochure PDF, laid it on a table, and navigated to the Vuforia sample in Argon4.
The “argon.js” text appears over the brochure – but it’s very unsteady. As I stand there and move the iPad around, the AR text constantly shifts, by what I’d guess is as much as an apparent inch in any direction.
It seems to be largely some kind of sensor/detection lag – if I move the screen to the right, the text is pulled to the right relative to the brochure. Then as I slow or stop the movement, it snaps back to the left to be positioned correctly over the brochure again.
So my main question is: to what extent can I reduce or eliminate this? Can the AR content ever appear completely motionless relative to the target?
Now I’ll speculate:
It feels like what’s happening is this: the camera captures new video, Argon passes the frame(s) to Vuforia, Vuforia takes a bit to run its detection algorithms on them, Vuforia passes a target position back to Argon, and Argon moves the AR content there. But by the time it does, some non-trivial amount of time has passed, and if the iPad is in motion the target has moved in the field of view, so the AR content appears in the wrong place.
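To put rough numbers on that hunch (the latency, pan speed, and viewing distance here are all guesses on my part, not measurements):

```javascript
// Back-of-envelope check: how far does the AR overlay appear to lag behind the
// target for a given detection latency and panning speed? Small-angle geometry:
// the target sweeps (panRate * latency) degrees across the view before the new
// pose arrives, which projects to an apparent offset at the viewing distance.
function apparentLagInches(latencySec, panDegPerSec, viewDistanceInches) {
  const lagRad = (panDegPerSec * latencySec) * Math.PI / 180;
  return viewDistanceInches * Math.tan(lagRad);
}

// Guessed numbers: 100 ms end-to-end latency, panning the iPad at 30°/s,
// brochure about 18" from the camera.
console.log(apparentLagInches(0.1, 30, 18).toFixed(2)); // → 0.94
```

With those made-up numbers the lag works out to just under an inch, which at least matches the magnitude of what I’m seeing.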
It seems like this could be reduced or eliminated by using input from the iPad accelerometers to determine how far the device has moved while all those calculations took place, though I don’t know how accurate those sensors are.
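Something like this is what I have in mind – a toy one-dimensional version of the idea, with names I made up (nothing here is actual argon.js or Vuforia API): dead-reckon with the gyro between detections, then blend each new (laggy) Vuforia fix in gradually instead of snapping to it.

```javascript
// Hypothetical sketch: propagate the pose estimate every frame from the
// device's angular rate, and use each late-arriving Vuforia detection as a
// gentle correction rather than a hard reset. A real version would operate on
// a full quaternion/translation pose, not a single angle.
class ComplementaryTracker {
  constructor(blend = 0.1) {
    this.angle = 0;      // current orientation estimate (radians)
    this.blend = blend;  // how strongly each Vuforia fix corrects drift (0..1)
  }
  // Called every render frame with the gyro rate and the frame interval.
  propagate(gyroRateRadPerSec, dtSec) {
    this.angle += gyroRateRadPerSec * dtSec;
  }
  // Called whenever Vuforia returns a detection; nudges the estimate toward it.
  correct(vuforiaAngle) {
    this.angle += this.blend * (vuforiaAngle - this.angle);
  }
}
```

The appeal is that the overlay would track the fast gyro signal during motion, while the slow vision fixes keep it from drifting away from the brochure – instead of the current behaviour where it visibly snaps back on every detection.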
I guess I also wonder: if I took a one-time position of the target from Vuforia, then quit the Vuforia tracking and just used the iPad’s sensors to calculate its orientation, displaying the AR content in the “place in the world” where the target was first detected, how well would that work? Would iPad sensor error accumulate rapidly enough that the AR content would just move all over the place? Or would it work better, by not trying to “recenter” the AR content and incurring the detection lag that causes all that jitter?
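A quick sanity check on the drift question, assuming a constant gyro bias of 0.5°/s (my guess at a plausible uncalibrated-MEMS figure – real drift behaviour is messier than this):

```javascript
// Rough simulation of the "quit Vuforia and coast on the IMU" idea: orientation
// error from a constant gyro bias grows linearly with time...
function driftAfterSeconds(biasDegPerSec, seconds) {
  return biasDegPerSec * seconds;
}

// ...and projects to an apparent slide of the overlay at the viewing distance.
function apparentSlideInches(errDeg, viewDistanceInches) {
  return viewDistanceInches * Math.tan(errDeg * Math.PI / 180);
}

// After 10 s of coasting, with the brochure ~18" from the camera:
console.log(apparentSlideInches(driftAfterSeconds(0.5, 10), 18).toFixed(2)); // → 1.57
```

So even under that optimistic model, after ten seconds the overlay would have slid further than the jitter I’m complaining about now – which makes me suspect pure IMU coasting isn’t the answer on its own, and some blend of the two is needed.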
Any thoughts would be appreciated…