Stability of image targets


So I downloaded the Argon4 browser on an iPad, printed the GVU Brochure PDF, laid it on a table, and navigated to the Vuforia sample in Argon4.

The “argon.js” text appears over the brochure – but it’s very unsteady. As I stand there and move the iPad around, the AR text constantly shifts, moving, I would guess, as much as an apparent inch in any direction.

It seems to be largely some kind of sensor/detection lag – if I move the screen to the right, the text is pulled to the right relative to the brochure. Then as I slow or stop the movement, it snaps back to the left to be positioned correctly over the brochure again.

So my main question is: to what extent can I reduce or eliminate this? Can the AR content ever appear completely motionless relative to the target?

Now I’ll speculate:

It feels like what’s happening is this: the camera captures some new video, Argon passes the frame(s) to Vuforia, Vuforia takes a bit to run its detection algorithms on them, and Vuforia passes a target position back to Argon, which moves the AR content there. But by that point some non-trivial amount of time has passed, and if the iPad is in motion the target has moved in the field of view, so the AR content appears in the wrong place.
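For intuition, here’s a back-of-the-envelope sketch of that hypothesis. Every number in it is an assumption on my part (I haven’t measured the actual latency or my panning speed), but with plausible values the predicted offset lands right around the “apparent inch” I described:

```python
# Back-of-envelope estimate of the "swim" caused by detection latency.
# All numbers below are illustrative assumptions, not measured values.

import math

latency_s = 0.10          # assumed end-to-end delay: capture -> Vuforia -> render
pan_speed_deg_s = 30.0    # assumed panning speed of the handheld iPad
target_dist_m = 0.5       # assumed distance from the camera to the brochure

# While the pose is "in flight", the camera rotates this far:
rotation_deg = pan_speed_deg_s * latency_s

# At that distance, the target appears displaced by roughly:
apparent_shift_m = target_dist_m * math.tan(math.radians(rotation_deg))

print(f"rotation during latency: {rotation_deg:.1f} degrees")
print(f"apparent shift on the brochure: {apparent_shift_m * 100:.1f} cm")
```

With these assumed numbers the shift comes out to about 2.6 cm – roughly an inch – so ordinary hand motion plus ~100 ms of pipeline delay is enough to explain what I’m seeing.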

It seems like this could be reduced or eliminated by using input from the iPad’s accelerometers to determine how far the device has moved while all those calculations took place, though I don’t know how accurate those sensors are.
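The mechanics of that compensation would be simple in principle – double-integrate the (gravity-removed) acceleration over the latency window. A sketch, assuming you somehow had acceleration samples timestamped in sync with the camera frames (which, it turns out below, is exactly the part the platform doesn’t give you):

```python
# Sketch: dead-reckoning the device's displacement over the detection-latency
# window by double-integrating accelerometer samples (trapezoidal rule).
# Assumes gravity has already been removed (e.g., something like Core Motion's
# "user acceleration") and that timestamps line up with the camera frames --
# the hard part in practice.

def integrate_displacement(samples, dt):
    """samples: world-frame acceleration along one axis (m/s^2), spaced dt apart."""
    velocity = 0.0
    displacement = 0.0
    for i in range(1, len(samples)):
        avg_a = 0.5 * (samples[i - 1] + samples[i])   # trapezoidal average
        displacement += velocity * dt + 0.5 * avg_a * dt * dt
        velocity += avg_a * dt
    return displacement

# 100 ms of samples at 100 Hz with a constant 1 m/s^2 sideways acceleration,
# starting from rest: d = 0.5 * a * t^2 = 0.005 m.
samples = [1.0] * 11
print(integrate_displacement(samples, dt=0.01))  # -> 0.005
```

Over a ~100 ms window the integration error itself would be tiny; the catch (as the reply below explains) is getting sensor readings that are actually synchronized with the frame Vuforia analyzed.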

I guess I also wonder: if I took a one-time position of the target from Vuforia, then stopped Vuforia tracking and just used the iPad’s sensors to calculate its orientation, displaying the AR content at the “place in the world” where the target was first detected, how well would that work? Would iPad sensor error accumulate rapidly enough that the AR content would just move all over the place? Or would it work better than trying to “recenter” the AR content and incurring the detection lag that causes all that jitter?
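Trying to answer my own question with arithmetic: a constant sensor bias b double-integrates into a position error of 0.5 · b · t², so the error grows quadratically with time. Assuming a residual bias typical of a consumer-grade MEMS accelerometer (the value below is an illustrative guess, not an iPad spec):

```python
# Why a pure "lock once, then dead-reckon" approach drifts: a constant
# accelerometer bias b double-integrates into a position error of
# 0.5 * b * t^2. The bias below is an illustrative assumption for a
# consumer-grade MEMS sensor, not a measured iPad value.

bias = 0.05  # m/s^2, assumed residual accelerometer bias

for t in [0.1, 1.0, 5.0, 10.0]:
    error_m = 0.5 * bias * t ** 2
    print(f"after {t:4.1f} s: position error ~ {error_m * 100:6.2f} cm")
```

So over a 100 ms latency window the drift is sub-millimeter, but after a few seconds it’s tens of centimeters – which suggests the content really would wander all over the place if the image tracking were switched off entirely.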

Any thoughts would be appreciated…


Hi Aaron, did you get an answer to this? (I didn’t get a notification about this message.)

I’ve seen issues with some models of iPad and Vuforia where it appears the camera calibration data is off (which causes the content to not line up well and reduces the quality of the tracking). Does it behave better on other devices?

Trying to use the sensors to compensate is, frankly, a dead end. Unless the underlying system provides the sensor and camera data with tightly synchronized timestamps (which we don’t get from the OS plus Vuforia), such compensation is impossible to get right.

If we had rock-solid, high-accuracy local tracking (e.g., like on a HoloLens), that approach would be decent, but the local accuracy of GPS + orientation is so bad that this wouldn’t work on a normal phone.


Hmm. The iPad locked onto the initial position correctly, but if I physically move it quickly to the side, the AR position seems to lag behind until I stop moving, at which point it catches up. So it’s not inaccurate overall, just during movement. (I’ll try a side-by-side with an iPhone and iPad just to be sure.)

I guess I was hoping that you could timestamp when you passed video frame “A” to Vuforia, timestamp when it gave you back the position data for frame “A”, and then use the accelerometers to determine that you moved 0.1 m to the left in the intervening time, so you’d adjust the Vuforia position data you received back by 0.1 m to (hmm, I guess) the right…

Or, wait: let’s say Vuforia says that in frame “A” the target position is 3,15,27 in the world. By the time you receive that, you know the iPad has moved 0.1 m to the left, so that same position is just elsewhere in the field of view. But does Vuforia give you an “absolute” position in the world, or a position “relative” to your camera?
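As far as I can tell, Vuforia reports the target’s pose relative to the camera (a model-view-style matrix), not in any fixed world frame – so to pin content “in the world” you’d compose that with the device’s own pose at the instant the frame was captured. A sketch with plain 4×4 homogeneous transforms (all the numbers are made up, and real camera conventions have axes/rotations I’m ignoring here):

```python
# Composing a camera-relative target pose into a world-frame pose.
# Plain 4x4 homogeneous transforms; all numbers are made up for illustration.

import numpy as np

def translation(x, y, z):
    """4x4 homogeneous transform that only translates."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Pose of the camera in the world when frame "A" was captured:
world_T_camera = translation(3.0, 15.0, 27.0)

# What the tracker reports for frame "A": the target 0.5 m out from the
# camera. This is *relative* to the camera, so it goes stale as you move.
camera_T_target = translation(0.0, 0.0, 0.5)

# The fixed, world-frame pose of the target:
world_T_target = world_T_camera @ camera_T_target
print(world_T_target[:3, 3])  # -> [ 3.  15.  27.5]
```

If that’s right, then the pose you get back is only meaningful paired with where the camera *was* – which is presumably why compensating requires knowing the device pose at capture time, not at receive time.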

I guess I don’t know enough about the mechanics of using Vuforia under the covers to really brainstorm effectively.

But if, knowing more, you’re sure there’s no way to compensate for a moving camera, that’s a bummer. I guess we would just take it as a constraint of the system that it doesn’t work very well unless you’re holding the phone pretty still. So you can move around an object to inspect it from different sides, but you’d need to move-and-pause instead of constantly panning, perhaps.


So, the problem is not with Vuforia. It’s with Argon, or rather the limitations imposed by iOS on doing what we’re doing.

Apple prevents us from getting the video into the webview, so we have to render the video natively and then overlay a transparent webview. That’s fine.

But Apple (and most web view components on other platforms) prevents us from synchronizing with the rendering (or anything else) inside the JavaScript. So what you are seeing is that we render the video in one thread (the app thread) and the graphics in another (the OS-created JavaScript thread), and there is no way to sync them.

So, the slower the rendering, the more swim/separation you see.
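A toy model of those two unsynchronized loops makes the swim concrete. The native thread shows the video frame for time t, while the overlay was last drawn at its own most recent tick, so the pose on screen can be up to one overlay period stale. The rates, phase offset, and pan speed below are all illustrative assumptions:

```python
# Toy model of two unsynchronized render loops: the native thread shows the
# video frame for time t; the webview overlay was last drawn at its own most
# recent tick. The staleness times the camera speed is the visible "swim".
# Rates, phase offset, and pan speed are illustrative assumptions.

import math

video_hz = 60.0        # assumed native video rendering rate
overlay_hz = 30.0      # assumed webview/JavaScript rendering rate
overlay_phase = 0.007  # arbitrary phase offset between the two loops (s)
pan_speed_m_s = 0.3    # assumed sideways camera speed

max_age = 0.0
for k in range(60):    # one second of video frames
    t = k / video_hz
    # time of the overlay loop's most recent tick at or before t:
    overlay_t = math.floor((t - overlay_phase) * overlay_hz) / overlay_hz + overlay_phase
    max_age = max(max_age, t - overlay_t)

swim_cm = max_age * pan_speed_m_s * 100
print(f"worst overlay staleness: {max_age * 1000:.0f} ms "
      f"-> swim up to {swim_cm:.1f} cm at {pan_speed_m_s} m/s")
```

And that staleness is on top of the detection latency itself, which is why a slower-rendering page separates more.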

I need to get you to try the latest version; there is slightly less latency.

Eventually this will be solvable, but not with current mobile web technology.