How does the oculus touch controller know the absolute position of your hand in 3d space?
There are one or more (usually two) high-resolution, high-FOV, global-shutter IR cameras, connected via USB 3.0 to the same PC as the Rift. They run at either 60 or 120 FPS (I'm not sure which).
They can be mounted on your desk (a desk lamp-style stand comes with it), on stands, or on the wall, but always pointing slightly inwards towards the main tracking area.
On the Touch controllers, there is an array of flashing IR LEDs (hidden in the consumer version) that flash in a certain pattern to identify the object. The IR cameras see this pattern and, through complicated computer vision algorithms that fuse this optical data with the gyro and accelerometer data sent by the IMU inside each Touch controller, can determine the position of the controllers down to sub-mm accuracy.
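The actual blink encoding is proprietary, but the identification idea can be sketched very simply: give each LED a unique on/off code over successive camera frames, then match what the camera observes against the known codes. The LED names and 4-bit codes below are made up purely for illustration.

```python
# Illustrative sketch only (not Oculus's actual scheme): each LED blinks a
# unique binary ID across successive camera frames; matching the observed
# on/off sequence against the known codes tells you which LED a bright
# blob in the image corresponds to.

# Hypothetical 4-bit IDs for four LEDs on the controller.
LED_CODES = {
    "led_0": (0, 0, 1, 1),
    "led_1": (0, 1, 0, 1),
    "led_2": (1, 0, 0, 1),
    "led_3": (1, 1, 1, 0),
}

def identify_led(observed_sequence):
    """Return the LED whose blink code matches the observed on/off frames."""
    for name, code in LED_CODES.items():
        if tuple(observed_sequence) == code:
            return name
    return None  # noise, or a partially occluded LED

print(identify_led([0, 1, 0, 1]))  # -> led_1
print(identify_led([1, 1, 1, 1]))  # -> None (no such code)
```

Once each blob in the image is labelled with a specific LED, the correspondence between image points and known 3D points on the controller is what makes the pose solvable.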
The exact algorithms are proprietary (the system is called 'Constellation'), and the general concept is extremely complicated, but effectively the accelerometer value can be integrated to determine velocity, and that value can be integrated again to get the change in position. Unfortunately, that double integration means even the most minute error or bias accumulates rapidly into drift, so the estimate needs to be corrected. The optical system of IR LEDs and cameras is what provides this correction.
So the end result is 1:1, sub-mm accurate tracking with near-zero latency.
Video on the Touch controllers: https://www.youtube.com/watch?v=s6BuN1uyq48
To talk a bit more about the algorithms that can be used to achieve this: the relative positions of the IR LEDs on the Touch controller are known, so given any one image of the controller, the set of possible relative poses between the camera and the controller can be narrowed down. With the movement of the controllers and additional information from the IMUs, a unique pose can eventually be determined. Subsequent pose estimates can then be computed as updates to the previous pose.
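This pose search can be sketched in a toy form: score each candidate pose by projecting the known 3D LED layout into the image with a pinhole camera model and comparing against the observed 2D points, then keep the candidate with the smallest reprojection error. The LED layout, focal length, and the restriction to a one-parameter yaw search are all simplifying assumptions; real systems solve the full 6-DoF problem (e.g. with a PnP solver) and use the IMU to prune the candidate set.

```python
import numpy as np

LED_LAYOUT = np.array([        # hypothetical LED positions in the controller frame (m)
    [ 0.03,  0.00, 0.00],
    [-0.03,  0.00, 0.00],
    [ 0.00,  0.04, 0.00],
    [ 0.00, -0.02, 0.01],
])
FOCAL = 600.0                  # assumed pinhole focal length, in pixels

def project(points, yaw, translation):
    """Rotate the LED layout about the y-axis, translate, pinhole-project to 2D."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    cam = points @ rot.T + translation
    return FOCAL * cam[:, :2] / cam[:, 2:3]

def best_pose(observed_2d, candidate_yaws, translation):
    """Return the candidate yaw whose projected LEDs best match the observations."""
    errors = [np.sum((project(LED_LAYOUT, yaw, translation) - observed_2d) ** 2)
              for yaw in candidate_yaws]
    return candidate_yaws[int(np.argmin(errors))]

true_translation = np.array([0.0, 0.0, 1.0])   # controller 1 m in front of the camera
observed = project(LED_LAYOUT, 0.3, true_translation)
candidates = np.linspace(-1.0, 1.0, 21)        # the IMU prior would narrow this range
print(best_pose(observed, candidates, true_translation))  # recovers a yaw of ~0.3 rad
```

The "unique pose can eventually be determined" part corresponds to ambiguous candidates getting eliminated as the controller moves: poses that matched one frame by coincidence stop matching in the next.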
These are incredible answers. Thank you both!