Thursday, 27 February 2014

Pan-Tilt FPV using the Oculus Rift

In a previous experiment, I used the head-orientation of the Oculus Rift to drive two servos moving an FPV camera. That was a good start but not very useful as the FPV video feed wasn't displayed in the Rift.

After putting more work into this project, I finally have a functional FPV (or tele-presence) system that makes the most of the Rift's immersiveness (if that's a word). The system relies on various bits and pieces that can't possibly be better explained than with a diagram!


The result
The result is indeed surprisingly immersive. I initially feared that the movement of the FPV camera would lag behind, but that's not the case: the servos react quickly enough. Also, the Rift's large field of view is put to good use thanks to the wide FOV lens on the FPV camera.



Some technical notes
The wide FOV lens I use on the FPV camera causes significant barrel distortion in the captured image. After calibrating the camera (using Agisoft Lens), I implemented a shader that corrects this in real time.
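
For the curious, the correction boils down to the classic radial (Brown) model. Here is a CPU-side sketch of the per-pixel math the shader performs; the k1/k2 coefficients and the distortion centre are placeholders (the real values come out of the calibration), and aspect ratio is ignored for brevity:

    struct Vec2 { float x, y; };

    // For each output pixel, compute where to sample the distorted camera
    // image: scale the offset from the distortion centre by the radial
    // polynomial 1 + k1*r^2 + k2*r^4.
    Vec2 distortedSampleCoord(Vec2 uv, Vec2 centre, float k1, float k2)
    {
        float dx = uv.x - centre.x;
        float dy = uv.y - centre.y;
        float r2 = dx * dx + dy * dy;
        float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
        return Vec2{ centre.x + dx * scale, centre.y + dy * scale };
    }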

I use Ogre 3D and Kojack's Oculus code to produce the type of image expected by the Rift. In the Ogre 3D scene, I simply create a 3D quad mapped with the captured image and place it in front of the "virtual" head. Kojack's Rift code takes care of rendering the scene on two viewports (one for each eye). It also performs another distortion correction step which, this time, compensates for the Rift lenses in front of each eye. Lastly, it provides me with the user's head-orientation, which translates further down the chain into servo positions for moving the FPV camera.
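
To give a more concrete idea, here is a simplified sketch of what the video quad setup can look like in Ogre 3D. The material name, quad size and distance are placeholder values, and the code that streams captured frames into the texture is left out:

    // Build a textured quad showing the video feed (Ogre 1.x style).
    // "VideoFeedMaterial" is a placeholder material holding the texture
    // that gets updated with each captured frame.
    Ogre::ManualObject* quad = sceneMgr->createManualObject("VideoQuad");
    quad->begin("VideoFeedMaterial", Ogre::RenderOperation::OT_TRIANGLE_STRIP);
    quad->position(-1.0f,  0.75f, 0.0f); quad->textureCoord(0.0f, 0.0f);
    quad->position(-1.0f, -0.75f, 0.0f); quad->textureCoord(0.0f, 1.0f);
    quad->position( 1.0f,  0.75f, 0.0f); quad->textureCoord(1.0f, 0.0f);
    quad->position( 1.0f, -0.75f, 0.0f); quad->textureCoord(1.0f, 1.0f);
    quad->end();

    // Park the quad on its own node, a couple of units in front of the
    // (static) virtual head; roll compensation is applied to this node.
    Ogre::SceneNode* quadNode =
        sceneMgr->getRootSceneNode()->createChildSceneNode("VideoQuadNode");
    quadNode->setPosition(0.0f, 0.0f, -2.0f);
    quadNode->attachObject(quad);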

As the camera is physically servo-controlled only on yaw and pitch, I apply the head-roll (in the opposite direction) to the 3D quad displaying the captured image. This actually works really well (thanks Mathieu for the idea!). I'm not aware of any commercial RC FPV system that does this.
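
In Ogre terms, the counter-roll boils down to a few lines once the head orientation is decomposed into Euler angles. A sketch, where riftOrientation stands for the quaternion reported by Kojack's code and quadNode is the video quad's scene node:

    // Decompose the Rift head orientation: yaw and pitch go to the
    // servos, roll is compensated on the video quad.
    Ogre::Matrix3 m;
    riftOrientation.ToRotationMatrix(m);
    Ogre::Radian yaw, pitch, roll;
    m.ToEulerAnglesYXZ(yaw, pitch, roll);

    // Counter-rotate the quad around the view axis by the head roll.
    quadNode->setOrientation(Ogre::Quaternion(-roll, Ogre::Vector3::UNIT_Z));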

And ideas for future developments...
One of the downsides of the system is the poor video quality. This comes from several things:
  • the source video feed is rather low resolution,
  • the wireless transmission adds some noise,
  • the analog-to-digital conversion is performed with a cheap USB dongle.
Going fully-digital could theoretically solve these problems:
  • for example, using the Raspberry Pi camera as a source: the resolution and image quality would be better. It is also much lighter than the Sony CCD. It doesn't have a large FOV though (but this can be worked around)
  • transmitting over WiFi would avoid using a separate wireless system. But what kind of low-latency codec to use then? Also, range is an issue (though a directional antenna and tracking could help)
  • the image manipulated by the receiving computer would directly be digital, so no more composite video capture step.
Another problem with the current system is that the receiver end relies on a PC. It would be far more transportable if it could run on a small computer like the Raspberry Pi (which could probably be worn on the user's belt).

I should also get rid of the Pololu Maestro module on the transmitter end, as I've already successfully used the Raspberry Pi GPIO to generate PWM signals in the past.
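
Something along these lines should do the trick with the pigpio library (just one option among several; the pin numbers and angle range below are placeholders, not necessarily what I'd use). It links with -lpigpio -lrt -lpthread and needs root:

    #include <pigpio.h>

    // Map a head angle in [-90, 90] degrees to a servo pulse width in
    // [1000, 2000] microseconds (1500 = centred).
    unsigned angleToPulse(float degrees)
    {
        if (degrees < -90.0f) degrees = -90.0f;
        if (degrees >  90.0f) degrees =  90.0f;
        return (unsigned)(1500.0f + degrees / 90.0f * 500.0f);
    }

    int main(void)
    {
        if (gpioInitialise() < 0) return 1; // initialise the pigpio library

        const unsigned PAN_GPIO  = 17;      // placeholder BCM pin numbers
        const unsigned TILT_GPIO = 18;

        // Centre both servos; in the real system the angles would come
        // from the head orientation received over the network.
        gpioServo(PAN_GPIO,  angleToPulse(0.0f));
        gpioServo(TILT_GPIO, angleToPulse(0.0f));

        gpioTerminate();
        return 0;
    }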

Lastly, it would be fantastic to capture with two cameras and use the Rift's stereoscopic display.

So there's still some room for improvement! Any advice welcome.

The receiver end (A/V receiver on the left, Wifi-Pi on the right)

3 comments:

  1. Are you planning on posting the code? Does your system work on Ubuntu?

  2. Hello, how are you doing? I was wondering that too.

  3. Could you elaborate more on the connection between the PC and the RPi? Did you set the RPi up as an access point so the PC can send orientation commands?

    Thanks
