Thursday, 27 February 2014

Pan-Tilt FPV using the Oculus Rift

In a previous experiment, I used the head-orientation of the Oculus Rift to drive two servos moving an FPV camera. That was a good start but not very useful as the FPV video feed wasn't displayed in the Rift.

After putting more work into this project, I finally have a functional FPV or tele-presence system that makes the most of the Rift's immersivity (if that's a word). The system relies on various bits and pieces that can't possibly be better explained than with a diagram!


The result
The result is indeed surprisingly immersive. I initially feared that the movement of the FPV camera would lag behind, but that's not the case: the servos react quickly enough. Also, the large field of view of the Rift is put to good use with the lens I used on the FPV camera.



Some technical notes
The wide-FOV lens I use on the FPV camera causes significant barrel distortion in the captured image. After calibrating the camera (using Agisoft Lens), I implemented a shader to correct this in real time.
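For the curious, here's a minimal sketch of the same radial ("barrel") distortion correction done on the CPU with OpenCV, assuming the standard k1/k2 distortion model. The intrinsics and coefficients below are placeholders, not my actual calibration values, and the real thing runs as a shader on the GPU:

```python
# Sketch only: radial (barrel) distortion correction with OpenCV, equivalent
# in principle to the real-time shader. Intrinsics and k1/k2 are placeholders.
import cv2
import numpy as np

camera_matrix = np.array([[700.0,   0.0, 320.0],
                          [  0.0, 700.0, 240.0],
                          [  0.0,   0.0,   1.0]])   # placeholder intrinsics
dist_coeffs = np.array([-0.30, 0.09, 0.0, 0.0])     # k1, k2, p1, p2 (placeholders)

def undistort_frame(frame):
    """Correct barrel distortion on one captured frame."""
    return cv2.undistort(frame, camera_matrix, dist_coeffs)
```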

I use Ogre 3D and Kojack's Oculus code to produce the type of image expected by the Rift. In the Ogre 3D scene, I simply create a 3D quad mapped with the captured image and place it in front of the "virtual" head. Kojack's Rift code takes care of rendering the scene on two viewports (one for each eye). It also performs another distortion correction step which, this time, compensates for the Rift lenses in front of each eye. Lastly, it provides me with the user's head orientation, which is later translated, further down the chain, into servo positions for moving the FPV camera.

As the camera is physically servo-controlled only on yaw and pitch, I apply the head-roll to the 3D quad displaying the captured image (in the opposite direction). This actually works really well (thanks Mathieu for the idea!). I'm not aware of any commercial RC FPV system that does that.
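In code, that split between the servos and the display quad looks roughly like this. This is a sketch only: the axis conventions and quaternion format are assumptions, and the real implementation uses Kojack's Ogre classes rather than scipy:

```python
# Sketch: split the head orientation between the pan/tilt servos and the quad.
from scipy.spatial.transform import Rotation

def split_head_orientation(head_quat_xyzw):
    # Intrinsic yaw-pitch-roll decomposition (assuming a Y-up frame)
    yaw, pitch, roll = Rotation.from_quat(head_quat_xyzw).as_euler("YXZ", degrees=True)
    servo_pan = yaw      # drives the pan servo
    servo_tilt = pitch   # drives the tilt servo
    quad_roll = -roll    # applied to the 3D quad, opposite to the head roll
    return servo_pan, servo_tilt, quad_roll
```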

And ideas for future developments...
One of the downsides of the system is the poor video quality. This comes from several things:
  • the source video feed is rather low resolution,
  • the wireless transmission adds some noise,
  • the analog-to-digital conversion is performed with a cheap USB dongle.
Going fully digital could theoretically solve these problems:
  • for example, using the Raspberry Pi camera as a source: the resolution and image quality would be better, and it is also much lighter than the Sony CCD. It doesn't have a large FOV, though (this can be worked around)
  • transmitting over WiFi would avoid using a separate wireless system. But what kind of low-latency codec should be used then? Also, range is an issue (though directional antennas and tracking could help)
  • the image manipulated by the receiving computer would already be digital, so no more composite video capture step.
Another problem with the current system is that the receiver end relies on a PC. It would be far more transportable if it could run on a small computer like the Raspberry Pi (which could probably be worn on the user's belt).

I should also get rid of the Pololu Maestro module on the transmitter end as I've already successfully used the Raspberry Pi GPIO for generating PWM signals in the past.
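Something along these lines should do it with the pigpio library. This is a sketch only: the GPIO pin and pulse range are assumptions, not my actual wiring:

```python
# Sketch: driving a servo from the Raspberry Pi GPIO with pigpio,
# replacing the Pololu Maestro. Pin and pulse range are assumptions.
import pigpio

PAN_SERVO_GPIO = 18      # a PWM-capable pin (assumption)
pi = pigpio.pi()         # needs the pigpiod daemon running

def set_pan_angle(angle_deg):
    """Map an angle in [-90, 90] degrees to a 1000-2000 us servo pulse."""
    pulse_us = 1500 + (angle_deg / 90.0) * 500
    pi.set_servo_pulsewidth(PAN_SERVO_GPIO, pulse_us)
```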

Lastly, it would be fantastic to capture with two cameras and use the Rift stereoscopic display.

So there's still some room for improvement! Any advice is welcome.

The receiver-end (A/V receiver on the left, Wifi-Pi on the right)

Friday, 21 February 2014

Video latency investigation Part 2

In a previous article I investigated the latency of different video capture and display systems. Wireless video transmission was left aside at that time.

Since then, I got my hands on a brand new RC video transmission kit and also a very old CRT TV...

Portable CRT TV
(composite input)
TS832 and RC832
5.8GHz AV Transmitter & Receiver


I (re)did some of the tests on the LCD TV I used in the previous article, just to check the reproducibility of the previous measurements. Unfortunately, its composite input died when I started the tests, so I switched to the SCART input using an adapter.

Here are the results:
Setup                                                          Average latency in ms (4 measures)
Sony CCD + LCD TV (composite) (same test as prev. article)    41
Sony CCD + LCD TV (SCART)                                      31
Sony CCD + AV Transmission + LCD TV (SCART)                    31
Sony CCD + CRT TV                                              15

Some more conclusions:

  • The latency induced by a CRT display is incredibly small!
  • The 5.8GHz AV transmission kit doesn't add any measurable latency to the system
  • Weirdly enough (at least on this particular LCD TV), the latency decreases by about 10ms when using the SCART input instead of the composite one.

Thursday, 20 February 2014

Raspberry Pi Flight Controller Part 2 : Single-axis "acro" PID control

After dealing with inputs/outputs, it's time for some basic flight control code!

The best explanation I've found about quadcopter flight control is Gareth Owen's article called "How to build your own Quadcopter Autopilot / Flight Controller". Basically, you can control a quadcopter in two different ways:

  • In "Acrobatic" or "Rate" mode: the user's input (i.e. moving the sticks on the transmitter) tells the controller at what rate/speed the aircraft should rotate on each axis (yaw, pitch and roll). In this mode the user is constantly adjusting the quadcopter rotational speed which is a bit tricky but allows acrobatic maneuvers, hence it's name! The "acro" controller is the easiest you can implement and it only requires a gyroscope in terms of hardware. The basic KK board I was using previously is an Acro controller.
  • In "Stabilized" mode: this time the user's inputs indicate the angles on each axis that the aircraft should hold. This is far easier to pilot: for example: if you center the sticks, the aircraft levels. Technically, a stabilized flight controller internally uses an Acro flight controller. In terms of hardware, in addition to the gyroscope, this controller needs an accelerometer (to distinguish up from down), and optionally a magnetometer (to get an absolute reference frame).
So let's start at the very beginning: an Acro flight controller working on a single axis. Here's the kind of loop behind such a controller (a code sketch follows the list):
  • Get the latest user's inputs:
    • The desired rotation rate around the axis (in degrees per second)
    • The throttle value (in my case, unit-less "motor-power" between 0 and 1)
  • Get a reading from the gyroscope (a rotation rate in degrees per second)
  • Determine the difference between the measured rotation rate and the desired one: this is the error (still in degrees per second)
  • Use the PID magic on this error to calculate the amount of motor-power to apply to correct the error. This is the same unit as the throttle.
  • Add this correction to the throttle value and send the result to one motor; subtract the correction from the throttle value and send the result to the second one (the [0..1] value is converted into a pulse-width in microseconds, i.e. a PWM signal that the ESCs understand)
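Here's a minimal Python sketch of one iteration of this loop. The gains, limits, and reading functions are placeholders, not the actual values or drivers used on the quad:

```python
# Sketch of the single-axis "acro" rate loop. PID gains are placeholders.
class Pid:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

DT = 1.0 / 125.0                           # flight-control period (125Hz)
pid = Pid(kp=0.002, ki=0.0005, kd=0.0001)  # placeholder gains

def control_step(desired_rate_dps, gyro_rate_dps, throttle):
    """One iteration: rate error -> PID correction -> per-motor power."""
    error = desired_rate_dps - gyro_rate_dps        # deg/s
    correction = pid.update(error, DT)              # unit-less motor power
    motor_a = clamp(throttle + correction, 0.0, 1.0)
    motor_b = clamp(throttle - correction, 0.0, 1.0)
    return motor_a, motor_b                         # later converted to PWM pulse widths
```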
And repeat this loop as frequently as possible (for me, the flight control runs at 125Hz, the gyro reading at 250Hz, and the user's input reading at 50Hz). This is what the result looks like:


Graph colors:

  • green: gyroscope reading
  • blue: user's "desired" angular rate 
  • red: throttle value
Notice how the quadcopter rotates at constant speed when the desired angular rate stays at the same non-zero value.