Signal power in GNU Radio

11.04.2015 18:28

In my recent attempts to measure the noise figure of an rtl-sdr dongle, I noticed that the results of the twice-power method and the Y-factor method differ significantly. In an attempt to find the reason for this difference, I did some further measurements with different kinds of signals. I found out that the power detector I implemented in GNU Radio behaves oddly: the indicated signal power appears to depend on the signal's crest factor, which should not be the case.

Update: As my follow-up post explains, I was using an incorrect setup on both the spectrum analyzer and the signal generator.

First of all, I would like to clarify that what I'm doing here is comparing the indicated power (in relative units) for two signals of identical power. I'm not trying to determine the absolute power (say in milliwatts). As the GNU Radio FAQ succinctly explains, the latter is tricky with typical SDR equipment.

The setup for these experiments is similar to what I described in my post about noise figure: I'm using an Ezcap DVB-T dongle tuned to 700.5 MHz. I'm measuring the power in a 200 kHz band that is offset by -500 kHz from the center frequency. As far as I can see from the FFT, this band is free from spurs and other artifacts of the receiver itself. Signal power is measured by multiplying the signal with its complex conjugate and then taking a moving average over 50000 samples.

Updated rtl-sdr power detector flow graph.
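For reference, this is roughly the calculation the flow graph performs, sketched in NumPy. The multiply-by-conjugate step and the 50000-sample moving average follow the description above; the sample rate and the frequency-translating channel filter are my own assumptions and only serve to pick out the 200 kHz band:

    import numpy as np
    from scipy.signal import firwin, lfilter

    def channel_power_db(iq, samp_rate=2.048e6, offset=-500e3, bw=200e3, avg_len=50000):
        """Relative power in a band offset from the center frequency."""
        n = np.arange(len(iq))
        # shift the band of interest (here -500 kHz) down to 0 Hz
        shifted = iq * np.exp(-2j * np.pi * offset / samp_rate * n)
        # channel filter; 129 taps is an arbitrary choice for this sketch
        taps = firwin(129, bw / 2, fs=samp_rate)
        chan = lfilter(taps, 1.0, shifted)
        # instantaneous power: signal times its complex conjugate
        p = (chan * np.conj(chan)).real
        # moving average over 50000 samples, as in the flow graph
        avg = np.convolve(p, np.ones(avg_len) / avg_len, mode='valid')
        return 10 * np.log10(np.mean(avg))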

I'm using a Rohde & Schwarz SMBV vector signal generator that is capable of producing an arbitrary waveform with an accurate total signal power. As a control, I've also set up an FSV spectrum analyzer to measure the total signal power in the same 200 kHz band as the rtl-sdr setup.

For example, this is what a spectrum analyzer shows for an unmodulated sine wave with -95 dBm level set on the generator:

R&S FSV power measurement for CW.

And this is what it shows for a 100 kHz band of Gaussian noise, again with -95 dBm level:

R&S FSV power measurement for Gaussian noise.

The measured power in the 200 kHz channel agrees well with the power setting on the generator in both cases. The difference probably comes from losses in the cable (I used a 60 cm low-loss LMR-195 coax that came with the USRP), connectors, calibration errors in both instruments and the fact that the FSV adds its own noise power to the signal. The important thing, however, is that the power readout changes by only 0.19 dB when the modulation is switched on. I think this is well within the acceptable measurement error range.

Repeating the same two measurements using the rtl-sdr dongle and the GNU Radio power detector:

rtl-sdr signal power measurements for CW and noise.

Note that now the modulated signal shows much higher power than the unmodulated one. The difference is 2.53 dB, which cannot be attributed to random error.

In fact, this effect is repeatable and not specific to the rtl-sdr dongle. I've repeated the same measurements using a USRP N200 device with an SBX daughterboard. I've also used a number of different signals, from band-limited Gaussian noise and multiple CW signals to an amplitude-modulated carrier.

The results are summarized in the table below. To make things clearer, I'm showing the indicated power relative to the CW signal. I've used -95 dBm mean power for the rtl-sdr and -100 dBm for the USRP, to keep the signal-to-noise ratio approximately the same on both devices.

Signal                       Ppeak/Pmean [dB]   P rtl-sdr [dB]   P USRP [dB]
CW                                       0.00             0.00         0.00
2xCW, fd=60 kHz                          3.02             0.02         0.00
2xCW, fd=100 kHz                         3.02             0.04         0.04
3xCW, fd=60 kHz                          3.68            -0.03         0.00
100% AM, fm=60 kHz                       6.02             1.20         1.25
Gaussian noise, BW=100 kHz              10.50             2.55         2.66
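As a sanity check on the first column, the peak-to-mean ratios of the simpler test signals can be reproduced with a few lines of NumPy. This is my own reconstruction, not the generator's ARB data; the multi-tone and AM figures depend on the exact tone phases and the modulation definition, so only the unambiguous cases are shown:

    import numpy as np

    def peak_to_mean_db(x):
        # peak-to-mean power ratio (crest factor) of a complex baseband signal
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    fs = 1e6                               # assumed sample rate for this sketch
    t = np.arange(200000) / fs

    cw = np.exp(2j * np.pi * 10e3 * t)                                       # single tone
    two_cw = np.exp(-2j * np.pi * 30e3 * t) + np.exp(2j * np.pi * 30e3 * t)  # two tones 60 kHz apart
    noise = (np.random.randn(t.size) + 1j * np.random.randn(t.size)) / np.sqrt(2)

    print(peak_to_mean_db(cw))       # 0.00 dB
    print(peak_to_mean_db(two_cw))   # 3.01 dB
    print(peak_to_mean_db(noise))    # roughly 10 to 11 dB, depends on the record length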

As you can see, both devices show an offset for signals that have a significant difference between peak and average power. The offsets are also very similar between the two devices, which suggests that this effect is not caused by the device itself.

Any explanation based on the physical receiver design that I can imagine would result in a lower gain for signals with a high peak-to-mean power ratio, which is exactly the opposite of what I've seen.

It doesn't seem to be caused by some smart logic in the tuner adjusting the gain for different signals. The difference remains down to very low signal powers, and I think it is unlikely that any such optimization would work at such low signal-to-noise ratios. As far as I can tell, this also excludes receiver non-linearity as the cause.

GRC power detector response for CW and noise signals.

If I were using an analog power detector, this kind of effect would be typical of a detector that does not measure signal power directly (like a diode detector, which has an exponential characteristic instead of a quadratic one). However, I'm calculating signal power numerically, and you can't get a more exact quadratic function than x².
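To illustrate why: an ideal square-law detector indicates the same mean power for a CW signal and for Gaussian noise with the same RMS value, while a detector with an exponential characteristic does not. A minimal numerical check (my own sketch; the exponential detector is only a crude stand-in for a diode):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(100000) / 1e6

    # CW and complex Gaussian noise scaled to identical mean power (unit RMS)
    cw = np.exp(2j * np.pi * 10e3 * t)
    noise = (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size)) / np.sqrt(2)

    def square_law(x):
        # exact quadratic detector: mean of x times its complex conjugate
        return np.mean((x * np.conj(x)).real)

    def diode_like(x, k=2.0):
        # crude stand-in for a detector with an exponential characteristic
        return np.mean(np.exp(k * np.abs(x)))

    print(square_law(cw), square_law(noise))   # both very close to 1.0
    print(diode_like(cw), diode_like(noise))   # differ, because the result depends on the crest factor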

I've tested a few theories regarding numerical errors. In fact, the results do differ somewhat between the moving average and the decimating low-pass filter. They also differ between using the multiply-conjugate blocks and the RMS block. However, the differences are insignificant as far as I can see and don't explain the measurements. I've chosen the flow graph setup shown above because it produces figures that are closest to an identical calculation done in NumPy. Numerical errors also don't explain why the same flow graph produces valid results for a receiver simulated with signal and noise source blocks.
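This is roughly the kind of comparison I did, sketched in NumPy. The moving average corresponds to the flow graph above; the single-pole IIR is only a stand-in for an RMS-style averaging block and does not reproduce the GNU Radio block internals exactly:

    import numpy as np

    def power_moving_average(x, n=50000):
        # conjugate-multiply followed by a moving average, as in the flow graph
        p = (x * np.conj(x)).real
        return np.convolve(p, np.ones(n) / n, mode='valid')

    def power_single_pole_iir(x, alpha=1.0 / 50000):
        # single-pole IIR average of |x|^2, a rough stand-in for RMS-style averaging
        p = (x * np.conj(x)).real
        out = np.empty_like(p)
        acc = 0.0
        for i, v in enumerate(p):
            acc += alpha * (v - acc)
            out[i] = acc
        return out

    # with a long enough record both converge to the same mean power
    rng = np.random.default_rng(1)
    x = (rng.standard_normal(300000) + 1j * rng.standard_normal(300000)) / np.sqrt(2)
    print(np.mean(power_moving_average(x)), power_single_pole_iir(x)[-1])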

So far I'm out of ideas what could be causing this.

Posted by Tomaž | Categories: Analog

Comments

You should be aware that using a square-wave gating function such as a moving average will introduce an error in your figures. You should use a gating function based on a Gaussian distribution, which has the lowest distortion.
