HiFiBerry follow up

11.06.2017 20:59

Back in December I wrote about problems with a HiFiBerry audio interface for the Raspberry Pi. The audio board was apparently emitting interference in the 2.4 GHz frequency band and made the wireless LAN connection on the Raspberry Pi unreliable. I got in contact with their support and they acknowledged that the oscillators on that version of their hardware had an EMI problem. They promised to get back to me when they had fixed the issue, and indeed in April they offered to replace the old board free of charge or sell me a new one for half the price. I opted for the latter, since I was curious what changes they had made to the design and wanted to compare the boards side-by-side.

HiFiBerry DAC+ HW 2.2

This is the old board, marked HiFiBerry DAC+ HW 2.2. The components marked X44 and X48 are Fox Electronics Xpresso-series hybrid oscillator modules. Each of these contains an integrated circuit and a quartz crystal resonator. Unfortunately they are unmarked and I didn't bother to measure their frequencies. From their designations I'm guessing one provides the clock for the 44100 Hz sampling rate and the other for 48000 Hz. The datasheet for the PCM5122 DAC suggests that the oscillators themselves are in the 10 MHz range and that the clocks are then divided down inside the DAC chip.

HiFiBerry DAC+ HW 2.6

This is the new board I got. It's marked HW 2.6. The gold hybrids have been replaced with similar-looking components in black plastic packages. The Xpresso series has apparently been discontinued. The new oscillators are marked only with the letters BOHXA and after a brief web search I didn't manage to find their manufacturer. The new board also omits the push-button. I'm not sure what its original function was anyway.

Here are the two photographs superimposed to highlight the differences:

Comparison of HiFiBerry DAC+ HW 2.2 and 2.6

Apart from the new oscillator hybrids, the most obvious change is the removal of two large copper fills on the top layer of the PCB. These large areas of copper on the top layer are all connected to the 3.3V supply. The bottom layer of the board remains one big ground plane.

Copper fill stub near the oscillators on HW 2.2.

The copper fill near the oscillators looks especially suspicious. It was only connected to the 3.3V supply with a narrow bridge between the pins of P4 on the right. It provided supply voltage to U4 and nothing else further down on the left side. It seems like it could accidentally form a quarter-wave stub antenna. It's approximately 25 mm in length, so the resonance could well be somewhere in the GHz range. That is near enough to the 2.4 GHz band that I think it would be feasible for it to transmit some oscillator harmonics out from the board.
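
Just to get a feeling for the numbers, here is a rough check of the quarter-wave hypothesis. The effective dielectric constants are my own assumed values (air and something microstrip-like over FR4), not measured ones:

    # Quarter-wave resonance of a stub of the given length. The effective
    # dielectric constant slows the wave down; the values are assumptions.
    length = 25e-3   # m, approximate length of the copper fill
    c = 3e8          # m/s, speed of light
    for eps_eff in (1.0, 3.3):
        f = c / (4 * length * eps_eff ** 0.5)
        print("eps_eff = %.1f -> resonance around %.1f GHz" % (eps_eff, f / 1e9))

With these assumptions the resonance lands somewhere between roughly 1.6 and 3 GHz, so a resonance near the 2.4 GHz band doesn't seem far-fetched.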

It would be interesting to see if this stub was indeed causing the problems. It should be easy to drill a hole through and decouple the left end of it with an SMD capacitor to the ground plane on the bottom layer. I could then repeat the near-field measurements I did last year to confirm. I'm not sure if I will bother though. The new board does indeed fix the problem with the Raspberry Pi built-in Wi-Fi radio and I currently don't have any particular use for another audio board.

Posted by Tomaž | Categories: Analog | Comments »

Repairing what wasn't broken

04.06.2017 21:10

The last time I was tinkering with the insides of my portable CRT TV (UTV 6007) I happened to notice one more odd thing. One of the smaller through-hole resistors near the high-voltage transformer looked charred from heat. Ever since this TV was new it had a little bit of that characteristic smell of overheated electronics. However, it worked fine, so I didn't worry too much about it, and it's not unusual for cheap plastic to smell a bit when heated. But this looked serious enough to investigate, even though from the outside there were still no apparent problems.

Blackened R75 near the high-voltage transformer.

The size of the resistor suggested a 1/8 W rating. The body was too blackened to read out the colors. The position is marked with R75 and the silkscreen print underneath helpfully says it should be 1 kΩ. However I've seen that other resistors on this board sometimes don't match the values printed for their positions. Out of the circuit, the resistor measured around 900 Ω, which seemed consistent with a slightly damaged 1 kΩ. Just to be sure, I traced the circuit around it to see what its function was.

The following is the relevant part of the circuit on the main board around the HVT. Only the low-voltage secondary winding, which provides a few hundred volts for the first anode, is shown here.

Circuit around the HVT in UTV 6007.

I also traced the small circular board that sits directly on top of the electron gun pins and is connected to the main board with a (very flimsy looking) 4-wire flat cable.

Small circuitboard on top of the electron gun.

I didn't find the exact pin-out for this tube, so take the pin markings with a grain of salt. However, this pin-out seems consistent with how a typical cathode ray tube is connected. For reference I used the one shown in the KA2915 datasheet and a few old TV schematics my father found in his library.

The small circuit on top of CRT pins.

The cathode K has a positive bias relative to the ground. The bias can be adjusted with the Brightness knob. The first grid G1 is grounded and is hence negative relative to the cathode. First anode A1 is on a higher positive bias relative to the ground and is hence positive relative to the cathode. The second grid G2 either isn't present in this tube or is grounded as well. There is no apparent focus adjustment. The video signal is connected to the cathode. It varies cathode potential relative to the first grid and so controls the intensity of the electron beam and thus the brightness of the spot on the phosphor.

The capacitor and diode arrangement between A1, G1 and ground is interesting. Something similar is present on all CRT circuits I've seen (see D3 here for example, from this project). Its purpose might be to put a high negative bias on G1 to stop the electron beam when the device is turned off and A1 goes low. I know that in old TV sets a lingering electron beam sometimes burned a spot into the screen if the beam didn't shut down before the deflection did. This may be there to prevent that.

In any case, R75 is in the circuit that provides the anode voltage. Only the small anode current should flow through it, plus the charging current for the 2.2 μF capacitor for a short time after the TV is turned on. It's not apparent what caused it to heat up so much. The capacitor seems fine, so perhaps something arced over at one point, or the TV was turned on and off several times in short succession.
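
Some back-of-the-envelope numbers support that puzzlement. Taking the 1 kΩ value for R75 and the 2.2 μF capacitor, and assuming an anode supply of 300 V (an illustrative stand-in for "a few hundred volts", not a measured value), the turn-on transient alone looks harmless:

    # Assumed values: R75 = 1 kOhm, C = 2.2 uF, anode supply ~300 V (illustrative).
    R = 1e3
    C = 2.2e-6
    V = 300.0

    tau = R * C                    # charging time constant
    e_charge = 0.5 * C * V ** 2    # energy dissipated in R per charge cycle
    p_peak = V ** 2 / R            # instantaneous power at the moment of turn-on

    print("time constant:     %.1f ms" % (tau * 1e3))   # ~2.2 ms
    print("energy per charge: %.2f J" % e_charge)       # ~0.1 J
    print("peak power:        %.0f W" % p_peak)         # ~90 W, but only for milliseconds

A tenth of a joule per power-up and a 2.2 ms time constant are not much on their own, which fits how hard it is to see what made the resistor heat up so badly.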

Replacement R75 resistor in UTV 6007.

Since the circuit seemed to be consistent with the suggested 1 kΩ value, I replaced the resistor with a new one. I used a standard 1/4 W carbon resistor I had at hand and left it on longer leads for better cooling in case something similar happens again. As expected, the TV runs just as well with the new resistor in place as it did with the old burned-up one. There's currently no sign of it overheating, but perhaps I'll check again after some time.

I love playing with this old stuff. Analog TV was one of the pinnacles of consumer analog technology and it's fascinating to see how optimized and well thought out it was at the end of its era. This particular specimen is surprisingly repairable for a device that I got new in the store for 20 €. Components are well marked on the silk screen print and most have their values printed out as well (even if those don't always match reality). The only thing more I could wish for is that I could run it with the case opened without a special contraption for holding the CRT.

Posted by Tomaž | Categories: Analog | Comments »

Disabling TV audio squelch circuit

21.05.2017 14:33

I just don't have any luck with Maker Faires, it seems. I had everything packed and prepared for the event last week and then spent the weekend in bed with a fever. Sincere apologies to anyone who wanted to see the Galaksija. I'm sure there were more than enough other interesting exhibits to make the visit worth your time.

Galaksija screenshot

In the weeks leading up to the Maker Faire I came across an old problem with the small analog TV (United UTV 6007) that I use with vintage computers. Ever since I first played with Galaksija's audio capabilities I noticed that sound gets very distorted when played through the TV speaker. I never really looked into it. I just assumed that perhaps the voltage levels were wrong for the line input, or that the high-frequency components of 1-bit music were interfering with something. Since I already had the Galaksija set up on my bench, I decided to investigate this time. It turned out that a clean sine wave from a signal generator also sounded very choppy and distorted. On the other hand, audio from a DVD player sounded perfectly fine. This made me curious and I took the TV apart.

United UTV 6007 TV circuit board.

UTV 6007 is built around the CD5151CP integrated circuit. It's very similar to the camping set TV described in this post about adding a composite video input to it. The post on Bidouille.org also has links to a bunch of useful datasheets. UTV 6007 already has a composite video and an audio line input out of the box, which was one of the reasons I originally bought it.

Part of the UTV 6007 circuit board marked "hbb".

I traced the audio path on the board to this curious circuit near the volume knob. I'm not sure what "hbb" stands for, but the circuit has a squelch function. It mutes the speaker when there's no picture displayed. This makes the TV silent when it's not tuned to a channel instead of playing the characteristic white noise. It actually takes a surprising amount of real estate on the small PCB.

Audio amplifier and squelch circuit in UTV 6007

This is the relevant part of the circuit traced out. The squelch takes its input from the sync. sep. output on the CD5151CP. This is probably a signal that contains only the synchronization impulses separated out from the video. R1, C1, R2 and C2 form an impulse filter. Positive impulses between 150 μs and 5 ms long turn on Q1, which discharges C3. If no impulses are present, R3 charges C3 in about 14 ms to the point where Q2 turns on. Q2 then shorts the audio amplifier input to ground, making the output silent.
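
The 14 ms figure presumably comes from the usual RC charge time to a transistor's base-emitter threshold. The component values and supply voltage below are hypothetical placeholders, chosen only to show the shape of the calculation:

    import numpy as np

    vcc = 5.0      # V, assumed supply
    vbe = 0.6      # V, approximate turn-on threshold of Q2
    r3 = 1e6       # ohm, hypothetical
    c3 = 100e-9    # F, hypothetical

    t = r3 * c3 * np.log(vcc / (vcc - vbe))
    print("Q2 turns on after about %.0f ms without sync pulses" % (t * 1e3))

With these example values the result comes out around 13 ms, in the same ballpark as the figure above.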

Q2 seems somewhat odd, since its collector doesn't have any bias current. At first glance it appears that it would not be able to ground the negative half-waves of the audio signal. However, the D386 amplifier has a bipolar differential input stage that sources base current, and apparently that provides sufficient collector current for Q2. In fact, the audio circuit (without the squelch) is identical to one of the D386 reference designs.

These timings suggest that the circuit detects vertical video synchronization. Unfortunately, the compact design of the TV makes it non-trivial to power it up while the circuit board is accessible. I didn't want to bother with any special setup, so I don't have any actual measurements. Sound distortion suggested that Galaksija's video signal was making this circuit erroneously trigger for a short time once every frame, which made for a choppy sound. Galaksija's video is in fact somewhat out-of-spec (for instance, it's progressive scan instead of interlaced).

Since I was not sure exactly which timing was the culprit, I opted to simply disable the circuit. I guess in the age of digital TV some untuned television noise just adds to the retro style of the whole setup. To disable the squelch I removed the R3 resistor. Without it, Q2 can't get any base current and hence never conducts. A quick test confirmed that with that modification in place the Galaksija sounds as it should on the TV speaker.

Posted by Tomaž | Categories: Analog | Comments »

Phase noise in microcontroller ADCs

17.04.2017 19:44

In the past few years I designed a series of small VHF/UHF receivers. They are all based on TV tuner chips from NXP Semiconductors and the STM32 series of Cortex-M3 microcontrollers. The receivers were originally intended for experiments with TV whitespaces, such as detecting the presence of wireless microphones. However, European projects come and go, and so my work at the Institute has recently shifted towards ultra-narrowband technology. I keep using my old hardware though, partly because it is convenient and partly because it's interesting to find where its limitations are.

With ultra-narrowband, phase noise is often the defining characteristic of a system. Phase noise of a receiver is a measure of the short-term stability of its frequency reference. One of its effects is blooming of narrowband signals in the frequency domain. A common way to specify phase noise is in decibels relative to the carrier (dBc), at 1 Hz equivalent bandwidth at a certain offset from the carrier. This slide deck from Agilent nicely explains the concept.

Not surprisingly, my design has quite a lot of phase noise. This was not a concern when receiving wide band FM microphone signals. However, it turns out that it's not the RF part that is the culprit. Most of the phase noise in the system comes from the analog-to-digital converter in the ARM microcontroller that I use to sample the baseband signal. I investigated this using the same setup I used for my ADC undersampling measurement - in the following measurements, no RF circuits were involved.

This is what the spectrum of a 500 kHz CW signal looks like after being sampled at 2 Msample/s (using the interleaved dual mode of the ADC). The spectrum is calculated using an FFT over 2048 samples. Ideally, there should only be a narrow spike representing a single frequency component; however, the phase noise causes it to smear into a broad peak:

Measured spectrum of a CW signal.

From this, I drew the phase noise plot. This shows half of the dual sideband power, calculated at 1 Hz equivalent bandwidth and relative to the total signal power:

Measured phase noise of the ADC.

At 10 kHz offset, this gives:

\mathcal{L}_{ADC}(10\mathrm{kHz}) = -77 \mathrm{dBc @ 1 Hz}

On the other hand, typical phase noise from the datasheet of the tuner chip I'm using is:

\mathcal{L}_{tuner}(10\mathrm{kHz}) = -93 \mathrm{dBc @ 1 Hz}

For comparison, the National Instruments USRP N210, another device I use daily, is only 3 dB better at 10 kHz (according to this knowledge base page):

\mathcal{L}_{USRP}(10\mathrm{kHz}) = -80 \mathrm{dBc @ 1 Hz}

Proper lab equipment of course is significantly better. The Rohde & Schwarz SMBV signal generator I used in the measurement only has -148 dBc of phase noise specified at 20 kHz offset.

What causes this phase noise? The ADC in the microcontroller is driven by the system clock. The accuracy of this clock determines the accuracy of the signal sampling and in turn the phase noise in the digital signal on the output of the ADC. In my case, the system clock is derived from the high speed internal (HSI) oscillator using the integrated PLL. The datasheet doesn't say anything about the oscillator, but it does say that the PLL cycle-to-cycle jitter is at most 300 ps.

Using a Monte Carlo simulation, I simulated the phase noise of a system where signal sampling has a random ±150 ps jitter with a uniform distribution. The results nicely fit the measurement. The shaded area below shows the range of 𝓛(f) observed in 5000 runs:

Simulated phase noise compared to measurement.

So it seems that the PLL is responsible for most of the phase noise. Unfortunately, it appears that I can't avoid using it. There is no way to run the integrated ADC from a separate external clock. I could run the whole system from a low-jitter high-speed external (HSE) clock without the PLL, however HSE is limited to 25 MHz. This is quite low compared to my current system clock of 56 MHz and would only be feasible for significantly lower sample rates (which would require different analog anti-aliasing filters). External ADC triggering also wouldn't help here since even with an external trigger, the sample-and-hold circuit appears to be still driven by the ADC clock.

For some further reading on the topic, I recommend the Effect of Clock Jitter on High Speed ADCs design note from Linear Technology, which talks about phase noise from the perspective of serious ADCs, and the Phase Locked Loop section in STMicroelectronics AN2352.

Posted by Tomaž | Categories: Analog | Comments »

About the Wire loop probe

15.12.2016 21:08

Recently I was writing about how my father and I were checking a HiFiBerry board for a source of Wi-Fi interference. For want of better equipment we used a crude near-field probe that consisted of a loop of stripped coaxial cable and a trimmer capacitor. We attempted to tune this probe to around 2.4 GHz using the trimmer to get more sensitivity. However we didn't see any effect of capacitance changes on the response in that band.

The probe was made very much by gut feeling, so it wasn't that surprising that it didn't work as expected. We got some interesting results nonetheless. Still, I thought I might do some follow-up calculations to see how far off we were in our estimates of the resonance frequency.

Our probe looked approximately like the following schematic (photograph). The loop diameter was around 25 mm and the wire diameter around 1 mm. The trimmer capacitor was around 10 pF:

Wire loop at the end of a coaxial cable.

Inductance of a single, circular loop of wire in air is:

L = \mu_0 \frac{D}{2} \left( \ln \frac{8D}{d} - 2 \right) \approx 50 \mathrm{nH}

The wire loop and the capacitor form a series LC circuit. If we ignore the effect of the coaxial cable connection, the resonant frequency of this circuit is:

f = \frac{1}{2 \pi \sqrt{LC}} \approx 200 \mathrm{MHz}

So it appears that we were off by an order of magnitude. In fact, this result is close to the low frequency peak we saw on the spectrum analyzer at around 360 MHz:

Emissions from the HiFiBerry board from DC to 5 GHz.

Working backwards from the equations above, we would need capacitance below 1 pF or loop diameter on the order of millimeters to get resonance at 2.4 GHz. These are very small values. Below 1 pF, stray capacitance of the loop itself would start to become significant and a millimeter-sized loop seems too small to be approximated with lumped elements.
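
As a quick numerical check of the equations above, using the same dimensions and the nominal 10 pF trimmer value:

    import numpy as np

    mu0 = 4e-7 * np.pi

    def loop_inductance(D, d):
        # Single circular air-core loop; D and d in metres.
        return mu0 * (D / 2) * (np.log(8 * D / d) - 2)

    def resonance(L, C):
        return 1 / (2 * np.pi * np.sqrt(L * C))

    L = loop_inductance(25e-3, 1e-3)
    print("L  = %.0f nH" % (L * 1e9))                       # ~50 nH
    print("f0 = %.0f MHz" % (resonance(L, 10e-12) / 1e6))   # close to the ~200 MHz estimate above

    # Capacitance needed for a 2.4 GHz resonance with the same loop:
    C24 = 1 / (L * (2 * np.pi * 2.4e9) ** 2)
    print("C for 2.4 GHz = %.2f pF" % (C24 * 1e12))         # well below 1 pF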

Posted by Tomaž | Categories: Analog | Comments »

HiFiBerry and Wi-Fi interference

01.12.2016 11:43

HiFiBerry is a series of audio output cards designed to sit on the Raspberry Pi 40-pin GPIO connector. I've recently bought the DAC+ pro version for my father to use with a Raspberry Pi 3. He is making a custom box to use as an Internet radio and music player. I picked HiFiBerry because it seemed the simplest, with the fewest things that could go wrong (the Cirrus Logic board for instance has many other features in addition to an audio output). It's also well supported out-of-the-box in various Raspberry Pi Linux distributions.

Unfortunately, my father soon found out that the internal wireless LAN adapter on the Raspberry Pi 3 stopped working when HiFiBerry was plugged in. Apparently other people have noticed that as well, as there is an open ticket about it at the Raspberry Pi fork of the Linux kernel.

Several possible causes were discussed in the thread on GitHub, from hardware issues to kernel driver bugs. Of those, I found electromagnetic interference the most likely explanation - reports say that the issue isn't always there and depends on the DAC sampling rate and on the Wi-Fi channel and signal strength. I thought I might help resolve the issue by offering to make a few measurements with a spectrum analyzer (also, when you have RF equipment on the desk, everything looks like EMI).

HiFiBerry board with a near-field probe over the resonators.

I didn't have any near-field probes handy, so we used an ad-hoc probe made from a small wire loop on the end of a coaxial cable. We attempted to tune the loop with a trimmer capacitor to get better sensitivity around 2.4 GHz, but the capacitor didn't have any noticeable effect. We swept this loop over the surface of the HiFiBerry board as well as the Raspberry Pi 3 board underneath.

During these tests, the on-board wireless LAN and Bluetooth interfaces of the Raspberry Pi were disabled by blacklisting the brcmfmac, brcmutil, btbcm and hci_uart kernel modules in /etc/modprobe.d. Apart from this, the Raspberry Pi was booted from an unmodified Volumio SD card image. Unfortunately we don't know what kind of ALSA device settings the Volumio music player used.
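
For reference, the blacklist amounts to a small configuration file along these lines (the file name is arbitrary, any *.conf file in /etc/modprobe.d works):

    # /etc/modprobe.d/blacklist-wifi-bt.conf
    blacklist brcmfmac
    blacklist brcmutil
    blacklist btbcm
    blacklist hci_uart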

What we noticed is that the HiFiBerry board seemed to radiate a lot of RF energy all over the spectrum. The most worrying are spikes approximately 22.6 MHz apart in the 2.4 GHz band that is used by IEEE 802.11 wireless LAN. Note that the peaks on the screenshot below almost perfectly match the center frequencies of channels 1 (2.412 GHz) and 6 (2.437 GHz). The peaks continue to higher frequencies beyond the right edge of the screen, and the next two match channels 11 and 14. This seems to approximately match the report from Hyperjett about which channels seem to be most affected.

Emissions from the HiFiBerry board in the 2.4 GHz band.

The spikes were highest when the probe was centered over the crystal resonators. This position is shown in the photograph above. This suggests that the oscillators on the HiFiBerry are the source of this interference. Phil Elwell mentions some possible I2S bus harmonics, but the frequencies we saw don't seem to match those.
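
The ~22.6 MHz spacing is suspiciously close to 22.5792 MHz, the common 512 × 44.1 kHz audio master clock. That is only my guess though, since the oscillators on the board are unmarked. Its harmonics do land roughly where the spikes were seen:

    # Harmonics of an assumed 22.5792 MHz audio master clock around 2.4 GHz.
    f_clk = 22.5792e6
    for n in range(107, 111):
        print("%d x f_clk = %.3f GHz" % (n, n * f_clk / 1e9))

This puts the 107th, 108th, 109th and 110th harmonics within a few megahertz of Wi-Fi channels 1, 6, 11 and 14 respectively.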

Emissions from the HiFiBerry board down to 1 GHz.

Scanning lower frequencies shows that the highest peak is at around 360 MHz, but that is more likely due to the sensitivity of our probe at that frequency than to anything specific about the HiFiBerry board.

Emissions from the HiFiBerry board from DC to 5 GHz.

I'm pretty sure these emissions are indeed connected with the HiFiBerry itself. With the probe over the Raspberry Pi board underneath the HiFiBerry, the spectrum analyzer barely registered any activity. Unfortunately, I forgot to take some measurements with a 2.4 GHz antenna to see how much of this is radiated out into the far field. I'm guessing not much, since it doesn't seem to affect nearby wireless devices.

Related to that, another experiment points towards the fact that this is an EMI issue. If you connect a Wi-Fi dongle via a USB cable to the Raspberry Pi, it will work reliably as long as the dongle is kept away from the HiFiBerry board. However if you put it a centimeter above the HiFiBerry board, it will lose the connection to the access point.

In conclusion, everything I saw seems to suggest that this is a hardware issue. Unfortunately the design of the HiFiBerry board is not open, so it's hard to be more specific or suggest a possible solution. The obvious workaround is to use an external wireless adapter on a USB extension cable, located as far away from the board as feasible.

I should stress though that the measurements we did here are limited by our probe, which was very crude even compared to a proper home-made one. While the frequencies of the peaks are surely correct, the measured amplitudes don't have much meaning. Real EMI testing is done with proper tools in an anechoic chamber, but that is somewhat out of my league at the moment.

Posted by Tomaž | Categories: Analog | Comments »

BPSK on TI CC chips, 2

18.06.2016 13:07

A few days ago I described how a Texas Instruments CC1101 chip can be used to transmit a low bitrate BPSK (binary phase-shift keying) signal using the minimum-shift keying (MSK) modulator block. I promised to share some practical measurements.

The following was recorded using a USRP N200 with a sampling frequency of 1 MHz. Raw I/Q samples from the USRP were then passed to a custom BPSK demodulator written in Python and NumPy.

The transmission was done using a CC1101, which was connected to the USRP with a coaxial cable and an attenuator. The MSK modulator on the CC1101 was set up for a hardware data rate of 100 kbps. 1000 MSK symbols were used to encode one BPSK symbol, giving a BPSK bit rate of 100 bps. The packet sent was 57 bytes long, which resulted in a packet transmission time of around 4.5 seconds. The microcontroller firmware driving the CC1101 kept repeating the same packet with a small delay between transmissions.

Recorded signal power versus time.

This is one packet, shown as I/Q signal power versus time:

Signal power during a single captured packet.

In-phase (real) component of the recorded signal, zoomed in to reveal individual bits:

Zoomed-in in-phase signal component versus time.

Both the CC1101 and USRP were set to the same central frequency (868.2 MHz). Of course, due to tolerances in both devices their local oscillators had slightly different frequencies. This means that the carrier translated to baseband has a low, but non-zero frequency.

You can see 180° phase shifts nicely, as well as some ringing around the transitions. This has to be filtered out before carrier recovery.

After carrier recovery we can plot the carrier frequency over the duration of the transmission. Here it's plotted for all 4 packets that were recorded:

Recovered carrier frequency versus time for 4 packets.

You can see that the frequency shifts by around 20 Hz over the 4.5 seconds. This is around 20% of the 100 Hz channel occupied by the transmission. At a central frequency of 868.2 MHz, a 20 Hz drift is a bit over 0.02 ppm, which is actually not that bad. For comparison, the quartz crystal I used with the CC1101 has a specified ±10 ppm stability over the -20°C to 70°C range (I'm not sure what the USRP uses, but it's probably in the same ballpark). However, I think the short-term drift seen here is not due to the quartz itself but more likely due to changes in load capacitance. Perhaps the oscillator is heating up slightly during transmission. In fact, just waving my arm over the PCB with the CC1101 has a noticeable effect.

Finally, this is the phase after multiplying the signal with the recovered carrier. The only thing left is digital clock recovery, bit slicing and decoding the upper layers of the protocol:

Signal phase after multiplication with recovered carrier.
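
For reference, this is roughly the shape of the carrier recovery step described above, as a minimal NumPy sketch rather than the actual demodulator used here. It relies on the classic squaring trick: squaring a BPSK signal removes the 180° phase shifts and leaves a tone at twice the carrier offset, which is easy to locate with an FFT:

    import numpy as np

    def recover_carrier(iq, fs):
        # Squaring removes the BPSK modulation and leaves a tone at 2x the
        # residual carrier offset; locate it and derotate by half of that.
        spectrum = np.fft.fft(iq ** 2)
        freqs = np.fft.fftfreq(len(iq), 1.0 / fs)
        f2 = freqs[np.argmax(np.abs(spectrum))]
        n = np.arange(len(iq))
        baseband = iq * np.exp(-1j * np.pi * f2 * n / fs)
        return baseband, f2 / 2.0

    # usage sketch (file name is hypothetical):
    # iq = np.fromfile("capture.cf32", dtype=np.complex64)
    # baseband, offset = recover_carrier(iq, 1e6)
    # bits then follow from the sign of baseband.real after clock recovery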

Posted by Tomaž | Categories: Analog | Comments »

Power supply voltage shifts

02.05.2016 20:16

I'm a pretty heavy Munin user. In recent years I've developed a habit of adding a graph or two (or ten) for every service that I maintain. I also tend to monitor as many aspects of computer hardware as I can conveniently write a plugin for. At the latest count, my Munin master tracks a bit over 600 variables (not including a separate instance that monitors 50-odd VESNA sensor nodes deployed by IJS).

Monitoring everything and keeping a long history allows you to notice subtle changes that would otherwise be easy to miss. One of the things that I found interesting is the long-term behavior of power supplies. Pretty much every computer these days comes with software-accessible voltmeters on various power supply rails, so this is easy to do (using lm-sensors, for instance).
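
For the record, a minimal Munin plugin for this kind of graph can be just a few lines of Python reading the kernel's hwmon interface (the same data lm-sensors shows). This is a generic sketch, not my actual plugin:

    #!/usr/bin/env python
    # Graph power supply rail voltages from /sys/class/hwmon (values are in mV).
    import glob, os, sys

    inputs = sorted(glob.glob("/sys/class/hwmon/hwmon*/in*_input"))

    if len(sys.argv) > 1 and sys.argv[1] == "config":
        print("graph_title Power supply voltages")
        print("graph_vlabel V")
        print("graph_category sensors")
        for path in inputs:
            name = os.path.basename(path).replace("_input", "")
            print("%s.label %s" % (name, name))
    else:
        for path in inputs:
            name = os.path.basename(path).replace("_input", "")
            with open(path) as f:
                print("%s.value %.3f" % (name, int(f.read()) / 1000.0))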

Take for example voltage on the +5 V rail of an old 500 watt HKC USP5550 ATX power supply during the last months of its operation:

Voltage on ATX +5 V rail versus time.

From the start, this power supply seemed to have a slight downward trend of around -2 mV/month. Then for some reason the voltage jumped up by around 20 mV, was stable for a while and then dropped sharply and started drifting at around -20 mV/month. At that point I replaced it, fearing that it might soon endanger the machine it was powering.

The slow drift looks like aging of some sort - perhaps a voltage reference or a voltage divider before the error amplifier. Considering that it disappeared after the PSU was changed it seems that it was indeed caused by the PSU and not by a drifting ADC reference on the motherboard or some other artifact in the measurements. Abrupt shifts are harder to explain. As far as I can see, nothing important happened at those times. An application note from Linear mentions that leakage currents due to dirt and residues on the PCB can cause output voltage shifts.

It's also interesting that the +12 V rail on the same power supply showed a somewhat different pattern. The last voltage drop is not apparent there, so whatever caused the drop on the +5 V line seems to have happened after the point where the regulation circuit measures the voltage. The +12 V line isn't separately regulated in this device, so if the regulation circuit were involved, some change should have been apparent on +12 V as well.

Perhaps it was just a bad solder joint somewhere down the line or oxidation building up on connectors. At 10 A, a 50 mV step only corresponds to around 5 mΩ change in resistance.

Voltage on ATX +12 V rail versus time.

These sorts of voltage jumps seem to be quite common though. For instance, here is another one I recently recorded on the 5 V, 2.5 A external power supply that came with a CubieTruck. Again, as far as I can tell, there was no external reason (for instance, the power supply current shows no similar change at that time).

Voltage on CubieTruck power supply versus time.

I have the offending HKC power supply opened up on my bench at the moment and nothing looks obviously out of place except copious amounts of dust. While it would be interesting to know what the exact reasons were behind these voltage changes, I don't think I'll bother looking any deeper into this.

Posted by Tomaž | Categories: Analog | Comments »

Rapitest Socket Tester

29.01.2016 17:47

John Ward has a series of videos on YouTube where he discusses the Rapitest Socket Tester. This is a device that can be used to quickly check whether a UK-style 230 V AC socket has been wired correctly. John explains how a device like that can be dangerously misleading, if you trust its verdict too much. Even if Rapitest shows that the socket passed the test, the terminals in the socket can still be dangerously miswired.

Rapitest Socket Tester (Part 1)

(Click to watch Rapitest Socket Tester (Part 1) video)

I have never seen a device like this in person. They are definitely not common in this part of the world, possibly because the German "Schuko" sockets we use don't define the positions of the live and neutral connections, so there are fewer mistakes to make in wiring them. The most common testing apparatus for household wiring jobs here is the simple mains tester screwdriver (about which John has his own strong opinion, and there I don't completely agree with him).

From the first description of the Rapitest device, I was under the impression that it must contain some non-linear components. Specifically after hearing that it can detect when the line and neutral connections in the socket have been reversed. I was therefore a bit surprised when I saw that the PCB inside the device contains just a few resistors. I was curious how it manages to do its thing with such a simple circuit, so I went slowly through the part of the video that shows the disassembly and sketched out the schematic:

Schematic of the Rapitest Socket Tester

S1 through S3 are the neon indicator lamps that are visible on the front of the device, left to right. L, N and E are line, neutral and earth pins that fit into the corresponding connections in the socket. It was a bit hard to read out the resistor values from the colors on the video, so there might be some mistakes there, but I believe the general idea of the circuit is correct.

It's easy to see from this circuit how the device detects some of the fault conditions that are listed on the front. For instance, if earth is disconnected, then S3 will not light up. In that case, voltage on S3 is provided by the voltage divider R7 : R8+R1+R2 which does not provide a high enough voltage to strike an arc in the lamp (compared to R7 : R8, if earth is correctly connected).

Similarly, if line and neutral are reversed, only the R3 : R5 divider will provide enough voltage and hence only S1 will light up. S3 has no voltage since it is connected across neutral and earth in that case. For S2, the line voltage is first halved across R2 and R1 and then reduced further due to R4 and R6.

Rapitest 13 Amp Socket Tester

Image by John Ward

However, it's hard to intuitively see what would happen in all 64 possible scenarios (each of the 3 terminals can in theory be connected to line, neutral or earth, or left disconnected, giving 4³ = 64 combinations). To see what kind of output you would theoretically get in every possible situation, I threw together a simple Spice simulation of the circuit drawn above. A neon lamp is not trivial to simulate in Spice, so I simplified things a bit. I modeled the lamps as open circuits and only checked whether the voltage across them would reach the breakdown voltage of around 100 V. If the voltage across a lamp was higher, I assumed it would light up.

The table below shows the result of this simulation. The first three columns show the connection of the three socket terminals (NC means the terminal is not connected anywhere). I did not test situations where a terminal would be connected over some non-zero impedance. An X in one of the last three columns means that the corresponding lamp would turn on in that case.

  L N E S1 S2 S3
1 L L L      
2 L L N     X
3 L L E     X
4 L L NC      
5 L N L X    
6 L N N X X X
7 L N E X X X
8 L N NC X X  
9 L E L X    
10 L E N X X X
11 L E E X X X
12 L E NC X X  
13 L NC L      
14 L NC N   X X
15 L NC E   X X
16 L NC NC      
17 N L L X X X
18 N L N X    
19 N L E X    
20 N L NC X X  
21 N N L     X
22 N N N      
23 N N E      
24 N N NC      
25 N E L     X
26 N E N      
27 N E E      
28 N E NC      
29 N NC L   X X
30 N NC N      
31 N NC E      
32 N NC NC      
33 E L L X X X
34 E L N X    
35 E L E X    
36 E L NC X X  
37 E N L     X
38 E N N      
39 E N E      
40 E N NC      
41 E E L     X
42 E E N      
43 E E E      
44 E E NC      
45 E NC L   X X
46 E NC N      
47 E NC E      
48 E NC NC      
49 NC L L      
50 NC L N      
51 NC L E      
52 NC L NC      
53 NC N L      
54 NC N N      
55 NC N E      
56 NC N NC      
57 NC E L      
58 NC E N      
59 NC E E      
60 NC E NC      
61 NC NC L      
62 NC NC N      
63 NC NC E      
64 NC NC NC      

I marked with blue the six combinations (7, 8, 15, 19, 37, 55) that are shown on the front of the device. They show that in those cases my simulation produced the correct result.

Five rows marked with red show situations where the device shows "Correct" signal, but the wiring is not correct. You can immediately see two classes of problems that the device fails to detect:

  • It cannot distinguish between earth and neutral (combinations 6, 10 and 11). This is obvious since both of these are at the same potential (in my simulation and to some approximation in reality as well). However, if a residual-current device is installed, any fault where earth and neutral have been swapped should trip it as soon as any significant load is connected to the socket.
  • It also fails to detect when the potentials have been reversed completely (e.g. line is on both the neutral and earth terminals and either neutral or earth is on the line terminal - combinations 17 and 33). This is the deadly "as wrong as you can get" situation shown by John in the second part of his video.

Under the assumption that you only have access to AC voltages on the three terminals in the socket, both of these fault situations are in fact impossible to distinguish from the correct one with any circuit or device.

It's also worth noting that the Rapitest can give dangerously misleading information in other cases as well. For instance, the all-lights-off "Line not connected" result might give someone the wrong impression that there is no voltage in the circuit. There are plenty of situations where line voltage is present on at least one of the terminals, but all the lamps on the device are off.

Posted by Tomaž | Categories: Analog | Comments »

Minimalist microwave magic

17.12.2015 14:52

The other day at the Institute, Klemen brought me this microwave motion sensor. Apparently, it was left over from an old municipal lighting project where street lights were only to be turned on when something was moving in the vicinity. I don't know what came of this idea, but the sensor itself is quite fascinating.

Microwave motion sensor module.

Bottom side of the motion sensor circuit.

There is no manufacturer name on the device. The bottom side of the PCB says GH1420 and IRP/07. It appears to be very similar to the AgilSense HB100 sensor module, but it's probably a cheap knock-off rather than the original. I haven't come across these yet, but it seems they are somewhat popular to use with Arduino (and as cheap DIY 10 GHz sources for amateur radio enthusiasts).

Microwave motion sensor block diagram.

Image by AgilSense

The application note from AgilSense contains the block diagram above. The device transmits a continuous wave at around 10.5 GHz from the transmit antenna. Any signal that gets reflected back to the receive antenna is mixed with the local oscillator. If the input signal is Doppler-shifted because it reflected off a moving object, you get a low-frequency signal on the output. The application note says that a typical signal is below 100 Hz and in the μV range.
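
Those numbers are consistent with the usual CW Doppler relation f_d = 2·v·f0/c. The speeds below are my own illustrative choices, not values from the application note:

    # Doppler shift of a 10.5 GHz CW radar for a few radial target speeds.
    f0 = 10.5e9   # Hz
    c = 3e8       # m/s
    for v in (0.5, 1.5, 5.0):   # m/s: slow walk, brisk walk, run
        print("v = %.1f m/s -> f_d = %.0f Hz" % (v, 2 * v * f0 / c))

Walking-pace targets give a few tens to around a hundred hertz, which matches the range quoted in the application note.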

Top side of the motion sensor circuit.

After removing the metal can, the circuit appears fantastically minimalist. There are only two semiconductor elements, two passive elements and a dielectric resonator. The PCB substrate feels like plain old FR4. Copper traces are covered with solder mask and have what looks like immersion gold finish - unusual for a microwave RF circuit. If it weren't for the transistor in the high-frequency SMD package and the PCB microstrip wizardry, I wouldn't believe this runs at 10 GHz. They didn't even bother to solder the RF shield can onto the ground plane.

I'm not very familiar with extremely high-frequency design, but this does look more or less like what the block diagram above promises. The X shaped element is most likely a high-frequency NPN transistor used as an oscillator. Base is upper-right, collector is lower-left and the two remaining pins are the emitter (with vias to the ground plane on the other side). The +5 V power lead provides collector voltage through a resistor on the lower-right. The quarter-circle things on the PCB are butterfly stubs in low-pass filters.

The collector of the transistor is capacitively coupled with the base through the white cylindrical resonator. This provides the feedback that drives the oscillator. What's interesting is that there is no bias on the base of the transistor. Either it is working as a class C amplifier or there's something more fancy than a plain bipolar transistor in that SMD package.

The output of the oscillator is coupled to the transmit antenna on the bottom and to the mixer in the center. The little rectangular stubs on the way probably help with impedance matching or some filtering of oscillator harmonics. The trace from the receive antenna comes in on the top of the picture. The mixer is probably some kind of a diode arrangement, although I don't recognize this setup. The low-frequency output from the mixer then exits through another low-pass filter to the lower-left.

Apparently that's all you need for a simple Doppler radar. I was surprised at the extreme minimalism of this device and the apparent huge divide between design effort and manufacturing costs. I'm sure a lot of knowledge and work went into figuring out this PCB, but once that was done, it was probably very simple to copy. I wonder if this specific setup used to be covered by any patents.

Posted by Tomaž | Categories: Analog | Comments »

USB noise on C-Media audio dongles

29.11.2015 20:11

Cheap audio dongles based on C-Media chips are a convenient source of USB-connected audio-frequency DACs, ADCs and even digital I/Os with some additional soldering. Among other things, I've used one for my 433 MHz receiver a while back. Other people have been using them for simple radios and oscilloscopes. Apparently, some models can be easily modified to measure DC voltages as well.

Of course, you get what you pay for and analog performance on these is not exactly spectacular. The most annoying thing is the amount of noise you get on the microphone input. One interesting thing I've noticed though is that the amount of noise depends a lot on the USB bus itself. The same device will work fine with one computer and be unusable on another. USB power rails are notoriously noisy and it's not surprising that these small dongles don't do a very good job of filtering them.

USB hubs and dongles used for noise measurements.

To see just how much the noise level varies with these devices, I took some measurements of digital signal power seen on the microphone input when there was no actual signal on the input.

I tested two dongles: one brand new (dongle A) and an old one from the 433 MHz receiver (dongle B). Dongle B is soldered onto a ground plane and has an extra 10 nF capacitor soldered between +5V and ground. In all cases, the microphone input was left unconnected. I also took two different unpowered USB 2.0 hubs and tested the dongles when connected directly to a host and when connected over one of these two hubs. For USB hosts, I used a CubieTruck and an Intel-based desktop PC.

Noise power versus gain for dongle A

Noise power versus gain for dongle B

Each point on the graphs above shows signal power (x²) averaged over 15 seconds at a 44100 Hz sample rate. 0 dB is the maximum possible digital signal power. Both dongles use a signed 16-bit integer format for samples. I varied the microphone gain of the dongles as exposed by the ALSA API (the "Mic" control in capture settings). The automatic gain control was turned off.
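
The power figure itself is trivial to compute from a raw capture. Here is a sketch of the kind of calculation used (the file name and arecord parameters are examples, not the exact commands used here):

    import numpy as np

    # e.g. a capture made with: arecord -D hw:1 -f S16_LE -r 44100 -d 15 -t raw noise.raw
    samples = np.fromfile("noise.raw", dtype=np.int16).astype(np.float64)
    full_scale = 2.0 ** 15
    power = np.mean((samples / full_scale) ** 2)
    print("signal power: %.1f dB relative to full scale" % (10 * np.log10(power)))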

You can see that dongles connected to the CubieTruck performed much worse than when connected to the PC. It's also interesting that dongles connected over hub A seemed to have a lower noise floor, although I'm not sure that difference is significant. It's likely that USB noise was also affected by unrelated activity in the host the dongles were connected to.

Signal power versus gain for dongle A

For comparison, above is what the signal power looks like versus gain when a 10 mV peak-to-peak sine wave is connected to the dongle's input. You can see that the microphone gain control allows for a bit more than 20 dB of variation in gain.

Time-frequency diagram of noise from dongle B on CubieTruck

What is causing so much noise on CubieTruck? Looking at the spectrum of the noise recorded by one of the dongles, there appear to be two distinct parts: one is on low frequencies at around 100 Hz and below. I would guess this comes from the mains hum and its harmonics. The other part is between 2 kHz and 4 kHz and changes in frequency often. Sometimes it also completely disappears (hence strange dips on the graphs above). I'm guessing this comes from some digital signals in the CubieTruck.

There's really not much you can do about this. The small PCBs don't allow for much additional filtering to be bodged on (the little ceramic capacitor I added certainly didn't help much), and it's not worth doing anything more elaborate, since at that point making your own board from scratch starts to make more sense.

Posted by Tomaž | Categories: Analog | Comments »

CC2500 radios and crystal tolerances

21.11.2015 13:05

While working on Spectrum Wars (during a live demo in fact - this sort of thing always happens during live demos) I found a pair of Texas Instruments CC2500 radios that were not able to communicate with each other. Even more puzzling, radio A was able to talk to radio B, but packets from radio B were not heard by radio A.

After checking for software problems (I have learned from experience to be wary of CC2500 packet drivers) I found out that very short packets sometimes got through, while longer packets were dropped by the packet handling hardware due to a bad CRC. I also noticed that this problem only occurred at low bit rate settings and hence the most narrowband transmissions. This made me suspect that perhaps the transmission from radio B fell outside of radio A's reception bandwidth.

When configuring the CC2500 radio for reception, you must configure the width of the channel filter. Neither Texas Instruments' SmartRF Studio software nor the datasheet is very helpful in choosing the correct setting though. What the datasheet does mention is that 80% of the signal bandwidth must fall within the filter's bandwidth, and it warns that crystal oscillator tolerances must be taken into account. Unfortunately, determining the signal bandwidth for a given modulation and bit rate is left as an exercise for the reader.

Some typical occupied bandwidth values for a few modulations and bit rates are given in the specifications, but of course, Spectrum Wars uses none of those. As an engineer-with-a-deadline I initially went with a guesstimate of 105 kHz filter bandwidth for 50 kbps bit rate and MSK modulation. It appeared to work fine at that time. After I noticed this problem, I continued with the practical approach and, once I had a reproducible test case, simply increased the filter bandwidth until it started to work at 120 kHz.

Just to be sure what the problem was, I later connected both problematic radios to the spectrum analyzer and measured their transmitted signals.

Measured spectra of two nodes transmitting MSK modulated packets.

The signal spectra are shown with green (radio A) and black traces (radio B). The cursors are positioned on their center frequencies. The blue and red vertical lines mark the (theoretical) pass bands of 105 kHz and 120 kHz receive filters on radio A respectively.

For both radios, the set central frequency of transmission was 2401 MHz. Their actual central frequencies as measured by the spectrum analyzer were:

            radio A       radio B
f [MHz]     2401.04490    2401.02171
δf [ppm]    +18.7         +9.0

The crystals used on radios are specified at ±10 ppm tolerance and ±10 ppm temperature stability. The accuracy of the frequency measurement was ±6 ppm according to the spectrum analyzer's documentation (a somewhat generous error margin for this instrument in my experience). Adding this up, it seems that the LO frequencies are within the maximum ±26 ppm range, although radio A looks marginal. I was measuring at room temperature, so I was not expecting to see deviations much beyond ±16 ppm.

On the other hand, it is obvious that a non-negligible part of the signal from radio B was getting clipped by the 105 kHz receive filter on radio A. The situation with the new 120 kHz setting in fact does not look much better. It is still too narrow to contain the whole main lobe of the signal's spectrum. It does appear to work though and I have not tried to measure what percentage of the signal's power falls within the pass band (it's not trivial with short packet transmissions).
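
As a rough cross-check (my own back-of-the-envelope estimate, not a figure from the datasheet), the receive filter has to accommodate the signal's main lobe plus twice the relative LO offset between the two radios. Taking the MSK main lobe as roughly 1.5 times the bit rate:

    bitrate = 50e3              # bps, MSK modulation
    f_rf = 2.401e9              # Hz, set carrier frequency
    ppm_a, ppm_b = 18.7, 9.0    # measured LO offsets of the two radios

    signal_bw = 1.5 * bitrate                        # MSK main lobe, null to null
    lo_offset = abs(ppm_a - ppm_b) * 1e-6 * f_rf     # relative offset between radios
    needed_bw = signal_bw + 2 * lo_offset

    print("main lobe:      %.0f kHz" % (signal_bw / 1e3))   # ~75 kHz
    print("LO offset:      %.0f kHz" % (lo_offset / 1e3))   # ~23 kHz
    print("filter needed: ~%.0f kHz" % (needed_bw / 1e3))   # ~122 kHz

That comes out right at the edge of the 120 kHz setting, which fits the observation that the link works but looks marginal.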

As for why the problem was asymmetrical, I don't know for sure. It's obvious that this radio link is right on the margin of what is acceptable. It might be that some other tolerances came into play. Perhaps minor differences in filter bandwidth or radio sensitivity tipped the scale in favor of packets going in one direction over the other. I've seen weirder things happen with these chips.

Posted by Tomaž | Categories: Analog | Comments »

Pinning for gigasamples

04.10.2015 19:27

I recently stumbled upon this video back from 2013 by Shahriar Shahramian on the Signal Path blog. He demonstrated an Agilent DSA-X series oscilloscope that is capable of 160 Gsamples/s and 62 GHz of analog bandwidth. Or, as Shahriar puts it, an instrument that doubles the value of the building it is located in. It is always somewhat humbling to see how incredibly far the state-of-the-art is removed from the capabilities accessible to a typical hobbyist. Having this kind of an instrument on a bench in fact seems like science-fiction even for the telecommunications lab at our Institute that has no shortage of devices that are valued in multiples of my yearly pay.

I was intrigued by the noise level measurements that are shown in the video. At around 7:00 Shahriar says that the displayed noise level at 62.3 GHz of analog bandwidth is around 1 mV RMS. He comments that this is a very low noise level for this bandwidth. Since I am mostly dealing with radio receivers these days, I'm more used to thinking in terms of noise figures than millivolts RMS.

The input of the scope has an impedance of 50Ω. Converting RMS voltage into noise power gives:

N = \frac{(1 \mathrm{mV})^2}{50 \Omega} = 2.0\cdot 10^{-8} \mathrm{W}
N_{dBm} = -47 \mathrm{dBm}

On the other hand, thermal noise power at this bandwidth is:

N_0 = kTB = 2.5\cdot 10^{-10} \mathrm{W}
N_{0dBm} = -66 \mathrm{dBm}

So, according to these values, the noise power shown by the oscilloscope is 19 dB above thermal noise, which means that the oscilloscope's front-end amplifiers have a noise figure of around 19 dB.
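
The same arithmetic as a short script, for anyone who wants to plug in different numbers:

    import numpy as np

    k = 1.38e-23     # Boltzmann constant [J/K]
    T = 290          # reference temperature [K]
    B = 62.3e9       # analog bandwidth [Hz]
    v_rms = 1e-3     # displayed noise level [V RMS]
    R = 50           # input impedance [ohm]

    n = v_rms ** 2 / R     # measured noise power
    n0 = k * T * B         # thermal noise power in the same bandwidth

    print("measured noise: %.0f dBm" % (10 * np.log10(n / 1e-3)))    # ~ -47 dBm
    print("thermal noise:  %.0f dBm" % (10 * np.log10(n0 / 1e-3)))   # ~ -66 dBm
    print("noise figure:  ~%.0f dB" % (10 * np.log10(n / n0)))       # ~ 19 dB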

This kind of calculation tends to be quite inaccurate though, because it depends on knowing accurately the noise bandwidth and gain. In another part of the video Shahriar shows the noise power spectral density. You can see there that power falls sharply beyond 62 GHz, so I guess the bandwidth is more or less correct here. Another thing that may affect it is that the RMS value measured includes the DC component and hence includes any DC offset the oscilloscope might have. Finally, noise should have been measured using a 50Ω terminator, not open terminals. However Shahriar says that his measurements are comparable to the instrument's specifications so it seems this hasn't affected the measured values too much.

Of course, I have no good reference to which to compare this value of 19 dB. For example, cheap 2.4 GHz integrated receivers I see each day have a noise figure of around 10 dB. A good low-noise amplifier will have it in low single digits. A Rohde & Schwarz FSV signal analyzer that I sometimes have on my desk is somewhere around 12 dB if I remember correctly. These are all at least one order of magnitude removed from having 60 GHz of bandwidth.

I guess having a low noise figure is not exactly a priority for an oscilloscope anyway. It's not that important when measuring large signals and I'm sure nobody is connecting it to an antenna and using it as a radio receiver. Even calling Shahriar's demonstration the world's fastest software-defined radio is somewhat silly. While the capabilities of this instrument are impressive, there is no way 160 Gsamples/s could be continuously streamed to a CPU and processed in real-time, which is the basic requirement for an SDR.

Posted by Tomaž | Categories: Analog | Comments »

Correcting the GNU Radio power measurements

18.04.2015 11:20

I was previously writing about measuring the noise figure of a rtl-sdr dongle. One of the weird things I noticed about my results was the fact that I arrived at two different values for the noise figure when using two methods that should in theory agree with each other. In the Y-factor method I used a Gaussian noise source as a reference, while in the twice-power method I used an unmodulated sine wave signal. Investigating this difference led me to discover that power measured using the GNU Radio power detector I made was inexplicably different for modulated signals versus unmodulated ones. This was in spite of the fact that the level indicator on the signal generator and a control measurement using an R&S FSV signal analyzer agreed that the power of the signal did not change after switching on the modulation.

This was very puzzling and I did a lot of different tests to find the reason, none of which gave a good explanation of what was going on. Finally, Lou on the GNU Radio Discuss mailing list provided a tip that led me to solve this riddle. It turned out that, due to what looks like an unlucky coincidence, two errors in my measurements almost exactly canceled out, obscuring the real cause and making this look like an impossible situation.

R&S SMBV and FSV instruments connected with a coaxial cable.

What Lou's reply brought to my attention was the possibility that the power measurement function on the signal analyzer might not be showing the correct value. He pointed out that the power measurements might be averaged on the logarithmic (dBm) scale. This would mean that the power of signals with a non-constant envelope (like the Gaussian noise signal I was using) would be underestimated compared to unmodulated signals. A correct power measurement using averaging on the linear (mW) scale, like the one implemented in my GNU Radio power meter, would on the other hand lack this negative systematic error. This would explain why the FSV showed a different value than all the other devices.

To perform control measurements I was using a specialized power measurement mode on the instrument. In this mode, you tell the instrument the bandwidth of a channel you want to measure total signal power in and it automatically integrates the power spectral density over the specified range. It also automatically switches to a RMS detector, sets the attenuator and resolution bandwidth to optimal settings and so on. The manual notes that while this cannot compete with a true power meter, the relative measurement uncertainty should be under 0.5 dB.

By default, this mode doesn't do any averaging, so for a noise signal the power reading jumps around quite a bit. I turned on trace averaging to get a more stable reading, without thinking that this might do the average in log scale. After reading Lou's reply, I did some poking around the menus on the instrument and found an "Average Mode" setting that I didn't notice before. Setting it to "Power" instead of "Log" indeed made the FSV power measurement identical to what I was seeing on the USRP, rtl-sdr and my SNE-ESHTER device.

Excerpt from the R&S FSV manual about averaging mode.

So, a part of the mystery has apparently been solved. I guess the lesson here is that it pays to carefully read the relevant parts of the (924-page) manual. To be honest, the chapter on power measurements does contain a pointer to the section about averaging mode, and the -2.5 dB difference mentioned there would likely have rung a bell.


The question still remained why the level indicator on the R&S SMBV signal generator was wrong. Assuming FSV and other devices now worked correctly, the generator wrongly increased signal level when modulation was switched on. Once I knew where to look though, the reason for this was relatively easy to find. It traces back to a software bug I made a year ago when I first started playing with the arbitrary waveform generator.

When programming a waveform into the instrument over USB, you have to specify two values in addition to the array of I/Q samples: an RMS offset and a peak offset. They are supposed to tell the instrument the ratio between the signal's RMS value and the full range of the DAC, and the ratio between the signal's peak and the full range of the DAC. I still don't know exactly why the instrument needs you to calculate these - they are fully defined by the I/Q samples you provide and the instrument could easily calculate them itself. However, it turns out that if you provide a wrong value, the signal level will be wrong - in my case by around 2.5 dB.

The correct way to calculate them is explained in an application note:

rms = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1}(I_n^2+Q_n^2)}
peak = \sqrt{\max_{n=0}^{N-1}(I_n^2+Q_n^2)}
full = 2^{15}-1
rms\_offs = 20\cdot\log\frac{full}{rms} \qquad peak\_offs = 20\cdot\log\frac{full}{peak}
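
In NumPy, the corrected calculation looks roughly like this (a sketch of the formulas above, not the exact code from my script):

    import numpy as np

    def smbv_offsets(i, q):
        # i, q: integer sample arrays as uploaded to the generator.
        amplitude2 = i.astype(np.float64) ** 2 + q.astype(np.float64) ** 2
        rms = np.sqrt(np.mean(amplitude2))
        peak = np.sqrt(amplitude2.max())
        full = 2 ** 15 - 1        # full scale of the 16-bit signed DAC range
        return 20 * np.log10(full / rms), 20 * np.log10(full / peak)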

It seems I initially assumed that the full range for the I/Q baseband was defined by the full range of individual DACs (the square outline on the complex plane below). In reality, it is defined by the amplitude of the complex vector (the shaded circle), which in hindsight makes more sense.

Full scale of the SMBV arbitrary waveform generator.

After correcting the calculation in my Python script, the FSV power measurements and the generator's level indicator match again. This is what the spectrum analyzer now shows for an unmodulated sine wave with -95 dBm level set on the generator:

Fixed CW signal power measurement with R&S FSV.

And this is what it shows for a 100 kHz band of Gaussian noise, again with -95 dBm level.

Fixed noise signal power measurement with R&S FSV.

What is the moral of this story? I guess don't blindly trust big expensive instruments. Before someone else pointed it out I didn't even consider that the issue might be with my control measurements. I was only looking at the GNU Radio, the "cheap" SDR hardware and questioning my basic understanding of signal theory. It's not that the two instruments were not performing up to their specifications - I was merely using them in a wrong way. Considering their complexity (both have ~1000 page manuals, admittedly none of which I have read cover-to-cover) that does not seem such a remote possibility anymore.

The other lesson was that doing silly lab measurements after hours can have benefits. If I had not been measuring the rtl-sdr dongle out of curiosity, I wouldn't have discovered the bug in my scripts. This discovery in fact invalidates some results that were on their way to being published in a scientific journal.

Posted by Tomaž | Categories: Analog | Comments »

Signal power in GNU Radio

11.04.2015 18:28

In my recent attempts to measure the noise figure of a rtl-sdr dongle, I've noticed that the results of the twice-power method and the Y-factor method differ significantly. In an attempt to find out the reason for this difference, I did some further measurements with different kinds of signals. I found out that the power detector I implemented in GNU Radio behaves oddly. It appears that the indicated signal power depends on the signal's crest factor, which should not be the case.

Update: As my follow-up post explains, I was using a wrong setup on both the spectrum analyzer and the signal generator.

First of all, I would like to clarify that what I'm doing here is comparing the indicated power (in relative units) for two signals of identical power. I'm not trying to determine the absolute power (say in milliwatts). As the GNU Radio FAQ succinctly explains, the latter is tricky with typical SDR equipment.

The setup for these experiments is similar to what I described in my post about noise figure: I'm using an Ezcap DVB-T dongle tuned to 700.5 MHz. I'm measuring the power in a 200 kHz band that is offset by -500 kHz from the center frequency. As far as I can see from the FFT, this band is free from spurs and other artifacts of the receiver itself. Signal power is measured by multiplying the signal with a complex conjugate of itself and then taking a moving average of 50000 samples.

Updated rtl-sdr power detector flow graph.
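
Outside of GNU Radio, the same detector is only a couple of lines of NumPy, which is also what I compare the flow graph against later on. A sketch, assuming iq holds the complex samples from the receiver:

    import numpy as np

    def average_power_db(iq, navg=50000):
        # |x|^2 for every sample, then a moving average over navg samples.
        inst_power = (iq * np.conj(iq)).real
        avg = np.convolve(inst_power, np.ones(navg) / navg, mode="valid")
        return 10 * np.log10(avg)     # relative (uncalibrated) dB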

I'm using a Rohde & Schwarz SMBV vector signal generator that is capable of producing an arbitrary waveform with an accurate total signal power. As a control, I've also set up an FSV spectrum analyzer to measure total signal power in the same 200 kHz band as the rtl-sdr setup.

For example, this is what a spectrum analyzer shows for an unmodulated sine wave with -95 dBm level set on the generator:

R&S FSV power measurement for CW.

And this is what it shows for a 100 kHz band of Gaussian noise, again with -95 dBm level:

R&S FSV power measurement for Gaussian noise.

The measured power in the 200 kHz channel in both cases agrees well with the power setting on the generator. The difference probably comes from losses in the cable (I used a 60 cm low-loss LMR-195 coax that came with the USRP), connectors, errors in the calibration of both instruments and the fact that the FSV adds its own noise power to the signal. The important thing, however, is that the power read-out changes by only 0.19 dB when switching on the modulation. I think this is well within the acceptable measurement error range.

Repeating the same two measurements using the rtl-sdr dongle and the GNU Radio power detector:

rtl-sdr signal power measurements for CW and noise.

Note that now the modulated signal shows much higher power than the unmodulated one. The difference is 2.53 dB, which cannot be attributed to random error.

In fact, this effect is repeatable and not specific to the rtl-sdr dongle. I've repeated the same measurements using a USRP N200 device with an SBX daughterboard. I've also used a number of different signals, from band-limited Gaussian noise and multiple CW signals to an amplitude-modulated carrier.

The results are summarized in the table below. To make things clearer, I'm showing the indicated power relative to the CW. I've used -95 dBm mean power for rtl-sdr and -100 dBm for USRP, to keep the signal to noise ratio approximately the same on both devices.

Signal                        Ppeak/Pmean [dB]   P rtl-sdr [dB]   P USRP [dB]
CW                                  0.00               0.00            0.00
2xCW, fd=60 kHz                     3.02               0.02            0.00
2xCW, fd=100 kHz                    3.02               0.04            0.04
3xCW, fd=60 kHz                     3.68              -0.03            0.00
100% AM, fm=60 kHz                  6.02               1.20            1.25
Gaussian noise, BW=100 kHz         10.50               2.55            2.66

As you can see, both devices show an offset for signals that have a significant difference between peak and average power. The offsets are also very similar between the devices, which suggests that this effect is not caused by the device itself.

Any explanation based on the physical receiver design that I can imagine would result in a lower gain for signals with a high peak-to-mean power ratio - exactly the opposite of what I've seen.

It doesn't seem to be caused by some smart logic in the tuner adjusting gain for different signals. The difference in gain seems to remain down to very low signal powers. I think it is unlikely that any such optimization would work down to very low signal-to-noise levels. This also excludes any receiver non-linearity as the cause as far as I can tell.

GRC power detector response for CW and noise signals.

If I were using an analog power detector, this kind of effect would be typical of a detector that does not measure signal power directly (like a diode detector, which has an exponential characteristic instead of a quadratic one). However, I'm calculating signal power numerically, and you can't get a more exact quadratic function than x².

I've tested a few theories regarding numerical errors. In fact, the results do differ somewhat between the moving average and the decimating low-pass filter. They also differ between using the conjugate and multiply blocks and the RMS block. However, the differences are insignificant as far as I can see and don't explain the measurements. I've chosen the flow graph setup shown above because it produces figures that are closest to an identical calculation done in NumPy. Numerical errors also don't explain why the same flow graph produces valid results for a receiver simulated with signal and noise source blocks.

So far I'm out of ideas what could be causing this.

Posted by Tomaž | Categories: Analog | Comments »