HiFiBerry and Wi-Fi interference

01.12.2016 11:43

HiFiBerry is a series of audio output cards designed to sit on the Raspberry Pi 40-pin GPIO connector. I've recently bought the DAC+ pro version for my father to use with a Raspberry Pi 3. He is making a custom box to use as an Internet radio and music player. I picked HiFiBerry because it seemed the simplest, with fewest things that could go wrong (the Cirrus Logic board for instance has many other features in addition to an audio output). It's also well supported out-of-the-box in various Raspberry Pi Linux distributions.

Unfortunately, my father soon found out that the internal wireless LAN adapter on the Raspberry Pi 3 stopped working when HiFiBerry was plugged in. Apparently other people have noticed that as well, as there is an open ticket about it at the Raspberry Pi fork of the Linux kernel.

Several possible causes were discussed on the thread on GitHub, from hardware issues to kernel driver bugs. From those, I found electromagnetic interference the most likely explanation - reports say that the issue isn't always there and depends on the DAC sampling rate and the Wi-Fi channel and signal strength. I thought I might help resolve the issue by offering to make a few measurements with a spectrum analyzer (also, when you have RF equipment on the desk, everything looks like EMI).

HiFiBerry board with a near-field probe over the resonators.

I didn't have any near-field probes handy, so we used an ad-hoc probe made from a small wire loop on the end of a coaxial cable. We attempted to tune the loop using a trimmer capacitor to get better sensitivity around 2.4 GHz, but the capacitor didn't have any noticeable effect. We swept this loop around the surface of the HiFiBerry board as well as the Raspberry Pi 3 board underneath.

During these tests, the on-board wireless LAN and Bluetooth interfaces of the Raspberry Pi were disabled by blacklisting the brcmfmac, brcmutil, btbcm and hci_uart kernel modules in /etc/modprobe.d. Apart from this, the Raspberry Pi was booted from an unmodified Volumio SD card image. Unfortunately, we don't know what kind of ALSA device settings the Volumio music player used.
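
For reference, the blacklist file amounts to a few lines like these (the file name itself is arbitrary):

    # /etc/modprobe.d/disable-wifi-bt.conf (file name is arbitrary)
    # prevent the on-board WLAN and Bluetooth drivers from loading at boot
    blacklist brcmfmac
    blacklist brcmutil
    blacklist btbcm
    blacklist hci_uart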

What we noticed is that the HiFiBerry board seemed to radiate a lot of RF energy all over the spectrum. The most worrying are the spikes spaced approximately 22.6 MHz apart in the 2.4 GHz band that is used by IEEE 802.11 wireless LAN. Note that the peaks on the screenshot below almost perfectly match the center frequencies of channels 1 (2.412 GHz) and 6 (2.437 GHz). The peaks continue to higher frequencies beyond the right edge of the screen and the next two match channels 11 and 14. This approximately matches the report from Hyperjett about which channels seem to be most affected.

Emissions from the HiFiBerry board in the 2.4 GHz band.

The spikes were highest when the probe was centered around the crystal resonators. This position is shown on the photograph above. This suggests that the oscillators on the HiFiBerry are the source of this interference. Phil Elwell mentions some possible I2S bus harmonics, but the frequencies we saw don't seem to match those.
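
As a back-of-the-envelope check, harmonics of a 22.5792 MHz clock (512 × 44.1 kHz, a common master clock frequency for audio DACs - an assumption on my part, not something we verified on the board) land within a few MHz of the affected channel centers:

    # Rough check: where do harmonics of an assumed 22.5792 MHz audio master
    # clock (512 x 44.1 kHz) fall relative to 2.4 GHz Wi-Fi channel centers?
    fclk = 22.5792e6
    channels = {1: 2.412e9, 6: 2.437e9, 11: 2.462e9, 14: 2.484e9}

    for ch in sorted(channels):
        fc = channels[ch]
        n = round(fc / fclk)
        print("channel %2d (%4.0f MHz): harmonic %d at %6.1f MHz, off by %+.1f MHz"
              % (ch, fc / 1e6, n, n * fclk / 1e6, (n * fclk - fc) / 1e6))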

Emissions from the HiFiBerry board down to 1 GHz.

Scanning lower frequencies shows that the highest peak is around 360 MHz, but that is likely because of the sensitivity of our probe and not due to something related to the HiFiBerry board.

Emissions from the HiFiBerry board from DC to 5 GHz.

I'm pretty sure these emissions are indeed connected with the HiFiBerry itself. With the probe on the Raspberry Pi board underneath the HiFiBerry, the spectrum analyzer barely registered any activity. Unfortunately, I forgot to take measurements with a 2.4 GHz antenna to see how much of this is radiated out into the far field. I'm guessing not much, since it doesn't seem to affect nearby wireless devices.

Related to that, another experiment points towards the fact that this is an EMI issue. If you connect a Wi-Fi dongle via a USB cable to the Raspberry Pi, it will work reliably as long as the dongle is kept away from the HiFiBerry board. However if you put it a centimeter above the HiFiBerry board, it will lose the connection to the access point.

In conclusion, everything I saw seems to suggest that this is a hardware issue. Unfortunately the design of the HiFiBerry board is not open, so it's hard to be more specific or suggest a possible solution. The obvious workaround is to use an external wireless adapter on a USB extension cable, located as far away from the board as feasible.

I should stress though that the measurements we did here are limited by our probe, which was very crude, even compared to a proper home-made one. While the frequencies of the peaks are surely correct, the measured amplitudes don't have much meaning. Real EMI testing is done with proper tools in an anechoic chamber, but that is somewhat out of my league at the moment.

Posted by Tomaž | Categories: Analog | Comments »

BPSK on TI CC chips, 2

18.06.2016 13:07

A few days ago I described how a Texas Instruments CC1101 chip can be used to transmit a low bitrate BPSK (binary phase-shift keying) signal using the minimum-shift keying (MSK) modulator block. I promised to share some practical measurements.

The following was recorded using a USRP N200 with a sampling frequency of 1 MHz. Raw I/Q samples from the USRP were then passed to a custom BPSK demodulator written in Python and NumPy.

The transmission was done using a CC1101, which was connected to the USRP with a coaxial cable and an attenuator. The MSK modulator on the CC1101 was set up for a hardware data rate of 100 kbps. 1000 MSK symbols were used to encode one BPSK symbol, giving a BPSK bit rate of 100 bps. The packet sent was 57 bytes long, which resulted in a packet transmission time of around 4.5 seconds. The microcontroller firmware driving the CC1101 kept repeating the same packet with a small delay between transmissions.
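
The plots below can be reproduced with a few lines of NumPy along these lines (a simplified sketch - it assumes the recording was saved as raw complex64 I/Q samples, the usual GNU Radio file sink format, under a hypothetical file name):

    import numpy as np
    import matplotlib.pyplot as plt

    fs = 1e6  # USRP sampling frequency [Hz]

    # assumption: raw complex64 I/Q samples, e.g. as written by a GNU Radio file sink
    iq = np.fromfile("capture.iq", dtype=np.complex64)

    power = np.abs(iq) ** 2                    # instantaneous signal power
    win = 1000                                 # smooth over 1 ms for plotting
    power_avg = np.convolve(power, np.ones(win) / win, mode="same")

    t = np.arange(len(iq)) / fs
    plt.plot(t, 10 * np.log10(power_avg))
    plt.xlabel("time [s]")
    plt.ylabel("power [dB]")
    plt.show()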

Recorded signal power versus time.

This is one packet, shown as I/Q signal power versus time:

Signal power during a single captured packet.

In-phase (real) component of the recorded signal, zoomed in to reveal individual bits:

Zoomed-in in-phase signal component versus time.

Both the CC1101 and the USRP were set to the same central frequency (868.2 MHz). Of course, due to tolerances in both devices, their local oscillators had slightly different frequencies. This means that the carrier translated to baseband has a low, but non-zero, frequency.

You can see 180° phase shifts nicely, as well as some ringing around the transitions. This has to be filtered out before carrier recovery.
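
One common way to estimate the residual carrier is the squaring method: squaring the complex signal removes the 180° phase flips and leaves a spectral line at twice the carrier offset. A minimal sketch of that idea (shown only as an illustration, not necessarily the exact approach used here):

    import numpy as np

    def estimate_carrier_offset(iq, fs):
        # Squaring a BPSK signal removes the 180 degree phase flips and
        # produces a spectral line at twice the residual carrier frequency.
        sq = iq ** 2
        spectrum = np.abs(np.fft.fft(sq))
        freqs = np.fft.fftfreq(len(sq), d=1.0 / fs)
        return freqs[np.argmax(spectrum)] / 2.0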

After carrier recovery we can plot the carrier frequency during the time of transmission. Here it's plotted for all 4 packets that were recorded:

Recovered carrier frequency versus time for 4 packets.

You can see that the frequency shifts by around 20 Hz over the 4.5 seconds of transmission. This is around 20% of the 100 Hz channel occupied by the transmission. At the 868.2 MHz central frequency, a 20 Hz drift is a bit over 0.02 ppm, which is actually not that bad. For comparison, the quartz crystal I used with the CC1101 has a specified ±10 ppm stability over the -20°C to 70°C range (I'm not sure what the USRP uses, but it's probably in the same ballpark). However, I think the short-term drift seen here is not due to the quartz itself but more likely due to changes in load capacitance. Perhaps the oscillator is heating up slightly during transmission. In fact, just waving my arm over the PCB with the CC1101 has a noticeable effect.

Finally, this is the phase after multiplying the signal with the recovered carrier. The only thing left is digital clock recovery, bit slicing and decoding the upper layers of the protocol:

Signal phase after multiplication with recovered carrier.

Posted by Tomaž | Categories: Analog | Comments »

Power supply voltage shifts

02.05.2016 20:16

I'm a pretty heavy Munin user. In recent years I've developed a habit of adding a graph or two (or ten) for every service that I maintain. I also tend to monitor as many aspects of computer hardware as I can conveniently write a plugin for. At the latest count, my Munin master tracks a bit over 600 variables (not including a separate instance that monitors 50-odd VESNA sensor nodes deployed by IJS).

Monitoring everything and keeping a long history allows you to notice subtle changes that would otherwise be easy to miss. One of the things that I found interesting is the long-term behavior of power supplies. Pretty much every computer these days comes with software-accessible voltmeters on various power supply rails, so this is easy to do (using lm-sensors, for instance).
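
A Munin plugin for supply voltages can be as simple as parsing the lm-sensors output. The sketch below is only an illustration - which in0, in1, ... fields exist and which rails they correspond to depends on the particular monitoring chip:

    #!/usr/bin/env python3
    # Minimal Munin plugin sketch: graph power supply voltages read via lm-sensors.
    import subprocess
    import sys

    def read_voltages():
        out = subprocess.check_output(["sensors", "-u"]).decode()
        volts = {}
        for line in out.splitlines():
            line = line.strip()
            if line.startswith("in") and "_input:" in line:
                name, value = line.split(":")
                volts[name.replace("_input", "")] = float(value)
        return volts

    if len(sys.argv) > 1 and sys.argv[1] == "config":
        print("graph_title Power supply voltages")
        print("graph_vlabel V")
        for name in read_voltages():
            print("%s.label %s" % (name, name))
    else:
        for name, value in read_voltages().items():
            print("%s.value %f" % (name, value))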

Take for example voltage on the +5 V rail of an old 500 watt HKC USP5550 ATX power supply during the last months of its operation:

Voltage on ATX +5 V rail versus time.

From the start, this power supply seemed to have a slight downward trend of around -2 mV/month. Then for some reason the voltage jumped up by around 20 mV, was stable for a while, then sharply dropped and started drifting at around -20 mV/month. At that point I replaced it, fearing that it might soon endanger the machine it was powering.

The slow drift looks like aging of some sort - perhaps a voltage reference or a voltage divider before the error amplifier. Considering that it disappeared after the PSU was changed it seems that it was indeed caused by the PSU and not by a drifting ADC reference on the motherboard or some other artifact in the measurements. Abrupt shifts are harder to explain. As far as I can see, nothing important happened at those times. An application note from Linear mentions that leakage currents due to dirt and residues on the PCB can cause output voltage shifts.

It's also interesting that the +12 V rail on the same power supply showed a somewhat different pattern. The last voltage drop is not apparent there, so whatever caused the drop on the +5 V line seems to have happened after the point where the regulation circuit measures the voltage. The +12 V line isn't separately regulated in this device, so if the regulation circuit were involved, some change should have been apparent on +12 V as well.

Perhaps it was just a bad solder joint somewhere down the line or oxidation building up on connectors. At 10 A, a 50 mV step only corresponds to around 5 mΩ change in resistance.

Voltage on ATX +12 V rail versus time.

These sorts of voltage jumps seem to be quite common though. For instance, here is another one I recently recorded on a 5 V, 2.5 A external power supply that came with a CubieTruck. Again, as far as I can tell, there were no external reasons (for instance, the power supply current shows no similar change at that time).

Voltage on CubieTruck power supply versus time.

I have the offending HKC power supply opened up on my bench at the moment and nothing looks obviously out of place except copious amounts of dust. While it would be interesting to know what the exact reasons were behind these voltage changes, I don't think I'll bother looking any deeper into this.

Posted by Tomaž | Categories: Analog | Comments »

Rapitest Socket Tester

29.01.2016 17:47

John Ward has a series of videos on YouTube where he discusses the Rapitest Socket Tester. This is a device that can be used to quickly check whether a UK-style 230 V AC socket has been wired correctly. John explains how a device like that can be dangerously misleading, if you trust its verdict too much. Even if Rapitest shows that the socket passed the test, the terminals in the socket can still be dangerously miswired.

Rapitest Socket Tester (Part 1) (video by John Ward)

I have never seen a device like this in person. They are definitely not common in this part of the world, possibly because the German "Schuko" sockets we use don't define the positions of the live and neutral connections and hence there are fewer mistakes to make in wiring them. The most common testing apparatus for household wiring jobs here is the simple mains tester screwdriver (about which John has his own strong opinion, and I don't completely agree with him there).

From the first description of the Rapitest device, I was under the impression that it must contain some non-linear components. Specifically after hearing that it can detect when the line and neutral connections in the socket have been reversed. I was therefore a bit surprised when I saw that the PCB inside the device contains just a few resistors. I was curious how it manages to do its thing with such a simple circuit, so I went slowly through the part of the video that shows the disassembly and sketched out the schematic:

Schematic of the Rapitest Socket Tester

S1 through S3 are the neon indicator lamps that are visible on the front of the device, left to right. L, N and E are line, neutral and earth pins that fit into the corresponding connections in the socket. It was a bit hard to read out the resistor values from the colors on the video, so there might be some mistakes there, but I believe the general idea of the circuit is correct.

It's easy to see from this circuit how the device detects some of the fault conditions that are listed on the front. For instance, if earth is disconnected, then S3 will not light up. In that case, voltage on S3 is provided by the voltage divider R7 : R8+R1+R2 which does not provide a high enough voltage to strike an arc in the lamp (compared to R7 : R8, if earth is correctly connected).

Similarly, if line and neutral are reversed, only the R3 : R5 divider will provide enough voltage and hence only S1 will light up. S3 has no voltage since it is connected across neutral and earth in that case. For S2, the line voltage is first halved across R2 and R1 and then reduced further due to R4 and R6.

Rapitest 13 Amp Socket Tester

Image by John Ward

However, it's hard to intuitively see what would happen in all 64 possible scenarios (each of the 3 terminals can in theory be connected to either line, neutral, earth or left disconnected, hence giving 4³ = 64 combinations). To see what kind of output you would theoretically get in every possible situation, I threw together a simple Spice simulation of the circuit drawn above. A neon lamp is not trivial to simulate in Spice, so I simplified things a bit. I modeled lamps as open-circuits and only checked whether the voltage on them would reach the breakdown voltage of around 100 V. If the voltage across a lamp was higher, I assumed it would light up.
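
The bookkeeping part of this exercise is straightforward - enumerate the terminal assignments and apply the breakdown-voltage threshold to the solved lamp voltages. The sketch below only shows that skeleton; the actual network solution came from the Spice simulation:

    from itertools import product

    TERMINALS = ("L", "N", "E", "NC")   # possible connection of each socket pin
    BREAKDOWN = 100.0                   # approximate neon striking voltage [V]

    # each of the three terminals can be L, N, E or left open: 4**3 = 64 cases
    combinations = list(product(TERMINALS, repeat=3))
    assert len(combinations) == 64

    def lamps_lit(v_s1, v_s2, v_s3):
        # Given the solved RMS voltages across the lamps (from the Spice run,
        # with lamps modeled as open circuits), decide which ones strike.
        return ["X" if v > BREAKDOWN else "" for v in (v_s1, v_s2, v_s3)]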

The table below shows the result of this simulation. The first three columns show the connection of the three socket terminals (NC means the terminal is not connected anywhere). I did not test situations where a terminal would be connected over some non-zero impedance. An X in one of the last three columns means that the corresponding lamp would turn on in that case.

  L N E S1 S2 S3
1 L L L      
2 L L N     X
3 L L E     X
4 L L NC      
5 L N L X    
6 L N N X X X
7 L N E X X X
8 L N NC X X  
9 L E L X    
10 L E N X X X
11 L E E X X X
12 L E NC X X  
13 L NC L      
14 L NC N   X X
15 L NC E   X X
16 L NC NC      
17 N L L X X X
18 N L N X    
19 N L E X    
20 N L NC X X  
21 N N L     X
22 N N N      
23 N N E      
24 N N NC      
25 N E L     X
26 N E N      
27 N E E      
28 N E NC      
29 N NC L   X X
30 N NC N      
31 N NC E      
32 N NC NC      
33 E L L X X X
34 E L N X    
35 E L E X    
36 E L NC X X  
37 E N L     X
38 E N N      
39 E N E      
40 E N NC      
41 E E L     X
42 E E N      
43 E E E      
44 E E NC      
45 E NC L   X X
46 E NC N      
47 E NC E      
48 E NC NC      
49 NC L L      
50 NC L N      
51 NC L E      
52 NC L NC      
53 NC N L      
54 NC N N      
55 NC N E      
56 NC N NC      
57 NC E L      
58 NC E N      
59 NC E E      
60 NC E NC      
61 NC NC L      
62 NC NC N      
63 NC NC E      
64 NC NC NC      

I marked with blue the six combinations (7, 8, 15, 19, 37, 55) that are shown on the front of the device. They show that in those cases my simulation produced the correct result.

Five rows marked with red show situations where the device shows "Correct" signal, but the wiring is not correct. You can immediately see two classes of problems that the device fails to detect:

  • It cannot distinguish between earth and neutral (combinations 6, 10 and 11). This is obvious since both of these are on the same potential (in my simulation and to some approximation in reality as well). However, if a residual-current device is installed, any fault where earth and neutral have been swapped should trip it as soon as any significant load is connected to the socket.
  • It also fails to detect when potentials have been reversed completely (e.g. line is on both the neutral and earth terminals and either neutral or earth is on the line terminal - combinations 17 and 33). This is the deadly "as wrong as you can get" situation shown by John in the second part of his video.

Under the assumption that you only have access to AC voltages on the three terminals in the socket, both of these fault situations are in fact impossible to distinguish from the correct one with any circuit or device.

It's also worth noting that the Rapitest can give dangerously misleading information in other cases as well. For instance, the all-lights-off "Line not connected" result might give someone the wrong impression that there is no voltage in the circuit. There are plenty of situations where line voltage is present on at least one of the terminals, but all lamps on the device are off.

Posted by Tomaž | Categories: Analog | Comments »

Minimalist microwave magic

17.12.2015 14:52

The other day at the Institute, Klemen brought me this microwave motion sensor. Apparently, it was left over from an old municipal lighting project where street lights were to be only turned on when something was moving in the vicinity. I don't know what came out of this idea, but the sensor itself is quite fascinating.

Microwave motion sensor module.

Bottom side of the motion sensor circuit.

There is no manufacturer name on the device. The bottom side of the PCB says GH1420 and IRP/07. It appears to be very similar to the AgilSense HB100 sensor module, but it's probably a cheap knock-off rather than the original. I haven't come across these yet, but it seems they are somewhat popular to use with Arduino (and as cheap DIY 10 GHz sources for amateur radio enthusiasts).

Microwave motion sensor block diagram.

Image by AgilSense

The application note from AgilSense contains the block diagram above. The device transmits a continuous wave at around 10.5 GHz on the transmit antenna. Any signal that gets reflected back to the receive antenna is mixed with the local oscillator. If the input signal is Doppler-shifted because it reflected off a moving object, you get a low-frequency signal on the output. The application note says that a typical signal is below 100 Hz and in the μV range.
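
For a sense of scale, the Doppler shift of a reflection off an object moving with radial velocity v is

f_d = \frac{2 v f_0}{c}

which at f_0 = 10.5 GHz works out to roughly 70 Hz for every 1 m/s of movement, consistent with the sub-100 Hz signals mentioned in the application note.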

Top side of the motion sensor circuit.

After removing the metal can, the circuit appears fantastically minimalist. There are only two semiconductor elements, two passive elements and a dielectric resonator. The PCB substrate feels like plain old FR4. Copper traces are covered with solder mask and have what looks like immersion gold finish - unusual for a microwave RF circuit. If it weren't for the transistor in the high-frequency SMD package and the PCB microstrip wizardry, I wouldn't believe this runs at 10 GHz. They didn't even bother to solder the RF shield can onto the ground plane.

I'm not very familiar with extremely high-frequency design, but this does look more or less like what the block diagram above promises. The X shaped element is most likely a high-frequency NPN transistor used as an oscillator. Base is upper-right, collector is lower-left and the two remaining pins are the emitter (with vias to the ground plane on the other side). The +5 V power lead provides collector voltage through a resistor on the lower-right. The quarter-circle things on the PCB are butterfly stubs in low-pass filters.

The collector of the transistor is capacitively coupled with the base through the white cylindrical resonator. This provides the feedback that drives the oscillator. What's interesting is that there is no bias on the base of the transistor. Either it is working as a class C amplifier or there's something more fancy than a plain bipolar transistor in that SMD package.

The output of the oscillator is coupled to the transmit antenna on the bottom and to the mixer in the center. The little rectangular stubs on the way probably help with impedance matching or some filtering of oscillator harmonics. The trace from the receive antenna comes in on the top of the picture. The mixer is probably some kind of a diode arrangement, although I don't recognize this setup. The low-frequency output from the mixer then exits through another low-pass filter to the lower-left.

Apparently that's all you need for a simple Doppler radar. I was surprised at the extreme minimalism of this device and the apparent huge divide between design effort and manufacturing costs. I'm sure a lot of knowledge and work went into figuring out this PCB, but once that was done, it was probably very simple to copy. I wonder if this specific setup used to be covered by any patents.

Posted by Tomaž | Categories: Analog | Comments »

USB noise on C-Media audio dongles

29.11.2015 20:11

Cheap audio dongles based on C-Media chips are a convenient source of USB-connected audio-frequency DACs, ADCs and even digital I/Os with some additional soldering. Among other things, I've used one for my 433 MHz receiver a while back. Other people have been using them for simple radios and oscilloscopes. Apparently, some models can be easily modified to measure DC voltages as well.

Of course, you get what you pay for and analog performance on these is not exactly spectacular. The most annoying thing is the amount of noise you get on the microphone input. One interesting thing I've noticed though is that the amount of noise depends a lot on the USB bus itself. The same device will work fine with one computer and be unusable on another. USB power rails are notoriously noisy and it's not surprising that these small dongles don't do a very good job of filtering them.

USB hubs and dongles used for noise measurements.

To see just how much the noise level varies with these devices, I took some measurements of digital signal power seen on the microphone input when there was no actual signal on the input.

I tested two dongles: one brand new (dongle A) and an old one from the 433 MHz receiver (dongle B). Dongle B is soldered onto a ground plane and has an extra 10 nF capacitor soldered between +5V and ground. In all cases, the microphone input was left unconnected. I also took two different unpowered USB 2.0 hubs and tested the dongles when connected directly to a host and when connected over one of these two hubs. For USB hosts, I used a CubieTruck and an Intel-based desktop PC.

Noise power versus gain for dongle A

Noise power versus gain for dongle B

Each point on the graphs above shows signal power (x²) averaged over 15 seconds using a 44100 Hz sample rate. 0 dB is the maximum possible digital signal power. Both dongles use the signed 16-bit integer format for samples. I varied the microphone gain of the dongles as exposed by the ALSA API ("Mic" control in capture settings). The automatic gain control was turned off.
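
The measurement itself boils down to a few lines like the ones below (a sketch only - it assumes the dongle shows up as ALSA card 1 and uses arecord to grab raw samples):

    import subprocess
    import numpy as np

    # record 15 s of raw signed 16-bit mono samples at 44100 Hz
    # (assumption: the dongle is ALSA card 1, device 0)
    raw = subprocess.check_output(
        ["arecord", "-D", "hw:1,0", "-f", "S16_LE", "-r", "44100",
         "-c", "1", "-d", "15", "-t", "raw"])

    x = np.frombuffer(raw, dtype=np.int16) / 32768.0   # normalize to full scale
    power_db = 10 * np.log10(np.mean(x ** 2))          # 0 dB = full-scale signal
    print("noise power: %.1f dB" % power_db)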

You can see that dongles connected to the CubieTruck performed much worse than when connected to the PC. It's also interesting that dongles connected over hub A seemed to have a lower noise floor, although I'm not sure that difference is significant. It's likely that USB noise was also affected by unrelated activity in the host the dongles were connected to.

Signal power versus gain for dongle A

For comparison, above is how signal power versus gain looks when a 10 mV peak-to-peak sine wave is connected to the dongle's input. You can see that the microphone gain control allows for a bit more than 20 dB of variation in gain.

Time-frequency diagram of noise from dongle B on CubieTruck

What is causing so much noise on CubieTruck? Looking at the spectrum of the noise recorded by one of the dongles, there appear to be two distinct parts: one is on low frequencies at around 100 Hz and below. I would guess this comes from the mains hum and its harmonics. The other part is between 2 kHz and 4 kHz and changes in frequency often. Sometimes it also completely disappears (hence strange dips on the graphs above). I'm guessing this comes from some digital signals in the CubieTruck.

There's really not much you can do about this. The small PCBs don't allow for much additional filtering to be bodged on (the little ceramic capacitor I added certainly didn't help much) and it's not worth doing anything more elaborate, since at that point making your own board from scratch starts to make more sense.

Posted by Tomaž | Categories: Analog | Comments »

CC2500 radios and crystal tolerances

21.11.2015 13:05

While working on Spectrum Wars (during a live demo in fact - this sort of thing always happens during live demos) I found a pair of Texas Instruments CC2500 radios that were not able to communicate with each other. Even more puzzling, radio A was able to talk to radio B, but packets from radio B were not heard by radio A.

After checking for software problems (I have learned from experience to be wary of CC2500 packet drivers) I found out that very short packets sometimes got through, while longer packets were dropped by the packet handling hardware due to a bad CRC. I also noticed that this problem only occurred at low bit rate settings and hence the most narrow-band transmissions. This made me suspect that perhaps the transmission from radio B fell outside of radio A's reception bandwidth.

When configuring the CC2500 radio for reception, you must configure the width of the channel filter. Neither Texas Instruments' SmartRF Studio software nor the datasheet is very helpful in choosing the correct setting though. What the datasheet does mention is that 80% of the signal bandwidth must fall within the filter's bandwidth, and it warns that crystal oscillator tolerances must be taken into account. Unfortunately, determining the signal bandwidth for a given modulation and bit rate is left as an exercise for the reader.

Some typical occupied bandwidth values for a few modulations and bit rates are given in the specifications, but of course, Spectrum Wars uses none of those. As an engineer-with-a-deadline I initially went with a guesstimate of 105 kHz filter bandwidth for a 50 kbps bit rate and MSK modulation. It appeared to work fine at the time. After I noticed this problem, I continued with the practical approach and, once I had a reproducible test case, simply increased the filter bandwidth until it started to work, which happened at 120 kHz.

Just to be sure what the problem was, I later connected both problematic radios to the spectrum analyzer and measured their transmitted signals.

Measured spectra of two nodes transmitting MSK modulated packets.

The signal spectra are shown with green (radio A) and black traces (radio B). The cursors are positioned on their center frequencies. The blue and red vertical lines mark the (theoretical) pass bands of 105 kHz and 120 kHz receive filters on radio A respectively.

For both radios, the set central frequency of transmission was 2401 MHz. Their actual central frequencies as measured by the spectrum analyzer were:

           radio A       radio B
f [MHz]    2401.04490    2401.02171
δf [ppm]   +18.7         +9.0

The crystals used on the radios are specified at ±10 ppm tolerance and ±10 ppm temperature stability. The accuracy of the frequency measurement was ±6 ppm according to the spectrum analyzer's documentation (a somewhat generous error margin for this instrument in my experience). Adding this up, it seems that the LO frequencies are within the maximum ±26 ppm range, although radio A looks marginal. I was measuring at room temperature, so I was not expecting to see deviations much beyond ±16 ppm.
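
The δf figures in the table are just the relative offsets of the measured center frequencies from the programmed one; the arithmetic, together with the tolerance budget, is simply:

    f_set = 2401e6                        # programmed carrier frequency [Hz]
    measured = {"radio A": 2401.04490e6, "radio B": 2401.02171e6}

    # worst case: crystal tolerance + temperature stability + analyzer accuracy
    budget_ppm = 10 + 10 + 6

    for name in sorted(measured):
        offset_ppm = (measured[name] - f_set) / f_set * 1e6
        print("%s: %+.1f ppm (budget +/- %d ppm)" % (name, offset_ppm, budget_ppm))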

On the other hand, it is obvious that a non-negligible part of the signal from radio B was getting clipped by the 105 kHz receive filter on radio A. The situation with the new 120 kHz setting in fact does not look much better. It is still too narrow to contain the whole main lobe of the signal's spectrum. It does appear to work though and I have not tried to measure what percentage of the signal's power falls within the pass band (it's not trivial with short packet transmissions).

As for why the problem was asymmetrical, I don't know for sure. It's obvious that this radio link is right on the margin of what is acceptable. It might be that some other tolerances came into play. Perhaps minor differences in filter bandwidth or radio sensitivity tipped the scale in favor of packets going in one direction over the other. I've seen weirder things happen with these chips.

Posted by Tomaž | Categories: Analog | Comments »

Pinning for gigasamples

04.10.2015 19:27

I recently stumbled upon this video back from 2013 by Shahriar Shahramian on the Signal Path blog. He demonstrated an Agilent DSA-X series oscilloscope that is capable of 160 Gsamples/s and 62 GHz of analog bandwidth. Or, as Shahriar puts it, an instrument that doubles the value of the building it is located in. It is always somewhat humbling to see how incredibly far the state-of-the-art is removed from the capabilities accessible to a typical hobbyist. Having this kind of an instrument on a bench in fact seems like science-fiction even for the telecommunications lab at our Institute that has no shortage of devices that are valued in multiples of my yearly pay.

I was intrigued by the noise level measurements that are shown in the video. At around 7:00 Shahriar says that the displayed noise level at 62.3 GHz of analog bandwidth is around 1 mV RMS. He comments that this is a very low noise level for this bandwidth. Since I am mostly dealing with radio receivers these days, I'm more used to thinking in terms of noise figures than millivolts RMS.

The input of the scope has an impedance of 50Ω. Converting RMS voltage into noise power gives:

N = \frac{(1\mathrm{mV})^2}{50 \Omega} = 2.0\cdot 10^{-8} \mathrm{W}
N_{dBm} = -47 \mathrm{dBm}

On the other hand, thermal noise power at this bandwidth is:

N_0 = kTB = 2.5\cdot 10^{-10} \mathrm{W}
N_{0dBm} = -66 \mathrm{dBm}

So, according to these values, noise power shown by the oscilloscope is 19 dB above thermal noise, which means that oscilloscope's front-end amplifiers have a noise figure of around 19 dB.

This kind of calculation tends to be quite inaccurate though, because it depends on knowing accurately the noise bandwidth and gain. In another part of the video Shahriar shows the noise power spectral density. You can see there that power falls sharply beyond 62 GHz, so I guess the bandwidth is more or less correct here. Another thing that may affect it is that the RMS value measured includes the DC component and hence includes any DC offset the oscilloscope might have. Finally, noise should have been measured using a 50Ω terminator, not open terminals. However Shahriar says that his measurements are comparable to the instrument's specifications so it seems this hasn't affected the measured values too much.

Of course, I have no good reference to which to compare this value of 19 dB. For example, cheap 2.4 GHz integrated receivers I see each day have a noise figure of around 10 dB. A good low-noise amplifier will have it in low single digits. A Rohde & Schwarz FSV signal analyzer that I sometimes have on my desk is somewhere around 12 dB if I remember correctly. These are all at least one order of magnitude removed from having 60 GHz of bandwidth.

I guess having a low noise figure is not exactly a priority for an oscilloscope anyway. It's not that important when measuring large signals and I'm sure nobody is connecting it to an antenna and using it as a radio receiver. Even calling Shahriar's demonstration the world's fastest software-defined radio is somewhat silly. While the capabilities of this instrument are impressive, there is no way 160 Gsamples/s could be continuously streamed to a CPU and processed in real-time, which is the basic requirement for an SDR.

Posted by Tomaž | Categories: Analog | Comments »

Correcting the GNU Radio power measurements

18.04.2015 11:20

I previously wrote about measuring the noise figure of a rtl-sdr dongle. One of the weird things I noticed about my results was the fact that I arrived at two different values for the noise figure when using two methods that should in theory agree with each other. In the Y-factor method I used a Gaussian noise source as a reference, while in the twice-power method I used an unmodulated sine wave signal. Investigating this difference led me to discover that the power measured using the GNU Radio power detector I made was inexplicably different for modulated signals versus unmodulated ones. This was in spite of the fact that the level indicator on the signal generator and a control measurement using a R&S FSV signal analyzer agreed that the power of the signal did not change after switching on modulation.

This was very puzzling and I did a lot of different tests to find the reason, none of which gave a good explanation of what was going on. Finally, Lou on the GNU Radio Discuss mailing list provided a tip that led me to solve this riddle. It turned out that due to what looks like an unlucky coincidence, two errors in my measurements almost exactly canceled out, obscuring the real cause and making this look like an impossible situation.

R&S SMBV and FSV instruments connected with a coaxial cable.

What Lou's reply brought to my attention was the possibility that the power measurement function on the signal analyzer might not be showing the correct value. He pointed out that the power measurements might be averaged in the log (dBm) scale. This would mean that the power of signals with a non-constant envelope (like the Gaussian noise signal I was using) would be underestimated compared to unmodulated signals. A correct power measurement using averaging in the linear (mW) scale, like the one implemented in my GNU Radio power meter, would on the other hand lack this negative systematic error. This would explain why the FSV showed a different value than all the other devices.
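
The size of this effect is easy to check numerically. For Gaussian noise, taking the average of the dB readings instead of averaging in linear units underestimates the true mean power by about 2.5 dB (a quick simulation, not a measurement):

    import numpy as np

    rng = np.random.default_rng(0)

    # complex Gaussian noise, similar to what the arbitrary waveform generator produced
    n = rng.standard_normal(1000000) + 1j * rng.standard_normal(1000000)
    p = np.abs(n) ** 2                       # instantaneous power samples

    lin_avg_db = 10 * np.log10(p.mean())     # average in linear scale, then convert
    log_avg_db = (10 * np.log10(p)).mean()   # average of the dB readings

    print("difference: %.2f dB" % (lin_avg_db - log_avg_db))   # approximately 2.5 dB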

To perform control measurements I was using a specialized power measurement mode on the instrument. In this mode, you tell the instrument the bandwidth of a channel you want to measure total signal power in and it automatically integrates the power spectral density over the specified range. It also automatically switches to a RMS detector, sets the attenuator and resolution bandwidth to optimal settings and so on. The manual notes that while this cannot compete with a true power meter, the relative measurement uncertainty should be under 0.5 dB.

By default, this mode doesn't do any averaging, so for a noise signal the power reading jumps around quite a bit. I turned on trace averaging to get a more stable reading, without thinking that this might do the average in log scale. After reading Lou's reply, I did some poking around the menus on the instrument and found an "Average Mode" setting that I didn't notice before. Setting it to "Power" instead of "Log" indeed made the FSV power measurement identical to what I was seeing on the USRP, rtl-sdr and my SNE-ESHTER device.

Excerpt from the R&S FSV manual about averaging mode.

So, a part of the mystery has apparently been solved. I guess the lesson here is that it pays to carefully read the relevant parts of the (924 page) manual. To be honest, the chapter on power measurements does contain a pointer to the section about averaging mode, and the -2.5 dB difference mentioned there would likely have rung a bell.


The question still remained why the level indicator on the R&S SMBV signal generator was wrong. Assuming FSV and other devices now worked correctly, the generator wrongly increased signal level when modulation was switched on. Once I knew where to look though, the reason for this was relatively easy to find. It traces back to a software bug I made a year ago when I first started playing with the arbitrary waveform generator.

When programming a waveform into the instrument over USB, you have to specify two values in addition to the array of I/Q samples: an RMS offset and a peak offset. They are supposed to tell the instrument the ratio between the signal's RMS value and the full range of the DAC, and the ratio between the signal's peak and the full range of the DAC. I still don't know exactly why the instrument needs you to calculate these - they are fully defined by the I/Q samples you provide and the instrument could easily calculate them itself. However, it turns out that if you provide a wrong value, the signal level will be wrong - in my case by around 2.5 dB.

The correct way to calculate them is explained in an application note:

rms = \sqrt{\frac{1}{N}\sum_{n=0}^{N-1}(I_n^2+Q_n^2)}
peak = \sqrt{\max_{n=0}^{N-1}(I_n^2+Q_n^2)}
full = 2^{15}-1
rms\_offs = 20\cdot\log\frac{full}{rms} \qquad peak\_offs = 20\cdot\log\frac{full}{peak}

It seems I initially assumed that the full range for the I/Q baseband was defined by the full range of individual DACs (the square outline on the complex plane below). In reality, it is defined by the amplitude of the complex vector (the shaded circle), which in hindsight makes more sense.

Full scale of the SMBV arbitrary waveform generator.
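
In code, the corrected calculation then looks roughly like this (a sketch of the formulas above, not the actual script):

    import numpy as np

    def waveform_offsets(i, q):
        # i, q: integer sample arrays scaled to the 16-bit DAC range
        p = i.astype(float) ** 2 + q.astype(float) ** 2
        rms = np.sqrt(p.mean())
        peak = np.sqrt(p.max())
        full = 2 ** 15 - 1   # full scale is the amplitude of the complex vector
        rms_offs = 20 * np.log10(full / rms)
        peak_offs = 20 * np.log10(full / peak)
        return rms_offs, peak_offs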

After correcting the calculation in my Python script, the FSV power measurements and the generator's level indicator match again. This is what the spectrum analyzer now shows for an unmodulated sine wave with -95 dBm level set on the generator:

Fixed CW signal power measurement with R&S FSV.

And this is what it shows for a 100 kHz band of Gaussian noise, again with -95 dBm level.

Fixed noise signal power measurement with R&S FSV.

What is the moral of this story? I guess don't blindly trust big expensive instruments. Before someone else pointed it out I didn't even consider that the issue might be with my control measurements. I was only looking at the GNU Radio, the "cheap" SDR hardware and questioning my basic understanding of signal theory. It's not that the two instruments were not performing up to their specifications - I was merely using them in a wrong way. Considering their complexity (both have ~1000 page manuals, admittedly none of which I have read cover-to-cover) that does not seem such a remote possibility anymore.

The other lesson is that doing silly lab measurements after hours can have benefits. If I had not been measuring the rtl-sdr dongle out of curiosity, I wouldn't have discovered that I had a bug in my scripts. This discovery in fact invalidates some results that were on their way to being published in a scientific journal.

Posted by Tomaž | Categories: Analog | Comments »

Signal power in GNU Radio

11.04.2015 18:28

In my recent attempts to measure the noise figure of a rtl-sdr dongle, I've noticed that the results of the twice-power method and the Y-factor method differ significantly. In an attempt to find out the reason for this difference, I did some further measurements with different kinds of signals. I found out that the power detector I implemented in GNU Radio behaves oddly. It appears that the indicated signal power depends on the signal's crest factor, which should not be the case.

Update: As my follow-up post explains, I was using a wrong setup on both the spectrum analyzer and the signal generator.

First of all, I would like to clarify that what I'm doing here is comparing the indicated power (in relative units) for two signals of identical power. I'm not trying to determine the absolute power (say in milliwatts). As the GNU Radio FAQ succinctly explains, the latter is tricky with typical SDR equipment.

The setup for these experiments is similar to what I described in my post about noise figure: I'm using an Ezcap DVB-T dongle tuned to 700.5 MHz. I'm measuring the power in a 200 kHz band that is offset by -500 kHz from the center frequency. As far as I can see from the FFT, this band is free from spurs and other artifacts of the receiver itself. Signal power is measured by multiplying the signal with a complex conjugate of itself and then taking a moving average of 50000 samples.

Updated rtl-sdr power detector flow graph.
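
Outside of GNU Radio, the same detector amounts to a couple of lines of NumPy (a sketch, assuming x holds the complex baseband samples of the measured 200 kHz channel):

    import numpy as np

    def detected_power_db(x, navg=50000):
        # x: complex baseband samples of the measured 200 kHz channel
        p = (x * np.conj(x)).real                                   # |x|^2
        p_avg = np.convolve(p, np.ones(navg) / navg, mode="valid")  # moving average
        return 10 * np.log10(p_avg)                                 # relative power in dB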

I'm using a Rohde & Schwarz SMBV vector signal generator that is capable of producing an arbitrary waveform with an accurate total signal power. As a control, I've also setup a FSV spectrum analyzer to measure total signal power in the same 200 kHz band as the rtl-sdr setup.

For example, this is what a spectrum analyzer shows for an unmodulated sine wave with -95 dBm level set on the generator:

R&S FSV power measurement for CW.

And this is what it shows for a 100 kHz band of Gaussian noise, again with -95 dBm level:

R&S FSV power measurement for Gaussian noise.

The measured power in the 200 kHz channel in both cases agrees well with the power setting on the generator. The difference probably comes from losses in the cable (I used a 60 cm low-loss LMR-195 coax that came with the USRP), connectors, errors in the calibration of both instruments and the fact that the FSV adds its own noise power to the signal. The important thing, however, is that the power read-out changes by only 0.19 dB when switching on the modulation. I think this is well within the acceptable measurement error range.

Repeating the same two measurements using the rtl-sdr dongle and the GNU Radio power detector:

rtl-sdr signal power measurements for CW and noise.

Note that now the modulated signal shows much higher power than the unmodulated one. The difference is 2.53 dB, which cannot be attributed to random error.

In fact, this effect is repeatable and not specific to the rtl-sdr dongle. I've repeated the same measurements using a USRP N200 device with an SBX daughterboard. I've also used a number of different signals, from band-limited Gaussian noise and multiple CW signals to an amplitude-modulated carrier.

The results are summarized in the table below. To make things clearer, I'm showing the indicated power relative to the CW. I've used -95 dBm mean power for rtl-sdr and -100 dBm for USRP, to keep the signal to noise ratio approximately the same on both devices.

                             Ppeak/Pmean [dB]   Prtl-sdr [dB]   PUSRP [dB]
CW                            0.00               0.00            0.00
2xCW, fd=60 kHz               3.02               0.02            0.00
2xCW, fd=100 kHz              3.02               0.04            0.04
3xCW, fd=60 kHz               3.68              -0.03            0.00
100% AM, fm=60 kHz            6.02               1.20            1.25
Gaussian noise, BW=100 kHz   10.50               2.55            2.66

As you can see, both devices show an offset for signals that have a significant difference between peak and average powers. The offsets are also very similar between the devices, which suggests that this effect is not caused by the device itself.

Any explanation due to physical receiver design I can imagine results in a lower gain for signals with a high peak-to-mean power ratio. So exactly the opposite of what I've seen.

It doesn't seem to be caused by some smart logic in the tuner adjusting gain for different signals. The difference in gain seems to remain down to very low signal powers. I think it is unlikely that any such optimization would work down to very low signal-to-noise levels. This also excludes any receiver non-linearity as the cause as far as I can tell.

GRC power detector response for CW and noise signals.

If I were using an analog power detector, this kind of effect would be typical of a detector that does not measure signal power directly (like a diode detector, which has an exponential characteristic instead of a quadratic one). However, I'm calculating signal power numerically and you can't get a more exact quadratic function than x².

I've tested a few theories regarding numerical errors. In fact, results do differ somewhat between the moving average or the decimating low-pass filter. They also differ between using conjugate and multiply blocks or the RMS block. However, the differences are insignificant as far as I can see and don't explain the measurements. I've chosen the flow graph setup shown above because it produces figures that are closest to an identical calculation done in NumPy. Numerical errors also don't explain why the same flow graph produces valid results for a receiver simulated with signal and noise source blocks.

So far I'm out of ideas what could be causing this.

Posted by Tomaž | Categories: Analog | Comments »

Notes on HB9AJG's E4000 sensitivity measurements

03.04.2015 20:28

In August 2013, a ham operator with the HB9AJG call sign posted a detailed report on measurements done with two rtl-sdr dongles to the SDRSharp Yahoo group. They used laboratory instruments to evaluate many aspects of these cheap software-defined radio receivers. As lab reports go, this one is very detailed and contains all the information necessary for anyone with sufficient equipment to replicate the results. The author certainly deserves praise for being so diligent.

In my previous blog post, I mentioned that my own measurements of the noise figure of a similar rtl-sdr dongle disagree with HB9AJG's report. My Ezcap DVB-T dongle uses the same Elonics E4000 integrated tuner as the Terratec dongle tested by HB9AJG. While this does not necessarily mean that the two devices should perform identically, it did prompt me to look closely at HB9AJG's sensitivity measurements. In doing so, I believe I have found two errors in the report, which I want to discuss in the following.

Remark 1: The dongles have a nominal input impedance of 75 Ohms, whereas my signal generators have output impedances of 50 Ohms. My dBm figures take account of the difference of 1.6dB.

The first odd thing I noticed about the report is this correction for the mismatch between the signal generator's output impedance and the dongle's input impedance. I'm not sure where the 1.6 dB figure comes from.

If we assume the source and load impedances above, the mismatch correction should be:

\Gamma = \frac{Z_l - Z_s}{Z_l + Z_s} = \frac{75\Omega - 50\Omega}{75\Omega + 50\Omega} = 0.2
ML = (1 - \Gamma^2) = 0.96
ML_{dB} = 0.18 \mathrm{dB}

0.18 dB is small enough to be insignificant compared to the other measurement errors and I ignored mismatch loss completely in my noise figure calculations.

I don't actually know the input impedance of my dongle. 75 Ω seems a fair guess as that is the standard for TV receivers. The E4000 datasheet specifies an even lower loss of around 0.14 dB for a 50 Ω source (see Input Return loss (50R system) on page 11). Of course, the dongle might have some additional matching network in front of the tuner and I don't currently have the equipment at hand to measure the mismatch loss directly.

It might be that the 1.6 dB figure was measured by HB9AJG. If these tuners are in fact so badly matched, then my measurements overestimate the noise figure by a similar amount. For the purpose of comparing my results with HB9AJG's however, I have removed this compensation from their figures.

Update: My father points out that signal amplitude on a 75 Ω load in a 50 Ω system is in fact 1.6 dB higher than on a 50 Ω load. I was wrongly considering signal power correction. It is in fact the amplitude of the signal entering the receiver that matters, not the power. In that aspect, HB9AJG's correction was accurate. On the other hand, an Agilent application note pointed out by David in a comment to my previous post shows that accounting for mismatch is not that simple.

10log(Bandwidth) in my measurements is 10log(500) = 27dB

My second problem with the report is connected with the bandwidth of the measurement. To calculate the noise figure from the minimum discernible signal (MDS), measurement bandwidth must be accurately known. Any error in bandwidth directly translates to noise figure error. HB9AJG used the SDR# software and the report says that they used a 500 Hz filter for wireless (CW) telegraphy in their MDS measurements.

I replicated their measurements in SDR# using the same filter settings and it appears to me that the 500 Hz filter is in fact narrower than 500 Hz. I should mention however that I used version 1.0.0.1333 instead of 1.0.0.135 and my version has a Filter Audio check box that the report doesn't mention. It seems to affect the final bandwidth somewhat and I left it turned on.

SDR# showing audio spectrum with 500 Hz filter enabled.

I believe the actual filter bandwidth in my case is around 190 Hz. This estimate is based on the audio spectrum curve shown on the SDR# screenshot above. The curve shows the spectrum of noise shaped by the audio filter. Since noise has a flat spectrum, this curve should be similar to the shape of the filter gain itself.

Calculating the gain-bandwidth product of the filter.

A trace of the spectrum is shown in linear scale on the graph above. A perfect square filter with 190 Hz bandwidth (lightly shaded area on the graph) has the same gain-bandwidth product as the traced line. In log scale this is equivalent to 22.8 dB.
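
Numerically, the equivalent rectangular bandwidth is just the area under the traced curve divided by its peak (a sketch, assuming freq and gain hold the traced filter curve in the same linear scale as the plot):

    import numpy as np

    def equivalent_bandwidth(freq, gain):
        # rectangular filter with the same gain-bandwidth product:
        # area under the traced curve divided by the peak gain
        return np.trapz(gain, freq) / gain.max()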

Finally, if I take both of these corrections and apply them to the MDS measurements for 700 MHz from HB9AJG's report, the noise figure comes out as:

NF = -136\mathrm{dB} + 1.6\mathrm{dB} + 174\mathrm{dB} - 10\log\frac{190\mathrm{Hz}}{1\mathrm{Hz}}
NF = 16.8 \mathrm{dB}

This result is reasonably close to my result of 17.0 dB for the twice-power method.

Update: The 1.6 dB figure has the wrong sign in the equation above, since it is due to higher signal amplitude, not lower power as I initially thought. To cancel HB9AJG's correction, it should be subtracted, giving NF = 13.6 dB.

Of course, you can argue that this exercise is all about fudging with the data until it fits the theory you want to prove. I think it shows that noise measurements are tricky and there are a lot of things you can overlook even if you're careful. The fact that this came out close to my own result just makes me more confident that what I measured has some connection with reality.

Posted by Tomaž | Categories: Analog | Comments »

Noise figure measurements of rtl-sdr dongles

29.03.2015 19:53

Noise figure is a measure of how much noise a component introduces into a signal that passes through it. For a radio receiver, it defines how weak a radio signal it is capable of receiving before the signal is drowned in the receiver's own noise. For instance, in spectrum sensing, having a low noise figure receiver helps a lot when trying to detect hidden transmitters. To have some reference to compare my own receiver design with, I recently performed some noise figure measurements on an Ezcap DVB-T dongle.

Ezcap DVB-T dongle and a R&S SMBV signal generator.

The principles of noise measurements are nicely detailed in an application note from Agilent. Unfortunately I don't have access to specialized noise measurement equipment. I do however have a calibrated Rohde & Schwarz SMBV vector signal generator at work. It can be used as both a continuous wave source and a somewhat decent noise source, so I chose to measure the noise figure using both the Y-factor method and the twice-power method.

Both methods require measuring the power of the signal exiting the receiver. I implemented a power meter in GNU Radio using the flow graph shown below (GRC file). It measures true (RMS) signal power in a 200 kHz wide band that is offset by -500 kHz from the center frequency of the tuner. This is to exclude low-frequency noise from the measurement. A high level of noise around DC is characteristic of the direct-conversion tuner used by the Ezcap dongle.

rtl-sdr power detector GRC flow graph

The settings used for the RTL-SDR source block are:

  • Sample rate 2.048 Msample/s,
  • LO frequency 700.5 MHz (which puts the center of the 200 kHz measured band at 700 MHz)

I used GNU Radio release 3.7.5.1.

RTL-SDR source block settings.

For the twice-power method, I set the signal generator to an unmodulated sine wave at 700 MHz and manually found the output power setting that caused a 3 dB change in the power meter reading. This is the minimum discernible signal (MDS):

MDS = -104 \mathrm{dBm}
NF_{tp} = -104\mathrm{dB} + 174 \mathrm{dB} - 10\log{\frac{200 \mathrm{kHz}}{1\mathrm{Hz}}} = 17.0 \mathrm{dB}

For the Y-factor method, I used the arbitrary waveform function on the generator to produce Gaussian noise in a 50 MHz band centered on 700 MHz. The total power set on the generator was -80 dBm. For such a setup, the excess noise ratio is:

ENR = \frac{P_{gen}}{BW_{gen}\cdot k\cdot T_0} - 1

With the noise generator turned off, the power detector showed -41.5 dB. With the noise generator turned on, the power detector showed -36.5 dB. This gives the following noise figure:

Y = \frac{P_{on}}{P_{off}}
NF_{yf} = 10\log{\frac{ENR}{Y-1}} = 10\log{\frac{49.0}{3.16 - 1}} = 13.6 \mathrm{dB}
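
The whole calculation for both methods fits in a few lines (a sketch that just reproduces the numbers above):

    import math

    k = 1.38e-23     # Boltzmann constant [J/K]
    T0 = 290.0       # reference temperature [K]

    # Y-factor method
    p_gen = 1e-11    # -80 dBm of generator noise [W]
    bw_gen = 50e6    # bandwidth of the generated noise [Hz]
    enr = p_gen / (bw_gen * k * T0) - 1

    y = 10 ** ((-36.5 - (-41.5)) / 10.0)   # detector readings on/off: 5 dB -> 3.16
    nf_yf = 10 * math.log10(enr / (y - 1))

    # twice-power method
    mds = -104       # minimum discernible signal [dBm]
    nf_tp = mds + 174 - 10 * math.log10(200e3)

    print("Y-factor: %.1f dB, twice-power: %.1f dB" % (nf_yf, nf_tp))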

These results are curious for several reasons.

First of all, the two methods should produce the same result, but in fact the resulting noise figures differ by 3.4 dB (a factor of around 2). My first suspect was an error in my calculations somewhere. The twice-power method, for example, is sensitive to the measurement bandwidth and this is a common source of errors in my experience. However I have repeated these exact same measurements using a completely simulated receiver in GNU Radio and the same power meter (and hence the same 200 kHz filter). In simulation the two methods agree perfectly, which makes me think the error is not in my calculations.

Another suspect was the quality of the noise for the Y-factor method. This method is typically used with specialized (analog) noise sources, not a pseudo-random vector loaded into an arbitrary waveform generator. However, repeated measurements with different signal powers, pseudo-random cycle lengths and sampling rates are in very good agreement (less than 0.5 dB difference in resulting noise figure). I have also measured the spectral power density used in the ENR calculation (Pgen/BWgen) with a spectrum analyzer and that measurement agrees with the calculated figure to within 0.1 dB.

The above makes me think that both measurements are correct and that there is some physical process in the receiver that is causing this difference. There may be some automatic gain control somewhere that behaves differently. The crest factor, for instance, is significantly different between noise and continuous-wave inputs.

Update: Based on my later discovery that the noise power and ENR was not correct in my Y-factor calculation, it is likely that the 17.0 dB result is more accurate.

The second weird thing is the unusually large value. The noise figure is largely determined by the first stage, which is the low-noise amplifier in the Elonics E4000 tuner integrated circuit in this case. The datasheet specifies a noise figure around 4 dB, which is significantly lower than what I saw. It's not that far fetched though that a cheap design like this would perform worse than the best-case promoted in the datasheet. There might be a noisy power supply and interference from the USB bus for instance.

The most elaborate existing characterization of the rtl-sdr DVB-T dongles I'm aware of was done in 2013 by HB9AJG. Among other things, it also includes measurements of the minimum discernible signal. For the E4000 tuner at 700 MHz, that document states a noise figure of 11.0 dB, which is also somewhat lower than both my measurements. However, I believe HB9AJG made several errors in their article and in fact after accounting for them, their results nicely match mine for the twice-power method (I plan to write a bit more on that in a future post).


In conclusion, even though the results look unusual, I can't find any concrete reason to doubt their accuracy. The noise figure for this particular receiver seems to be between 13.6 dB and 17.0 dB, which is not particularly good. It depends on what you want to do, of course, but in general these dongles do not work very well with weak signals.

Posted by Tomaž | Categories: Analog | Comments »

Coax attenuation

16.11.2014 17:33

Two years ago, when the Institute was setting up the network of spectrum sensing nodes in Logatec, we required a set of antennas to cover the TV broadcast band. Based on the frequency range of VESNA's receiver, I chose to buy several Super Scan Sticks from Moonraker. The 50 MHz - 900 MHz range we were intending to use them for was well within the Scan Stick's specification and it was also very reasonably priced compared to some other offers for broadband antennas we got at the time.

Since then this antenna proved to perform reasonably well compared to others we tried. For instance, the hand-held MRW-210 we have on some nodes because of its small size is practically useless. However, all work so far has been done above 470 MHz and it doesn't look like that will change in the future. In hindsight it would probably be better to choose an antenna that performs better at higher frequencies.

I was recently reminded of that when it was pointed out to me that even the connection between the antenna and the sensor might be causing significant attenuation in our setup. The Scan Stick has a (non-replaceable) SO-239 connector and we bought each antenna together with a matching 10 m long RG-58 coaxial cable. I overlooked that at the time, but neither that cable type nor the connectors were designed with frequencies much above 400 MHz in mind.

SO-239 panel connector wired to a SMA plug.

Lately I have been preparing a handful of new receivers for deployment. The way you're supposed to wire a panel-mounted SO-239 (on the left in the picture above) to an internal coaxial cable was another reminder that this setup was not made with high-frequency signals in mind.

Unfortunately I don't have the equipment necessary to measure the characteristics of the antenna itself. However, to get at least an estimate of how much signal power I'll actually be losing in the cable, I conducted some measurements with just the cabling. I connected the receiver through the pigtail shown above, 10 m of the RG-58 cable and an SO-239-to-N-type adapter to our Rohde&Schwarz SMBV vector signal generator. Then I swept the frequency of the generator and measured the detected power on the other end. As a control, I performed the same measurement using 60 cm of LMR-195 cable with SMA connectors and an SMA-to-N-type adapter I had lying around.

Detected power versus frequency for two different cables.

As you can see, the difference between the cables is significant, though not exactly a show-stopper. The big variation in detected power between 500 MHz and 550 MHz is due to variation in detector sensitivity.

There are typical attenuation-versus-frequency figures for both types of cable available on the web. LMR-195 supposedly has less than 0.2 dB of attenuation over a 60 cm length at these frequencies. Ten meters of RG-58, on the other hand, has around 5 dB.

This suggests that the attenuation in the short LMR-195 cable is insignificant compared to longer RG-58. To get just the attenuation in the RG-58 cable and ignore changes in detector sensitivity, I subtracted the measurements with LMR-195 from those with RG-58. In the plot below, I compare this figure with the typical attenuation versus frequency for RG-58.
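This is roughly the post-processing involved. The sketch below assumes the two power sweeps were saved to files with hypothetical names (and sampled at the same frequencies), and uses rough catalogue attenuation values for RG-58 rather than a specific datasheet:

import numpy as np

f_mhz = np.arange(450, 901, 10)        # sweep frequencies [MHz]
p_rg58 = np.load("rg58_dbm.npy")       # detected power, 10 m RG-58 path [dBm]
p_lmr195 = np.load("lmr195_dbm.npy")   # detected power, 60 cm LMR-195 path [dBm]

# Attenuation of the RG-58 cable alone: the detector response and the
# (negligible) LMR-195 loss cancel out in the subtraction.
att_rg58 = p_lmr195 - p_rg58           # [dB], positive means loss

# Approximate catalogue attenuation for RG-58 in dB per 100 m, linearly
# interpolated and scaled to the 10 m run.
f_ref = np.array([100.0, 400.0, 1000.0])
a_ref = np.array([16.0, 33.0, 55.0])
att_model = np.interp(f_mhz, f_ref, a_ref) * 10.0 / 100.0

for f, m, mod in zip(f_mhz, att_rg58, att_model):
    print("%4d MHz  measured %5.2f dB  model %5.2f dB" % (f, m, mod))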

Cable attenuation measurement versus model.

From this graph it seems that RG-58 is performing better than expected. For lower frequencies the attenuation is actually a good decibel lower than it should be according to the cable model. I also did not take into account any return loss, so the attenuation in just the cable must be even lower than what I measured. The SO-239 connector is pretty bad at keeping the correct characteristic impedance above 400 MHz, so I'm guessing the reflection becomes significant at higher frequencies.

In the end, a 6 dB loss means that only one quarter of the power at the antenna reaches the receiver. In the context of radio that might not be as bad as it sounds, but it definitely ruins the otherwise good noise figure of VESNA's receiver. We're running out of our stock of Scan Sticks anyway, so I'll be looking into new antennas and cabling for future deployments.
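That quarter follows directly from the definition of the decibel:

\frac{P_{receiver}}{P_{antenna}} = 10^{-6/10} \approx 0.25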

Posted by Tomaž | Categories: Analog | Comments »

Seminar on receiver noise and covariance detection

31.10.2014 19:35

Here are the slides of yet another seminar I gave at the School a few weeks ago to an audience of one. Again, I'm also posting them here in case they might be useful beyond merely incrementing my credit point counter. Read below for a short summary or dive directly into the paper if it sounds like fun reading to you. It's only four pages this time - I was warned that nobody has time to read my papers.

Effects of non-Gaussian noise on covariance-based detectors title slide

Like all analog devices, radio receivers add some noise to the signal that is passing through them. Some of this noise is due to pretty basic laws of physics, like thermal noise or noise due to various quantum effects in semiconductors. Other sources of noise, however, come from purely engineering constraints. These are for example crosstalk between parts of the circuit, non-ideal filters and so on. When designing receivers, all these noise sources are usually considered equivalent, since in the end only the total noise power matters. For instance, you might design a filter so that it attenuates unwanted signals until their power is around the thermal noise floor. It doesn't make sense to have more attenuation, since you won't see much improvement in total noise power.
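As a rough illustration with numbers of my own choosing: an unwanted signal that has already been filtered down to 10 dB below the thermal noise floor raises the total noise power by only

10\log_{10}\left(1 + 10^{-10/10}\right) \approx 0.4 \mathrm{dB}

so pushing it further down buys very little in terms of total noise power.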

However, when you are using a receiver as a spectrum sensor, very weak spurious signals buried in noise become significant. After all, the purpose of a spectrum sensor is exactly that: to detect very weak signals in the presence of noise. Since you don't know what kind of signal you are detecting, a local oscillator harmonic might look exactly like a valid transmission you want to detect. Modern spectrum sensing methods like covariance- and eigenvalue-based detectors work well in the presence of white noise. Because of this, it might be better for a receiver designer to trade a low total noise power for a somewhat higher noise power that looks more like white noise.

The simulations I describe were actually motivated by the difference I saw between the theoretical performance of such detectors and practical experiments with a USRP when preparing one of my earlier seminars. I had a suspicion that spurious signals and non-white noise from the USRP's front-end could be causing this. To see if that's true, I've created a simulation using Python and NumPy that checks the minimal detectable power for two detectors in the presence of various spurious sine signals and of noise colored by digital down-conversion.
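The actual simulation is more involved, but the core idea can be sketched in a few lines of NumPy. The snippet below computes a covariance-based test statistic (of the covariance absolute value type) for white noise with and without a weak sinusoidal spur; the parameter values are illustrative and not the ones used in the seminar.

import numpy as np

rng = np.random.default_rng(0)

def cav_statistic(x, L=8):
    # Sample covariance matrix of L consecutive samples.
    N = len(x) - L + 1
    X = np.stack([x[i:i + N] for i in range(L)])
    R = X @ X.T / N
    T1 = np.sum(np.abs(R))             # sum over all elements
    T2 = np.sum(np.abs(np.diag(R)))    # sum over the diagonal only
    return T1 / T2                     # close to 1 for white noise

n_samples = 100000
noise = rng.standard_normal(n_samples)    # white Gaussian noise, unit power

# A sinusoidal spur 30 dB below the noise power.
spur_power = 10 ** (-30 / 10.0)
spur = np.sqrt(2 * spur_power) * np.sin(2 * np.pi * 0.1 * np.arange(n_samples))

print("noise only:   %.4f" % cav_statistic(noise))
print("noise + spur: %.4f" % cav_statistic(noise + spur))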

In the end, I found out that periodic spurious signals affected the minimal detectable signal power even when they were 30 dB below the thermal noise power, regardless of their frequency. Similarly, digital down-conversion alone also affects detector performance because of the correlation it introduces into thermal noise. However, since oversampling ADCs have so many other practical benefits, DDC is most likely a net gain even in a spectrum sensing application. On the other hand, periodic components in receiver noise should be avoided as far as possible.

Posted by Tomaž | Categories: Analog | Comments »

Kickstarting failure

16.07.2014 17:37

Yesterday I stumbled upon the list of supposedly life-changing Kickstarter projects that hovered briefly on the front page of Hacker News. While I receive a regular stream of links to more or less feasible crowd-funding projects through various channels of modern communication, this list caught my eye as being particularly full of far-fetched, if not downright fraudulent, proposals.

After skimming through a campaign for electric vehicles, written by someone who doesn't know the difference between energy and power, I stopped for a moment on Shawn West's 30-second rechargeable battery. He is asking for $10,000 to build a replacement for ordinary rechargeable batteries that uses a super capacitor for energy storage instead of an electrochemical cell.

Capacitor, casing and circuit board.

Image by Shawn West

Let's consider for a moment his claims: he says that his patent-pending battery using a lithium-ion super capacitor is roughly equivalent to a typical rechargeable battery. He shows us an AA-sized prototype that supposedly contains two integrated circuits: a voltage regulator and a protective circuit that prevents the capacitor from being over- or under-charged. In the FAQ he mentions that the capacity of his battery is 1150 mAh.

Unsurprisingly, in all of his pictures the capacitor is placed in such a way that the model or capacity rating isn't visible. However, with some image enhancement, it's just possible to read out "YUDEN" on one of the photographs.

Enhanced photograph of Shawn West's capacitor

Taiyo Yuden is in fact a manufacturer of lithium-ion capacitors. Looking through their super capacitor range, I found just one model that would fit within a 14 x 50 mm AA-sized casing: the 40 farad, 12 x 35 mm cylinder-type LIC1235R3R8406.

Here are its specifications:

C_{cap} = 40 \mathrm{F}
U_{max} = 3.8 \mathrm{V}
U_{min} = 2.2 \mathrm{V}

Let's do some back-of-the-envelope calculations. That tiny chip on the circuit board looks like a low-dropout linear regulator. In that case, the capacity of the battery given in milliampere-hours is equal to the change in electric charge between the fully charged and the fully discharged capacitor (ignoring the quiescent current of the regulator):

C_{bat} = \Delta Q = Q_{max} - Q_{min} = C_{cap} \cdot (U_{max} - U_{min})
C_{bat} = 17.8 \mathrm{mAh}

That's barely 1.5% of the claimed capacity!

If we instead assume that his circuit contains a switching regulator, the situation improves, but only slightly. Given 100% conversion efficiency, the energy that can be extracted from the battery is now equal to the change in the electric field energy between the fully charged and the fully discharged capacitor:

\Delta W = \frac{C_{cap}}{2}\left(U_{max}^2 - U_{min}^2\right)
\Delta W = 192 \mathrm{J}

Since the inventor claims that his battery does not have a discharge curve, but puts out a steady U_{bat} = 1.5 \mathrm{V}, we can simply convert the energy rating to capacity:

C_{bat} = \frac{\Delta W}{U_{bat}}
C_{bat} = 35.6 \mathrm{mAh}

Obviously, this is much better than the dissipative case above, but the figure is still more than an order of magnitude off from the Kickstarter campaign's claim of 1150 mAh. Even giving the author the benefit of the doubt and using the largest capacitor from Taiyo Yuden's super capacitor range, the achievable capacity remains much smaller than that of your vanilla pink-bunny-never-stops alkaline.
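For anyone who wants to repeat the arithmetic, here is the whole thing as a few lines of Python, using the same datasheet values as above:

C_cap = 40.0       # F
U_max = 3.8        # V
U_min = 2.2        # V
U_bat = 1.5        # V, claimed output voltage

# Linear regulator: usable charge is C * delta-U.
dQ = C_cap * (U_max - U_min)               # coulombs (ampere-seconds)
print("linear:    %.1f mAh" % (dQ / 3.6))  # 1 mAh = 3.6 C, gives 17.8 mAh

# Ideal switching regulator: usable energy, converted at the 1.5 V output.
dW = 0.5 * C_cap * (U_max**2 - U_min**2)   # joules, gives 192 J
print("switching: %.1f mAh" % (dW / U_bat / 3.6))   # gives 35.6 mAh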


Super capacitors are a fascinating component and they certainly have their uses. I kind of like the idea of packaging one into an alkaline battery casing, especially the exposed ring that is used to bypass the regulator for fast charging. However, the claims that this could be used to power your smart phone are ridiculous.

Crowd-funding seems to fuel a big part of the broader Internet's fascination with hardware start-ups these days. I can't help but think that projects whose claims are not challenged in even one of the overly enthusiastic let's-disrupt-the-industry comments are doing more harm than good to our field.

Posted by Tomaž | Categories: Analog | Comments »