Comparing MMANA-GAL with measurements of a biquad antenna

17.03.2023 9:45

My dad recently made a biquad antenna for the 2.4 GHz band. He optimized the geometry in the MMANA-GAL software. I've measured the S-parameters of the antenna using my home-made VNA and compared them to the simulation results produced by MMANA-GAL. The measured VSWR is slightly higher than predicted and the resonance is off by around 30 MHz, but otherwise the measurements show roughly similar results to the simulation.

The antenna is built into an old CD-R cake box. The driven element is a 1.5 mm diameter copper wire bent into a figure-eight shape. The center of the wire is positioned 15 mm above a circular reflector made out of steel foil with a diameter of 122 mm. A coaxial feed line goes through the reflector and attaches to one side of the element. The feed line is soldered to an SMA connector on the bottom of the box. There is some more info on the construction of biquad antennas for the 2.4 GHz band on this page.

Biquad antenna in a CD-R cake box with two sizes of covers.

Two transparent covers of different heights fit over the antenna: the high cover is 50 mm tall, while the low cover is 23 mm tall and sits just above the element. Both covers are made of polypropylene. I've measured the antenna with each cover as well as with no cover installed.

Biquad antenna modelled in MMANA-GAL.

The antenna model in MMANA-GAL consists of a collection of wires. The solid reflector plate was modeled as a grid of wires, with spacing small compared to the wavelength. The wires in the model had a diameter of 0.8 mm and "Cu wire" was selected for the loss model. The signal source in the model was placed directly at the element. The antenna was simulated in free space. MMANA-GAL has no capability to simulate dielectrics around the antenna, so the effect of the box was not simulated.

The built-in optimization function of MMANA-GAL was used to optimize the geometry of the driven element for the best VSWR at 2450 MHz. This produced the slightly elongated shape of the element (note that the corners are not 90 degrees).

My small home-made vector network analyzer.

I've measured the complex reflection coefficient of the antenna using my home-made VNA. I've written about it previously on my blog. It consists of a custom RF bridge and a multiplex board. The detector is a modified HackRF SDR and the stimulus signal is generated with the ERASynth Micro synthesizer. Instrument control and signal processing is performed by a PC running software based on scikit-rf.

The antenna was connected to the VNA with 50 cm of SS405 semi-rigid coaxial cable. The calibration plane was at the SMA connector at the end of the cable. I used short-open-load calibration with the SMA calibration kit I've written about previously. I used the SMA female-to-female adapter from the kit to connect the calibration standards to the male connector at the end of the cable.

I've measured the antenna inside a room. This probably means that reflections from walls affected the measurements. The measurements shown here were done with the antenna flat on a table, facing the ceiling. I repeated the measurement with the antenna in other orientations. The measured VSWR curve was very similar to the one shown.

Measured and simulated VSWR of the biquad antenna.

The measured resonance without the cover is around 28 MHz higher than predicted by MMANA-GAL. If I assume that this difference in resonance is due only to a difference in element size between the model and the real antenna, the corresponding size difference is only around 0.3 mm. This is well within the tolerances of such a hand-made antenna.

MMANA-GAL calculated a lower VSWR at resonance than what my measurement shows. It predicted 1.06 (30 dB return loss), while I measured 1.21 (20 dB return loss).
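Both of these numbers are easy to sanity check. Here is a short Python sketch; the 2450 MHz simulated resonance and the quarter-wave sides of the quads are my assumptions, not values taken from the MMANA-GAL model:

import math

C = 299.792458e6  # speed of light [m/s]

def vswr_to_return_loss_db(vswr):
    """Return loss in dB corresponding to a given VSWR."""
    gamma = (vswr - 1.0) / (vswr + 1.0)  # magnitude of the reflection coefficient
    return -20.0 * math.log10(gamma)

print(vswr_to_return_loss_db(1.06))  # -> 30.7 dB
print(vswr_to_return_loss_db(1.21))  # -> 20.4 dB

# Element size difference corresponding to the 28 MHz shift in resonance,
# assuming the resonant frequency scales inversely with element size and
# that each side of the quads is roughly a quarter wavelength long.
f_sim = 2450e6          # simulated resonance (assumption)
delta_f = 28e6          # measured shift
side = C / f_sim / 4.0  # quarter-wave side, approximately 30.6 mm
print(side * delta_f / f_sim * 1e3)  # -> approximately 0.35 mm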

The cover also affects the resonance. Polypropylene has a relative permittivity of around 2.3. Since the propagation speed in the dielectric is lower, the wavelength is shorter as well. This means that the same antenna should have a lower resonant frequency in polypropylene than in air. The measurements agree with that: with the low cover the resonance is lower by about 55 MHz. If I consider the entire band from 2.4 GHz to 2.5 GHz, the antenna actually performs better with the low cover, since the maximum VSWR inside the band is lower in this case. On the other hand, the high cover is apparently far enough from the element that it has almost no effect. The measured VSWR with the high cover was very similar to the VSWR with no cover.

An archived post on the Wireless Nederland forum lists some VSWR measurements of a similar antenna. With the element 15 mm above the reflector they measured a VSWR of 1.20. They found an optimum at 17 mm height with a VSWR of 1.15. I'm guessing these measurements were made with the driven element sized to 1/4 wavelength, not the optimized geometry produced by MMANA-GAL in our case.

Reflection coefficient of the biquad antenna shown on a Smith chart.

The measured reflection coefficient on the Smith chart differs from the simulated one. I'm guessing this is because of the electrical length of the feed line. I calibrated the VNA before the feed line, so the measurement also includes the phase delay incurred by it. In the simulation, on the other hand, the signal source was placed directly at the driven element, so the simulated result does not include this delay.
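This is easy to check numerically: the one-way delay of the line can be estimated from the slope of the measured phase and then rotated out of the measurement. A minimal numpy sketch, where s11 and f are assumed to hold the measured complex reflection coefficients and the corresponding frequencies in Hz (this is not the exact code I used):

import numpy as np

# Estimate the one-way delay of the feed line from the slope of the
# unwrapped phase. The reflected wave travels the line twice, hence the
# factor 4*pi instead of 2*pi.
phase = np.unwrap(np.angle(s11))
tau = -np.polyfit(f, phase, 1)[0] / (4.0 * np.pi)

# Rotate the measurement back to a reference plane at the element.
s11_deembedded = s11 * np.exp(1j * 4.0 * np.pi * f * tau)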


I think the measurements fit nicely with the simulations. Differences in resonance frequency can easily be attributed to mechanical tolerances. At these frequencies a tenth of a millimeter difference can already produce measurable deviations in antenna performance. There is also the fact that the feed line and cover were not included in the simulation and that the reflector is steel, not copper. Even though the measured performance is slightly worse than what was predicted with MMANA-GAL, a VSWR of 1.21 is still a very good result, indicating that around 99% of the power is being accepted by the antenna rather than reflected back.

Posted by Tomaž | Categories: Analog | Comments »

A footnote about ESP8266 5V tolerance

05.05.2022 16:28

The other day I was reading the instructions for a project involving the ESP8266 chip. In the part that describes connecting one of the ESP8266's GPIO pins to a 5V logic level line, this comment stuck out:

As the ESP8266 is 5V-tolerant, [...] there shouldn't be any issues, however I haven't had time to test this for longer periods of time. Therefore, if the ESP burns out after a while, just add a voltage divider or something.

It never occurred to me that the GPIO pins of the ESP8266 would be able to work with 5V levels directly. I've done a few projects with this IC and I always considered it a purely 3.3V device that requires a level shifter to work with 5V logic. Normally, 5V tolerance of pins is advertised in some prime real estate in a microcontroller datasheet. On the other hand, ESP8266's datasheet doesn't mention it explicitly. Even the specification of voltage levels for digital IO is kind of vague about it:

From the "Digital IO Pads" section of the ESP8266 datasheet.

Image by Espressif Systems

The table says that the maximum input voltage for a high logic level on a digital pin is 3.6V. However, the paragraph below it also talks about pins having some protection against overvoltage. My understanding is that there's a circuit that shunts current from a pin to ground when the pin voltage reaches 6V, similar to a Zener diode. Snap-back might be a mistranslation of fold-back. In any case, the difference between the 6V breakdown and the 5.8V hold voltage doesn't seem that significant. What good is this protection if the maximum allowed voltage is only 3.6V?

After searching the web a bit I came across a 2016 post on Hackaday. The article and comments give a good overview of what's known about this. In short, official documentation is vague, experience on the topic is mixed but many people believe that pins are indeed 5V tolerant.

I wanted to quickly check this. As people discussed in the Hackaday comments, a digital pin that only accepts voltages up to the supply voltage will normally have an ESD protection diode connected from the pin to the supply. Such a pin will start sinking current when voltage on it exceeds supply plus a diode voltage drop. That is where the typical 3.6V specification for VIH maximum comes from: a 3.3V supply plus 0.3V Schottky diode drop. On the other hand, a pin that can tolerate higher voltages will instead have a Zener diode to ground, or some other circuit that performs a similar function. In that case, the pin will only sink current at higher voltages.

I connected a spare ESP-01 module to my curve tracer. I measured the I-V curve of the GPIO2 pin since that one is not connected to any external pull-up or pull-down resistors on this module. The module was powered from a development board with a 3.3V supply voltage. I used a simple Arduino program that just called pinMode(2, INPUT). This ensured that the pin was set to high impedance and not sourcing or sinking current, except for any protection circuit on it.

Measurements of the ESP8266 digital pad with a curve tracer, forward polarity.

This was the result of my measurement. In the forward polarity the pin started sinking significant current at around 6.6V. The slight current ramp at lower voltages is an artifact of my instrument and isn't a real current sinking into the pin.

I didn't see any fold-back characteristic. However, I was using a 1 kΩ load resistance on the curve tracer. The load resistance prevents a rapid current increase during the measurement. This might have prevented the fold-back circuit from triggering, and I didn't want to risk the chip by going to higher currents. In any case, the 6.6V threshold strongly points towards the pin not having an ESD diode to the supply line and hence being intentionally designed to handle voltages higher than 3.6V.

Measurements of the ESP8266 digital pad with a curve tracer, reverse polarity.

In the reverse polarity, the pin started to source current at around 0.5V, as expected for an internal diode connected from the pin to ground. This diode is also mentioned in the datasheet.

In conclusion, it seems that indeed the digital pins on the ESP8266 have provisions for tolerating voltages higher than the supply. This doesn't necessarily mean that they will also work reliably at those voltages. Even if they will, I would still want to avoid using 5V levels directly. The ESP8266 bootloader likes to put various pins in output mode. A 5V level on a 3.3V-powered output pin is certainly not healthy, regardless of whether the pin tolerates such a level in input mode.

This slide deck from ST goes into some more detail on how 5V-tolerant pins are implemented internally and lists some pitfalls associated with their use. Though I have no idea if Espressif used the same implementation approach as ST.

Posted by Tomaž | Categories: Analog | Comments »

Effect of ground cutout on the CM4 antenna

10.03.2022 11:16

Recently I had the opportunity to test how much the design of the carrier board affects the performance of the PCB antenna on the Raspberry Pi Compute Module 4. I compared a custom carrier board with the CM4 centered over a continuous ground plane, a similar board with the suggested ground cutout under the antenna, and the official Raspberry Pi CMIO4 board that has the cutout but also places the module on the edge of the board. I recorded the RSSI of the Wi-Fi signal in different locations using these carrier boards. I found that the cutout only improves RSSI by around 3 dB on average. Both custom boards performed significantly worse than the CMIO4.

PCB antenna on the Raspberry Pi CM4.

One of the improvements of the Raspberry Pi Compute Module 4 over the Compute Module 3 is the on-module Wi-Fi radio interface with a built-in antenna. The antenna on the module provides basic wireless connectivity without requiring any external components. However, the Compute Module is not a stand-alone device and the quality of Wi-Fi reception is affected by the design of the carrier board the module sits on. The ground plane on the carrier board and any other components placed near the antenna will change its properties and hence the losses in signal transmission and reception.

Figure 11 from patent CN110770975A.

Image by Proant Ltd

This is Figure 11 from patent CN110770975A. The antenna on the CM4 is similar to the ones shipped on other Raspberry Pi boards with on-board wireless interfaces, like the Zero W and the Raspberry Pi 3. It consists of a triangular slot etched into the copper layers of the PCB with a few tiny SMT capacitors soldered on top. The copper pattern forms a resonant cavity antenna with resonances at 2.4 and 5 GHz. The design has been licensed by the Raspberry Pi Foundation from the Swedish company Proant AB, which currently markets it under the trademark name Niche. The original patent is in Chinese; however, Google Patents has a good English translation that references the figures in the Chinese document. Some professional measurements of the VSWR and the radiation pattern are also available from Antenna Test Lab.

Figure 4 from the Compute Module 4 datasheet.

Image by Raspberry Pi Trading Ltd

The datasheet for the Compute Module 4 recommends that an external antenna be used where possible. Failing that, it suggests that the carrier board should ideally have at least an 8 x 15 mm cutout in the ground planes under the antenna and that the antenna should be oriented towards the edge of the device.

Ground plane cutout for the antenna on CMIO4 carrier board.

The picture above shows the official CMIO4 development board for the CM4. As you can see, it follows both of these recommendations. However, many carrier boards in the wild do not include the ground cutout and/or place the module somewhere in the middle of the carrier PCB. The datasheet suggests that designers check the performance of the antenna on their specific design, but doesn't go into further detail on how to do that.

I recently had the opportunity to test the antenna performance on two custom carrier boards where the only major difference between them was that Board B had the recommended 8 x 15 mm ground plane cutout and Board A did not. More precisely, Board A had ground and power planes and signal layers routed without any consideration for the antenna, while Board B maintained the 8 x 15 mm keep-out area recommended by the datasheet on all layers: no copper planes or traces were present inside this area. I also had a stock Raspberry Pi CMIO4 board at hand for comparison.

Carrier board A with a continuous ground plane.

Carrier board B with ground cutout under the antenna.

Both custom boards placed the Compute Module 4 in the middle of a larger PCB, with the PCB extending approximately 47 mm beyond the CM4 edge that carries the antenna pattern. They used a mating connector with 1.5 mm stacking height. I can't disclose the details of the design, so I'm only sharing a rough sketch of the important dimensions; the actual carrier board shape was more complicated than shown. The boards had the CM4 mounted on the top side, while all other SMT components were on the bottom side. Wi-Fi reception was tested without any enclosure and, apart from the carrier board PCB, there were no other parts near the antenna.

My test consisted of taking one board, placing it in a specific location relative to the wireless access point, turning it on and recording the RSSI on both the Compute Module 4 (station - STA) and the wireless access point (AP). I fixed the transmit power on both sides using the set txpower command. I then turned the board off and repeated the measurement at all locations, and then repeated the whole sequence for all boards. The locations were in different rooms of a brick building and had different orientations of the antenna relative to the access point. I took care that at each location the antenna was oriented consistently for all boards. The access point was a TP-Link Archer C5 and wasn't moved during the measurements. The measurements were done in the 2.4 GHz band.

NMEAS=100

# Take $NMEAS samples of "iw" output on a remote host, one second apart,
# and store them in a local file.
function measure {
	HOST=$1
	DEV=$2
	FN=$3

	ssh $HOST 'N=0
	while [ $N -lt '$NMEAS' ]; do
		iw dev '$DEV' info
		iw dev '$DEV' station dump
		sleep 1
		N=$(($N+1))
	done' > $FN
}

# Fix the transmit power on the device (1800 mBm = 18 dBm) and wait for
# the setting to take effect.
ssh $HOST_D "iw dev $DEV_D set txpower fixed 1800"

sleep 120

# Record simultaneously on the device and on the access point.
measure $HOST_D $DEV_D $FN_D &
measure $HOST_A $DEV_A $FN_A &

wait
The gist of the script I used for the measurement is above. It takes 100 samples of the RSSI using the station dump command at 1 second intervals from both the access point and the station. The 120 second sleep after setting the TX power is there because I found in previous experiments that on some wireless adapters it takes some time for the TX power command to take effect. I think this pause is not necessary for the CM4, but I left it in the script just to be sure. From the TX power reported by the info command it appeared that the TX power on both sides was indeed fixed at the set levels.
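The recorded files then contain one block of iw output per sample, so extracting the RSSI values is a matter of simple text processing. A minimal Python sketch, assuming the usual "signal:" lines printed by the station dump command (an illustration, not the exact script I used):

import re
import sys

def read_rssi(fn):
    """Extract RSSI samples in dBm from a file with captured iw output."""
    rssi = []
    with open(fn) as f:
        for line in f:
            # "iw dev ... station dump" prints lines like "signal: -54 dBm"
            m = re.search(r'signal:\s+(-\d+)', line)
            if m:
                rssi.append(int(m.group(1)))
    return rssi

samples = read_rssi(sys.argv[1])
print(sum(samples) / len(samples))  # mean RSSI in dBm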

RSSI measured at the wireless LAN access point.

RSSI measured at the Compute Module 4.

In the plots above, the solid horizontal lines show the mean RSSI for each location and carrier board. Apart from the mean value, the distribution of the 100 RSSI samples is also shown in the form of a violin plot. The top figure shows the RSSI reported by the access point and the bottom figure shows the RSSI reported by the Compute Module 4. Higher RSSI means better reception.

The results show that Board B with the cutout consistently outperformed Board A without the cutout. The difference isn't large, however: only about 3 dB on average. Similarly, the CMIO4 consistently outperformed Board B. The difference was larger here and varied quite a lot by location. At location 3 it was as large as 17 dB on both sides of the connection. On average the difference was around 8 dB.

TX bitrate reported by the wireless LAN access point.

Bitrates selected by both the access point and the Compute Module 4 depend on the quality of signal reception as well. On average, the transmit bitrates on the access point were around 16 Mbit/s higher with Board B than with Board A. Bitrates with the CMIO4 were around 18 Mbit/s higher than with Board B. Again, there was a lot of variation between locations. However, the bitrates might also have been affected by interference from other traffic in the 2.4 GHz band, not just antenna performance.


In conclusion, it seems that the addition of the cutout helps somewhat with the antenna performance. However, judging by the fact that the difference between the CMIO4 and Board B with the cutout was larger still, it seems that placing the antenna on the edge of the circuit board is more important than the cutout itself.

It's hard to say what these numbers mean in practical use, in terms of connection reliability. Let's assume that in free space Board A can operate reliably at some distance from the access point. Then Board B, with 3 dB better reception, would be able to operate with the same reliability at around 1.4 times that distance. Similarly, the 8 dB difference for the CMIO4 translates to around 2.5 times the maximum distance. In a building with walls the effect is hard to predict. From other experiments I've done I can say that the CMIO4 performance is comparable to a typical half-wave dipole antenna, at least using this method.
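For reference, this is the free-space conversion from a gain in dB to a distance ratio. It follows from the free-space path loss growing with 20 log10 of the distance; real buildings of course don't follow the free-space model:

def range_ratio(gain_db):
    """Distance ratio giving the same received power in free space."""
    return 10.0 ** (gain_db / 20.0)

print(range_ratio(3.0))  # -> approximately 1.4
print(range_ratio(8.0))  # -> approximately 2.5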

The only apparent problem I've seen during my testing was that the Compute Module 4 had problems associating with the Wi-Fi network on Board A at location 2. Two or three attempts were usually necessary to get network connectivity. Otherwise the wireless network seemed to work fine for a usable ssh connection.

The exact results would very likely differ for a different design of the carrier board or a different choice of locations, so I think these numbers only give a rough idea of what the effect of the cutout is in general. It would be interesting to get some reflection measurements as well using a VNA. Unfortunately the few CM4s I have available are quite precious since it's very hard to get new ones at the moment. Connecting the antenna to the VNA would require some, very likely destructive, modifications to the module to connect a coax cable to the antenna. Because of this I only did what measurements I could using the stock radio present on the module.

Posted by Tomaž | Categories: Analog | Comments »

Comparing RF Demo Kits

10.09.2021 15:48

Earlier this year I was writing about the RF Demo Kit. It's a small circuit board with a sample of different radio frequency circuits that's sold as a learning tool. Among other things it includes a set of on-board short, open and load calibration standards. I was using these standards on the RF Demo Kit for VNA calibration when measuring antennas with U.FL connectors since I lack a better U.FL calibration kit. Having a better model of calibration standards leads to more accurate measurements and in the previous blog post I wrote about measuring some of the model parameters.

At the time I managed to characterize the fringing capacitance C0 = 0.58 pF of the open standard on the RF Demo Kit. When I mentioned my measurements on the NanoVNA forum, it was pointed out to me that at least two versions of the RF Demo Kit exist and that the other version might have better characteristics due to a different PCB layout. Since I recently improved my home-made VNA, I decided to revisit this measurement and see whether the second RF Demo Kit indeed shows different characteristics.

SOLT standards on the NWDZ RF Demo Kit.

Here is a close-up of the SOLT standards on the NWDZ Rev-01-10 RF Demo Kit. This is the board I measured in my earlier blog post. The load standard is a 0603 size SMT chip resistor. It measures 50.0 Ω at DC. The short and open standards are terminated directly at the U.FL connector, which means they have a different delay compared to the load standard. The substrate looks like a typical 1.6 mm thick FR-4.

SOLT standards on the Deepelec RF Demo Kit.

This is the other board mentioned in the NanoVNA forum, the Deepelec RF Demo Kit. Here the load standard is a smaller, 0402 size SMT chip resistor. It also measures 50.0 Ω at DC.

All three standards are terminated after an approximately 2 mm long trace. Compared to the NWDZ kit, this makes for a more consistent calibration plane. The trace is covered in solder resist, which isn't ideal. The dimensions of the trace and the gap to ground seem to differ slightly between the standards. The substrate looks identical to the other board, most likely also 1.6 mm thick FR-4.

Compared to the NWDZ kit, the U.FL connectors here have thermal reliefs around their ground pads. Using thermal reliefs is usually discouraged in high frequency circuits.

My measurement setup for characterizing the RF Demo Kits.

I again used my home-made VNA for the measurements. I connected the RF Demo Kits to the VNA using a 20 cm SS405 coax and a reasonably expensive U.FL-to-SMA adapter. I measured over the frequency span from 500 to 3000 MHz. The VNA was calibrated at its port using an SMA calibration kit I wrote about previously. I applied port extension to null out the effects of the coax using the same method I described in the earlier blog post about RF Demo Kits.

In general, this setup yielded much more accurate results than before. The new version of my VNA has significantly better characteristics in terms of dynamic range and phase noise. The SS405 coax with the U.FL-to-SMA adapter also performed much better and gave more consistent results than the 20 cm RG-316 pigtail I used previously. Compared to my earlier measurements, the new ones look cleaner, with less random noise.

Smith chart for the NWDZ Demo Kit with port extension.

This is how the open, short and load standards measure on the NWDZ RF Demo Kit. Ideally, the standards should have reflection coefficients of 1, -1 and 0 respectively, and the plots should just be dots on the Smith chart. Due to various imperfections, however, the plots spread out from their ideal values as frequency increases.

Fitting a capacitance value to the measurement of the open standard gives a value of C0 = 0.57 pF. This is very close to the 0.58 pF figure I got in my previous measurement.
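The fit itself is simple. An ideal open with a fringing capacitance C0 at the end of a Z0 line has the reflection coefficient (1 - jωC0Z0)/(1 + jωC0Z0), so C0 can be found with a one-dimensional least-squares search. A scipy sketch, assuming s11 and f hold the port-extended measurement (not the exact code from my notebook):

import numpy as np
from scipy.optimize import minimize_scalar

Z0 = 50.0
w = 2.0 * np.pi * f

def model(c0):
    # Reflection coefficient of a capacitance c0 terminating a Z0 line.
    return (1.0 - 1j * w * c0 * Z0) / (1.0 + 1j * w * c0 * Z0)

def residual(c0):
    return np.sum(np.abs(s11 - model(c0)) ** 2)

res = minimize_scalar(residual, bounds=(0.0, 2e-12), method='bounded')
print("C0 = %.2f pF" % (res.x * 1e12))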

The load standard has more than 13.5 dB return loss up to 3 GHz. In my previous measurement I've seen return loss down to 12 dB.

Smith chart for the Deepelec Demo Kit with port extension.

The measurement of the Deepelec RF Demo Kit shows a similar picture, but differs in a few details. The calculated one-way time delay for the port extension was 30 ps longer compared to the NWDZ kit. 30 ps is equivalent to 9 mm in free space. This seems too large for the 2 mm line seen on the PCB, even when accounting for a reasonable velocity factor. There might be some other effects here or maybe it's just an error in my measurement.

Fitting a capacitor to the open standard yields C0 = 0.54 pF. The value is similar to the one I got for the NWDZ Demo Kit, however the fit is worse near 3 GHz. On the Smith chart, the trace for the NWDZ open increases in phase in a linear fashion, with very little deviation from the ideal capacitor characteristic. On the other hand, the trace for the Deepelec open folds back on itself and makes a loop near 3 GHz.

Interestingly, the match of the load standard is slightly worse than with the NWDZ board. The worst return loss is 11.9 dB. I would expect the physically smaller 0402 resistor to behave better at higher frequencies.

Smith chart for the SMA calibration kit with port extension.

For comparison, this is the same measurement done with the SMA calibration kit connected to the end of the same 20 cm SS405 cable (although with an SMA-to-SMA adapter instead of the U.FL-to-SMA one). You can see that all three standards are much better defined than on the two Demo Kits. The port extension comes out at exactly 1.00 ns, which is roughly consistent with the 20 cm length of the cable and the 0.7 velocity factor of SS405.

Setup for measuring the thru on Demo Kits.

I also measured these same three SMA standards by connecting them using a short U.FL pigtail and the thru on both Demo Kits. This way I wanted to get an estimate of the quality of the thrus on these boards since I can't measure them directly with my one-port VNA.

Measurements of the thru on the NWDZ Demo Kit.

Compare this measurement with the case above where the SMA calibration standards were connected directly to the 20 cm SS405 cable, without the NWDZ thru in between. The three standards are much less well defined. This is the combined effect of the two U.FL connectors on the Demo Kit, the PCB trace between them and the U.FL pigtail; I can't say which part had the strongest effect.

Measurements of the thru on the Deepelec Demo Kit.

This is the same measurement, but with the thru on the Deepelec Demo Kit. Interestingly, it shows a much worse picture. I was so surprised that I repeated the measurement to make sure I was getting a consistent result. The thrus on both Demo Kits are effectively the same, with the only obvious difference being the dimensions of the PCB trace and the gaps to ground.

It seems unlikely this is an effect of the U.FL connectors. They seem identical on both boards, and bad connectors would also ruin the response of the other standards. The prime suspect is the 4 mm long trace between the connectors. Deepelec's trace does look a bit too narrow for 50 Ω on a typical two layer board. My guess is it's about 50 mil wide, with a 20 mil gap to ground. This makes the gap too small for a microstrip mode. A co-planar waveguide model with some typical PCB parameters gives a characteristic impedance of about 125 Ω, not accounting for the effect of the solder mask filling the gap.

For 3 GHz, the 1/10 wavelength rule of thumb gives between 5 and 10 mm, depending on the velocity factor. The trace for the thru is about 4 mm long, so it's surprising to me that it would have this much of an effect, even if the trace has the wrong characteristic impedance. However, a quick simulation of the measurement did in fact show very similar results when I used a 4 mm, 125 Ω transmission line segment as the thru:

Simulated effect of a 4 mm, 125 ohm transmission line segment.
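This kind of simulation is simple to reproduce with the ABCD matrix of a transmission line segment. A numpy sketch under my assumptions (4 mm lossless line, 125 Ω characteristic impedance; the velocity factor of 0.5 is a guess for FR-4):

import numpy as np

Z0 = 50.0    # reference impedance
ZL = 125.0   # assumed characteristic impedance of the thru trace
d = 4e-3     # trace length [m]
vf = 0.5     # assumed velocity factor

f = np.linspace(500e6, 3000e6, 251)
beta = 2.0 * np.pi * f / (299.792458e6 * vf)

# ABCD matrix of a lossless transmission line segment.
A = np.cos(beta * d)
B = 1j * ZL * np.sin(beta * d)
C = 1j * np.sin(beta * d) / ZL
D = A

# Convert ABCD to S-parameters referenced to Z0.
den = A + B / Z0 + C * Z0 + D
s11 = (A + B / Z0 - C * Z0 - D) / den
s21 = 2.0 / den  # valid because A*D - B*C = 1 for a reciprocal line

# Reflection seen through the line with an ideal short (-1) at the far
# end; s22 = s11 by symmetry. This is what traces the loops on the
# Smith chart.
gamma_short = s11 + s21 * s21 * (-1.0) / (1.0 - s11 * (-1.0))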

In summary, the SOLT calibration standards on the NWDZ RF Demo Kit seem to be better than on the Deepelec one. The return loss of the load standard is slightly better and the open standard is more accurately described with the fringing capacitance model at high frequencies. The thru on the NWDZ is significantly better compared to Deepelec, which seems to have a significant impedance mismatch.

I need to stress that these Demo Kits sell for about 20 €, which is an order of magnitude less than similar semi-professional kits. A cal-kit with a brand name will have at least one more figure in its price on top of that. They can't really be compared. The RF Demo Kits I've tested are mostly meant to be used with the NanoVNA, which works best up to a few hundred MHz, and are perfectly fine as a learning tool.

Posted by Tomaž | Categories: Analog | Comments »

Inside a 3.5 mm plug from old Bose headphones

28.08.2021 10:16

Some time ago my dad told me about a problem he was having with a new ham radio transceiver. The transceiver has an RS-232 serial interface on a 3.5 mm stereo socket on the back panel. Connections that would normally be used for the left and right audio channels are used for data RX and TX instead. Since the transceiver didn't come with a suitable cable, my dad made his own: he found an old cable with a molded 3.5 mm plug on one end and soldered the other end to a DB-9 connector he could plug into his PC. However, no matter what he tried, he could not get the digital interface working.

Surprisingly, we traced the problem to his adapter cable. Even though the multimeter showed that the cable had continuity on all cores and that no lines were shorted, the digital signals that passed through it came out distorted on the oscilloscope. I was curious what was going on, so I took the cable home with me for some more investigation. I commonly re-use the 3.5 mm plugs from old headphones as well and have never come across a problem like this before.

This is the 3.5 mm plug molded onto the end of the cable in question. It has nice gold plating on the contacts and a no-expense-spared molding with a combination of hard plastic and a soft rubber-like overmold. It probably once served some higher-end Bose headphones.

The 3.5 mm stereo plug on the end of the suspect adapter cable.

The first thing I did was measure the frequency response of the cable in the audio frequency range. I connected the stimulus signal to the 3.5 mm connector and measured the gain and phase of the signal coming out of the other end of the cable. This is the resulting Bode plot:

Measured frequency response of the adapter cable.

Obviously there is something more going on in this cable than just normal copper connections. At these frequencies I would expect to see a practically flat, 0 dB gain and next to no phase shift across the cable. Instead, there seems to be a -10 dB band-stop filter with a center frequency of around 2 kHz somewhere inside.

I found it unlikely that the actual cable was doing the filtering, so I focused on the molded handle around the 3.5 mm connector. After some application of unreasonable force it still refused to fully come apart. It did reveal what looked like the tops of two MLCCs molded into the plastic. However, it seemed that if there was more electronics inside, removing the rest of the plastic by force would also destroy the circuit.

3.5 mm plug with the partially removed molding.

Since I've heard that acetone sometimes dissolves potting compounds, I put the connector into a glass of acetone-based nail polish remover. I then forgot about it, so it soaked in the acetone for about 2 months. Still, this didn't have as much effect on the plastic as I thought it would. It did make it brittle enough that I could chip the plastic away until I revealed a small double-sided printed circuit board with a few passive SMD components:

Circuit board inside the 3.5 mm plug handle, bottom side.

Circuit board inside the 3.5 mm plug handle, top side.

The circuit board is marked "YODA" Ver A. Apparently someone working at Bose was a Star Wars fan. If I read the date code correctly, this board was produced in 2005. The circuit is symmetrical and has two identical halves for the left and right channels. Each half consists of two multi-layer ceramic capacitors and two chip resistors. Tracing the circuit revealed this schematic:

Schematic of the filter circuit embedded in the 3.5 mm plug.

Note that the resistances in the circuit are low enough that a typical multimeter on continuity setting will see them as a short. This is what made debugging the initial problem so frustrating.

I couldn't get a reliable measurement of the capacitors, so 10 μF is a bit of a guess here. However, simulating the response of this circuit in SPICE shows that it behaves similarly to what I measured before I destroyed the connector:

Simulated frequency response of the filter circuit compared to the measurement.
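Getting a band-stop response out of nothing but two resistors and two capacitors is characteristic of a bridged-T network. Purely as an illustration of the general idea - I'm not claiming this is the exact topology or that these are the values on the board - here is how the response of such a network can be computed directly in Python instead of SPICE. The component values are hypothetical, picked to land near the measured notch:

import numpy as np

# Hypothetical bridged-T band-stop: two series resistors R bridged by a
# capacitor Cb, with a capacitor Cs from the middle node to ground.
R = 4.7     # ohm
Cb = 8e-6   # farad
Cs = 33e-6  # farad

f = np.logspace(1, 5, 400)
w = 2j * np.pi * f
a = w * R * Cb
b = w * R * Cs

# Transfer function for an ideal source and a high-impedance load, from
# nodal analysis of the bridged-T network.
H = (1 + 2*a + a*b) / (1 + 2*a + b + a*b)

# Notch frequency and depth for this topology.
f0 = 1.0 / (2.0 * np.pi * R * np.sqrt(Cb * Cs))  # approximately 2.1 kHz
depth = 2.0 * Cb / (2.0 * Cb + Cs)               # approximately -9.7 dB
print(f0, 20.0 * np.log10(depth))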

Why did Bose go to the trouble of embedding this circuit into the connector? I'm guessing they wanted to improve the frequency response of their headphones. Maybe the speaker has a mechanical resonance at 2 kHz and the filter circuit damps it electrically? I don't know much about high-end audio engineering. This article mentions passive correction filters that are placed in-line with a headphone cable to equalize the speaker response. While the circuit mentioned there is a bit more complicated than the one I found, the one Bode plot they show is very similar to what I measured.

One quick test that can detect this specific circuit without any elaborate setup is to set the multimeter to capacitance measurement and connect the probes between one of the channels and ground. On a normal audio cable the multimeter reads some low capacitance, but in the case of the embedded filter it shows around 20 μF due to the two capacitors in the circuit.

In conclusion, it seems that not all connectors are merely connectors. When all other options fail, it's worth doubting the most basic assumptions, like the assumption that the cable you're using actually behaves like a cable.

Posted by Tomaž | Categories: Analog | Comments »

HackRF clock converter, 3

26.06.2021 10:48

I modified my HackRF with a small board based around the LTC6957 clock buffer. This allows me to connect a wider range of clock sources to its CLKIN input for the 10 MHz reference clock. Among other things, I can now synchronize the HackRF to the ERASynth Micro I use in my vector network analyzer. In my last blog post I said I would share some more measurements of how the modified HackRF performs, so here are a few initial observations.

HackRF connected to the ERASynth Micro.

The measurements I talk about below were done with the HackRF antenna input connected to the RF output of the ERASynth Micro through a short piece of RG-316 coaxial cable and a 20 dB attenuator. The ERASynth Micro was set up to output a CW signal at various frequencies at a -20 dBm level. I also had the REF OUT of the ERASynth Micro connected to CLKIN on the HackRF. For measurements where I didn't want the ERASynth Micro and the HackRF running from the same clock source, I left the cable attached to CLKIN and disabled the CLKIN input using hackrf_clock --clkin 0.

The first thing I noticed when testing the clock converter modification was that at some frequencies the phase noise appears higher at around 100 kHz offset when the HackRF is running from an external clock. As I mentioned in my last post, this was already noticeable in the waterfall plot of the spectrum analyzer application. The difference is even more obvious in the following plot of the apparent phase noise of a signal at 2420 MHz.

Apparent phase noise in digital baseband at 2420 MHz.

The plot shows the spectral density calculated using Welch's method from a 10 s long recording of digital I/Q samples from the HackRF at 8 MHz sampling frequency. This plot does not show the phase noise of the actual signal on the wire. I have no instruments available to directly measure that (however, the spec for the ERASynth Micro phase noise is much lower than what I measured - I show the comparison in this post). The plot shows the apparent phase noise of the sine wave in the digital domain, including the contributions of both the HackRF and the ERASynth Micro.
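A minimal sketch of this calculation with scipy, where iq is assumed to hold the recorded complex baseband samples (not my exact processing code):

import numpy as np
from scipy.signal import welch, detrend

fs = 8e6  # sampling frequency [Hz]

# Extract the instantaneous phase of the tone and remove the linear
# trend caused by the residual frequency offset.
phase = detrend(np.unwrap(np.angle(iq)))

# One-sided spectral density of the phase fluctuations in rad^2/Hz.
fo, s_phi = welch(phase, fs=fs, nperseg=1 << 20)

# Phase noise L(f) in dBc/Hz is conventionally S_phi(f)/2.
L = 10.0 * np.log10(s_phi / 2.0)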

A signal at 1000 MHz doesn't show a significant increase when CLKIN is enabled, however the interesting part at around 100 kHz offset is obscured by some spurs:

Apparent phase noise in digital baseband at 1000 MHz.

My understanding is that at these offset frequencies the phase noise is largely defined by the various PLLs in the HackRF. The synchronization itself shouldn't matter. As I said last time, I suspect the difference is due to different PLL settings in the HackRF. When CLKIN is disabled, the HackRF derives all internal clocks from a 25 MHz quartz oscillator. When CLKIN is enabled, it uses the 10 MHz reference, hence requiring a different multiplier in the first stage PLL that converts the reference to an 800 MHz clock.


For my specific application in the vector network analyzer the far-off phase noise is less important than the stability of the signal over periods of time in the range of 1 to 10 ms. This is because I use a time multiplex to compare the phase of the reference and measured signals. The assumption in this type of measurement is that the reference signal has a stable phase over one period of the time multiplex.

On the phase noise plots above, stability over this range of time intervals would show up beyond the left edge of the graph. However, it's difficult to show this in the frequency domain since it requires Fourier transforms over a very large number of samples, and at least my naive approaches ran out of computer memory. Hence I explored this in the time domain instead.

Setup for measuring phase stability.

This is the block diagram of the setup. The 10 MHz TCXO in the ERASynth Micro is the single reference frequency source. Two PLLs in the ERASynth Micro convert this reference into the 2420 MHz RF signal on the coax. The HackRF then uses a complicated circuit involving multiple PLLs, frequency conversions and an analog-to-digital conversion to convert the RF signal to a 2 MHz digital intermediate frequency. I then use a digital LO on the computer to convert the signal to DC and measure its phase angle.
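The last, digital step looks roughly like the following sketch, assuming iq holds the samples from the HackRF at an 8 MHz sampling frequency (an illustration, not my exact code):

import numpy as np
from scipy.signal import decimate

fs = 8e6    # HackRF sampling frequency
f_if = 2e6  # digital intermediate frequency

t = np.arange(len(iq)) / fs

# Digital LO: shift the 2 MHz tone down to DC, then low-pass filter and
# decimate (twice by 10) to suppress everything else.
baseband = iq * np.exp(-2j * np.pi * f_if * t)
baseband = decimate(decimate(baseband, 10, ftype='fir'), 10, ftype='fir')

# Phase angle of the tone in degrees over time.
phi = np.degrees(np.unwrap(np.angle(baseband)))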

A typical plot of the detected phase angle in degrees over the course of 100 ms looks like this. The plot is similar for other RF frequencies:

Phase stability of the received CW signal.

I was somewhat surprised that I still get this kind of random walk in signal phase, even when everything is running from a single clock source. I've seen it sometimes drift up to ±30 degrees. My understanding was that at these time scales the PLLs should largely track their reference clock and not degrade the stability of the signal, so I'm not sure where this is coming from.

On the other hand, the whole system is very complicated and I find it hard to understand all the parts. The HackRF especially is internally much more complicated than I initially thought. It includes many nested layers of PLLs distributed through different chips and so far I have failed to get a good high-level picture of how the various parts could affect phase stability.

In conclusion, the clock converter board seems to work, but it has some side effects I didn't anticipate, like the unusual increase in phase noise at 100 kHz offset. The clock synchronization itself also didn't help as much as I thought it would in improving the accuracy of my vector measurements. However it did lead me to better explore the properties of the whole system and I found some other improvements I can make.

Posted by Tomaž | Categories: Analog | Comments »

HackRF clock converter, 2

18.06.2021 20:18

Last time I wrote about making a small modification to the HackRF to expand the range of signals that can be fed into its external 10 MHz reference input. My initial motivation was to sync the ERASynth Micro frequency synthesizer and the HackRF in my home-made vector network analyzer. However, I thought it might be more broadly useful, so I designed the PCB to fit nicely into off-the-shelf HackRF enclosures. I've now assembled a prototype, verified that it works and written the necessary HackRF firmware and host tools code to support the clock converter circuit.

Clock converter board mounted onto the HackRF.

I installed the clock converter into my HackRF as I described in my previous post. I cut the PCB trace on the HackRF that connects the center pin of the CLKIN SMA connector to pin 2 of the P22 header. I then soldered three thin wires between the SMA connector and the input on the clock converter board. The two outer wires are ground and the center wire carries the 10 MHz signal. They are quite short. I could have used a short piece of coax for this, but wires were simpler and I think the impedance mismatch over this length won't matter much at 10 MHz.

HackRF with the clock converter modification installed.

I've put a footprint for an extra edge-mount SMA connector on the clock converter board. This way it can be used without any destructive modifications to the HackRF. However, cutting the trace makes it possible to use the existing connector for connecting the HackRF to an external reference, same as before the modification. This way the modified HackRF fits into cheap off-the-shelf enclosures that provide some extra vertical space above the base PCB. Unfortunately, the original molded plastic enclosure is too low.

HackRF mounted in a metal enclosure.

The LTC6957 chip on the clock converter is turned on and configured through spare GPIOs on the HackRF's ARM CPU. It is disabled by default using some pull-ups. Hence the HackRF should work as before if the converter board is plugged in but the firmware doesn't know about it. To actually use it, a patched firmware must be uploaded to the HackRF's MCU.

The firmware modifications are largely just the boilerplate code that is needed to toggle GPIO pins based on requests over USB. Most of the new code is in the clock_conv.c file.

The original README has instructions on how to build and upload the firmware. I didn't have any problems with that on a stock Debian Buster system. Remember to reset the MCU after uploading new firmware using hackrf_spiflash -R.

The only thing that was slightly confusing was the firmware version string that is reported by hackrf_info. The version string is made automatically from the current git tag, or commit SHA1 if tag doesn't exist. However, it only seems to get refreshed when making a new build directory with cmake, not when merely running the build with make.

For the host tools side of things, I patched the new hackrf_clock tool. I added two new command-line arguments: --clkin can be used to enable or disable the LTC6957 and hence the CLKIN input. --clkin-filt can be used to adjust the LTC6957 input filter bandwidth.

You can verify that the HackRF's PLL has locked onto the external reference using hackrf_debug as described in the wiki. If I read the SI5351 documentation correctly, register 0 holds the loss-of-signal and loss-of-lock status bits, which clear when the chip is locked to a valid CLKIN signal:

# external reference disabled
$ hackrf_clock --clkin 0
$ hackrf_debug --si5351c -n 0 -r
[ 0] -> 0x51

# external reference enabled
$ hackrf_clock --clkin 1
$ hackrf_debug --si5351c -n 0 -r
[ 0] -> 0x01

I will post some more detailed measurements of the performance of the modified HackRF later. For now, the simplest way to see the effect of the external clock is to check the frequency offset between HackRF and another device. Here are two screenshots of HackRF Spectrum Analyzer. In both cases I had the antenna input of the HackRF connected to ERASynth Micro via a coaxial cable and some attenuators. ERASynth Micro was set to output a 2420.000 MHz signal. Also, the REF OUT of ERASynth Micro was connected to CLKIN on the HackRF:

Spectrum of a 2.420 GHz signal with CLKIN disabled.

This is with the CLKIN disabled (--clkin 0). The signal appears on the spectrum display with an approximately 22 kHz offset (around 9 ppm at this frequency), since the ERASynth Micro and the HackRF use their internal quartz references, which have slightly different frequency errors.

Spectrum of a 2.420 GHz signal with CLKIN enabled.

This is with the CLKIN enabled (--clkin 1). Now the signal appears exactly at 2420.000 MHz since both devices are synchronized to the common 10 MHz reference (in this case, the TCXO in the ERASynth Micro). Of course, that doesn't mean that the signal is really exactly at 2420.000 MHz, just that both devices now exactly agree on what 2420.000 MHz is.

One interesting thing to note is that the lower screenshot also shows a slightly increased level of phase noise around the signal peak. As far as I can see, this is not due to the clock converter board. Even when CLKIN is used on an unmodified HackRF, received signals seem to exhibit slightly increased phase noise compared to when the internal quartz oscillator is used. I also tried this with a different 10 MHz source, so it's not due to ERASynth Micro either.

I didn't investigate this further. It might be that all my 10 MHz sources are noisy. Another possible cause could be different settings in the HackRF's SI5351C. The SI5351C uses a PLL to convert either the 25 MHz from the internal quartz or the 10 MHz from CLKIN into an 800 MHz clock. This 800 MHz signal is then used to generate all other clock signals in the HackRF. It might be that the higher PLL multiplication factor (80 versus 32) contributes to this effect.

If you want to modify your HackRF like this, you can find the hardware design files in my hackrf-clock-conv GitHub repository. The modified firmware can be found in my fork of the HackRF repository. If you don't want to bother with making and soldering the PCB yourself, I'm also still collecting interest for a small production run of these boards. Send me an email if you are interested.

Posted by Tomaž | Categories: Analog | Comments »

HackRF clock converter

06.06.2021 10:24

HackRF can use an external 10 MHz reference clock instead of the built-in crystal oscillator. The CLKIN input accepts a DC-coupled, CMOS-level, 3.3V square wave signal, since it's connected directly to a digital input pin on the SI5351C PLL chip. I want to run my HackRF from the 10 MHz reference signal generated by my ERASynth Micro. Unfortunately, the TCXO output from the ERASynth Micro is an AC-coupled, sinewave-ish signal and hence not directly compatible with the HackRF's CLKIN. While I've seen reports that sine wave signals on CLKIN also tend to work, I wanted to make a proper interface that doesn't drive the SI5351 input outside of its rated signal levels.

HackRF CLKIN input line highlighted on the schematic.

In the future I might also want to synchronize the HackRF to other clock sources, and I think a DC-coupled, CMOS-level output is quite rare on instruments. Hence modifying the HackRF to accept a wider range of signals on the CLKIN connector seems useful to me.

I very much copied the idea for the circuit design from the Osmocom project's osmo-clock-conv. osmo-clock-conv is a stand-alone board that uses an Analog Devices LTC6957 clock buffer to convert a wide range of clock signals into a CMOS-level square wave. The LTC6957 is a specialized chip for this purpose that introduces very little additional phase noise and jitter into the signal during conversion. It should perform much better than, for example, a diode and a Schmitt trigger "self-biasing clock squarer" circuit with a similar function in the osmo-clock-gen.

I could have just used the osmo-clock-conv board directly, or in fact I could just order the LTC6957 evaluation board and connect it via a coax to CLKIN. However, I felt like making a more elegant solution that would be more tightly integrated with the HackRF. The HackRF offers quite a lot of possibilities through various extension headers on its circuit board. The header P22 is connected to CLKIN and can be used to add a custom circuit that supplies the reference clock signal. Adding a small TCXO board to P22 is quite popular and there are HackRF enclosures readily available that leave enough space for the TCXO mod. Hence adding a small clock converter circuit in place of the TCXO should be relatively straightforward, and I could get a nice off-the-shelf enclosure that would fit my modified HackRF.

3D render of the HackRF clock converter circuit board.

The circuit required to support the LTC6957 is quite minimal, so it wasn't hard to cram it all into a small two-layer board that sits in the corner between the P22 and P20 headers. Compared to the typical TCXO mod that only mounts onto P22, I decided to also use the P20 header. This both makes the board a bit more mechanically stable and gives me access to some unused GPIO lines on the HackRF's LPC4320 CPU.

I designed the input circuit to be 50 Ω terminated and hence work best with 50 Ω sources. The input is AC coupled and should work with AC or DC coupled sources. The converter should work with square wave signals with amplitudes between 0.8 V and 8 V peak-to-peak (when measured without a 50 Ω load) and sine wave signals with levels between -4 dBm and +16 dBm.
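As a back-of-the-envelope check of my own (the exact limits are in the LTC6957 datasheet), the sine wave and square wave specifications agree nicely. Converting dBm into a peak-to-peak sine voltage across 50 Ω:

import math

def dbm_to_vpp(p_dbm, z=50.0):
    """Peak-to-peak voltage of a sine wave with power p_dbm into z ohms."""
    p = 1e-3 * 10.0 ** (p_dbm / 10.0)  # power in watts
    vrms = math.sqrt(p * z)
    return 2.0 * math.sqrt(2.0) * vrms

print(dbm_to_vpp(-4.0))  # -> approximately 0.4 V
print(dbm_to_vpp(16.0))  # -> approximately 4.0 V

This is the same 0.4 V to 4 V range that the square wave limits give once the unloaded 0.8 V to 8 V levels are halved by the 50 Ω input termination.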

The LTC6957 has some digital inputs that affect its operation. These include setting the filter bandwidth (useful for adjusting for a sine wave or a square wave input) and turning the clock conversion on and off. osmo-clock-conv uses jumpers to configure those, but since I had GPIO lines available I simply used them instead, which makes the LTC6957 configurable from software. I also wanted to make sure I could power down the LTC6957 on request - an LTC6957 with a floating input will likely produce a random clock signal and I don't want the SI5351 to lock onto that if I leave CLKIN unconnected. With the LTC6957 output disabled, the SI5351 should automatically switch back to its own crystal oscillator.

The LTC6957 has two identical outputs. The second one isn't used on the board, but I wired it to an AUX header in case it later turns out to be useful.

Position of the clock converter board on the HackRF.

The only hairy part of this design is the fact that the HackRF offers no clean way for an extension board to sit between CLKIN and the SI5351 clock input. The P22 header only allows a board to be connected in parallel to the clock line (see the schematic at the top of the post). There is also no series element on the clock line that can be desoldered to isolate the CLKIN from the SI5351.

What I plan to do is cut the trace on the HackRF PCB going from the CLKIN connector to the SI5351, right before it connects to the P22 header. I then plan to use a short piece of coax, or simply a pair of thin wires, to connect the original CLKIN SMA connector to the input of my clock converter board. This way the external clock signal will enter through the original CLKIN connector and go through a wire jumper to the clock converter board. After conversion, the signal will go back onto the HackRF board through P22.

I also left a footprint for an edge-mount SMA connector on the clock converter board. This makes it possible to use the board without modifying the HackRF PCB, by having a separate SMA connector for the clock converter input. I probably won't be using that since the additional connector will not fit in existing HackRF enclosures.

I'm currently waiting for the PCBs, which should arrive any day now. I was lucky to get what appear to be the last two LTC6957-3 chips on the market, so I should be able to assemble the board and test the design shortly. I also still have to write the software. Unfortunately, the HackRF firmware doesn't provide a general way of controlling the spare GPIOs, so I will have to modify and recompile it. I did some quick tests and I don't think that will be much of a problem. The latest firmware release also introduces a new hackrf_clock utility and I'm hoping I can integrate with that.

I'll be publishing the designs and the firmware patch after I verify that it works as intended. If you're also interested in modifying your HackRF like this, please drop me a mail. I might do a small production run of the clock converter board after the current component shortage passes if I see enough interest.

Posted by Tomaž | Categories: Analog | Comments »

Trace phase noise in NanoVNA

20.05.2021 10:10

A quick follow-up to my previous blog post where I was exploring the phase noise in my home-made vector network analyzer. One of the things I did last time was to estimate how much the final vector measurements jump around on the phase axis. For my system I got a result of approximately 1.4 degrees RMS at 1 GHz, which is quite bad. Commercial vector network analyzers typically have this trace phase noise error between 0.01 and 0.1 degrees RMS.

Since I had a Jupyter notebook with all the calculations already prepared, I quickly ran a similar test on a NanoVNA-H for comparison. I disabled error network correction (i.e. no calibration; CORRECTION disabled in the CAL menu). I then took 200 measurements at 1 GHz with nothing connected to the CH0 connector. Here are the results in a polar plot with the calculated spread:

Trace phase noise for NanoVNA-H at 1 GHz.

I thought it would also be interesting to check how the phase noise varies with frequency:

Trace phase noise for NanoVNA-H versus frequency.

In the base frequency range up to 300 MHz, it seems the NanoVNA-H is pretty much on par with professional instruments as far as this metric is concerned. At higher frequencies, where it uses the harmonic mode, the trace phase noise gets worse, but it's still quite good. It stays below 0.15 degrees RMS up to 900 MHz and below 0.35 degrees RMS up to 1500 MHz.

Posted by Tomaž | Categories: Analog | Comments »

Vector measurements with the HackRF, 2

13.05.2021 20:10

Over the past year I've been slowly building up a small, one-port vector network analyzer. The last improvement I made to it was replacing the rtl-sdr receiver with the HackRF One. This increased the frequency range, but the dynamic range of the measurement was still quite low at higher frequencies. I suspected that a significant source of noise in the system was phase noise. In this post I describe some measurements I performed to get a better idea of what is going on in regard to phase. I also wanted to have a baseline to compare against when I change things in the future. This way I will be able to see whether I improved the instrument or made it worse.

My small, home-made vector network analyzer, upgraded with a HackRF.

The first thing I measured was the apparent phase noise of the stimulus signal in the digital baseband. I manually set up my instrument so that the signal coming from the ERASynth Micro synthesizer was routed directly to the HackRF receiver. I then recorded an IQ signal trace from the HackRF and calculated the apparent phase noise of the sine wave. The ERASynth Micro output frequency was set to 1 GHz.

Apparent phase noise of the stimulus signal in the digital baseband.

This is the resulting plot of the phase noise versus frequency offset. It is based on the FFT of the recorded digital baseband signal with 128k points and the Hann window. I verified that the measured noise level is above the spectral leakage due to FFT windowing (for the Hann window the leakage falls off by 60 dB per decade). For reference I also plotted the phase noise specification of the ERASynth Micro from its datasheet. That would be the ideal result if the HackRF were perfect and didn't contribute any additional noise. In reality, HackRF's internal oscillator is probably much noisier than the ERASynth Micro.

Currently the ERASynth Micro and HackRF are both running free from their own internal oscillators. They are not synchronized to a common reference, hence this graph is the combination of all sorts of effects in both devices: phase noise in both oscillators, jitter from various phase locked loops and probably other effects as well. The noise shown on the plot is not present in any real analog signal anywhere. It shows up on the digital data that comes out of the HackRF's ADC. Since that is the input to all further processing it's the thing I'm most interested in.

Trace phase noise when measuring the open standard.

Another thing I was interested in was the final noise level in the vector measurement. This is the trace phase noise that's usually specified for commercial vector network analyzers in degrees root-mean-square. It's the effective error on the phase coordinate that shows up in the final measurement result, after all the processing has been done. To estimate this for my system I did a zero-span vector measurement of the open calibration standard. I recorded 200 points at 1 GHz. The plot above shows the result on a linear-scale polar plot. The estimated error of the measurement was 1.39 degrees RMS.
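The estimate itself is straightforward, assuming s11 holds the 200 repeated complex measurements (a sketch, not the exact notebook code):

import numpy as np

# Rotate all points so that their mean lies on the positive real axis,
# then take the RMS of the remaining phase angles.
center = np.mean(s11)
phase = np.angle(s11 * np.conj(center))
rms = np.degrees(np.sqrt(np.mean(phase ** 2)))
print("trace phase noise: %.2f deg RMS" % rms)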

It's apparent from the plot that my measurements are smeared more along the phase axis than the amplitude axis. This is where my initial assumption came from: that phase noise is currently a bigger problem in my system than inaccuracies in measuring the amplitude of the signal. Just for comparison I looked up some datasheets of commercial network analyzers. It seems a typical value for this metric would be in the range of 0.1 to 0.01 degrees RMS. Not that I ever expect to reach that level of accuracy with my home-brew instrument, but it's interesting to see how it compares.

The next step for this project is definitely to try running the HackRF from the 10 MHz TCXO in the ERASynth Micro and see how much this improves the metrics I described above. After some research it seems that I need to be careful with how I approach this: the HackRF needs a 3.3 V CMOS digital signal as a reference, while the Ref out on the ERASynth Micro is a sine wave. I need to design a board that will convert the waveform, however a sloppy conversion can introduce additional jitter. I've been looking at some previous work published by the amazing Osmocom project and I will likely take their osmo-clock-gen and/or osmo-clock-conv designs as a starting point.

Posted by Tomaž | Categories: Analog | Comments »

Exploring an old Belkin UPS

27.04.2021 19:53

Sometime around June 2005 I bought a small Belkin UPS to protect my home server from blackouts. For more than 15 years it worked flawlessly. It survived several generations of server hardware and required only one battery change during that time. I was using it until a few months ago, when it developed an unusual problem: it still powered the load from the battery without issues when the mains power went out, but it shut down the moment the mains voltage returned, even if the battery charge was not yet depleted. I was curious to see what went wrong with it, so I recently had a closer look at its circuit. I couldn't find any obvious problems and it's possible it only needs a new battery. Still, it was an interesting thing to pick apart.

Belkin 650 VA Regulator Pro Silver Series UPS

This is the Belkin 650 VA UPS, model number F6C650uSER-SB. I believe this model was sold under the brand name The Regulator Pro: Silver Series. It uses a 5250 mAh sealed lead-acid battery. The battery provided at most 30 minutes of run time. I suspect the run time is internally limited regardless of battery capacity. Even with a minimal load I never saw it running on battery for longer.

The UPS can be controlled through an RS-232 serial connection. I used it on Debian through the belkin driver in the Network UPS Tools. The only issue I had with the driver, as far as I can remember, was that it was impossible to turn off the beeper, which is annoyingly loud on this model.

Parts of the plastic enclosure with marked latches.

Accessing the electronics without damage takes some effort. The plastic enclosure consists of two black halves (1, 2) and a silver front panel (3). Obviously, remove the battery and all external connections before opening. There is a single screw holding the two black parts together. This is accessible through a hole in one of the sides. After removing the screw the next step is to remove the silver front panel. The panel is strongly held in place with four latches that are marked on the photo above. I used a spudger to go around the edges and loosen it a bit. Still, removing it was mostly a matter of applying brute force. After detaching the front panel the two black halves split easily.

Disassembled Belkin UPS showing the circuit board and internal wiring.

The circuit board and everything else inside is held in place with plastic latches on one of the halves of the enclosure. I had no problems removing the circuit once the enclosure was open. There are two large, single-side printed circuit boards. The horizontal board on the picture above holds the power conversion electronics. The vertical board contains the control circuit and the optically-isolated serial interface. There are also some parts, like ferrite beads, fuses and so on that are just hanging off the wires in the bottom right corner.

The main controller IC in the Belkin UPS.

The control board is soldered to the power board and isn't easily removable. The main controller appears to be this large IC in a 42-pin DIP package. The chip is marked ST72C334 and seems to be an ST7-series 8-bit microcontroller from STMicroelectronics. The C in the part number indicates that the software is stored in flash memory, not factory-coded ROM. A sticker on it reads 5015320501 (probably some internal part number) and the date code 0113 (I'm guessing week 13 of 2001; the chip must have been several years old already when I bought the UPS in 2005).

A glob of solder hanging off one of the tabs that hold the control board.

I noticed a glob of solder just barely holding onto one of the tabs that hold the control board in place. It looked like it was moments away from dropping off and causing a short-circuit disaster on the circuit below. The solder joint also had a crack in it, however that tab appears to be only for mechanical support and doesn't have any electrical function. The break couldn't have been a source of the problems I had with the UPS.

Top side of the power circuit board with labeled parts.

Unsurprisingly for its relatively low cost and small size, this is an offline UPS. When mains power is present, the load is powered directly from the input via a relay (1). A battery charger keeps the 12 V battery topped up from the mains AC voltage (2). When mains power is lost, a high-voltage DC-DC boost converter converts the 12 V battery to a high DC voltage (3). The H-bridge 50 Hz inverter then chops that high DC voltage to AC (4) to power the load. There is also a common-mode filter on the power board (5) and a current transformer (6) that is used by the controller to measure the current drawn by the load.

I've traced out the main parts of the circuit. The sections of the schematic are labeled in the same way as the parts of the circuit board in the previous photograph. Obviously there is a lot missing, but the topology of the voltage converters is clearly visible.

A rough schematic with the main components of the Belkin UPS.

There is no isolation in this circuit. Everything is referenced to mains voltage, even the battery and the control circuit. Only the serial interface is separated from the rest of the circuit via optocouplers. This means that it's a very bad idea to connect anything to the battery terminals that's not a battery.

A weird detail in this circuit is the relay with the question mark. It connects the DC-DC boost converter to the battery charger. I'm not sure what its purpose is. It might be there to keep the 400 V capacitor charged while the mains power is present so that the inverter can start faster. However in that case a diode should work just as well. It might also serve as a part of some kind of a self-test function.

I haven't found out exactly how the control circuit is powered. There is no obvious DC-DC converter dedicated to it; very likely there is a linear regulator hidden somewhere that is powered from the 12 V battery voltage. I'm certain that the relay coils are powered from the battery. This means that if the battery is depleted, or degraded to the point where it can't activate the main relay, the UPS won't start. The UPS by itself cannot recover from a completely discharged battery, since the relays in their idle position disconnect the charger, and everything else, from the mains voltage.


As I mentioned above, this UPS developed a problem where it shuts off when the mains power comes back after an outage. The switch-over from mains to battery power works fine; it's the transition from battery power back to mains that's broken. After an outage you need to long-press the button on the front panel to restore power to the load. If the load is still running on battery power when the mains comes back, it loses power without warning (i.e. without a clean shutdown).

I did a few basic checks. Visually everything looked good. Surprisingly, all the big electrolytic capacitors also seem just fine: the pair of 2200 μF 16 V capacitors in parallel with the battery measured in-circuit as 4700 μF with 40 mΩ ESR, and the 22 μF 400 V capacitor for the inverter input voltage measured 20 μF and 1.7 Ω ESR.

The battery I bought in 2014 has a rated capacity of 5250 mAh. After 7 years of use it still retained 2300 mAh when I measured it outside of the UPS. The internal resistance measured around 1 Ω, which does seem a bit high. Obviously this battery is ripe for replacement. It might be that my problems are simply due to the high internal resistance of the battery: when the mains power comes back, the battery must actuate the main relay as well as continue to power the inverter. Perhaps this current spike causes enough of a voltage drop to reset the control circuit.

Unfortunately I don't have an isolation transformer, so I can't do much debugging of the circuit while it's live. Connecting an oscilloscope to it is out of the question, so I can't check for voltage drops on the battery side. It certainly seems possible, though, that simply buying a new battery would fix it. When the previous battery went bad the UPS was completely dead; only reading on the web that this is a typical symptom of a bad battery in these models convinced me to just replace the battery instead of buying a new UPS. I don't need another UPS at the moment, so I don't think I'll go that route. From last time I also remember that it wasn't trivial to find a battery supplier. For now this UPS will just end up in my spare computer parts pile.


I remember this was a pretty expensive gadget when I bought it, even though it was the cheapest entry-level model I could find. If I had known that it would serve me for 15 years I wouldn't have hesitated to buy it. Looking inside it now, it also looks well designed, with plenty of safety features. I guess its longevity isn't that surprising though. Even at the start this UPS was never loaded anywhere near its maximum rating, and over the years my server only grew less power hungry with each update. The last computer the UPS was connected to didn't even register on its power meter; it always showed that the output was unloaded.

Posted by Tomaž | Categories: Analog | Comments »

Optimizing an amplitude-shift keying detector

15.04.2021 20:03

Nothing is more permanent than a temporary solution to an engineering problem. Some time ago I was reverse engineering a proprietary wired network protocol and quickly threw together a simple audio-frequency amplitude-shift keying detector just so that I could record some traffic. I needed many packet captures before I could begin to understand what was being sent over the line, and just taking screenshots on an oscilloscope was too inconvenient. A few months later, almost the exact schematic I made with a few back-of-the-napkin calculations ended up in an actual product. It seemed to work fine in practice, so there didn't appear to be any need for additional design work. A year later, however, some problems became apparent with my original design. With a better idea about what kind of sensitivity and selectivity was required, I got to revisit my old circuit.

The simple amplitude shift keying detector.

The first detector I made for my experiments used just a passive RC band-pass filter to isolate the carrier. It has a single transistor acting both as a demodulator and as an amplifier to produce a 5 V digital signal. I made it during lockdown last year on a piece of breadboard from spare components I had lying around. The design that ended up in manufacturing switched to smaller, surface-mount components but was basically unmodified in function.

To better understand the performance of this circuit I made a simulation model of the detector and the signal being detected. I used Spice OPUS for this task. Although I've also used ngspice a few times in the past, I keep returning to Spice OPUS. I've used it since my undergrad days and at this point I know most of its quirks. I like to use tools that I understand very well and know when I can and when I can't trust the results they give me.

Since a detector is a non-linear circuit, I had to use the transient analysis. The basic Spice analysis types don't allow you to automatically run the transient analysis over a range of input signals, so I had to write a short program in Nutmeg, the Spice scripting language that is included in Spice OPUS. I varied the carrier amplitude and frequency on logarithmic scales and chose 60 points on each axis. This resulted in 3600 separate simulations being run. I chose this number simply because it still returned a result reasonably fast on my computer.

I could use Spice itself to visualize the results, but its plotting capabilities are limited and I'd much rather work in Python than in Nutmeg. Hence I only wrote out the raw Spice vectors into a text file and then used Python and matplotlib to visualize the results.
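The plotting step looked roughly like this. It's only a sketch: the actual column layout of my exported file differs.

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical export: one line per simulation run with the carrier
    # frequency, the carrier amplitude and the demodulated output level.
    f, a, vout = np.loadtxt('sweep.txt', unpack=True)

    # The 60 x 60 logarithmic sweep, reshaped back into a regular grid.
    F, A, V = (x.reshape(60, 60) for x in (f, a, vout))

    plt.pcolormesh(F, A, V, shading='auto')
    plt.xscale('log')
    plt.yscale('log')
    plt.xlabel('Carrier frequency [Hz]')
    plt.ylabel('Carrier amplitude [V]')
    plt.colorbar(label='Output level [V]')
    plt.show()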

Visualization of the usable region for the old detector design.

On this plot the axes are input carrier frequency and amplitude on a logarithmic scale. The color shows demodulated output signal level. The output is in inverted logic, hence if a carrier is detected the output is low. The hatched area shows where the output is not defined according to the 5V CMOS logic levels. Those input ranges are forbidden since in those cases the output of the detector could be interpreted either as low or high by the digital logic after it.

The circuit reliably detects the carrier for a range of frequencies, but the amplitude must be relatively high. At the frequency with best sensitivity, it would only detect signals with amplitudes of around 1 V or more. The two red dots on the plot are my new design requirements. The system uses a frequency multiplex to carry various channels on one line. I wanted to make a new circuit that would still detect the wanted carrier at 150 mV amplitude at 50 kHz. At the same time it should not be sensitive to an unwanted signal at 1 kHz and up to 2 V amplitude. The existing circuit obviously falls short of the former requirement.

Meeting these requirements wasn't easy. Simply increasing the sensitivity of the detector wouldn't work: it would not be selective enough and would react to carriers outside of the desired frequency band. I needed to add better filtering as well as increase the gain. Since the original circuit was so compact, there was very little spare PCB space left for improvements. I considered a few opamp-based approaches as well as pushing the demodulation into the digital domain, but in the end those all turned out to be infeasible.

I ended up adding another transistor, with which I implemented both a Sallen-Key active second-order filter and a small gain stage. A transistor pair in a SOT-23 doesn't take any more space than the single transistor in the original design. I also optimized the circuit so that it uses many identical-valued resistors. That made it possible to replace four individual 0603 chip resistors with a single 0805 resistor array, saving more board space.
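For a feel of the numbers involved, the corner frequency and Q of a textbook unity-gain Sallen-Key low-pass follow directly from the component values. The values below are made up for illustration; they are not the ones in my circuit, which also uses a transistor instead of an opamp:

    import math

    # Hypothetical component values chosen to land near the 50 kHz carrier.
    r1 = r2 = 10e3      # ohms
    c1 = c2 = 330e-12   # farads

    f0 = 1 / (2 * math.pi * math.sqrt(r1 * r2 * c1 * c2))
    q = math.sqrt(r1 * r2 * c1 * c2) / (c2 * (r1 + r2))

    print('f0 = %.1f kHz, Q = %.2f' % (f0 / 1e3, q))  # approx. 48 kHz, Q = 0.50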

This is how the simulation of the new design looks:

Visualization of the usable region for the new detector design.

The plot shows that the new circuit meets both the requirements I was aiming for. It produces a well-defined digital output for weaker signals thanks to the additional gain. The new filter also results in more attenuation outside of the desired frequency band. This means that the stronger, unwanted signal at 1 kHz still does not produce any output.

I was worried that the sensitivity of the detector would depend too much on variations in transistor parameters, so I also did some additional simulations to check that. I varied the transistor model in Spice to include the edge cases for DC gain and base-emitter voltage specified in the datasheet. The results show that even with worst-case transistors the new design should still meet the requirements:

Effect of transistor variations on the usable region.

In conclusion, this was an interesting challenge, but also frustrating at times. I kept thinking that I should come up with a more modern design. This kind of discrete-transistor circuit seemed old-fashioned. I felt there should be an off-the-shelf IC that would do what I needed, but I failed to find one. I looked at the LM567 tone detector and a few similar solutions, but I simply couldn't fit any of them on the available PCB space.

I hadn't done such an involved discrete design in a while, and I first had to dig into some reference books and refresh my knowledge of transistor properties. After a few tries and iterations I ended up with a solution that I think is quite elegant and solves the problem I had in front of me. An opamp design would likely have better properties, but in the end a circuit that you can use is better than one that you can't.

Posted by Tomaž | Categories: Analog | Comments »

Vector measurements with the HackRF

27.03.2021 20:09

Over the last year I slowly built up a small, one-port vector network analyzer. The instrument consists of an rtl-sdr USB receiver dongle, the ERASynth Micro frequency synthesizer, an RF bridge, a custom time multiplex board I designed and, of course, a whole lot of software that glues everything together and does the final signal processing in the digital domain. In the past months I've written pretty extensively about its development here on this blog, and I've also used the system in practice, mainly to measure the VSWR and matching networks of various multiband LTE antennas.

The original instrument could perform S11 measurements up to around 2 GHz. However, the 2 GHz limit was only due to the E4000 tuner in the rtl-sdr. Since the ERASynth Micro can generate signals up to 6.4 GHz and I designed the multiplexer with signals up to 8 GHz in mind, getting measurements at higher frequencies was only a matter of upgrading the receiver. I was tempted to do that since the popular 2.4 GHz ISM band was just out of reach of my measurements. I did a lot of searching for a suitable SDR that would cover that frequency range. At the top of my list was a USRP B200, but amid the lockdowns and the general chaos in international shipping last year I couldn't find a supplier. In the end I settled for a HackRF One.

My small, home-made vector network analyzer, upgraded with a HackRF.

On one hand, moving from the rtl-sdr to the HackRF was pretty simple. After I got hackrf_tcp up and running, most of my old code just worked. Getting things to work reasonably well took longer though. I lost a lot of time figuring out why the gain in the system varied a lot from one measurement to the next. I would optimize signal levels for the best dynamic range, only to try it again the next day and find that they had changed considerably. In the end, I found out that the USB hub that I was using could not supply the additional current required by the HackRF. Interestingly, nothing obvious broke with the USB bus voltage wildly out of spec; only the analog performance became erratic.

I wish the HackRF came with some hardware settings (jumpers or something) that would allow me to semi-permanently disable some of its more dangerous features. As it is right now, I need to use a DC block to protect my multiplex board from a DC bias in case the antenna port power gets enabled by a software bug. I also had to put in a 20 dB attenuator to protect the HackRF in case its pre-amplifier gets turned on, since the pre-amplifier would be damaged by the signal levels I'm commonly using.

Error network terms when using Henrik's bridge with HackRF.

Here are the error network terms with the upgraded system. The faded lines are the measurements by Henrik Forstén, from whom I copied the RF bridge design. Up to 2 GHz the error terms match those I measured with the rtl-sdr pretty well. As I noted in my previous blog post, the error terms are a measure of the whole measurement system, not only the bridge. Hence my results differ significantly from Henrik's, since apart from the bridge his system is quite different.

Unfortunately, my new system still isn't very well behaved beyond 2 GHz. I suspect this has to do with the construction that has a lot of connectors everywhere and RG-316 cables of questionable quality. Every contact introduces some impedance mismatch and in the end it's hard to say what is causing what. I also know that I made an error in calculating the dimensions of the coplanar waveguides on my multiplex board. I'm sure that is not helping things.

Estimated dynamic range of the system.

This is an estimated dynamic range of the vector measurement, using the method described in this post. Let's say it's better than 50 dB below 1.5 GHz and better than 30 dB below 3.5 GHz. It quickly gets worse after that. At around 5.5 GHz I suspect there's some kind of a resonance in the 10 dB attenuator I have on the multiplex board that's made out of 0603 resistors. Beyond that point the signal that passes through the attenuator is stronger than the un-attenuated signal and I can't measure anything anymore.

I really would like to improve the dynamic range in the future. Basically all the practical measurements I did with this system were with the device-under-test behind a U.FL connector. The problem with that is that the connector introduces a significant mismatch before the device-under-test. You can calibrate this out, but it comes at the cost of effective dynamic range. It may be true that 30 dB of dynamic range is enough for practical measurements; however, as soon as you start moving the measurement plane away from the port of the instrument, you really want to start with as much dynamic range as possible.

I believe most of the total noise in the system now actually comes from the phase noise. Currently the signal source and the receiver use independent clocks and each of those clocks has its own drift and various artifacts introduced by individual phase locked loops. I suspect things would improve significantly if I could run both ERASynth Micro and HackRF from a common clock source, ideally from the ERASynth Micro's low phase noise TCXO. Unfortunately they use incompatible interfaces for reference clock in/out. I need to make an adapter circuit before I can try this out.

In the end, this is all kind of an endless rabbit hole. Every measurement I take seems like it could be done better, and there appears to be a lot of room for improvement. I've already accumulated a wish list of changes for the multiplex circuit. At some point I will likely make new versions of the bridge and multiplexer PCBs using the experience from the past year of experimenting.

Posted by Tomaž | Categories: Analog | Comments »

Characterizing the RF Demo Kit

19.03.2021 18:47

The RF Demo Kit is a small printed circuit board with several simple RF circuits on it. There are several variants out there from various sellers. It's cheap and commonly sold together with NanoVNA as a learning tool since it is nicely labeled. Among random circuits like filters and attenuators, it also contains a set of short, open, load and thru (SOLT) standards that can be used for VNA calibration. These are all accessible over U.FL surface mount coaxial connectors.

The "RF Demo Kit" circuit board, NWDZ Rev-01-10.

I've been using the SOLT standards on the Demo Kit for calibrating my home-made vector network analyzer. I've been measuring some circuits with a U.FL connection and I lack a better U.FL calibration kit at the moment.

Of course, at this price it's unrealistic to expect the Demo Kit to come with any detailed characterization of the standards. Initially I just assumed in my calculations that the standards were ideal. However, I suspected this wasn't very accurate. The most telling sign was that I would often get an S11 measurement of a passive circuit with an absolute value larger than 1. That would mean the circuit reflected more power than it received, which isn't possible. Hence the calibration must have given my instrument a wrong idea of what a perfect signal reflection looks like.

The "RF Demo Kit" connected to my home-made vector network analyzer.

I suspected that my measurements would be more accurate if I could estimate some stray components of the standards on the Demo Kit and take them into account in my calibration. I did the estimation using a method that is roughly described in the Calibrating Standards for In-Fixture Device Characterization white paper from Agilent.

I did my measurements in the frequency range from 600 MHz to 3 GHz. First I used my SMA cal-kit to calibrate the VNA with the measurement plane on its SMA port. I then measured the S11 of the Demo Kit short standard, using a SMA-to-U.FL coax cable:

S11 for the short standard before port extension.

I used this measurement to calculate the port extension parameters that move the measurement plane from the VNA port to the position of the short on the Demo Kit PCB. I verified that the calculated port extension made sense: the calculated time delay was approximately 0.95 ns. With a velocity factor of 0.7 for the RG-316 cable, this gives a length of 19.9 cm, which matches the length of the 20 cm cable I used almost exactly.
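That sanity check fits in a throwaway snippet:

    # Electrical length corresponding to the calculated port extension delay.
    c = 299792458.0   # speed of light, m/s
    delay = 0.95e-9   # one-way delay from the port extension, s
    vf = 0.7          # velocity factor of RG-316

    print('Length: %.1f cm' % (delay * vf * c * 100))  # prints approx. 19.9 cm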

S11 for the short standard after port extension.

Using the port extension "unwinds" the Smith chart for the short standard. Ideally the whole graph should be in one spot at Z = 0 (red dot). In reality, noise of the VNA and various other imperfections make it a fuzzy blob around that point.

I then applied the same port extension to the measurement of the open standard. Ideally, this graph would be a single point at Z = infinity (again marked with a red dot). This graph is just as noisy as the previous one. However, one important difference is that it clearly shows a systematic, frequency-dependent deviation, not just random noise around the ideal point. Some calculation shows that this deviation is equivalent to a stray capacitance of about 0.58 pF. The simulated response of an open with 0.58 pF of stray capacitance is shown in orange:

S11 for the open standard after port extension.
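The capacitance estimate comes from the phase of the reflection coefficient: for an open with a stray capacitance C, the reflection coefficient is (1 - jωCZ0)/(1 + jωCZ0), so its phase is -2·arctan(ωCZ0) and C can be solved for at each frequency. A minimal sketch, with a synthetic S11 standing in for the port-extended measurement:

    import numpy as np

    z0 = 50.0

    # Placeholder for the port-extended S11 measurement of the open standard.
    f = np.linspace(600e6, 3e9, 201)
    s11 = np.exp(-2j * np.arctan(2 * np.pi * f * 0.58e-12 * z0))  # synthetic

    # Invert the phase relation of a capacitively loaded open.
    c_stray = np.tan(-np.angle(s11) / 2) / (2 * np.pi * f * z0)

    print('Stray capacitance: %.2f pF' % (np.mean(c_stray) * 1e12))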

The Agilent white paper goes further and also estimates the stray inductance of the load standard. This is how my measurement of the Demo Kit load standard looks like with the same port extension applied as before. Again, the ideal value is marked with the red dot:

S11 for the load standard after port extension.

It looks pretty messy, with the reflection reaching about -12 dB at frequencies up to 3 GHz. Note that the U.FL connectors themselves are only specified to about -18 dB of reflection, so a lot of this is probably due to the connector contacts. The white paper describes using time gating to isolate the effect of the load from the effect of the contacts, but the mathematics of doing that based on my measurements escape me for the moment. I also suspect that my setup lacks the necessary bandwidth.

I stopped here. I suspect estimating the stray load inductance wouldn't make much difference. Keep in mind that the graph is drawn with the port extension in place, which removes the effect of the cable. Hence the circling of the graph must be due to phase delays in the load resistor and the U.FL connector. The phase delay cannot be due to the short length of microstrip between the U.FL connector and the 0603 resistor on the RF Demo Kit PCB; that shouldn't account for more than about 15° of phase shift at these frequencies.

At the very least I'm somewhat satisfied that Figure 7(b) in the white paper shows a similarly messy measurement for their load standard. I'm sure they were using a much higher quality standard, but they were also measuring up to 20 GHz:

S11 measurement of the load standard from the Agilent white paper (Fig 7b)

Image by Agilent Technologies, Inc.

In the end, here is the practical effect of taking into account the estimated stray capacitance of the open standard when measuring S11 of a device:

S11 measurements calibrated using ideal open or open with stray capacitance.

When the VNA was calibrated with the assumption of an ideal open standard, the log magnitude of the measured S11 went above 0 dB at around 1400 MHz. This messed up all sorts of things when I wanted to use the measurement in some circuit modeling. However, when I took the stray capacitance of the open standard into account, the measured S11 correctly stayed below 0 dB over the entire frequency range. Since I don't have a known-good measurement, I can't be sure in general whether one or the other is more accurate. However, the fact that the limits now seem correct suggests that taking the estimated stray capacitance into account is an improvement.
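In scikit-rf, which my VNA software is based on, taking the stray capacitance into account only means swapping the ideal open for a capacitive one among the calibration ideals. A minimal sketch, assuming the measured standards and the device are stored in Touchstone files with placeholder names:

    import numpy as np
    import skrf as rf

    freq = rf.Frequency(600, 3000, 201, 'mhz')
    z0 = 50.0

    def one_port(s):
        return rf.Network(frequency=freq, s=s.reshape(-1, 1, 1), z0=z0)

    # Ideal short and load; the open is modeled with 0.58 pF of stray capacitance.
    zc = 1 / (1j * 2 * np.pi * freq.f * 0.58e-12)
    ideals = [
        one_port(-np.ones(freq.npoints, dtype=complex)),  # short
        one_port((zc - z0) / (zc + z0)),                  # capacitive open
        one_port(np.zeros(freq.npoints, dtype=complex)),  # load
    ]
    measured = [rf.Network(n) for n in ('short.s1p', 'open.s1p', 'load.s1p')]

    cal = rf.calibration.OnePort(measured=measured, ideals=ideals)
    cal.run()
    dut_corrected = cal.apply_cal(rf.Network('dut.s1p'))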

Posted by Tomaž | Categories: Analog | Comments »

On fake 50 ohm coaxial cables

03.01.2021 11:50

I guess by now everyone is aware that cheap no-name items often aren't what they claim to be. I've recently found out that BNC coaxial cables sold as having a 50 Ω characteristic impedance are often in fact 75 Ω cables. Since this is not trivial for the customer to measure, and since a 75 Ω cable will sort of work even in a 50 Ω system, I guess it's easy for sellers to get away with this kind of scam.

This article goes into the details of how to measure characteristic impedance of a transmission line. I used the Smith chart method. I connected each cable to the (50 Ω) NanoVNA CH0 on one end and a Zref = 50 Ω calibration load on the other end. I then measured S11 over the frequency range from 10 to 200 MHz. NanoVNA was calibrated using the calibration kit that came with it.

I found the point where the S11 plot on the Smith chart crosses the real axis. This is where the cable acts as a quarter-wave transformer. The point on the chart marks the impedance Zt; it is marked in orange on the plots below. The characteristic impedance Z0 of the cable can then be calculated using the following formula:

Z_0 = \sqrt{Z_t Z_{ref}}
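In code the whole method takes just a few lines. In this sketch a simulated sweep of a 75 Ω line terminated with the 50 Ω load stands in for the real NanoVNA data:

    import numpy as np

    z_ref = 50.0  # reference impedance of the VNA and of the far-end load

    # Synthetic stand-in for the measurement: a 1 m lossless 75 ohm line.
    f = np.linspace(10e6, 200e6, 401)
    beta = 2 * np.pi * f / (0.66 * 299792458.0)
    gamma_in = (z_ref - 75.0) / (z_ref + 75.0) * np.exp(-2j * beta * 1.0)
    z_in = 75.0 * (1 + gamma_in) / (1 - gamma_in)
    s11 = (z_in - z_ref) / (z_in + z_ref)

    # First crossing of the Smith chart's real axis: the quarter-wave point.
    i = np.nonzero(np.diff(np.sign(s11.imag)))[0][0]
    z_t = z_in[i].real

    print('Z0 = %.1f ohm' % np.sqrt(z_t * z_ref))  # close to 75 for this example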

The Seafront 100 cm BNC cable sold as a "50 Ω Oscilloscope Test Probe Cable Lead" measured 71.1 Ω using this method. The second cable I got in the pair had a bad ground connection and couldn't be measured at all. The reviews on the Amazon product page mention bad mechanical construction of the connectors. The cable has a readable SYV 75-3-2 marking, which already raises the suspicion that this is indeed a 75 Ω cable.

Label on the Seafront 100 cm cable.

Characteristic impedance measurement of Seafront 100 cm cable.

The Goobay 50 cm BNC cable sold as 50 Ω RG-58 measured 79.6 Ω. This cable in fact has RG58 50 OHM COAXIAL CABLE printed on it. It was either marked incorrectly at the factory or relabeled later. There is a review on the Amazon product page from R. Reuter that also reports that the cable they received was 75 Ω, based on a NanoVNA measurement.

Label on the Goobay 50 cm cable.

Characteristic impedance measurement of Goobay 50 cm cable.

The 60 cm LMR-200 SMA cable that came with the NanoVNA measured 50.4 Ω. I'm showing it here for comparison with the two results above. The cable construction is not that great: one of the two such cables I got in the kit wasn't crimped properly and the connector fell off after a few uses. However, the cable actually has the correct characteristic impedance within a reasonable tolerance.

Characteristic impedance measurement of LMR200 60 cm cable.

I'm curious why this is happening. I'm guessing it's because 75 Ω cable is cheaper for some reason. Maybe there is bigger demand for 75 Ω cables for video applications, and 50 Ω cable used in RF systems is more of a niche product? The cables I measured also didn't have good matching even for use in 75 Ω systems; maybe they're in fact quality-control rejects from a failed factory run? Anyway, it's good to be aware of this practice and to double-check when using a cable in an application that is sensitive to this sort of thing.

Posted by Tomaž | Categories: Analog | Comments »