Voltage divider cheat sheet

16.09.2019 19:46

The two most common electrical measurements I do these days are measuring the output impedance of a source and the input impedance of a load. I don't have any special equipment for that. I'm not doing it often enough to warrant buying or making a specialized tool. Since it's all either DC or around audio frequencies and I'm not interested in very precise values, it's a really simple measurement. I usually just use an oscilloscope and whatever else is available around the shop as reference loads and sources.

Deriving unknown impedances or resistances from the voltage measurements and reference values is straightforward: it's just a matter of inverting the voltage divider formula, it only takes a minute, and I must have done it a hundred times at this point. I still don't know the resulting equations by heart though. So to avoid doing it for the hundred and first time, and to avoid the odd mistake, I've made myself a nice cheat sheet to stick above my desk. It contains the formulas for the three most common measurements I make.
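The inversion itself can be sketched in a few lines. This is not the actual cheat sheet, just the standard derivation, and the helper names are mine:

```python
# Inverting the voltage divider formula. With an unloaded source voltage
# U0 and a known reference resistance R_ref in series with the unknown
# load Z, the voltage U across the load is:
#
#   U = U0 * Z / (R_ref + Z)   =>   Z = R_ref * U / (U0 - U)

def load_impedance(r_ref, u_open, u_loaded):
    """Unknown load Z from a known series reference R_ref."""
    return r_ref * u_loaded / (u_open - u_loaded)

def source_impedance(r_load, u_open, u_loaded):
    """Unknown source output impedance from a known reference load.

    u_open is the unloaded output voltage, u_loaded the voltage
    across R_load: u_loaded = u_open * R_load / (R_src + R_load).
    """
    return r_load * (u_open - u_loaded) / u_loaded

# Example: 1 kOhm reference, 2 V open-circuit, 1 V across the load
# means the load equals the reference, 1 kOhm.
print(load_impedance(1000.0, 2.0, 1.0))   # 1000.0
```

The same ratio works for complex impedances at audio frequencies as long as you measure amplitudes and can ignore phase.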

Voltage divider measurements cheat sheet

PDF version

The notes about the best accuracy refer to the selection of the reference impedance such that the result is least affected by errors in voltage measurements (which, when measuring amplitudes with an oscilloscope, is the largest source of error in my case). The selection is quite obvious, but I guess I added it there for the sake of rigor.

The cheat sheet is made in LaTeX using the circuitikz package for schematics. I'm mostly putting it here for my own future reference, but maybe someone else will find it useful.

Posted by Tomaž | Categories: Analog | Comments »

z80dasm 1.1.6

09.09.2019 20:32

z80dasm is a command-line disassembler for the Z80 CPU. I initially released it way back in 2007 when I was first exploring Galaksija's ROM and other disassemblers I could find didn't do a good enough job for me. Since then it accumulated a few more improvements and bug fixes as I received feedback and patches from various contributors.

Version 1.1.6 is a release that has been way overdue. Most importantly, it fixes a segmentation fault bug that several people have reported. The patch for the bug has actually been committed to the git repository since June last year, but somehow I forgot to bump the version and roll up a new release.

The problem appeared when feeding a symbol file that was generated by z80dasm back as input to z80dasm (possibly after editing some symbol names). This is something that the man page explicitly mentions is supported. However, when this was done together with block definitions, it caused z80dasm to segfault with a NULL pointer dereference. Some code didn't expect that the symbol automatically generated to mark a block start could already be defined via the input symbol file. Thanks to J. B. Langston for first sending me the report and analysis of the crash.

I took this opportunity to review the rest of the symbol handling code and do some further clean ups. It has also led me to implement a feature that I have been asked for in the past. z80dasm now has the ability to sort the symbol table before writing it out to the symbol file.

More specifically, there is now a --sym-order command-line option that takes either a default or a frequency argument. Default leaves the ordering as it was in the previous versions - ordered by symbol value. Frequency sorts the symbol table by how frequently a symbol is used in the disassembly. The most commonly used symbols are written at the start of the symbol file. When first approaching an unknown binary, this might help you identify the most commonly used subroutines.

Anyway, the new release is available from the usual place. See the included README file for build and installation instructions. z80dasm is also included in Debian, however the new release is not in yet (if you're a Debian developer and would like to sponsor an upload, please get in touch).

Posted by Tomaž | Categories: Code | Comments »

What is a good USB cable?

31.08.2019 20:36

In conclusion of my recent series on the simple matter of USB cable resistance, I would like to attempt to answer the question of what makes a good USB cable. Now that I have a reliable measurement of a cable's resistance, the next obvious question is of course whether that resistance complies with the USB standard and whether such a cable is suitable for powering single-board computers like the Raspberry Pi. Claims that most cables don't comply with the standard are quite common whenever this topic is discussed. I'm by no means an expert on USB, but luckily the USB Implementers Forum publishes the standards documents in their on-line library. I went in and studied some of the documents on the Cable and Connector Specification topic which, among other things, specify cable resistance.

I started my reading with USB 2.0, because the micro and mini USB cables I tested in my previous post are unlikely to be older than USB 2.0. The standard is now nearly 20 years old and over the years it seems to have received many revisions and updates. Hence it's hard to pick a random cable from the pile and say with certainty which document it should comply with. In addition, I find that the text of the standard itself often isn't particularly clear. For example, the following text implicitly defines the maximum allowable cable resistance in the Universal Serial Bus Cables and Connectors Class Document, revision 2.0 from August 2007:

Cable Assembly Voltage Drop requirement for USB 2.0

Image by USB Implementers Forum

Initially I thought this means voltage drop over the pair of wires. As in, total voltage drop over VBUS and GND wires should be less than 125 mV at 500 mA (effectively 250 mΩ round-trip resistance). However the fact that most cables seem to be around 500 mΩ suggests that manufacturers read this as 250 mΩ per wire (500 mΩ round-trip).

A later document amends this definition somewhat and makes it clearer that the voltage drops are for each wire separately and that this voltage drop includes contact resistance. The following is from Universal Serial Bus 3.0 Connectors and Cable Assemblies Compliance Document, revision 1.0 draft from October 2010. Also note that both the measurement current and the allowable voltage drop were increased. The measurement must now be done at 900 mA, however maximum effective single-wire resistance is still 250 mΩ, same as in USB 2.0:

Cable Assembly Voltage Drop requirement for USB 3.0

Image by USB Implementers Forum

An even later document addresses cable compliance with older revisions of the standard. USB 3.1 Legacy Cable and Connector Revision 1.0 from 2017 contains this calculation:

IR drop at device calculation from a USB 3.1 document.

Image by USB 3.0 Promoter Group

This equation clearly shows that the 250 mΩ figure from the other documents is supposed to be combined from two 30 mΩ contact resistances and a 190 mΩ wire resistance. It also multiplies the voltage drop by two due to the round trip through both VBUS and GND wires.

The USB type C specification tries to make this even clearer and comes with schematics that explicitly show where the voltage drop must be measured. Since with type C you can have different types of cables that are rated for different currents, that standard only specifies the maximum voltage drop. Note also that in type C the requirements for the VBUS line were relaxed compared to previous standards. Previously, for a cable delivering 1 A of current, the VBUS line must have had a maximum resistance of 250 mΩ, while in type C up to 500 mΩ is allowed.
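The budget from the quoted documents can be added up in a couple of lines. The component values are the ones from the USB 3.1 legacy document; the arithmetic is just my own sanity check:

```python
# Per-wire resistance budget from the USB 3.1 legacy document:
# two 30 mOhm contact resistances plus 190 mOhm of wire resistance.
r_contact = 0.030
r_wire = 0.190
r_per_wire = 2 * r_contact + r_wire    # 250 mOhm, as in USB 2.0/3.0

# Round-trip voltage drop through both VBUS and GND at the USB 2.0
# (500 mA) and USB 3.0 (900 mA) measurement currents:
for i in (0.5, 0.9):
    drop = 2 * r_per_wire * i
    print(f"{i} A: {drop * 1000:.0f} mV round trip")
```

At 500 mA this gives 125 mV per wire, which matches the USB 2.0 figure when read as a per-wire limit.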

Figure showing cable IR drop from USB Type-C specification.

Image by USB 3.0 Promoter Group

7 out of 12 micro USB and 5 out of 6 mini USB cables I found at my home have less than 500 mΩ round-trip resistance. So according to my understanding of the standard for pre-type C cables, roughly 70% of my cables comply with it. Here are my resistance measurements plotted versus cable length. I've also included measurements published by Balaur on EEVblog and martinm on their blog. Points in the shaded area represent cables that comply with the standard.

Plot of cable resistance measurements versus length.

So strictly according to the USB standards, the situation out there isn't perfect, but it doesn't look like the majority of cables are completely out of spec either. This seems a bit at odds with the general opinion that finding a good cable for running a Raspberry Pi is hard. However, things start getting a bit clearer when you look at what exactly Raspberry Pi boards demand from these cables.

In the following table I've extracted maximum required power for all Raspberry Pi model Bs from the Wikipedia article. These boards will display the infamous under-voltage warning when their power supply voltage falls under approximately 4.63V. Assuming a perfect 5 V power supply, this is enough data to calculate the maximum allowable cable resistance for powering these boards:

R_{max} = \frac{U_{supply} - U_{min}}{I_{max}} = \frac{5.00\mathrm{V} - 4.63\mathrm{V}}{I_{max}}
Model Max. supply current [mA] Max. cable resistance [mΩ]
RPi 1 Model B 700 529
RPi 1 Model B+ 350 1057
RPi 2 Model B 820 451
RPi 3 Model B 1340 276
RPi 3 Model B+ 1130 327
RPi 4 Model B 1250 296
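The table above can be reproduced with a short script. The current figures are the Wikipedia numbers quoted above and the 4.63 V under-voltage threshold is the one mentioned earlier:

```python
# Maximum cable resistance R_max = (U_supply - U_min) / I_max, with the
# under-voltage warning threshold U_min = 4.63 V.
boards = {
    'RPi 1 Model B':  0.700,
    'RPi 1 Model B+': 0.350,
    'RPi 2 Model B':  0.820,
    'RPi 3 Model B':  1.340,
    'RPi 3 Model B+': 1.130,
    'RPi 4 Model B':  1.250,
}

u_min = 4.63

# An ideal 5.00 V supply versus the 4.75 V worst case the USB standard
# allows at the source.
for u_supply in (5.00, 4.75):
    print(f"U_supply = {u_supply} V")
    for model, i_max in boards.items():
        r_max = (u_supply - u_min) / i_max
        print(f"  {model}: {r_max * 1000:.0f} mOhm")
```

Running the same formula with the 4.75 V worst-case supply shows why such supplies are hopeless here: the allowed cable resistance shrinks to around 100 mΩ for the faster boards.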

Raspberry Pi model Bs after version 2 appear to require cables with resistance well below the 500 mΩ that the standard allows for micro USB cables. Only 3 cables from my collection would be able to power a Raspberry Pi 3 Model B. Raspberry Pi 4 gets a pass because the type C standard is flexible enough and doesn't directly specify cable resistance (although its type C implementation has other power-related issues). Yet, since type C cables have a 750 mV maximum voltage drop at their rated current, this estimate says it requires a cable rated for 3 A or more (I'm not sure if the Raspberry Pi 4 uses the same APX803 voltage monitor as earlier versions).

Also note that this calculation is for a perfect 5V power supply, which is optimistic. Power supplies don't have perfect regulation and the calculations in the USB standard assume worst case 4.75 V at the source. Such a worst case power supply, even if it provides sufficient current, would require practically zero ohm cables to power a Raspberry Pi without under-voltage warnings and associated CPU throttling.

To sum up, yes, there are USB cables out there that are out of spec. However, based on this limited sample, most micro and mini USB cables do seem to actually comply with the standard. Also worth noting is that shorter ones tend to have a better chance of being compliant. On the other hand, at least part of the blame for the grief surrounding USB cables appears to fall onto the Raspberry Pi itself, since they designed their boards with a requirement for better-than-the-standard cables and power supplies.

Posted by Tomaž | Categories: Analog | Comments »

Resistance measurements of various USB cables

23.08.2019 10:23

After I made my USB cable resistance tester I naturally wanted to measure some cables. I searched my apartment and ended up with a big jumble of 18 micro and mini USB cables of various lengths and origins. I didn't imagine I would find that many, but I guess today just about everything comes with one and I apparently never throw anything away. In fact some cables were in very bad shape and already had insulation flaking off from old age.

USB kabelsalat.

I measured the resistance of each cable at 1 A using the voltage ratio method I described in my previous post. The following table lists the results. For a lot of cables I don't know their origin and they must have come bundled with various devices. I've listed the brand if it was printed on the cable or if I knew for certain which device the cable came with. I'm afraid this comparison isn't very useful as a guide to which cable brand to buy, but it does give an interesting statistic of what kind of cables can be found out there in the wild.

N Brand Color Type Length [cm] R [mΩ]
1 Wacom Black A / micro B 28 199
2 CellularLine Gray A / micro B 207 212
3 White A / micro B 105 224
4 White A / micro B 51 294
5 Wacom Black A / micro B 98 334
6 Samsung Black A / micro B 82 408
7 Nokia Black / gray A / micro B 115 490
8 CubeSensors White A / micro B 103 522
9 Black A / micro B 103 569
10 HTC Black A / micro B 128 597
11 Google Black A / micro B 153 613
12 Amazon White A / micro B 182 739
13 Silver A / mini B 30 177
14 Black A / mini B 146 323
15 Black A / mini B 125 396
16 Silver A / mini B 56 412
17 Canon White A / mini B 125 435
18 Silver A / mini B 180 804

Unsurprisingly, two short 30 cm cables came out as best in terms of resistance, measuring below 200 mΩ. A bit more unexpected was finding out that the 2 m CellularLine isn't far behind. This is a fancy and laughably overpriced cable I bought in a physical store not so long ago, the only one on this list that I'm sure didn't come bundled with any device. It appears in this case the price was at least somewhat justified.

I was also a bit surprised that some cables that came bundled with devices measured pretty high. The white Amazon was for charging a Kindle 3 and it had the highest resistance among the micro B cables I tested. On the other hand, it was also in pretty bad shape, so it might be that it was damaged somehow. Cables bundled with an HTC phone and Google Chromecast also measured more than 500 mΩ.

Other measurements I could find on the web seem to roughly agree with mine. martinm lists measured values between 289 and 1429 mΩ. Balaur on EEVblog forum measured between 276 and 947 mΩ on his cables. The only report that was really off was this forum post by d_t_a where most of the cables listed are lower than 200 mΩ.

Another thing I was interested in was how repeatable these measurements were. I mentioned several times in my previous posts that contact resistance can play a big role. Since each time you plug in a cable the contacts sit differently and have a slightly different resistance, contact resistance behaves like a random variable in the measurement results. When I was doing the measurements above this was quite obvious. Minimal movements of the cable caused the voltage displayed on the voltmeter to dance around.

Histogram of 10 measurements of cable 16.

I repeated the measurement of cable 16 from the table above 10 times. Before each repetition I unplugged and re-plugged both ends of the cable. Above you can see the histogram of those measurements. The results only vary by approximately ±1%, which is much less than I thought they would. This is about the same as the expected error of the measurement itself due to the accuracy of the reference resistor. Of course, this was all done over a short period of time. I'm guessing the resistance would change more over longer periods of time and more plug-unplug cycles as the contacts deteriorate or gather dirt.

I also checked how the measurement is affected if I plug something between the tester and the cable. Gino mentioned in a comment they used an adapter and an extension cable in their measurement. So I repeated the measurement of cable 1 from the table with a short type A-to-type A extension in series. Just for fun, I also tested how much resistance a cheap USB multimeter adds:

Assembly R [mΩ]
Cable 1 202
Cable 1 + 45 cm extension 522
Cable 1 + Muker V21 multimeter 442

As you can see from the results above, both of these added quite a lot. With the excellent 200 mΩ cable, both more than doubled the total resistance. Even with an average 500 mΩ cable, this multimeter would add around 240 mΩ or approximately 50% on top. Battery-powered devices like smartphones adjust their charging current according to the voltage drop they see on their end. Hence they might charge significantly slower when the multimeter is in series with the cable compared to just using a cable. This puts some doubt on the usability of these USB multimeters for evaluating USB cables and power supplies.
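The added series resistances follow directly from the table, as a quick check:

```python
# Resistances from the table above, in ohms.
r_cable = 0.202          # cable 1 alone
r_with_ext = 0.522       # cable 1 plus the 45 cm extension
r_with_meter = 0.442     # cable 1 plus the Muker V21 multimeter

r_ext = r_with_ext - r_cable      # resistance added by the extension
r_meter = r_with_meter - r_cable  # resistance added by the multimeter

# Relative to an average 500 mOhm cable:
print(f"extension adds {r_ext * 1000:.0f} mOhm ({r_ext / 0.5:.0%} extra)")
print(f"multimeter adds {r_meter * 1000:.0f} mOhm ({r_meter / 0.5:.0%} extra)")
```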

Posted by Tomaž | Categories: Analog | Comments »

USB cable resistance tester

18.08.2019 17:04

Back in June I did a short survey of tools for measuring the resistance of power supply lines in USB cables. I was motivated by the common complaint about bad cables, often in the context of powering single board computers like the Raspberry Pi. I wasn't particularly happy with what I found: the tool I wanted to buy was out of stock, and I've seen various issues with the others. Having already dug too deep into this topic, I then set out to make my own tool for this purpose.

So roughly two months later I have a working prototype in my hands. It works as designed and I spent a few hours measuring all the USB cables I could get my hands on. I'm reasonably happy with it and can share the design if anyone else wants to make it.

USB cable resistance tester.

As I mentioned in my previous post, I really liked the approach of FemtoCow's USB cable resistance tester and I basically copied their idea. Since USB type C is gaining in popularity, I've added connectors so that A-to-C and C-to-C cables can be tested in addition to A-to-mini B and A-to-micro B. I've taken care that even with the added connectors, the voltmeter still has Kelvin connections in all combinations. I've also added proper 4 mm test sockets for all connections.

Simplified schematic of the USB cable tester.

The principle of operation is very simple. Electrically, the resistance tester consists of two parts. On one end of the cable is a reasonably accurate 1 Ω resistor in series with the cable's VBUS and GND lines. The other end only shorts the VBUS and GND lines together. The power supply is used to set a current through the cable. The measured resistance of the cable, which consists of the sum of the four contact resistances and resistances of the two copper cores, can then be calculated as:

R_{measured} = R_{VBUS} + R_{GND} = R_{ref}\frac{U_{measure}}{U_{calibrate}}

Or, if set current is 1 A, the voltmeter reading in volts directly corresponds to the measured resistance in ohms:

R_{measured} [\Omega] = U_{measure} [\mathrm{V}]

The nice thing about this approach is that the cable can be tested at an arbitrary current. If the first equation is used, the accuracy of the method does not depend on the accuracy of the current setting. It even doesn't depend much on the calibration accuracy of the voltmeter: since a ratio of similar voltages is used, any linear error cancels out. The only requirement is that the voltmeter is reasonably linear over the 0.1 V to 1 V range. Since Kelvin connections are used, the resistance of the PCB traces has negligible effect on measurements as well.
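The ratio equation above is trivial to put into code, and doing so also demonstrates the linear-error cancellation:

```python
def cable_resistance(r_ref, u_measure, u_calibrate):
    """Cable resistance from the two voltmeter readings.

    u_calibrate is the drop across the reference resistor (which also
    sets the current), u_measure the drop across the cable under test.
    """
    return r_ref * u_measure / u_calibrate

# With R_ref = 1 Ohm and the current trimmed to exactly 1 A
# (u_calibrate = 1 V), the reading in volts is the resistance in ohms:
print(cable_resistance(1.0, 0.202, 1.0))   # 0.202

# A linear gain error in the voltmeter cancels out: scaling both
# readings by the same factor leaves the result unchanged.
print(cable_resistance(1.0, 0.202 * 1.05, 1.0 * 1.05))
```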

The only critical component is the reference resistor. 1% resistors are widely available, so getting to that kind of measurement accuracy should be trivial. With some more effort and a bit higher price, 0.1% current sense resistors aren't that exotic either. For my tool I went with a cheap 1% thick-film resistor since I considered that good enough.

USB cable tester connected to a multimeter.

After measuring a pile of cables, some shortcomings did become apparent that I didn't think of earlier: I really should have added a switch for the voltmeter instead of having four test sockets. Constantly re-plugging the test leads gets tiring really fast. It also affects the measurement accuracy somewhat, since it's hard to re-plug the cables without moving the tool slightly. Since moving the connectors slightly changes their contact resistances, it's hard to measure both voltages in exactly the same setup. The errors because of that seem minimal though.

Another thing I noticed is that with my analog power supply, setting the current to exactly 1 A wasn't possible. Since I have only one knob that goes from 0 to 25 V in one rotation, setting low voltages requires very small movements of the knob and isn't very accurate. Hence I mostly used the ratio equation for my measurements. My power supply also tended to drift a bit which was a bit annoying. The power supply at work with a digital interface worked much better in that respect.

Finally, I'm not sure how harmful this kind of test is for type C cables that contain active parts, like the electronically marked power delivery cables. I didn't test any so far. All schematics I could find show that the power delivery ID chip is powered from the VCONN line, which is left unconnected in this tool, so that should be fine. On the other hand, the active cables that do signal conditioning do seem to be powered from VBUS. It's possible, although I think unlikely, those could respond weirdly or even be damaged by the low voltage applied during this test.

If you want to make a tool like this, you can find all required Gerber files and the bill of materials in the GitHub repository. While it might be possible to etch and drill the board yourself, I highly recommend using one of the cheap PCB prototyping services instead. The USB C connectors require very small holes and SMD pads that I think would be pretty challenging to get right in a home workshop. There are some more notes in the README file regarding that. On the other hand, the Würth connectors listed in the BOM are solderable with only a soldering iron, so manual assembly is reasonably straightforward with no hot air station required. However again the type C ones can be pretty tricky due to the fine pitch.

Posted by Tomaž | Categories: Analog | Comments »

Quick and ugly WFM data export for Rigol DS2072A

15.08.2019 14:48

At work I use a Rigol DS2072A oscilloscope. It's quite a featureful little two-channel digital scope that mostly does the job I need it for. It can be buggy at times though, and with experience I've learned to avoid some of its features. Like, for example, the screenshot tool that sometimes, but not always, captures a perfectly plausible PNG that actually contains something different from what was displayed on the physical screen at the time. I'm not joking - I think there's some kind of a double-buffering issue there.

Recently I was using it to capture some waveforms that I wanted to further process on my computer. On most modern digital scopes that's a simple matter of exporting a trace to a CSV file on a USB stick. DS2072A indeed has this feature, however I soon found out that it is unbearably slow. Exporting 1.4 Msamples took nearly 6 minutes. I'm guessing exporting a full 14 Msample capture would take an hour - I've never had the patience to actually wait for one to finish and the progress indicator indeed remained pegged at 0% until I reset the scope in frustration. I planned to do many captures, so that approach was clearly unusable.

Rigol DS2072A oscilloscope.

Luckily, there's also an option for a binary export that creates WFM files. Exporting to those is much faster than to the text-based CSV format, but on the other hand it creates binary blobs that apparently only the scope itself can read. I found the open source pyRigolWFM tool for reading WFM files, but unfortunately it only seems to support the DS1000 series and doesn't work with files produced by DS2072A. There's also Rigol's WFM converter, but again it only works with DS4000 and DS6000 series, so I had no luck with that either.

I noticed that the sizes of WFM files in bytes were similar to the number of samples they were supposed to contain, so I guessed extracting raw data from them wouldn't be that complicated - they can't be compressed and there are only that many ways you can shuffle bytes around. The only weird thing was that the files containing the same number of samples were all of a slightly different size. A comment on the pyRigolWFM issue tracker mentioned that the WFM files are more or less a memory dump of the scope's memory which gave me hope that their format isn't very complicated.

After some messing around in a Jupyter Notebook I came up with the following code that extracted the data I needed from WFM files into a Numpy array:

import numpy as np
import struct

def load_wfm(path):
    with open(path, 'rb') as f:
        header = f.read(0x200) 
        
    magic = header[0x000:0x002]
    assert magic == b'\xa5\xa5'
        
    offset_1 = struct.unpack('<i', header[0x044:0x048])[0]
    offset_2 = struct.unpack('<i', header[0x048:0x04c])[0]
    n_samples = struct.unpack('<i', header[0x05c:0x060])[0]
    sample_rate = struct.unpack('<i', header[0x17c:0x180])[0]
    
    assert n_samples % 2 == 0
    
    pagesize = n_samples//2
        
    data = np.fromfile(path, dtype=np.uint8)
    
    t = np.arange(n_samples)/sample_rate
    x0 = np.empty(n_samples)
    
    # Samples are interleaved on two (?) pages
    x0[0::2] = data[offset_1:offset_1+pagesize]
    x0[1::2] = data[offset_2:offset_2+pagesize]
    
    # These will depend on attenuator settings. I'm not sure
    # how to read them from the file, but it's easy to guess 
    # them manually when comparing file contents to what is
    # shown on the screen.
    n = -0.4
    k = 0.2
    
    x = x0*k + n
    
    return t, x
    
t, x = load_wfm("Newfile1.wfm")

Basically, the file consists of a header and sample buffer. The header contains metadata about the capture, like the sample rate and number of captured samples. It also contains pointers into the buffer. Each sample in a trace is represented by one byte. I'm guessing it is a raw, unsigned 8-bit value from the ADC. That value needs to be scaled according to the attenuator and probe settings to get the measured voltage level. I didn't manage to figure out how the attenuator settings were stored in the header. I calculated the scaling constants manually, by comparing the raw values with what was displayed on the scope's screen. Since I was doing all captures at the same settings that worked for me.

I also didn't bother to completely understand the layout of the file. The code above worked for exports containing only channel 1. In all my files the samples were interleaved in two memory pages: even samples were in one page, odd samples in another. I'm not sure if that's always the case and the code obviously does not attempt to cover any other layout.

Here is a plot of the data I extracted for the trace that is shown on the photograph above:

Plot of the sample data extracted from the WFM file.

I compared the trace data I extracted from the WFM file with the data from the CSV file that is generated by the oscilloscope's own slow CSV export function. The differences between the two are on the order of 10^-15, which is most likely due to floating point precision. For all practical purposes, the values from both exports are identical:
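The comparison itself is a one-liner with Numpy. This is a minimal sketch of the check; loading the scope's CSV is left out, since its exact column layout depends on the export settings:

```python
import numpy as np

def max_abs_difference(x_a, x_b):
    """Largest absolute difference between two equal-length traces."""
    return float(np.max(np.abs(np.asarray(x_a) - np.asarray(x_b))))

# Two traces that differ only by floating point rounding compare at
# around the 1e-15 level, the same order as seen between the exports:
a = np.linspace(-0.4, 0.6, 1000)
b = (a * 0.2 + 0.1) / 0.2 - 0.5   # same values after a float round trip
print(max_abs_difference(a, b) < 1e-12)   # True
```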

Difference between data from the WFM file and the CSV export.

Anyway, I hope that's useful for anyone else who needs to extract data from these scopes. Just please be aware that this is only a minimal viable solution for what I needed to do - the code will need some further hacking before you can apply it to your own files.

Posted by Tomaž | Categories: Code | Comments »

Measuring USB cable resistance

28.06.2019 15:04

I commonly hear questions about finding good USB cables for powering single-board computers like the Raspberry Pi. The general consensus on the web seems to be that you need a good USB charger and a good USB cable. A charger typically has a specification, which gives at least some indication of whether it will be able to deliver the current you require. On the other hand, no USB cable I've seen advertises its series resistance, so how can you distinguish good cables from bad?

Since it bothered me that I didn't have a good answer at hand, I did a bit of digging around the web to see what tools are readily available. Please note however that this isn't really a proper review, since I haven't actually used any of the devices I write about here. It's all just based on the descriptions I found on the web and my previous experience with doing similar measurements.

A bunch of random micro USB cables.

On battery-powered devices, like mobile phones, using a cable with a high resistance is hardly noticeable. It will often only result in slightly longer charge times, since many battery management circuits adjust the charging current according to the source resistance they see. Unless the battery is completely dead, it will bridge any extra current demand from the device, so operation won't be affected. Since most USB cables are used to charge smartphones these days, this tolerance of bad cables seems to have led us to the current state where many cables have unreasonably high resistance.

With a device that doesn't have its own battery the ability of the power supply to deliver a large current is much more important. If the voltage on the end of the cable drops too much, say because of high cable resistance, the device will refuse to boot up or randomly restart under load. Raspberry Pi boards try to address this to some degree with a built-in voltage monitor that detects if the supply voltage has dropped too much. In that case it will attempt to lower the CPU clock, which in turn lowers current consumption and voltage drop at the cost of computing performance. It will also print out an Under-voltage detected warning to the kernel log.

Measuring resistance is theoretically simple: it's just the voltage drop divided by the current. However, in practice it is surprisingly hard to do accurately with tools that are commonly at hand. The troubles mostly boil down to non-destructively hooking anything up to a USB connector without some kind of a break-out board, and the fact that a typical multimeter isn't able to accurately measure resistances in the range of 1 Ω.

"Charging slowly" on the bottom of Android lock screen.

Gašper tells me he has a method where he tests micro USB cables by plugging them into his smartphone and into a 2 A charger. If the phone says that it is fast charging, then the cable is probably good to power a Raspberry Pi. I guess the effectiveness of this method depends on what smartphone you're using and how it reports the charging rate. My phone for example shows either Charging or Charging slowly on the bottom of the lock screen. There are also apparently dedicated apps that show some more information. I'm not sure how much of that is actual measurement and how much is just guesswork based on the phone model and charging mode. Anyway, it's a crude, last resort method if you don't have any other equipment at hand, but it's better than nothing.

Riden UM34 USB multimeter

Image by Banggood

The ubiquitous USB multimeters aren't much use for testing cables. All I've personally seen have a USB A plug on one end and a USB A socket on the other, so you can't connect them at the end of a micro USB cable. The only one I found that has a micro USB connector is the Riden UM34C. Its software also apparently has a mode for measuring cable resistance that conveniently calculates the resistance from voltage and current measurements, as this video demonstrates. However, you also need an adjustable DC load in addition to the multimeter.

I can't say much about the accuracy of this device without testing it in person. I like the fact that you can measure the cable at a high current. Contacts in connectors usually have a slightly non-linear characteristic, so a measurement at a low current can show a higher resistance than in actual use. The device also apparently compensates for the internal resistance of the source. At least that's my guess why it requires two measurements at approximately the same current: one with a cable and one without.

This was the only reasonably priced device I found that was actually in stock and could be ordered in a ready-to-use state.

USB Cable resistance tester from FemtoCow

Image by FemtoCow

I really like the approach the FemtoCow USB cable resistance tester takes. It's a very simple PCB that allows you to use an adjustable lab power supply and a multimeter to measure the resistance. It seems perfect when you already have all the necessary lab equipment, but just need a convenient setup with a reference resistor and a break-out for various USB connectors. I wanted to order this one immediately when I found it, but sadly it seems to be out of stock.

What I like about this method is again the fact that you are measuring the resistance at a high current. The method with the shunt resistor can be very accurate, since it doesn't depend on the accuracy of the multimeter. If the resistor value is accurate (and 1% resistors are widely available today), even multimeter calibration doesn't really matter, as long as the scale is roughly linear. The PCB also looks like it uses proper Kelvin connections, so that the resistance of the traces affects the measurement as little as possible.
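The ratiometric idea can be sketched like this; the component values are made up, since I don't know the actual reference resistor the board uses:

```python
def cable_resistance(r_ref, v_ref, v_cable):
    """The reference resistor and the cable are in series and carry
    the same current, so the ratio of their voltage drops equals the
    ratio of their resistances and the multimeter's absolute
    calibration cancels out."""
    current = v_ref / r_ref
    return v_cable / current        # equals r_ref * v_cable / v_ref

# Example: a 1.00 ohm 1% reference dropping 2.000 V sets a 2 A test
# current; the cable under test then drops 0.500 V.
print(cable_resistance(1.00, 2.000, 0.500))   # 0.25 ohm
```

Only the two voltage readings and the reference value enter the result, which is why the accuracy hinges almost entirely on the 1% resistor.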

USB Cable Cracker from SZDIY.

Finally, Gašper also pointed me to an open hardware project by the Shenzhen hackerspace SZDIY. The USB Cable Cracker is a stand-alone device based around the ATmega32U4. It tests the cable by passing approximately 25 mA through it. It then measures the voltage drop using an amplifier and the ATmega's ADC and calculates the resistance. A switch allows you to measure either the power or the data lines. The measured value is displayed on an LCD. Gerber files for the PCB layout and firmware source are on GitHub, so you can make this device yourself (0603 passives can be a bit of a pain to solder though). I haven't seen it sold anywhere in an assembled state.

The analog design of this device seems sound. The biggest drawback I think is the low current it uses for measurement. The digital part however looks like overkill. The author wanted to also use it as a general development board. That is fine of course, but if you're making this only for testing cables, having an expensive 44-pin microcontroller just for using one ADC pin seems like a huge waste. Same with the large 16x2 LCD that only shows the resistance. Another thing I found missing was a more comprehensive BOM, so if you want to make this yourself, be prepared to spend a bit of time searching for the right parts that fit the PCB layout.
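For a feel of what the measurement chain can resolve, here is a back-of-envelope sketch. The 25 mA test current comes from the project description, but the amplifier gain and ADC reference below are my own assumptions, not values from the actual design:

```python
I_TEST = 0.025     # test current in A, from the project description
GAIN = 50          # assumed amplifier gain
VREF = 2.56        # assumed ATmega ADC reference voltage
ADC_BITS = 10

def resistance_from_adc(counts):
    """Convert a raw ADC reading back to cable resistance."""
    v_adc = counts * VREF / (1 << ADC_BITS)   # voltage at the ADC pin
    v_drop = v_adc / GAIN                     # drop across the cable
    return v_drop / I_TEST

# With these values, one ADC count corresponds to a 2 milliohm step:
print(f"{resistance_from_adc(1) * 1000:.1f} mOhm per LSB")  # 2.0 mOhm per LSB
```

So even at this low current the resolution can be decent; the bigger issue remains that contact resistance at 25 mA may differ from contact resistance at the ampere-level currents a Raspberry Pi draws.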


In conclusion, this problem of measuring USB cabling was a bit of a rabbit hole I fell into. For all practical purposes, probably any of the methods above is sufficiently accurate to identify cables that are grossly out of spec. I guess you could even do it with a 3½ digit multimeter on the 200 Ω range and a cut-off extender cable as a break-out to access the connections. In the end, if all you're interested in is the stability of a Raspberry Pi, just switching cables until you find one with which it runs stably works as well.

On the other hand, there is some beauty in being able to get trustworthy and repeatable measurements. I wasn't happy with any of the tools I found, and considering I had already wasted plenty of time researching this, I decided to make my own. It's heavily inspired by FemtoCow's design and I'll write about it in another post.

Posted by Tomaž | Categories: Analog | Comments »

Double pendulum simulation

16.05.2019 21:05

One evening a few years ago I was sitting behind a desk with a new, fairly powerful computer at my fingertips. The kind of system where you run top and the list of CPUs doesn't fit in the default terminal window. Whatever serious business I was using it for at the time didn't parallelize very well and I felt most of its potential remained unused. I was curious how well the hardware would truly perform if I could throw at it some computing problem better suited for a massively parallel machine.

Somewhere around that time I also stumbled upon a gallery of some nice videos of chaotic pendulums. These were done in Mathematica and simulated a group of double-pendulums with slightly different initial conditions. I really liked the visualization. Each pendulum is given a different color. They first move in sync, but after a while their movements deviate and the line they trace falls apart into a rainbow.

Simulation of 42 double pendulums.

Image by aWakeOfBuzzards

The simulations published by aWakeOfBuzzards included only 42 pendulums. I guess it's a reference to the Hitchhiker's Guide, but I thought, why not do better than that? Would it be possible to eliminate the visual gaps between the traces? Since each pendulum simulation is independent, this looked like exactly the kind of embarrassingly parallel problem I was looking for.

I didn't want to spend a lot of time writing code. This was just another crazy idea and I could only rationalize avoiding more important tasks for so long. Since I couldn't run Mathematica on that machine, I couldn't re-use aWakeOfBuzzards's code, and rewriting it for Numpy seemed non-trivial. Nonetheless, I still managed to shamelessly copy most of the code from various other sources on the web. For a start, I found a usable piece of physics simulation code in a Matplotlib example.

aWakeOfBuzzards's simulations simply draw the pendulum traces opaquely on top of each other. It appears that the code draws the red trace last, since when all the pendulums move closely together, all other traces get covered and the trail appears red. I wanted to do better. I had CPU cycles to spare after all.

Instead of rendering animation frames in standard red-green-blue color planes, I worked with wavelengths of visible light. I assigned each pendulum a specific wavelength and added that emission line to the spectrum of each pixel it occupied. Only when I had a complete spectrum for each pixel did I convert it to an RGB tuple. This meant that when all the pendulums were on top of each other, they would appear white, since white light is the sum of all wavelengths. When they diverged, the white line would naturally break into a rainbow.
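The accumulation scheme can be sketched roughly like this. The wavelength-to-RGB conversion below is a crude triangular approximation I picked for illustration, not necessarily the one used for the actual renders, and the pixel positions are stand-ins for real pendulum geometry:

```python
import numpy as np

N_PEND = 8        # 200,000 in the real run; kept tiny here
H, W = 64, 64     # toy framebuffer size

# One emission wavelength per pendulum, spread over the visible range.
wavelengths = np.linspace(400, 700, N_PEND)

def wavelength_to_rgb(nm):
    """Crude triangular approximation of spectral color; good enough
    to show the idea, not colorimetrically correct."""
    r = np.clip(1 - abs(nm - 650) / 100, 0, 1)
    g = np.clip(1 - abs(nm - 550) / 100, 0, 1)
    b = np.clip(1 - abs(nm - 450) / 100, 0, 1)
    return np.array([r, g, b])

# Accumulate per-pixel spectra: each pendulum adds its emission line
# to the spectrum of every pixel it occupies in the frame.
spectrum = np.zeros((H, W, N_PEND))
for i in range(N_PEND):
    x, y = 10 + i, 32            # stand-in for the pendulum's pixels
    spectrum[y, x, i] += 1.0

# Only at the end is each pixel's spectrum collapsed to RGB. Where
# all emission lines overlap, the components sum towards white.
basis = np.stack([wavelength_to_rgb(nm) for nm in wavelengths])  # (N_PEND, 3)
rgb = np.clip(spectrum @ basis, 0, 1)
```

Keeping a full spectrum per pixel costs a factor of N_PEND in memory over a plain RGB buffer, which is one reason to collapse to RGB as the very last step.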

Frames from an early attempt at the pendulum simulation.

For parallelization, I simply used a process pool from Python's multiprocessing package with N - 1 worker processes, where N was the number of processors in the system. The worker processes performed the Runge-Kutta integration and returned lists of vertex positions. The master process then rendered the pendulums and wavelength data to an RGB framebuffer by abusing ImageDraw.line from the Pillow library. Since drawing traces behind the pendulums meant that animation frames were not independent of each other, I dropped that idea and instead rendered only the pendulums themselves.

For 30 seconds of simulation this resulted in an approximately 10 GB binary .npy file with raw framebuffer data. I then used another, non-parallel step that used Pillow and FFmpeg to compress it to a more reasonably sized MPEG video file.
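The size of that intermediate file is easy to sanity-check. The frame geometry below is assumed, since I no longer remember the exact resolution and frame rate, and the FFmpeg invocation is shown only for illustration:

```python
# Assumed frame geometry; the real runs may have used different values.
W, H, FPS, SECONDS = 1280, 720, 60, 30

frames = FPS * SECONDS
raw_bytes = frames * W * H * 3      # one byte per R, G and B sample

print(raw_bytes / 1e9)              # ~5 GB of raw frames at these settings

# The raw frames can then be piped to FFmpeg along these lines
# (string built for illustration, not executed here):
cmd = (f"ffmpeg -f rawvideo -pix_fmt rgb24 -s {W}x{H} -r {FPS} "
       f"-i - -c:v libx264 -pix_fmt yuv420p out.mp4")
```

A modest bump in resolution or frame count quickly pushes this kind of uncompressed buffer into the 10 GB range, which matches the file size mentioned above.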

Double pendulum Monte Carlo

(Click to watch Double pendulum Monte Carlo video)

Of course, it took several attempts to fine-tune various simulation parameters to get the nice looking result you can find above. This final video is rendered from 200,000 individual pendulum simulations. Initial conditions differed only in the angular velocity of the second pendulum segment, which was chosen from a uniform distribution.

200,000 is not an insanely high number. It manages to blur most of the gaps between the pendulums, but you can still see the cloud occasionally fall apart into individual lines. Unfortunately, it seems I didn't note down at the time exactly what bottleneck kept me from going higher. Looking at the code now, it was most likely the non-parallel rendering of the final frames. I was also beyond the point of diminishing returns, and probably something like interpolation between the individual pendulum solutions would yield better results than just increasing their number.

I was recently reminded of this old hack I did and I thought I might share it. It was a reminder of a different time and a trip down the memories to piece the story back together. The project that funded that machine is long concluded and I now spend evenings behind a different desk. I guess using symmetric multiprocessing was getting out of fashion even back then. I would like to imagine that these days someone else is sitting in that lab and wondering similar things next to a GPU cluster.

Posted by Tomaž | Categories: Life | Comments »

Recapping the Ice Tube Clock

05.05.2019 11:00

I was recently doing some electrical work and had to turn off the power in the apartment. I don't even remember when I last did that - the highest uptime I saw when shutting things down was 617 days. As it tends to happen, not everything woke up when the power came back. Most were simple software problems, common on systems that have not been rebooted for a while. One thing that was not a software bug, however, was Adafruit's Ice Tube Clock, which refused to turn on when I plugged it back in.

I got the clock as a gift way back in 2010 and since then it held a prime position in my living room. I would be sad to see it go. Over the years its firmware also accumulated a few minor hacks: the clock's oscillator is quite inaccurate so I added a simple software drift correction. I was also slightly bothered by the way nines were displayed on the 7-segment display. You can find my modifications on GitHub.

Adafruit's Ice Tube Clock.

Thankfully, the clock ran fine when I powered it from a lab power supply. The issue was obviously in the little 9 V, 660 mA switching power supply that came with it.

Checking the output of the power supply with an oscilloscope showed that it had excessive ripple and bad voltage regulation. When idle, it had 5 V of ripple on top of the 9 V DC voltage. When loaded with 500 mA, the output DC level fell by almost 3 V.

Power supply output voltage before repair.

I don't know what the output looked like when it was new, but these measurements looked suspicious. High ripple is also a typical symptom of a bad output capacitor in a switching power supply. Opening up the power supply was easy. It's always nice to see a plastic enclosure with screws and tabs instead of glue. Inside I found C9 - a 470 μF, 16 V electrolytic capacitor on the secondary side of the converter:

Ice Tube Clock power supply with the problematic C9 marked.

The original capacitor was a purple Nicon KME series rated for 105°C (the photograph above shows it already replaced). Visually it looked just fine. On a component tester it measured 406 μF with an ESR of 3.4 Ω. While the capacitance looked acceptable, the series resistance was around ten times higher than is typical for a capacitor this size. I replaced it with a capacitor of an equivalent rating. Before being soldered into the circuit, the replacement measured 426 μF with an ESR of 170 mΩ.

After repair, the output of the power supply looked much cleaner on the oscilloscope:

Power supply output voltage after repair.

The clock now runs fine on its own power supply and it's again happily ticking away back in its place.

I guess 9 years is a reasonable lifetime for an aluminum capacitor. I found this datasheet that seems to match the original part. It says the lifetime is 2000-3000 h at 105°C. Of course, in this power supply it doesn't get nearly that hot. I would say it's not much more than 50°C most of the time. 9 years is around 80000 h, so the capacitor lasted around 30 times longer than its rated lifetime at 105°C. This figure seems to be in the right ballpark (see for example this discussion of the expected lifetime of electrolytic capacitors).
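The ballpark check uses the usual rule of thumb that electrolytic capacitor lifetime roughly doubles for every 10°C below the rated temperature; the 50°C operating temperature below is my estimate, not a measurement:

```python
# Rule of thumb: aluminum electrolytic lifetime roughly doubles for
# every 10 degrees C below the rated temperature.
RATED_LIFE_H = 2500    # middle of the 2000-3000 h datasheet range
RATED_TEMP_C = 105
ACTUAL_TEMP_C = 50     # estimated operating temperature

expected_h = RATED_LIFE_H * 2 ** ((RATED_TEMP_C - ACTUAL_TEMP_C) / 10)
print(round(expected_h))   # 113137 h, same ballpark as the observed ~80000 h
```

Given how rough both the temperature estimate and the rule of thumb are, a predicted ~113,000 h against an observed ~80,000 h is about as good an agreement as one can hope for.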

Posted by Tomaž | Categories: Analog | Comments »

Google is eating our mail

25.04.2019 20:06

I've been running a small SMTP and IMAP mail server for many years, hosting a handful of individual mailboxes. It's hard to say when exactly I started. whois says I registered the tablix.org domain in 2005 and I remember hosting a mailing list for my colleagues at the university a bit before that, so I think it's safe to say it's been around 15 years.

Although I don't jump right away on every email-related novelty, I've tried to keep the server up-to-date with well accepted standards over the years. Some of these came for free with Debian updates. Others needed some manual work. For example, I have SPF records and DKIM message signing set up on the domains I use. The server is hosted on commercial static IP space (with the very same IP it first went on-line with) and I've made sure with the ISP that correct reverse DNS records are in place.

Homing pigeon

Image by Andreas Trepte CC BY-SA 2.5

From the beginning I've been worrying that my server would be used for sending spam. So I always made sure I did not have an open relay, and I put in place throughput restrictions and monitoring that would alert me about unusual traffic. In any case, the amount of outgoing mail has stayed pretty minimal over the years. Since I'm hosting just a few personal accounts these days, there have been less than 1000 messages sent to remote servers over SMTP in the last 12 months. I gave up on hosting mailing lists many years ago.

All of this effort paid off and, as far as I'm aware, my server was never listed on any of the public spam blacklists.

So why am I writing all of this? Unfortunately, email is starting to become synonymous with Google's mail, and Google's machines have decided that mail from my server is simply not worth receiving. Being a good administrator and a well-behaved player on the network is no longer enough:

550-5.7.1 [...] Our system has detected that this
550-5.7.1 message is likely unsolicited mail. To reduce the amount of spam sent
550-5.7.1 to Gmail, this message has been blocked. Please visit
550-5.7.1  https://support.google.com/mail/?p=UnsolicitedMessageError
550 5.7.1  for more information. ... - gsmtp

Since mid-December last year, I'm regularly seeing SMTP errors like these. Sometimes the same message re-sent right away will not bounce again. Sometimes rephrasing the subject will fix it. Sometimes all mail from all accounts gets blocked for weeks on end, until some lucky bit flips somewhere and mail mysteriously gets through again. Since many organizations use Gmail for mail hosting, this doesn't happen just for ...@gmail.com addresses. Now every time I write a mail I wonder whether Google's AI will let it through or not. Only when something like this happens do you realize just how impossible it is to talk to someone on the modern internet without having Google somewhere in the middle.

Of course, the 550 SMTP error helpfully links to a wholly unhelpful troubleshooting page. It vaguely refers to suspicious-looking text and IP history. It points to the Bulk Sender Guidelines, but I have trouble seeing myself as a bulk sender with 10 messages sent last week in total. It points to the Postmaster Tools which, after letting me jump through some hoops to authenticate, tells me I'm too small a fish and has no actual data to show.

Screenshot of Google Postmaster Tools.

So far Google has blocked personal messages to friends and family in multiple languages, as well as business mail. I've stopped guessing what text their algorithms deem suspicious. What kind of intelligence sees a reply, with the original message referenced in the In-Reply-To header and partly quoted, and considers it unsolicited? I don't discount the possibility that there is something misconfigured at my end, but since Google gives no hint and various third-party tools I've tried don't report anything suspicious, I've run out of ideas about where else to look.

My server isn't alone with this problem. At work we use Google's mail hosting and I've seen this trigger-happy filter from the other end. Just recently I overlooked an important mail because it ended up in the spam folder. I guess it was pure luck it didn't get rejected at the SMTP layer. With my work email address I'm subscribed to several mailing lists of open source software projects, and Google will regularly decide to block this traffic. I know because Mailman sends me a notification that my address caused excessive bounces. What kind of system decides, after months of watching me read these messages and not once seeing me mark one as spam, that I suddenly don't want to receive them ever again?

Screenshot of the mailing list probe message.

I wonder. Google as a company is famously focused on machine learning through automated analytics and a bare minimum of human contact. What kind of a signal can they possibly use to train these SMTP rejects? Mail gets rejected at the SMTP level without the user's knowledge. There is no way for a recipient to mark it as not-spam, since they don't know the message ever existed. In contrast to merely classifying mail into spam/non-spam folders, it's impossible for an unprivileged human to tell the machine it has made a mistake. Only the sender knows the mail got rejected, and they don't have any way to report it either. One half of the feedback loop appears to be missing.

I'm sure there is no malicious intent behind this and that there are some very smart people working on spam prevention at Google. However, for a metric-driven company where the majority of messages only pass within the walled garden, I can see how there's little motivation to work well with mail coming from outside. If all the training data is people marking external mail as spam, and there's much less data about false positives, I guess it's easy to arrive at a prior that all external mail is spam, even with the best intentions.

This is my second rant about Google in a short while. I'm mostly indifferent to their search index policies; this mail problem, however, is much more frustrating. I can switch search engines, but I can't tell other people to move off Gmail. Email used to work, from its 7-bit days onward. It was the one standard thing you could rely on in the ever-changing mess of messaging web apps and proprietary lock-ins. And now it's increasingly broken. I hope people realize that if they don't get a reply, perhaps it's because some machine somewhere decided for them that they don't need to know about it.

Posted by Tomaž | Categories: Life | Comments »