Some more notes on Denon RCD-M41DAB

19.10.2019 16:13

Back in January I bought a new micro Hi-Fi after the transformer in my old one failed and I couldn't repair it. Since then I've been pretty happy using the Denon RCD-M41DAB for listening to music and radio. I previously did some measurements on its output amplifier and found that it meets its ratings with regard to power and distortion. In any case, that was more to satisfy my electrical engineering curiosity and I'm not particularly sensitive to reproduction quality. However, after extended use two problems did pop up. Since I see this model is still being sold, I thought I might do a short follow-up.

Denon RCD-M41DAB

The first thing is that when streaming audio to the Hi-Fi over Bluetooth there is about 2 seconds of lag. It looks as if there's some kind of an audio processing buffer inside the RCD-M41DAB, or it might simply be a consequence of the codec used for the wireless transfer. When listening to music the delay isn't that annoying. For example, after clicking "next track" or changing the volume in the music player on the phone it takes about 2 seconds before you can actually hear the change. However, this lag does make a Bluetooth-connected RCD-M41DAB completely useless as an audio output when playing videos or anything interactive. The lag happens on both iOS and Android, and I'm pretty sure it's not related to my smartphone, since there is no discernible lag when I'm using Bluetooth-connected headphones.

The other annoyance is that the CD player has problems reading some CDs in my collection. Apparently it reads the table of contents fine, because the number of tracks is displayed correctly. However, it has problems seeking to tracks. For example, one disc won't play any tracks at all and I can just hear the head continuously seeking. Another disc won't start playing the first track, but if I skip it manually, the player will find and play the second one just fine. All such discs play normally in the CD drive in my desktop computer and in other CD players I've tried. It still might be that the discs have some kind of an error in them or bad reflectivity (all of the problematic ones are niche releases I bought from Bandcamp), but other players are apparently able to work around it.

Posted by Tomaž | Categories: Life | Comments »

Specializing C++ templates on AVR pins

09.10.2019 19:56

AVR is a popular processor core for microcontrollers, probably best known from the ATmega chips on the classic Arduino boards. There's an excellent, free software C and C++ compiler toolchain available for them, which also usually sits behind the well-known Arduino IDE. AVR is a Harvard architecture, which means that code and data memory (i.e. flash and RAM) have separate address spaces. On these small, 8-bit chips there is very little RAM (as little as 128 bytes on some ATtiny devices). On the other hand, the amount of flash is usually much less restrictive.

Sometimes you want the microcontroller to drive multiple instances of the same external peripheral. Let's say for example this peripheral uses one GPIO pin and two instances differ only in the pin they are connected to. This means that the code in the microcontroller for peripheral 1 differs from code for peripheral 2 only by the address of the special function register (SFR) it uses and a bit mask. A naive way of implementing this in clean C++ would be something like the following:

#include <avr/io.h>

class DIO {
	public:
		PORT_t * const port;
		const int pin_bm;

		DIO(PORT_t* port_, int pin_bm_) 
			: port(port_), pin_bm(pin_bm_) {}

		void setup(void) const
		{
			port->DIR |= pin_bm;
		}

		// other methods would go here ...
};

DIO dio1(&PORTB, PIN4_bm);
DIO dio2(&PORTB, PIN5_bm);

int main(void)
{
	dio1.setup();
	dio2.setup();

	// other code would go here ...

	while (1);
}

Note that I'm using the nice PORT_t struct and the bit mask constants provided by avr-gcc headers instead of messing around with register addresses and such.

Unfortunately, this isn't optimal. Even though the class members are declared as const, the compiler stores them in RAM. This means that each instance of the DIO class costs 4 precious bytes of RAM that, barring a stray cosmic particle, will never ever change during the course of the program.

Compiling the code above for the ATtiny416 with avr-gcc 5.4.0 confirms this:

Program Memory Usage  : 204 bytes   5,0 % Full
Data Memory Usage     : 8 bytes   3,1 % Full

We could store member variables as data in program space. This is possible by using some special instructions and variable attributes (PROGMEM, pgm_read_byte() etc.), but it's kind of ugly and doesn't play nicely with the C language. Another way is to actually have copies of the binary code for each peripheral, but only change the constants in each copy.

Obviously we don't want to have multiple copies in the source as well. In C the usual approach is to do something rude with the C preprocessor and macros. Luckily, C++ offers a nicer way via templates. We can make a template class that takes non-type parameters and specialize that class on the register and bit mask it should use. Ideally, we would use something simple like this:

template <PORT_t* port, int pin_bm> class DIO {
	public:
		void setup(void) {
			port->DIR |= pin_bm;
		}
};

DIO<(&PORTB), PIN4_bm> dio1;
DIO<(&PORTB), PIN5_bm> dio2;

Sadly, this does not work. Behind the scenes, PORTB is a macro that casts a constant integer to the PORT_t struct via a pointer:

#define PORTB    (*(PORT_t *) 0x0620)

&PORTB evaluates back to an integral constant, however the compiler isn't smart enough to realize that and balks at seeing a structure and a pointer dereference:

main.cpp(8,7): error: a cast to a type other than an integral or enumeration type cannot appear in a constant-expression
DIO<(&PORTB), PIN4_bm> dio1;
main.cpp(8,7): error: '*' cannot appear in a constant-expression
main.cpp(8,7): error: '&' cannot appear in a constant-expression

I found a thread on the AVR freaks forum which talks about this exact problem. The discussion meanders around the usefulness of software abstractions and the ability of hardware engineers to follow orders, but fails to find a solution that does not involve a soup of C preprocessor macros.

Fortunately, there does seem to exist an elegant C++ solution. The compiler's restriction can be worked around without resorting to preprocessor magic and while still using the convenient constants defined in the avr-gcc headers. We only need to evaluate the macro in another expression and do the cast to PORT_t inside the template:

template <int port_addr, int pin_bm> class DIO {
	public:
		void setup(void) {
			port()->DIR |= pin_bm;
		}

		static PORT_t* port()
		{
			return (PORT_t*) port_addr;
		}
};

static const int PORTB_ADDR = (int) &PORTB; 
DIO<PORTB_ADDR, PIN4_bm> dio1;
DIO<PORTB_ADDR, PIN5_bm> dio2;

This compiles and works fine with avr-gcc. As intended, it uses zero bytes of RAM. Interestingly enough, it also significantly reduces the code size, something I did not expect:

Program Memory Usage  : 108 bytes   2,6 % Full
Data Memory Usage     : 0 bytes   0,0 % Full

I'm not sure why the first version using member variables produces so much more code. A brief glance at the disassembly suggests that the extra code comes from the constructor. I would guess that a code-free constructor with initializer lists should generate minimal extra instructions, but apparently that is not so here. In any case, I'm not complaining. I'm also sure that in a non-toy example the template version will use more flash space, since it has to duplicate the code of all methods for each specialization.

You might also wonder how Arduino does this with the digitalWrite() function and friends. They store a number of data tables in program memory and then have preprocessor macros to read them out using pgm_read_byte(). As I mentioned above, this works, but isn't nicely structured and it's pretty far from clean, object-oriented C++ programming.

Posted by Tomaž | Categories: Code | Comments »

Voltage divider cheat sheet

16.09.2019 19:46

The two most common electrical measurements I do these days are measuring the output impedance of a source and the input impedance of a load. I don't have any special equipment for that. I'm not doing it often enough to warrant buying or making a specialized tool. Since it's all either DC or around audio frequencies, and I'm not interested in very precise values, it's a really simple measurement. I usually just use an oscilloscope and whatever else is available around the shop as reference loads and sources.

Deriving unknown impedances or resistances from the voltage measurements and reference values is straightforward. It only takes a minute of inverting the voltage divider formula and I must have done it a hundred times at this point. I still don't know the resulting equations by heart though. So to avoid doing it for the hundred and first time, and to avoid the odd mistake, I've made myself a nice cheat sheet to stick above my desk. It contains the formulas for the three most common measurements I make.
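The inversions themselves are one-liners. Here is a minimal sketch of two of them in Python; the function and variable names are mine and don't necessarily match the notation on the cheat sheet:

```python
# Inverting the voltage divider formula. Names are illustrative and
# not taken from the cheat sheet itself.

def source_impedance(u_open, u_loaded, r_load):
    """Output impedance of a source: measure the open-circuit voltage
    u_open, then the voltage u_loaded across a known reference load
    r_load. Follows from u_loaded = u_open * r_load / (r_load + z_out)."""
    return r_load * (u_open - u_loaded) / u_loaded

def load_impedance(u_src, u_load, r_ref):
    """Input impedance of a load: drive it through a known series
    resistor r_ref from a source voltage u_src and measure u_load
    across the load."""
    return r_ref * u_load / (u_src - u_load)
```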

Voltage divider measurements cheat sheet

PDF version

The notes about the best accuracy refer to the selection of the reference impedance such that the result is least affected by errors in voltage measurements (which, when measuring amplitudes with an oscilloscope, is the largest source of error in my case). The selection is quite obvious, but I guess I added it there for the sake of rigor.

The cheat sheet is made in LaTeX using the circuitikz package for schematics. I'm mostly putting it here for my own future reference, but maybe someone else will find it useful.

Posted by Tomaž | Categories: Analog | Comments »

z80dasm 1.1.6

09.09.2019 20:32

z80dasm is a command-line disassembler for the Z80 CPU. I initially released it way back in 2007 when I was first exploring Galaksija's ROM and other disassemblers I could find didn't do a good enough job for me. Since then it accumulated a few more improvements and bug fixes as I received feedback and patches from various contributors.

Version 1.1.6 is a release that is way overdue. Most importantly, it fixes a segmentation fault bug that several people have reported. The patch for the bug had actually been committed to the git repository since June last year, but somehow I forgot to bump the version and roll up a new release.

The problem appeared when feeding a symbol file that was generated by z80dasm back as input to z80dasm (possibly after editing some symbol names). This is something that the man page explicitly mentions is supported. However, when this was done together with block definitions, it caused z80dasm to segfault with a NULL pointer dereference. Some code didn't expect that the symbol automatically generated to mark a block start could already be defined via the input symbol file. Thanks to J. B. Langston for first sending me the report and analysis of the crash.

I took this opportunity to review the rest of the symbol handling code and do some further clean ups. It has also led me to implement a feature that I have been asked for in the past. z80dasm now has the ability to sort the symbol table before writing it out to the symbol file.

More specifically, there is now a --sym-order command-line option that takes either a default or a frequency argument. default leaves the ordering as it was in previous versions: ordered by symbol value. frequency sorts the symbol table by how frequently a symbol is used in the disassembly, with the most commonly used symbols written at the start of the symbol file. When first approaching an unknown binary, this might help you identify the most commonly used subroutines.

Anyway, the new release is available from the usual place. See the included README file for build and installation instructions. z80dasm is also included in Debian, however the new release is not in yet (if you're a Debian developer and would like to sponsor an upload, please get in touch).

Posted by Tomaž | Categories: Code | Comments »

What is a good USB cable?

31.08.2019 20:36

To conclude my recent series on the simple matter of USB cable resistance, I would like to attempt to answer the question of what makes a good USB cable. Having a reliable measurement of a cable's resistance, the next obvious question is of course whether that resistance complies with the USB standard and whether such a cable is suitable for powering single-board computers like the Raspberry Pi. Claims that most cables don't comply with the standard are quite common whenever this topic is discussed. I'm by no means an expert on USB, but luckily the USB Implementers Forum publishes the standards documents in their on-line library. I went and studied some of the documents on the Cable and Connector Specification topic which, among other things, specify cable resistance.

I started my reading with USB 2.0, because the micro- and mini-USB cables I tested in my previous post are unlikely to be older than USB 2.0. The standard is now nearly 20 years old and over the years it has received many revisions and updates. Hence it's hard to pick a random cable from the pile and say with certainty which document it should comply with. In addition, I find that the text of the standard itself often isn't particularly clear. For example, the following text implicitly defines the maximum allowable cable resistance in the Universal Serial Bus Cables and Connectors Class Document, revision 2.0 from August 2007:

Cable Assembly Voltage Drop requirement for USB 2.0

Image by USB Implementers Forum

Initially I thought this means the voltage drop over the pair of wires. As in, the total voltage drop over the VBUS and GND wires together should be less than 125 mV at 500 mA (effectively 250 mΩ round-trip resistance). However, the fact that most cables seem to measure around 500 mΩ suggests that manufacturers read this as 250 mΩ per wire (500 mΩ round-trip).

A later document amends this definition somewhat and makes it clearer that the voltage drops are for each wire separately and that this voltage drop includes contact resistance. The following is from Universal Serial Bus 3.0 Connectors and Cable Assemblies Compliance Document, revision 1.0 draft from October 2010. Also note that both the measurement current and the allowable voltage drop were increased. The measurement must now be done at 900 mA, however maximum effective single-wire resistance is still 250 mΩ, same as in USB 2.0:

Cable Assembly Voltage Drop requirement for USB 3.0

Image by USB Implementers Forum

An even later document addresses cable compliance with older revisions of the standard. USB 3.1 Legacy Cable and Connector Revision 1.0 from 2017 contains this calculation:

IR drop at device calculation from a USB 3.1 document.

Image by USB 3.0 Promoter Group

This equation clearly shows that the 250 mΩ figure from the other documents is supposed to be combined from two 30 mΩ contact resistances and a 190 mΩ wire resistance. It also multiplies the voltage drop by two due to the round trip through both VBUS and GND wires.

The USB type C specification tries to make this even clearer and comes with schematics that explicitly show where the voltage drop must be measured. Since with type C you can have different types of cables rated for different currents, that standard only specifies the maximum voltage drop. Note also that in type C the requirements for the VBUS line were relaxed compared to previous standards. Previously, a cable delivering 1 A of current had to have a maximum VBUS resistance of 250 mΩ, while in type C up to 500 mΩ is allowed.

Figure showing cable IR drop from USB Type-C specification.

Image by USB 3.0 Promoter Group

7 out of 12 micro USB and 5 out of 6 mini USB cables I found at my home have less than 500 mΩ round-trip resistance. So according to my understanding of the standard for pre-type C cables, roughly 70% of my cables comply with it. Here are my resistance measurements plotted versus cable length. I've also included measurements published by Balaur on EEVblog and martinm on their blog. Points in the shaded area represent cables that comply with the standard.

Plot of cable resistance measurements versus length.

So strictly according to the USB standards, the situation out there isn't perfect, but it doesn't look like the majority of cables are completely out of spec either. This seems a bit at odds with the general opinion that finding a good cable for running a Raspberry Pi is hard. However, things start getting a bit clearer when you look at what exactly Raspberry Pi boards demand from these cables.

In the following table I've extracted the maximum required power for all Raspberry Pi model Bs from the Wikipedia article. These boards will display the infamous under-voltage warning when their power supply voltage falls under approximately 4.63 V. Assuming a perfect 5 V power supply, this is enough data to calculate the maximum allowable cable resistance for powering these boards:

R_{max} = \frac{U_{supply} - U_{min}}{I_{max}} = \frac{5.00\mathrm{V} - 4.63\mathrm{V}}{I_{max}}
Model             Max. supply     Max. cable
                  current [mA]    resistance [mΩ]
RPi 1 Model B          700              529
RPi 1 Model B+         350             1057
RPi 2 Model B          820              451
RPi 3 Model B         1340              276
RPi 3 Model B+        1130              327
RPi 4 Model B         1250              296
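The table values are easy to reproduce from the formula above. A quick sketch, with the supply current figures taken from the table as test inputs:

```python
# Maximum allowable cable resistance in milliohms, assuming a perfect
# 5.00 V supply and the approximately 4.63 V under-voltage threshold.
def max_cable_resistance_mohm(i_max_ma, u_supply=5.00, u_min=4.63):
    # V / mA gives kOhm; multiply by 1e6 to express the result in mOhm
    return (u_supply - u_min) / i_max_ma * 1e6

for model, i_max in [("RPi 1 Model B", 700), ("RPi 3 Model B", 1340)]:
    print(f"{model}: {max_cable_resistance_mohm(i_max):.0f} mOhm")
# → RPi 1 Model B: 529 mOhm
#   RPi 3 Model B: 276 mOhm
```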

Raspberry Pi model Bs after version 2 appear to require cables with resistance well below the 500 mΩ that the standard allows for micro USB cables. Only 3 cables from my collection would be able to power a Raspberry Pi 3 model B. The Raspberry Pi 4 gets a pass because the type C standard is flexible enough and doesn't directly specify cable resistance (although its type C implementation has other power-related issues). Yet, since type C cables have a 750 mV maximum voltage drop at rated current, according to this estimate it requires a cable rated for 3 A or more (I'm not sure if the Raspberry Pi 4 uses the same APX803 voltage monitor as earlier versions).

Also note that this calculation is for a perfect 5 V power supply, which is optimistic. Power supplies don't have perfect regulation and the calculations in the USB standard assume a worst case of 4.75 V at the source. Such a worst case power supply, even if it provides sufficient current, would require practically zero-ohm cables to power a Raspberry Pi without under-voltage warnings and the associated CPU throttling.

To sum up, yes, there are USB cables out there that are out of spec. However, based on this limited sample, most micro and mini USB cables do seem to actually comply with the standard. Also worth noting is that shorter ones tend to have a better chance of being compliant. On the other hand, at least part of the blame for the grief surrounding USB cables appears to fall onto the Raspberry Pi itself, since they designed their boards with a requirement for better-than-the-standard cables and power supplies.

Posted by Tomaž | Categories: Analog | Comments »

Resistance measurements of various USB cables

23.08.2019 10:23

After I made my USB cable resistance tester I naturally wanted to measure some cables. I searched my apartment and ended up with a big jumble of 18 micro and mini USB cables of various lengths and origins. I didn't imagine I would find that many, but I guess today just about everything comes with one and I apparently never throw anything away. In fact, some cables were in very bad shape and already had insulation flaking off from old age.

USB kabelsalat.

I measured the resistance of each cable at 1 A using the voltage ratio method I described in my previous post. The following table lists the results. For a lot of cables I don't know their origin; they must have come bundled with various devices. I've listed the brand if it was printed on the cable or if I knew for certain which device the cable came with. I'm afraid this comparison isn't very useful as a guide to which cable brand to buy, but it does give an interesting statistic of what kind of cables can be found out there in the wild.

 N  Brand         Color         Type         Length [cm]  R [mΩ]
 1  Wacom         Black         A / micro B       28        199
 2  CellularLine  Gray          A / micro B      207        212
 3                White         A / micro B      105        224
 4                White         A / micro B       51        294
 5  Wacom         Black         A / micro B       98        334
 6  Samsung       Black         A / micro B       82        408
 7  Nokia         Black / gray  A / micro B      115        490
 8  CubeSensors   White         A / micro B      103        522
 9                Black         A / micro B      103        569
10  HTC           Black         A / micro B      128        597
11  Google        Black         A / micro B      153        613
12  Amazon        White         A / micro B      182        739
13                Silver        A / mini B        30        177
14                Black         A / mini B       146        323
15                Black         A / mini B       125        396
16                Silver        A / mini B        56        412
17  Canon         White         A / mini B       125        435
18                Silver        A / mini B       180        804

Unsurprisingly, the two short 30 cm cables came out best in terms of resistance, measuring below 200 mΩ. A bit more unexpected was finding out that the 2 m CellularLine isn't far behind. This is a fancy and laughably overpriced cable I bought in a physical store not so long ago, the only one on this list that I'm sure didn't come bundled with any device. It appears that in this case the price was at least somewhat justified.

I was also a bit surprised that some cables that came bundled with devices measured pretty high. The white Amazon was for charging a Kindle 3 and it had the highest resistance among the micro B cables I tested. On the other hand, it was also in pretty bad shape, so it might be that it was damaged somehow. Cables bundled with an HTC phone and Google Chromecast also measured more than 500 mΩ.

Other measurements I could find on the web seem to roughly agree with mine. martinm lists measured values between 289 and 1429 mΩ. Balaur on the EEVblog forum measured between 276 and 947 mΩ on his cables. The only report that was really off was this forum post by d_t_a, where most of the cables listed measure lower than 200 mΩ.

Another thing I was interested in was how repeatable these measurements are. I mentioned several times in my previous posts that contact resistance can play a big role. Since each time you plug in a cable the contacts sit differently and have a slightly different resistance, contact resistance behaves like a random variable in the measurement results. When I was doing the measurements above this was quite obvious: minimal movements of the cable caused the voltage displayed on the voltmeter to dance around.

Histogram of 10 measurements of cable 16.

I repeated the measurement of cable 16 from the table above 10 times. Before each repetition I unplugged and re-plugged both ends of the cable. Above you can see the histogram of those measurements. The results only vary by approximately ±1%, which is much less than I thought they would. This is about the same as the expected error of the measurement itself due to the accuracy of the reference resistor. Of course, this was all done over a short period of time. I'm guessing the resistance would change more over longer periods and more plug-unplug cycles as contacts deteriorate or gather dirt.

I also checked how the measurement is affected if I plug something between the tester and the cable. Gino mentioned in a comment they used an adapter and an extension cable in their measurement. So I repeated the measurement of cable 1 from the table with a short type A-to-type A extension in series. Just for fun, I also tested how much resistance a cheap USB multimeter adds:

Assembly                        R [mΩ]
Cable 1                           202
Cable 1 + 45 cm extension         522
Cable 1 + Muker V21 multimeter    442

As you can see from the results above, both of these added quite a lot. With the excellent 200 mΩ cable, both more than doubled the total resistance. Even with an average 500 mΩ cable, this multimeter would add around 240 mΩ, or approximately 50%, on top. Battery-powered devices like smartphones adjust their charging current according to the voltage drop they see on their end. Hence they might charge significantly slower with the multimeter in series compared to using the cable alone. This puts some doubt on the usability of these USB multimeters for evaluating USB cables and power supplies.

Posted by Tomaž | Categories: Analog | Comments »

USB cable resistance tester

18.08.2019 17:04

Back in June I did a short survey of tools for measuring the resistance of power supply lines in USB cables. I was motivated by the common complaint about bad cables, often in the context of powering single-board computers like the Raspberry Pi. I wasn't particularly happy with what I found: the tool I wanted to buy was out of stock, and I've seen various issues with the others. Having already dug too deep into this topic, I then set out to make my own tool for this purpose.

So roughly two months later I have a working prototype in my hands. It works as designed and I spent a few hours measuring all the USB cables I could get my hands on. I'm reasonably happy with it and can share the design if anyone else wants to make it.

USB cable resistance tester.

As I mentioned in my previous post, I really liked the approach of FemtoCow's USB cable resistance tester and I basically copied their idea. Since USB type C is gaining in popularity, I've added connectors so that A-to-C and C-to-C cables can be tested in addition to A-to-mini B and A-to-micro B. I've taken care that even with the added connectors, the voltmeter still has Kelvin connections in all combinations. I've also added proper 4 mm test sockets for all connections.

Simplified schematic of the USB cable tester.

The principle of operation is very simple. Electrically, the resistance tester consists of two parts. On one end of the cable there is a reasonably accurate 1 Ω reference resistor in series with the cable's VBUS and GND lines. The other end simply shorts the VBUS and GND lines together. The power supply is used to set a current through the cable. The measured resistance of the cable, which consists of the sum of the four contact resistances and the resistances of the two copper cores, can then be calculated as:

R_{measured} = R_{VBUS} + R_{GND} = R_{ref}\frac{U_{measure}}{U_{calibrate}}

Or, if set current is 1 A, the voltmeter reading in volts directly corresponds to the measured resistance in ohms:

R_{measured} [\Omega] = U_{measure} [\mathrm{V}]

The nice thing about this approach is that the cable can be tested at an arbitrary current. If the first equation is used, the accuracy of the method does not depend on the accuracy of the current setting. It doesn't even depend much on the calibration accuracy of the voltmeter: since a ratio of similar voltages is used, any linear error cancels out. The only requirement is that the voltmeter is reasonably linear over the 0.1 V to 1 V range. Since Kelvin connections are used, the resistance of the PCB traces has a negligible effect on the measurements as well.
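In code form the ratio method boils down to a one-liner; a trivial sketch with my own names:

```python
# Ratio method: at the same current, the reference resistor drops
# u_calibrate and the cable drops u_measure, so their resistances
# are in the same ratio as the two voltages.
def cable_resistance(u_measure, u_calibrate, r_ref=1.0):
    return r_ref * u_measure / u_calibrate
```

With the 1 Ω reference, reading 0.50 V across the cable and 1.02 V across the resistor gives roughly 490 mΩ, regardless of the exact current the supply was set to.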

The only critical component is the reference resistor. 1% resistors are widely available, so getting to that kind of measurement accuracy should be trivial. With some more effort and a bit higher price, 0.1% current sense resistors aren't that exotic either. For my tool I went with a cheap 1% thick-film resistor since I considered that good enough.

USB cable tester connected to a multimeter.

After measuring a pile of cables, some shortcomings did become apparent that I didn't think of earlier: I really should have added a switch for the voltmeter instead of having four test sockets. Constantly re-plugging the test leads gets tiring really fast. It also affects the measurement accuracy somewhat, since it's hard to re-plug the cables without moving the tool slightly. Since moving the connectors slightly changes their contact resistances, it's hard to measure both voltages in exactly the same setup. The errors because of that seem minimal though.

Another thing I noticed is that with my analog power supply, setting the current to exactly 1 A wasn't possible. Since I have only one knob that goes from 0 to 25 V in one rotation, setting low voltages requires very small movements of the knob and isn't very accurate. Hence I mostly used the ratio equation for my measurements. My power supply also tended to drift a bit which was a bit annoying. The power supply at work with a digital interface worked much better in that respect.

Finally, I'm not sure how harmful this kind of test is for type C cables that contain active parts, like the electronically marked power delivery cables. I didn't test any so far. All schematics I could find show that the power delivery ID chip is powered from the VCONN line, which is left unconnected in this tool, so that should be fine. On the other hand, the active cables that do signal conditioning do seem to be powered from VBUS. It's possible, although I think unlikely, those could respond weirdly or even be damaged by the low voltage applied during this test.

If you want to make a tool like this, you can find all required Gerber files and the bill of materials in the GitHub repository. While it might be possible to etch and drill the board yourself, I highly recommend using one of the cheap PCB prototyping services instead. The USB C connectors require very small holes and SMD pads that I think would be pretty challenging to get right in a home workshop. There are some more notes in the README file regarding that. On the other hand, the Würth connectors listed in the BOM are solderable with only a soldering iron, so manual assembly is reasonably straightforward with no hot air station required. However again the type C ones can be pretty tricky due to the fine pitch.

Posted by Tomaž | Categories: Analog | Comments »

Quick and ugly WFM data export for Rigol DS2072A

15.08.2019 14:48

At work I use a Rigol DS2072A oscilloscope. It's quite a featureful little two-channel digital scope that mostly does the job that I need it for. It can be buggy at times though and with experience I learned to avoid some of its features. Like for example the screenshot tool that sometimes, but not always, captures a perfectly plausible PNG that actually contains something different than what was displayed on the physical screen at the time. I'm not joking - I think there's some kind of a double-buffering issue there.

Recently I was using it to capture some waveforms that I wanted to further process on my computer. On most modern digital scopes that's a simple matter of exporting a trace to a CSV file on a USB stick. DS2072A indeed has this feature, however I soon found out that it is unbearably slow. Exporting 1.4 Msamples took nearly 6 minutes. I'm guessing exporting a full 14 Msample capture would take an hour - I've never had the patience to actually wait for one to finish and the progress indicator indeed remained pegged at 0% until I reset the scope in frustration. I planned to do many captures, so that approach was clearly unusable.

Rigol DS2072A oscilloscope.

Luckily, there's also an option for a binary export that creates WFM files. Exporting to those is much faster than to the text-based CSV format, but on the other hand it creates binary blobs that apparently only the scope itself can read. I found the open source pyRigolWFM tool for reading WFM files, but unfortunately it only seems to support the DS1000 series and doesn't work with files produced by DS2072A. There's also Rigol's WFM converter, but again it only works with DS4000 and DS6000 series, so I had no luck with that either.

I noticed that the sizes of the WFM files in bytes were similar to the number of samples they were supposed to contain, so I guessed extracting raw data from them wouldn't be that complicated: they can't be compressed and there are only so many ways you can shuffle bytes around. The only weird thing was that files containing the same number of samples were all of slightly different sizes. A comment on the pyRigolWFM issue tracker mentioned that WFM files are more or less a dump of the scope's memory, which gave me hope that their format isn't very complicated.

After some messing around in a Jupyter Notebook I came up with the following code that extracted the data I needed from WFM files into a Numpy array:

import numpy as np
import struct

def load_wfm(path):
    # The first 0x200 bytes are a header with metadata and pointers
    # into the sample buffer.
    with open(path, 'rb') as f:
        header = f.read(0x200)

    magic = header[0x000:0x002]
    assert magic == b'\xa5\xa5'

    # Little-endian 32-bit integers at fixed offsets in the header.
    offset_1 = struct.unpack('<i', header[0x044:0x048])[0]
    offset_2 = struct.unpack('<i', header[0x048:0x04c])[0]
    n_samples = struct.unpack('<i', header[0x05c:0x060])[0]
    sample_rate = struct.unpack('<i', header[0x17c:0x180])[0]

    assert n_samples % 2 == 0

    pagesize = n_samples//2

    # One byte per sample; read the whole file and index into it
    # using the pointers from the header.
    data = np.fromfile(path, dtype=np.uint8)

    t = np.arange(n_samples)/sample_rate
    x0 = np.empty(n_samples)

    # Samples are interleaved on two (?) pages
    x0[0::2] = data[offset_1:offset_1+pagesize]
    x0[1::2] = data[offset_2:offset_2+pagesize]

    # These will depend on attenuator settings. I'm not sure
    # how to read them from the file, but it's easy to guess
    # them manually when comparing file contents to what is
    # shown on the screen.
    n = -0.4
    k = 0.2

    x = x0*k + n

    return t, x

t, x = load_wfm("Newfile1.wfm")

Basically, the file consists of a header and a sample buffer. The header contains metadata about the capture, like the sample rate and the number of captured samples. It also contains pointers into the buffer. Each sample in a trace is represented by one byte. I'm guessing it is a raw, unsigned 8-bit value from the ADC. That value needs to be scaled according to the attenuator and probe settings to get the measured voltage level. I didn't manage to figure out how the attenuator settings are stored in the header, so I calculated the scaling constants manually, by comparing the raw values with what was displayed on the scope's screen. Since I was doing all captures at the same settings, that worked for me.
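The two constants amount to a straight-line fit: given two (raw byte value, on-screen voltage) pairs read off the display, you can solve for them directly. A small sketch with made-up reference points that happen to reproduce the constants above:

```python
# Solve x = k*x0 + n for k and n from two (raw byte, screen voltage)
# pairs. The reference values below are made-up examples, not taken
# from an actual file.
def scale_from_references(p1, p2):
    (raw1, volts1), (raw2, volts2) = p1, p2
    k = (volts2 - volts1) / (raw2 - raw1)  # volts per ADC count
    n = volts1 - k * raw1                  # offset at raw value 0
    return k, n

# E.g. if raw byte 2 lines up with 0.0 V and raw byte 7 with 1.0 V:
k, n = scale_from_references((2, 0.0), (7, 1.0))
print(k, n)  # 0.2 and -0.4, matching the constants used above
```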

I also didn't bother to completely understand the layout of the file. The code above worked for exports containing only channel 1. In all my files the samples were interleaved in two memory pages: even samples were in one page, odd samples in another. I'm not sure if that's always the case and the code obviously does not attempt to cover any other layout.

Here is a plot of the data I extracted for the trace that is shown on the photograph above:

Plot of the sample data extracted from the WFM file.

I compared the trace data I extracted from the WFM file with the data from the CSV file that is generated by the oscilloscope's own slow CSV export function. The differences between the two are on the order of 10⁻¹⁵, which is most likely due to floating point precision. For all practical purposes, the values from both exports are identical:
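The comparison itself is just an element-wise difference. For illustration, a minimal way to quantify it, with synthetic data standing in for the two actual exports:

```python
import numpy as np

def max_difference(a, b):
    """Largest absolute element-wise disagreement between two traces."""
    n = min(len(a), len(b))
    return float(np.max(np.abs(np.asarray(a[:n]) - np.asarray(b[:n]))))

# Synthetic stand-ins for the WFM and CSV data: identical values give
# exactly zero, values perturbed in the last decimal digits give a
# difference on the order of 1e-15.
a = np.linspace(-0.4, 0.6, 1000)
print(max_difference(a, a))          # 0.0
print(max_difference(a, a + 1e-15))  # ~1e-15
```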

Difference between data from the WFM file and the CSV export.

Anyway, I hope this is useful for anyone else who needs to extract data from these scopes. Just please be aware that it is only a minimal viable solution for what I needed to do - the code will need some further hacking if you apply it to your own files.

Posted by Tomaž | Categories: Code | Comments »

Measuring USB cable resistance

28.06.2019 15:04

I commonly hear questions about finding good USB cables for powering single-board computers like the Raspberry Pi. The general consensus on the web seems to be that you need a good USB charger and a good USB cable. A charger typically has a specification, which gives at least some indication of whether it will be able to deliver the current you require. On the other hand, no USB cable I've seen advertises its series resistance, so how can you distinguish good cables from bad?

Since it bothered me that I didn't have a good answer at hand, I did a bit of digging around the web to see what tools are readily available. Please note however that this isn't really a proper review, since I haven't actually used any of the devices I write about here. It's all just based on the descriptions I found on the web and my previous experience with doing similar measurements.

A bunch of random micro USB cables.

On battery-powered devices, like mobile phones, using a cable with a high resistance is hardly noticeable. It will often only result in slightly longer charge times, since many battery management circuits adjust the charging current according to the source resistance they see. Unless the battery is completely dead, it will bridge any extra current demand from the device, so operation won't be affected. Since most USB cables are used to charge smartphones these days, this tolerance of bad cables seems to have led us to the current state where many cables have unreasonably high resistance.

With a device that doesn't have its own battery the ability of the power supply to deliver a large current is much more important. If the voltage on the end of the cable drops too much, say because of high cable resistance, the device will refuse to boot up or randomly restart under load. Raspberry Pi boards try to address this to some degree with a built-in voltage monitor that detects if the supply voltage has dropped too much. In that case it will attempt to lower the CPU clock, which in turn lowers current consumption and voltage drop at the cost of computing performance. It will also print out an Under-voltage detected warning to the kernel log.
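On the Pi itself, the same information is also available through `vcgencmd get_throttled`, which reports these conditions as a bit field. A sketch of decoding it - the bit assignments below are my reading of the Raspberry Pi documentation, so double-check them against the current docs:

```python
def decode_throttled(value):
    """Decode the hex value reported by `vcgencmd get_throttled`.
    Bit assignments are taken from the Raspberry Pi documentation
    as I understand it; verify before relying on them."""
    flags = {
        0: "under-voltage detected",
        1: "ARM frequency capped",
        2: "currently throttled",
        16: "under-voltage has occurred",
        17: "ARM frequency capping has occurred",
        18: "throttling has occurred",
    }
    return [name for bit, name in flags.items() if value & (1 << bit)]

# 0x50005 sets bits 0, 2, 16 and 18: the supply is sagging right now
# and has sagged before.
print(decode_throttled(0x50005))
```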

Measuring resistance is theoretically simple: it's just voltage drop divided by current. In practice, however, it is surprisingly hard to do accurately with tools that are commonly at hand. The trouble boils down mostly to non-destructively hooking anything up to a USB connector without some kind of a break-out board, and the fact that a typical multimeter isn't able to accurately measure resistances in the range of 1 Ω.
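To put some numbers on this, here is a made-up example of what even a modest cable resistance does to the voltage that reaches the board:

```python
# Made-up example values: a 5.10 V charger, a cable with 0.5 ohm
# total round-trip resistance and a board drawing 2 A.
v_source = 5.10   # V
r_cable = 0.5     # ohm, both conductors combined
i_load = 2.0      # A

# Ohm's law: the cable drops r_cable * i_load volts.
v_board = v_source - r_cable * i_load
print(v_board)  # 4.1 V, low enough to trigger under-voltage warnings
```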

"Charging slowly" on the bottom of Android lock screen.

Gašper tells me he has a method where he tests micro USB cables by plugging them into his smartphone and into a 2 A charger. If the phone says that it is fast charging, then the cable is probably good to power a Raspberry Pi. I guess the effectiveness of this method depends on what smartphone you're using and how it reports the charging rate. My phone for example shows either Charging or Charging slowly on the bottom of the lock screen. There are also apparently dedicated apps that show some more information. I'm not sure how much of that is actual measurement and how much is just guesswork based on the phone model and charging mode. Anyway, it's a crude, last resort method if you don't have any other equipment at hand, but it's better than nothing.

Riden UM34 USB multimeter

Image by Banggood

The ubiquitous USB multimeters aren't much use for testing cables. All I've personally seen have a USB A plug on one end and a USB A socket on the other, so you can't connect them at the end of a micro USB cable. The only one I found that has a micro USB connector is the Riden UM34C. Its software apparently also has a mode for measuring cable resistance, as this video demonstrates, which conveniently calculates resistance from voltage and current measurements. However, you also need an adjustable DC load in addition to the multimeter.

I can't say much about the accuracy of this device without testing it in person. I like the fact that you can measure the cable at a high current. Contacts in connectors usually have a slightly non-linear characteristic, so a measurement at a low current can show a higher resistance than in actual use. The device also apparently compensates for the internal resistance of the source. At least that's my guess why it requires two measurements at approximately the same current: one with a cable and one without.
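My assumption about the arithmetic behind the two-step measurement is that at the same load current, the difference between the two readings isolates the drop across the cable alone. A sketch with made-up readings:

```python
def cable_resistance_two_step(v_direct, v_through_cable, current):
    """Cable resistance from two voltage readings at approximately
    the same load current: one with the load connected directly to
    the source, one with the load connected through the cable under
    test. The source's internal drop is the same in both cases and
    cancels out. This is my guess at how the UM34C does it."""
    return (v_direct - v_through_cable) / current

# Made-up readings: 5.02 V directly, 4.62 V through the cable, at 1 A:
print(cable_resistance_two_step(5.02, 4.62, 1.0))  # ~0.4 ohm
```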

This was the only reasonably priced device I found that was actually in stock and could be ordered in a ready-to-use state.

USB Cable resistance tester from FemtoCow

Image by FemtoCow

I really like the approach the FemtoCow USB cable resistance tester takes. It's a very simple PCB that allows you to use an adjustable lab power supply and a multimeter to measure the resistance. It seems perfect when you already have all the necessary lab equipment and just need a convenient setup with a reference resistor and a break-out for various USB connectors. I wanted to order this one immediately when I found it, but sadly it seems to be out of stock.

What I like about this method is again the fact that you are measuring the resistance at a high current. The method with the shunt resistor can be very accurate, since it doesn't depend on the absolute accuracy of the multimeter. If the resistor value is accurate (and 1% resistors are widely available today), even multimeter calibration doesn't really matter, as long as the scale is roughly linear. The PCB also looks like it uses proper Kelvin connections, so that the resistance of the traces affects the measurement as little as possible.
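As I understand the shunt method, the arithmetic is just a voltage divider ratio: the reference resistor is in series with the cable, so the same current flows through both and only the voltage ratio matters. Sketched with made-up readings:

```python
def resistance_from_ratio(r_ref, v_ref, v_cable):
    """Series circuit: the same current flows through the reference
    resistor and the cable, so R_cable = R_ref * V_cable / V_ref.
    Any constant multimeter gain error cancels in the ratio, which
    is why only the linearity of the scale matters."""
    return r_ref * v_cable / v_ref

# Made-up readings: a 1 ohm 1% reference resistor with 0.50 V across
# it, and 0.21 V measured across the cable:
print(resistance_from_ratio(1.0, 0.50, 0.21))  # 0.42 ohm
```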

USBcablecracker from SZDIY.

Finally, Gašper also pointed me to an open hardware project by the Shenzhen hackerspace SZDIY. The USB Cable Cracker is a stand-alone device based around the ATmega32U4. It tests the cable by passing approximately 25 mA through it. It then measures the voltage drop using an amplifier and the ATmega's ADC and calculates the resistance. A switch allows you to measure either the power or the data lines. The measured value is displayed on an LCD. Gerber files for the PCB layout and the firmware source are on GitHub, so you can make this device yourself (0603 passives can be a bit of a pain to solder though). I haven't seen it sold anywhere in an assembled state.

The analog design of this device seems sound. The biggest drawback I think is the low current it uses for the measurement. The digital part however looks like overkill. The author wanted to also use it as a general development board. That is fine of course, but if you're making this only for testing cables, having an expensive 44-pin microcontroller just to use one ADC pin seems like a huge waste. Same with the large 16x2 LCD that only shows the resistance. Another thing I was missing was a more comprehensive BOM, so if you want to make this yourself, be prepared to spend a bit of time searching for the right parts that fit the PCB layout.


In conclusion, this problem of measuring USB cable resistance was a bit of a rabbit hole I fell into. For all practical purposes, probably any of the methods above is sufficiently accurate to identify cables that are grossly out of spec. I guess you could even do it with a 3½ digit multimeter on the 200 Ω range and a cut-off extender cable as a break-out to access the connections. In the end, if all you're interested in is the stability of a Raspberry Pi, just switching cables until you find one where it runs stable works as well.

On the other hand, there is some beauty in being able to get trustworthy and repeatable measurements. I wasn't happy with any of the tools I found and considering I already wasted plenty of time researching this I decided to make my own. It's heavily inspired by FemtoCow's design and I'll write about it in another post.

Posted by Tomaž | Categories: Analog | Comments »

Double pendulum simulation

16.05.2019 21:05

One evening a few years ago I was sitting behind a desk with a new, fairly powerful computer at my fingertips. The kind of a system where you run top and the list of CPUs doesn't fit in the default terminal window. Whatever serious business I was using it for at the time didn't parallelize very well and I felt most of its potential remained unused. I was curious how well the hardware would truly perform if I could throw at it some computing problem that would be better suited for a massively parallel machine.

Somewhere around that time I also stumbled upon a gallery of some nice videos of chaotic pendulums. These were done in Mathematica and simulated a group of double-pendulums with slightly different initial conditions. I really liked the visualization. Each pendulum is given a different color. They first move in sync, but after a while their movements deviate and the line they trace falls apart into a rainbow.

Simulation of 42 double pendulums.

Image by aWakeOfBuzzards

The simulations published by aWakeOfBuzzards included only 42 pendulums. I guess it's a reference to the Hitchhiker's Guide, but I thought, why not do better than that? Would it be possible to eliminate visual gaps between the traces? Since each simulation of a pendulum is independent, this should be a really nice, embarrassingly parallel problem I was looking for.

I didn't want to spend a lot of time writing code. This was just another crazy idea and I could only rationalize avoiding more important tasks for so long. Since I couldn't run Mathematica on that machine, I couldn't re-use aWakeOfBuzzards' code, and rewriting it in NumPy seemed non-trivial. Nonetheless, I still managed to shamelessly copy most of the code from various other sources on the web. For a start, I found a usable piece of physics simulation code in a Matplotlib example.

aWakeOfBuzzards' simulations simply draw the pendulum traces opaquely on top of each other. It appears that the code draws the red trace last, since when all the pendulums move closely together, all other traces get covered and the trail appears red. I wanted to do better. I had CPU cycles to spare after all.

Instead of rendering animation frames in standard red-green-blue color planes, I worked with wavelengths of visible light. I assigned each pendulum a specific wavelength and added that emission line to the spectrum of each pixel it occupied. Only when I had a complete spectrum for each pixel did I convert it to an RGB tuple. This meant that when all the pendulums were on top of each other, they would appear white, since white light is a sum of all wavelengths. When they diverged, the white line would naturally break apart into a rainbow.
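The idea can be sketched roughly as follows. The wavelength-to-RGB conversion here is a simplified piecewise approximation (loosely based on Dan Bruton's well-known algorithm), not the exact conversion my code used:

```python
import numpy as np

def wavelength_to_rgb(wl):
    """Rough piecewise-linear approximation mapping a wavelength in
    nanometers to an RGB triple; a crude stand-in for a proper
    spectrum-to-color conversion."""
    if 380 <= wl < 440:
        r, g, b = (440 - wl) / 60, 0.0, 1.0
    elif 440 <= wl < 490:
        r, g, b = 0.0, (wl - 440) / 50, 1.0
    elif 490 <= wl < 510:
        r, g, b = 0.0, 1.0, (510 - wl) / 20
    elif 510 <= wl < 580:
        r, g, b = (wl - 510) / 70, 1.0, 0.0
    elif 580 <= wl < 645:
        r, g, b = 1.0, (645 - wl) / 65, 0.0
    elif 645 <= wl <= 780:
        r, g, b = 1.0, 0.0, 0.0
    else:
        r, g, b = 0.0, 0.0, 0.0
    return np.array([r, g, b])

# Each pendulum contributes one emission line to a pixel's spectrum.
# Summing many overlapping lines pushes the pixel towards white;
# a single line keeps its pure spectral color.
wavelengths = np.linspace(400, 700, 50)
pixel = sum(wavelength_to_rgb(wl) for wl in wavelengths)
pixel = np.clip(pixel / pixel.max(), 0, 1)  # normalize to [0, 1]
print(pixel)
```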

Frames from an early attempt at the pendulum simulation.

For parallelization, I simply used a process pool from Python's multiprocessing package with N - 1 worker processes, where N was the number of processors in the system. The worker processes ran the Runge-Kutta integration and returned lists of vertex positions. The master process then rendered the pendulums and wavelength data to an RGB framebuffer by abusing ImageDraw.line from the Pillow library. Since drawing traces behind the pendulums meant that animation frames were not independent of each other, I dropped that idea and instead only rendered the pendulums themselves.

For 30 seconds of simulation this resulted in an approximately 10 GB binary .npy file with raw framebuffer data. I then used another, non-parallel step that used Pillow and FFmpeg to compress it to a more reasonably sized MPEG video file.

Double pendulum Monte Carlo

(Click to watch Double pendulum Monte Carlo video)

Of course, it took several attempts to fine-tune various simulation parameters to get the nice looking result you can find above. This final video is rendered from 200,000 individual pendulum simulations. Initial conditions only differed in the angular velocity of the second pendulum segment, which was chosen from a uniform distribution.

200,000 is not an insanely high number. It manages to blur most of the gaps between the pendulums, but you can still see the cloud occasionally fall apart into individual lines. Unfortunately I didn't note down at the time which bottleneck exactly kept me from going higher. Looking at the code now, it was most likely the non-parallel rendering of the final frames. I was also beyond the point of diminishing returns, and probably something like interpolation between the individual pendulum solutions would yield better results than just increasing their number.

I was recently reminded of this old hack I did and I thought I might share it. It was a reminder of a different time and a trip down the memories to piece the story back together. The project that funded that machine is long concluded and I now spend evenings behind a different desk. I guess using symmetric multiprocessing was getting out of fashion even back then. I would like to imagine that these days someone else is sitting in that lab and wondering similar things next to a GPU cluster.

Posted by Tomaž | Categories: Life | Comments »