Measuring USB cable resistance

28.06.2019 15:04

I commonly hear questions about finding good USB cables for powering single-board computers like the Raspberry Pi. The general consensus on the web seems to be that you need a good USB charger and a good USB cable. A charger typically has a specification that gives at least some indication of whether it will be able to deliver the current you require. On the other hand, no USB cable I've seen advertises its series resistance, so how can you distinguish good cables from bad?

Since it bothered me that I didn't have a good answer at hand, I did a bit of digging around the web to see what tools are readily available. Please note however that this isn't really a proper review, since I haven't actually used any of the devices I write about here. It's all just based on the descriptions I found on the web and my previous experience with doing similar measurements.

A bunch of random micro USB cables.

On battery-powered devices, like mobile phones, using a cable with a high resistance is hardly noticeable. It will often only result in slightly longer charge times, since many battery management circuits adjust the charging current according to the source resistance they see. Unless the battery is completely dead, it will bridge any extra current demand from the device, so operation won't be affected. Since most USB cables are used to charge smartphones these days, this tolerance of bad cables seems to have led us to the current state where many cables have unreasonably high resistance.

With a device that doesn't have its own battery, the ability of the power supply to deliver a large current is much more important. If the voltage at the end of the cable drops too much, say because of high cable resistance, the device will refuse to boot up or will randomly restart under load. Raspberry Pi boards try to address this to some degree with a built-in voltage monitor that detects if the supply voltage has dropped too much. In that case the board attempts to lower the CPU clock, which in turn lowers current consumption and voltage drop at the cost of computing performance. It also prints an Under-voltage detected warning to the kernel log.

Measuring resistance is theoretically simple: it's just voltage drop divided by current. In practice, however, it is surprisingly hard to do accurately with tools that are commonly at hand. The trouble mostly boils down to non-destructively hooking anything up to a USB connector without some kind of a break-out board, and the fact that a typical multimeter isn't able to accurately measure resistances in the range of 1 Ω.
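
As a quick worked example of that arithmetic, here is the calculation in Python, with made-up numbers for illustration:

# Voltage measured at the charger output and at the device end of the
# cable, with a known load current flowing (hypothetical values).
v_source = 5.10   # V, at the charger output
v_device = 4.62   # V, at the device end of the cable
current = 1.2     # A, load current

# Total series resistance of the cable (supply and ground wires combined).
resistance = (v_source - v_device) / current
print("cable resistance: %.2f ohm" % resistance)   # prints 0.40 ohm

A cable like this drops almost half a volt at 1.2 A, which can already be enough to trigger the Raspberry Pi's under-voltage warning.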

"Charging slowly" on the bottom of Android lock screen.

Gašper tells me he has a method where he tests micro USB cables by plugging them into his smartphone and a 2 A charger. If the phone says that it is fast charging, then the cable is probably good enough to power a Raspberry Pi. I guess the effectiveness of this method depends on what smartphone you're using and how it reports the charging rate. My phone, for example, shows either Charging or Charging slowly on the bottom of the lock screen. There are also apparently dedicated apps that show some more information. I'm not sure how much of that is actual measurement and how much is just guesswork based on the phone model and charging mode. Anyway, it's a crude, last-resort method if you don't have any other equipment at hand, but it's better than nothing.

Riden UM34 USB multimeter

Image by Banggood

The ubiquitous USB multimeters aren't much use for testing cables. All I've personally seen have a USB A plug on one end and a USB A socket on the other, so you can't connect them at the end of a micro USB cable. The only one I found that has a micro USB connector is the Riden UM34C. Its software apparently also has a mode for measuring cable resistance, which conveniently calculates resistance from voltage and current measurements, as this video demonstrates. However, you also need an adjustable DC load in addition to the multimeter.

I can't say much about the accuracy of this device without testing it in person. I like the fact that you can measure the cable at a high current. Contacts in connectors usually have a slightly non-linear characteristic, so a measurement at a low current can show a higher resistance than in actual use. The device also apparently compensates for the internal resistance of the source. At least that's my guess as to why it requires two measurements at approximately the same current: one with the cable and one without.
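
If that guess is right, the arithmetic behind the compensation would look something like this (a sketch with made-up numbers, not something taken from the UM34C documentation):

# Two voltage readings at approximately the same load current: one with
# the meter plugged directly into the source, one with the cable under
# test inserted in between.
i_load = 1.0       # A (hypothetical)
v_direct = 4.95    # V, meter directly on the charger
v_cable = 4.55     # V, meter at the far end of the cable

# The drop across the source's internal resistance is present in both
# readings and cancels out, leaving only the cable's contribution.
r_cable = (v_direct - v_cable) / i_load
print("cable resistance: %.2f ohm" % r_cable)   # prints 0.40 ohm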

This was the only reasonably priced device I found that was actually in stock and could be ordered in a ready-to-use state.

USB Cable resistance tester from FemtoCow

Image by FemtoCow

I really like the approach the FemtoCow USB cable resistance tester takes. It's a very simple PCB that allows you to use an adjustable lab power supply and a multimeter to measure the resistance. It seems perfect when you already have all the necessary lab equipment and just need a convenient setup with a reference resistor and a break-out for various USB connectors. I wanted to order this one immediately when I found it, but sadly it seems to be out of stock.

What I like about this method is again the fact that you are measuring the resistance at a high current. The method with the shunt resistor can also be very accurate. If the resistor value is accurate (and 1% resistors are widely available today), even multimeter calibration doesn't really matter, as long as its scale is roughly linear. The PCB also looks like it uses proper Kelvin connections, so that the resistance of the traces affects the measurement as little as possible.
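
To illustrate why the calibration cancels out, here is the shunt method reduced to a few lines of Python (the values are made up; I don't know the actual reference resistor FemtoCow uses):

# The cable under test and the reference shunt are in series, so they
# carry the same current. The ratio of the two voltage drops therefore
# equals the ratio of the two resistances, and any constant gain error
# of the voltmeter appears in both readings and divides out.
r_shunt = 1.0      # ohm, 1% reference resistor (hypothetical value)
v_shunt = 0.98     # V, measured across the shunt
v_cable = 0.42     # V, measured across the cable

r_cable = r_shunt * v_cable / v_shunt
print("cable resistance: %.2f ohm" % r_cable)   # prints 0.43 ohm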

USBcablecracker from SZDIY.

Finally, Gašper also pointed me to an open hardware project by the Shenzhen hackerspace SZDIY. The USB Cable Cracker is a stand-alone device based around the ATmega32U4. It tests the cable by passing approximately 25 mA through it. It then measures the voltage drop using an amplifier and the ATmega's ADC and calculates the resistance. A switch allows you to measure either the power or the data lines. The measured value is displayed on an LCD. Gerber files for the PCB layout and the firmware source are on GitHub, so you can make this device yourself (0603 passives can be a bit of a pain to solder though). I haven't seen it sold anywhere in an assembled state.
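
To get a feeling for the numbers involved: at 25 mA a 0.5 Ω cable drops only 12.5 mV, which is why an amplifier is needed in front of the ADC. A sketch of the conversion the firmware has to do, with hypothetical gain and reference values (the real ones are in the firmware source):

i_test = 0.025     # A, test current through the cable
gain = 50.0        # amplifier gain (hypothetical)
v_ref = 2.56       # V, ADC reference voltage (hypothetical)
adc_reading = 250  # raw value from the 10-bit ADC

v_drop = adc_reading * v_ref / 1024 / gain   # voltage across the cable
r_cable = v_drop / i_test
print("cable resistance: %.2f ohm" % r_cable)   # prints 0.50 ohm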

The analog design of this device seems sound. The biggest drawback, I think, is the low current it uses for measurement. The digital part, however, looks like overkill. The author wanted to also use it as a general development board. That is fine of course, but if you're making this only for testing cables, having an expensive 44-pin microcontroller just to use one ADC pin seems like a huge waste. The same goes for the large 16x2 LCD that only shows the resistance. Another thing I missed was a more comprehensive BOM, so if you want to make this yourself, be prepared to spend a bit of time searching for the right parts that fit the PCB layout.


In conclusion, this problem of measuring USB cabling was a bit of a rabbit hole I fell into. For all practical purposes, probably any of the methods above is sufficiently accurate to identify cables that are grossly out of spec. I guess you could even do it with a 3½ digit multimeter on the 200 Ω range and a cut-off extender cable as a break-out to access the connections. In the end, if all you're interested in is the stability of a Raspberry Pi, just switching cables until you find one where it runs stably works as well.

On the other hand, there is some beauty in being able to get trustworthy and repeatable measurements. I wasn't happy with any of the tools I found, and considering I had already wasted plenty of time researching this, I decided to make my own. It's heavily inspired by FemtoCow's design and I'll write about it in another post.

Posted by Tomaž | Categories: Analog | Comments »

Double pendulum simulation

16.05.2019 21:05

One evening a few years ago I was sitting behind a desk with a new, fairly powerful computer at my fingertips. The kind of system where you run top and the list of CPUs doesn't fit in the default terminal window. Whatever serious business I was using it for at the time didn't parallelize very well and I felt most of its potential remained unused. I was curious how well the hardware would truly perform if I could throw at it some computing problem better suited for a massively parallel machine.

Somewhere around that time I also stumbled upon a gallery of some nice videos of chaotic pendulums. These were done in Mathematica and simulated a group of double pendulums with slightly different initial conditions. I really liked the visualization. Each pendulum is given a different color. They first move in sync, but after a while their movements deviate and the line they trace falls apart into a rainbow.

Simulation of 42 double pendulums.

Image by aWakeOfBuzzards

The simulations published by aWakeOfBuzzards included only 42 pendulums. I guess it's a reference to the Hitchhiker's Guide, but I thought, why not do better than that? Would it be possible to eliminate visual gaps between the traces? Since each simulation of a pendulum is independent, this should be a really nice, embarrassingly parallel problem I was looking for.

I didn't want to spend a lot of time writing code. This was just another crazy idea and I could only rationalize avoiding more important tasks for so long. Since I couldn't run Mathematica on that machine, I couldn't re-use aWakeOfBuzzards's code and rewriting it to Numpy seemed non-trivial. Nonetheless, I still managed to shamelessly copy most of the code from various other sources on the web. For a start, I found a usable piece of physics simulation code in a Matplotlib example.

aWakeOfBuzzards' simulations simply draw the pendulum traces opaquely on top of each other. It appears that the code draws the red trace last, since when all the pendulums move closely together, all other traces get covered and the trail appears red. I wanted to do better. I had CPU cycles to spare after all.

Instead of rendering animation frames in standard red-green-blue color planes, I worked with wavelengths of visible light. I assigned each pendulum a specific wavelength and added that emission line to the spectrum of each pixel it occupied. Only when I had a complete spectrum for each pixel did I convert it to an RGB tuple. This meant that when all the pendulums were on top of each other, they would be seen as white, since white light is a sum of all wavelengths. When they diverged, the white line would naturally break apart into a rainbow.
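
The idea can be sketched in a few lines of Python. For brevity, this sketch sums per-line RGB contributions directly instead of accumulating a full spectrum per pixel and converting at the end, and it uses a crude piecewise-linear wavelength-to-color approximation:

import numpy as np

def wavelength_to_rgb(nm):
    # Very rough approximation of the color of monochromatic light,
    # good enough for visualization purposes.
    if 380 <= nm < 440:
        return np.array([(440 - nm) / 60, 0.0, 1.0])
    elif 440 <= nm < 490:
        return np.array([0.0, (nm - 440) / 50, 1.0])
    elif 490 <= nm < 510:
        return np.array([0.0, 1.0, (510 - nm) / 20])
    elif 510 <= nm < 580:
        return np.array([(nm - 510) / 70, 1.0, 0.0])
    elif 580 <= nm < 645:
        return np.array([1.0, (645 - nm) / 65, 0.0])
    elif 645 <= nm <= 780:
        return np.array([1.0, 0.0, 0.0])
    return np.zeros(3)

# One emission line per pendulum, spread over the visible range.
wavelengths = np.linspace(420, 680, 42)

# Add each pendulum's contribution to the pixel it occupies. Where many
# pendulums overlap, the sum of all the lines approaches white.
frame = np.zeros((720, 1280, 3))
positions = [(640, 360)] * 42   # stand-in for real simulation output
for nm, (x, y) in zip(wavelengths, positions):
    frame[y, x] += wavelength_to_rgb(nm)
frame = np.clip(frame / frame.max(), 0.0, 1.0)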

Frames from an early attempt at the pendulum simulation.

For parallelization, I simply used a process pool from Python's multiprocessing package with N - 1 worker processes, where N was the number of processors in the system. The worker processes solved the equations of motion using the Runge-Kutta method and returned a list of vertex positions. The master process then rendered the pendulum and wavelength data to an RGB framebuffer by abusing ImageDraw.line from the Pillow library. Since drawing traces behind the pendulums meant that animation frames were not independent of each other, I dropped that idea and instead only rendered the pendulums themselves.
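
The skeleton of that setup looks roughly like the following sketch (not the actual code; the equations of motion are adapted from the Matplotlib double pendulum example):

import multiprocessing
import numpy as np
from scipy.integrate import solve_ivp

G, L1, L2, M1, M2 = 9.81, 1.0, 1.0, 1.0, 1.0

def derivs(t, y):
    # Double pendulum equations of motion, adapted from the Matplotlib
    # example. The state is [theta1, omega1, theta2, omega2].
    th1, w1, th2, w2 = y
    delta = th2 - th1
    den1 = (M1 + M2)*L1 - M2*L1*np.cos(delta)**2
    den2 = (L2/L1)*den1
    dw1 = (M2*L1*w1**2*np.sin(delta)*np.cos(delta)
           + M2*G*np.sin(th2)*np.cos(delta)
           + M2*L2*w2**2*np.sin(delta)
           - (M1 + M2)*G*np.sin(th1)) / den1
    dw2 = (-M2*L2*w2**2*np.sin(delta)*np.cos(delta)
           + (M1 + M2)*G*np.sin(th1)*np.cos(delta)
           - (M1 + M2)*L1*w1**2*np.sin(delta)
           - (M1 + M2)*G*np.sin(th2)) / den2
    return [w1, dw1, w2, dw2]

def solve_pendulum(omega2_0):
    # Integrate one pendulum for 30 s, sampled at 60 frames per second.
    y0 = [np.pi/2, 0.0, np.pi/2, omega2_0]
    t = np.linspace(0.0, 30.0, 30*60)
    return solve_ivp(derivs, (0.0, 30.0), y0, t_eval=t).y

if __name__ == "__main__":
    # Initial conditions differ only in the angular velocity of the
    # second segment, drawn from a uniform distribution.
    initial = np.random.uniform(-0.01, 0.01, 1000)
    with multiprocessing.Pool(multiprocessing.cpu_count() - 1) as pool:
        solutions = pool.map(solve_pendulum, initial)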

For 30 seconds of simulation this resulted in an approximately 10 GB binary .npy file with raw framebuffer data. I then used a second, non-parallel step with Pillow and FFmpeg to compress it into a more reasonably sized MPEG video file.
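
The raw frames can also be piped into FFmpeg directly, without the intermediate step. Something along these lines (a sketch assuming uint8 RGB frames in frames.npy and 60 fps H.264 output, not the script I actually used):

import subprocess
import numpy as np

# Memory-map the raw frames so the 10 GB file never sits in RAM whole.
frames = np.load("frames.npy", mmap_mode="r")
n_frames, height, width, _ = frames.shape

ffmpeg = subprocess.Popen([
    "ffmpeg", "-f", "rawvideo", "-pix_fmt", "rgb24",
    "-s", "%dx%d" % (width, height), "-r", "60", "-i", "-",
    "-c:v", "libx264", "-pix_fmt", "yuv420p", "out.mp4",
], stdin=subprocess.PIPE)

for frame in frames:
    ffmpeg.stdin.write(np.ascontiguousarray(frame).tobytes())
ffmpeg.stdin.close()
ffmpeg.wait()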

Double pendulum Monte Carlo

(Click to watch Double pendulum Monte Carlo video)

Of course, it took several attempts to fine-tune various simulation parameters to get the nice-looking result you can find above. This final video is rendered from 200,000 individual pendulum simulations. The initial conditions differed only in the angular velocity of the second pendulum segment, which was chosen from a uniform distribution.

200,000 is not an insanely high number. It manages to blur most of the gaps between the pendulums, but you can still see the cloud occasionally fall apart into individual lines. Unfortunately, I didn't note down at the time exactly which bottleneck kept me from going higher. Looking at the code now, it was most likely the non-parallel rendering of the final frames. I was also beyond the point of diminishing returns, and probably something like interpolation between the individual pendulum solutions would yield better results than just increasing their number.

I was recently reminded of this old hack and thought I might share it. It was a reminder of a different time and a trip down memory lane to piece the story back together. The project that funded that machine has long concluded and I now spend evenings behind a different desk. I guess using symmetric multiprocessing was getting out of fashion even back then. I would like to imagine that these days someone else is sitting in that lab, wondering similar things next to a GPU cluster.

Posted by Tomaž | Categories: Life | Comments »

Recapping the Ice Tube Clock

05.05.2019 11:00

I was recently doing some electrical work and had to turn off the power in the apartment. I don't even remember when I last did that - the highest uptime I saw when shutting down things was 617 days. As it tends to happen, not everything woke up when the power came back. Most failures were simple software problems, common on systems that haven't been rebooted for a while. One thing that was not a software bug, however, was Adafruit's Ice Tube Clock, which refused to turn on when I plugged it back in.

I got the clock as a gift way back in 2010 and it has held a prime position in my living room ever since. I would be sad to see it go. Over the years its firmware also accumulated a few minor hacks: the clock's oscillator is quite inaccurate, so I added a simple software drift correction. I was also slightly bothered by the way nines were displayed on the 7-segment display. You can find my modifications on GitHub.

Adafruit's Ice Tube Clock.

Thankfully, the clock ran fine when I powered it from a lab power supply. The issue was obviously in the little 9 V, 660 mA switching power supply that came with it.

Checking the output of the power supply with an oscilloscope showed that it had excessive ripple and bad voltage regulation. Idle, it had 5 V of ripple on top of the 9 V DC level. When loaded with 500 mA, the output DC level fell by almost 3 V.

Power supply output voltage before repair.

I don't know what the output looked like when it was new, but these measurements looked suspicious. High ripple is also a typical symptom of a bad output capacitor in a switching power supply. Opening up the power supply was easy. It's always nice to see a plastic enclosure with screws and tabs instead of glue. Inside I found C9 - a 470 μF, 16 V electrolytic capacitor on the secondary side of the converter:

Ice Tube Clock power supply with the problematic C9 marked.

The original capacitor was a purple Nichicon KME series rated for 105°C (the photograph above shows it already replaced). Visually it looked just fine. On a component tester it measured 406 μF with an ESR of 3.4 Ω. While the capacitance looked acceptable, the series resistance was around ten times higher than what is typical for a capacitor of this size. I replaced it with a capacitor of an equivalent rating. Before soldering it into the circuit, the replacement measured 426 μF with an ESR of 170 mΩ.

After repair, the output of the power supply looked much cleaner on the oscilloscope:

Power supply output voltage after repair.

The clock now runs fine on its own power supply and it's again happily ticking away back in its place.

I guess 9 years is a reasonable lifetime for an aluminum capacitor. I found this datasheet that seems to match the original part. It says the lifetime is 2000-3000 h at 105°C. Of course, in this power supply it doesn't get nearly that hot. I would say it's not much more than 50°C most of the time. 9 years is around 80,000 h, so I've seen a lifetime around 30 times longer than at the rated temperature. This figure seems to be in the right ballpark (see for example this discussion of the expected lifetime of electrolytic capacitors).
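
The usual rule of thumb is that electrolytic capacitor lifetime doubles for every 10°C below the rated temperature. Plugging in the numbers above gives:

$$L = L_0 \cdot 2^{\frac{T_{rated} - T_{op}}{10}} \approx 2500\ \mathrm{h} \cdot 2^{\frac{105 - 50}{10}} \approx 2500\ \mathrm{h} \cdot 45 \approx 110000\ \mathrm{h}$$

which is in the same ballpark as the roughly 80,000 h this capacitor actually survived.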

Posted by Tomaž | Categories: Analog | Comments »

Google is eating our mail

25.04.2019 20:06

I've been running a small SMTP and IMAP mail server for many years, hosting a handful of individual mailboxes. It's hard to say when exactly I started. whois says I registered the tablix.org domain in 2005 and I remember hosting a mailing list for my colleagues at the university a bit before that, so I think it's safe to say it's been around 15 years.

Although I don't jump right away on every email-related novelty, I've tried to keep the server up to date with well-accepted standards over the years. Some of these came for free with Debian updates. Others needed some manual work. For example, I have SPF records and DKIM message signing set up on the domains I use. The server is hosted on commercial static IP space (with the very same IP it first went on-line with) and I've made sure with the ISP that correct reverse DNS records are in place.
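
For illustration, the SPF and DKIM records are just TXT records published in the DNS zone. A hypothetical pair might look like this (a simplified sketch, not my actual records; the DKIM public key is truncated):

tablix.org.    IN    TXT    "v=spf1 mx -all"
selector._domainkey.tablix.org.    IN    TXT    "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."

The first one says that only the domain's own MX hosts are allowed to send mail for it. The second publishes the public key that receivers use to verify the DKIM signatures the server adds to outgoing messages.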

Homing pigeon

Image by Andreas Trepte CC BY-SA 2.5

From the beginning I've been worried that my server would be used for sending spam, so I always made sure I did not run an open relay and put in place throughput restrictions and monitoring that would alert me about unusual traffic. In any case, the amount of outgoing mail has stayed pretty minimal over the years. Since I'm hosting just a few personal accounts these days, fewer than 1000 messages have been sent to remote servers over SMTP in the last 12 months. I gave up on hosting mailing lists many years ago.

All of this effort paid off and, as far as I'm aware, my server was never listed on any of the public spam blacklists.

So why am I writing all of this? Unfortunately, email is starting to become synonymous with Google's mail, and Google's machines have decided that mail from my server is simply not worth receiving. Being a good administrator and a well-behaved player on the network is no longer enough:

550-5.7.1 [...] Our system has detected that this
550-5.7.1 message is likely unsolicited mail. To reduce the amount of spam sent
550-5.7.1 to Gmail, this message has been blocked. Please visit
550-5.7.1  https://support.google.com/mail/?p=UnsolicitedMessageError
550 5.7.1  for more information. ... - gsmtp

Since mid-December last year, I've been regularly seeing SMTP errors like these. Sometimes the same message re-sent right away will not bounce again. Sometimes rephrasing the subject will fix it. Sometimes all mail from all accounts gets blocked for weeks on end, until some lucky bit flips somewhere and mail mysteriously gets through again. Since many organizations use Gmail for mail hosting, this doesn't happen just for ...@gmail.com addresses. Now every time I write a mail I wonder whether Google's AI will let it through or not. Only when something like this happens do you realize just how impossible it is to talk to someone on the modern internet without having Google somewhere in the middle.

Of course, the 550 SMTP error helpfully links to a wholly unhelpful troubleshooting page. It vaguely refers to suspicious looking text and IP history. It points to Bulk Sender Guidelines, but I have trouble seeing myself as a bulk sender with 10 messages sent last week in total. It points to the Postmaster Tools which, after letting me jump through some hoops to authenticate, tells me I'm too small a fish and has no actual data to show.

Screenshot of Google Postmaster Tools.

So far Google has blocked personal messages to friends and family in multiple languages, as well as business mail. I've stopped guessing what text their algorithms deem suspicious. What kind of intelligence sees a reply, with the original message referenced in the In-Reply-To header and partially quoted, and considers it unsolicited? I don't discount the possibility that there is something misconfigured at my end, but since Google gives no hint, and the various third-party tools I've tried don't report anything suspicious, I've run out of ideas where else to look.

My server isn't alone with this problem. At work we use Google's mail hosting and I've seen this trigger-happy filter from the other end. Just recently I overlooked an important mail because it ended up in the spam folder. I guess it was pure luck it didn't get rejected at the SMTP layer. With my work email address I'm subscribed to several mailing lists of open source software projects, and Google will regularly decide to block this traffic. I know because Mailman sends me a notification that my address caused excessive bounces. What system decides, after months of watching me read these messages and not once seeing me mark one as spam, that I suddenly don't want to receive them ever again?

Screenshot of the mailing list probe message.

I wonder. Google as a company is famously focused on machine learning through automated analytics and a bare minimum of human contact. What kind of a signal can they possibly use to train these SMTP rejects? Mail gets rejected at the SMTP level without the user's knowledge. There is no way for a recipient to mark it as not-spam, since they don't know the message ever existed. In contrast to merely classifying mail into spam and non-spam folders, it's impossible for an unprivileged human to tell the machine it has made a mistake. Only the sender knows the mail got rejected, and they don't have any way to report it either. One half of the feedback loop appears to be missing.

I'm sure there is no malicious intent behind this and that there are some very smart people working on spam prevention at Google. However, for a metric-driven company where a majority of messages are only passed within the walled garden, I can see how there's little motivation to work well with mail coming from outside. If all the training data is people marking external mail as spam, and there's much less data about false positives, I guess it's easy to arrive at a prior that all external mail is spam, even with the best intentions.

This is my second rant about Google in a short while. I'm mostly indifferent to their search index policies, but this mail problem is much more frustrating. I can switch search engines, but I can't tell other people to get off Gmail. Email used to work, from its 7-bit days onward. It was the one standard thing you could rely on in the ever-changing mess of messaging web apps and proprietary lock-ins. And now it's increasingly broken. I hope people realize that if they don't get a reply, perhaps it's because some machine somewhere decided for them that they don't need to know about it.

Posted by Tomaž | Categories: Life | Comments »

When the terminal is not enough

19.04.2019 9:26

Sometimes I'm surprised by utilities that run fine in a terminal window on a graphical desktop but fail to do so over a remote SSH connection. Unfortunately, this seems to be a side effect of software on desktop Linux getting more tightly integrated. These days, it's more and more common to see a command-line tool pop up a graphical dialog. It looks fancy, and there might be security benefits in the case of passphrase entry, but it also means that the remote use case, with no access to the local desktop, often gets overlooked.

GNOME keyring management is the offender I bump into the most. It tries to handle all entry of sensitive credentials, like passwords and PINs, on a system and integrates with the SSH and GPG agents. I remember that it used to interfere with private SSH key passphrase entry when jumping from one SSH connection to another, but that seems to be fixed in Debian Stretch.

On the other hand, running GPG in an SSH session by default still doesn't work (this might also pop up when, for example, signing git tags):

$ gpg -s gpg_test
gpg: using "0A822E7A" as default secret key for signing
gpg: signing failed: Operation cancelled
gpg: signing failed: Operation cancelled

This happens when that same user is also logged in to the graphical desktop, but the graphical session is locked. I'm not sure exactly what happens in the background, but something somewhere seems to cancel the passphrase entry request.

The solution is to set the GPG agent to use the text-based pin entry tool. Install the pinentry-tty package and put the following into ~/.gnupg/gpg-agent.conf:

pinentry-program /usr/bin/pinentry-tty

After this, the passphrase can be entered through the SSH session:

$ gpg -s gpg_test
gpg: using "0A822E7A" as default secret key for signing
Please enter the passphrase to unlock the OpenPGP secret key:
"Tomaž Šolc (Avian) <tomaz.solc@tablix.org>"
4096-bit RSA key, ID 059A0D2C0A822E7A,
created 2013-01-13.

Passphrase:

Update: Note however that with this change in place graphical programs that run GPG without a terminal, such as Thunderbird's Enigmail extension, will not work.

Other offenders are the PulseAudio- and systemd-related tools. For example, inspecting the state of the sound system over SSH fails with:

$ pactl list
Connection failure: Connection refused
pa_context_connect() failed: Connection refused

Here, the error message is a bit misleading. The problem is that the XDG environment variables aren't set up properly. Specifically for PulseAudio, XDG_RUNTIME_DIR should be set to something like /run/user/1000.
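
As a quick check, you can set the variable by hand in the existing session. This is just a workaround; a correctly set-up session should already have it:

$ export XDG_RUNTIME_DIR=/run/user/$(id -u)
$ pactl list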

This environment is usually set up by the pam_systemd.so PAM module. However, some overzealous system administrators disable the use of PAM for SSH connections, so it might not be set in an SSH session. To have these variables set up automatically, you should have the following in your /etc/ssh/sshd_config:

UsePAM yes

Posted by Tomaž | Categories: Code | Comments »

Booting Raspbian from Barebox

01.03.2019 18:28

Barebox is an open source bootloader intended for embedded systems. In the past few weeks I've been spending a lot of time at work polishing its support for Raspberry Pi boards and fixing some rough edges. Some of my patches have already been merged upstream, and I'm hoping the remainder will end up in one of the upcoming releases. I thought I might as well write a short tutorial on how to get it running on a Raspberry Pi 3 with the stock Raspbian Linux distribution.

The Broadcom BCM28xx SoC on the Raspberry Pi has a peculiar boot process. First, a proprietary binary blob is started on the VideoCore processor. This firmware reads some config files and does some hardware initialization. Finally, it brings up the ARM core, loads a kernel image (or a bootloader, as we'll see later) and jumps to it. When starting the kernel it also passes it some hardware information, such as a device tree and boot arguments. Raspbian Linux ships with its own patched Linux kernel that tends to be relatively tightly coupled with the VideoCore firmware and the information it provides.

On the other hand, upstream Linux kernels tend to ignore the VideoCore as much as possible. This makes them much easier to use with a bootloader. In fact, a bootloader is required since they can't use the device tree provided by the VideoCore firmware. Unfortunately, upstream kernels also historically have worse support for various peripherals found on Raspberry Pi boards, so sometimes you can't use them. In the rest of this text, I'll be only talking about Raspbian kernels as most of my work focused on making the VideoCore interoperation work correctly for them.

Raspberry Pi 3 connected to a USB-to-serial adapter.

To get started, you'll need a Raspberry Pi 3 board and an SD card with Raspbian Stretch Lite. You'll also need a USB-to-serial adapter that uses 3.3 V levels (I'm using this one) and another computer to connect it to. Connect ground, pin 14 and pin 15 on the Raspberry Pi header to ground, RXD and TXD on the serial adapter, respectively. We will use this connection to access the Barebox serial console. For the parts where we interact with the Linux system on the Raspberry Pi you can also connect it to an HDMI monitor and a keyboard, although I prefer to work over ssh.

Run the following on the computer to bring up the serial terminal at 115200 baud (adjust the device path as necessary to access your serial adapter):

$ screen /dev/ttyUSB0 115200

It's possible to cross-compile Barebox for ARM on an Intel computer. However, for simplicity we'll just compile Barebox on the Raspberry Pi itself since all the tools are already available there. If you intend to experiment, it's worth setting up either the cross-compiler or two Raspberry Pis: one for compilation and one for testing.

Now log in to your Raspberry Pi and open a terminal, either through ssh or the monitor and keyboard. Fetch the Barebox source. At the time of writing, not all of my patches (1, 2) are in the upstream repository yet. You can get a patched tree based on the current next branch from my GitHub fork:

$ git clone https://github.com/avian2/barebox.git
$ cd barebox

Update: Barebox v2019.04.0 release should contain everything shown in this blog post.

In the top of the Barebox source distribution, run the following to configure the build for Raspberry Pi:

$ export ARCH=arm
$ make rpi_defconfig

At this point you can also run make menuconfig and poke around. Barebox uses the same configuration and build system as the Linux kernel, so if you've ever compiled Linux the interface should be very familiar. When you are ready to compile, run:

$ make -j4

This should take around a minute on a Raspberry Pi 3. If compilation fails, check if you are missing any tools and install them. You might need to install flex and bison with apt-get.

After a successful compilation the bootloader binaries end up in the images directory. If you didn't modify the default config you should get several builds for different versions of the Raspberry Pi board. Since we're using Raspberry Pi 3, we need barebox-raspberry-pi-3.img. Copy this to the /boot partition:

$ sudo cp images/barebox-raspberry-pi-3.img /boot/barebox.img

Now put the following into /boot/config.txt. What this does is tell the VideoCore firmware to load the Barebox image instead of the Linux kernel and to bring up the serial interface hardware for us:

kernel=barebox.img
enable_uart=1

At this point you should be ready to reboot Raspberry Pi. Do a graceful shutdown with the reboot command. After a few moments you should see the following text appear on the serial terminal on your computer:

barebox 2019.02.0-00345-g0741a17e3 #183 Fri Mar 1 09:44:11 GMT 2019


Board: RaspberryPi Model 3B
bcm2835-gpio 3f200000.gpio@7e200000.of: probed gpiochip-1 with base 0
bcm2835_mci 3f300000.sdhci@7e300000.of: registered as mci0
malloc space: 0x0fe7e5c0 -> 0x1fcfcb7f (size 254.5 MiB)
mci0: detected SD card version 2.0
mci0: registered disk0
environment load /boot/barebox.env: No such file or directory
Maybe you have to create the partition.
running /env/bin/init...

Hit any key to stop autoboot:    3

Press a key to drop into the Hush shell. You'll get a prompt that is very similar to Bash. You can use cd to move around and ls to inspect the filesystem. Barebox uses the concept of filesystem mounts just like Linux. By default, the boot partition on the SD card gets mounted under /boot. The root is a virtual RAM-only filesystem that is not persistent.

The shell supports simple scripting with local and global variables. Barebox calls the shell scripts that get executed on boot the bootloader environment. You can find those under /env. The default environment on Raspberry Pi currently doesn't do much, so if you intend to use Barebox in some application, you will need to do the scripting yourself.

To finally boot the Raspbian system, enter the following commands:

barebox@RaspberryPi Model 3B:/ global linux.bootargs.vc="$global.vc.bootargs"
barebox@RaspberryPi Model 3B:/ bootm -o /vc.dtb /boot/kernel7.img

Loading ARM Linux zImage '/boot/kernel7.img'
Loading devicetree from '/vc.dtb'
commandline: console=ttyAMA0,115200 console=ttyS1,115200n8   8250.nr_uarts=1 bcm2708_fb.fbwidth=656 bcm2708_fb.fbheight=416 bcm2708_fb.fbswap=1 vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  dwc_otg.lpm_enable=0 console=tty1 root=PARTUUID=1c5963cc-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
Starting kernel in hyp mode

The global command adds the kernel command-line from VideoCore to the final kernel boot arguments. Barebox saves the command-line that was passed to it from the firmware into the vc.bootargs variable. When booting the kernel, Barebox constructs the final boot arguments by concatenating all global variables that start with linux.bootargs.

The bootm command actually boots the ARMv7 kernel (/boot/kernel7.img) from the SD card using the device tree from the VideoCore. Each time it is started, Barebox saves the VideoCore device tree to its own root filesystem at /vc.dtb. A few moments after issuing the bootm command, the Raspbian system should come up and you should be able to log into it. Depending on whether the serial console has been enabled in the Linux kernel, you might also see kernel boot messages and a log-in prompt in the serial terminal.
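
To avoid typing the two commands at the prompt on every boot, they can go into the boot scripts in the environment mentioned earlier. A minimal sketch, replacing the default /env/bin/init:

#!/bin/sh

global linux.bootargs.vc="$global.vc.bootargs"
bootm -o /vc.dtb /boot/kernel7.img

Keep in mind that the root filesystem isn't persistent, so for a permanent setup the modified environment needs to be built into the Barebox image, or saved into the barebox.env file that the boot log above complains about.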

You can see that the system was booted from Barebox by inspecting the device tree:

$ cat /sys/firmware/devicetree/base/chosen/barebox-version
barebox-2019.02.0-00345-g0741a17e3

Since we passed both the VideoCore device tree contents and the command-line to the kernel, the system should work as if no bootloader was actually used. This means that settings in cmdline.txt and config.txt, such as device tree overlays and HDMI configuration, should be correctly honored.

In the end, why would you want to use such a bootloader? Increased reliability and resistance to bad system updates is one reason. For example, Barebox can enable the BCM28xx hardware watchdog early in the boot process. This means that booting a bad kernel is less likely to result in a system that needs a manual reset. A related Barebox feature is the bootchooser, which can be used to implement an A/B root partition scheme, so that a failed boot from one root automatically results in the bootloader falling back to the other root. Note however that one thing still missing in Barebox is support for reading the reset source on the Raspberry Pi, so you can't yet distinguish between a power-on reset and a watchdog reset.

As was pointed out on the Barebox mailing list, this is not perfect. Discussion on the design of the hardware watchdog aside, the bootloader also can't help with VideoCore firmware updates, since the firmware gets started before the bootloader. Also, since the device tree and kernel in Raspbian are coupled with the VideoCore firmware, kernel updates in Raspbian can be problematic as well, even with a bootchooser setup. However, a step towards a more reliable system, even if not perfect, can still solve some real-life problems.

Posted by Tomaž | Categories: Code | Comments »

Wacom Cintiq 16 on Debian Stretch

15.02.2019 18:47

Wacom Cintiq 16 (DTK-1660/K0-BX) is a drawing tablet that was announced late last year. At around 600€ I believe it's now the cheapest tablet with a display you can buy from Wacom. For a while I've been curious about these things, but the professional Wacoms were just way too expensive and I wasn't convinced by the cheaper Chinese alternatives. I've been using a small Intuos tablet for several years now and Linux support has been flawless from the start. So when I recently saw the Cintiq 16 on sale at a local reseller and heard there was a chance it would work similarly well on Linux, I couldn't resist buying one.

Wacom Cintiq 16 with GIMP running on it.

Even though it's a very recent device, the Linux kernel already includes a compatible driver. Regardless, I was prepared for some hacking before it was usable on Debian. I was surprised how little was actually required. As before, the Linux Wacom Project is doing an amazing job, even though Linux support is not officially acknowledged by Wacom as far as I know.

The tablet connects to the computer using two cables: HDMI and USB. The HDMI connection behaves just like a normal 1920x1080 60 Hz flat panel monitor and the display works even if support for everything else is missing on the computer side. The HDMI cable also carries an I2C connection that can be used to adjust settings you would otherwise expect in a menu accessible by buttons on the side of a monitor (aside from the power button, the Cintiq itself doesn't have any buttons).

After loading the i2c-dev kernel module, ddcutil version 0.9.4 correctly recognized the display. The i2c-2 bus interface in the example below goes through the HDMI cable and was in my case provided by the radeon driver:

$ ddcutil detect
Display 1
   I2C bus:             /dev/i2c-2
   EDID synopsis:
      Mfg id:           WAC
      Model:            Cintiq 16
      Serial number:    ...
      Manufacture year: 2018
      EDID version:     1.3
   VCP version:         2.2
$ ddcutil --bus=2 capabilities
MCCS version: 2.2
Commands:
   Command: 01 (VCP Request)
   Command: 02 (VCP Response)
   Command: 03 (VCP Set)
   Command: 06 (Timing Reply )
   Command: 07 (Timing Request)
   Command: e3 (Capabilities Reply)
   Command: f3 (Capabilities Request)
VCP Features:
   Feature: 02 (New control value)
   Feature: 04 (Restore factory defaults)
   Feature: 05 (Restore factory brightness/contrast defaults)
   Feature: 08 (Restore color defaults)
   Feature: 12 (Contrast)
   Feature: 13 (Backlight control)
   Feature: 14 (Select color preset)
      Values:
         04: 5000 K
         05: 6500 K
         08: 9300 K
         0b: User 1
   Feature: 16 (Video gain: Red)
   Feature: 18 (Video gain: Green)
   Feature: 1A (Video gain: Blue)
   Feature: AC (Horizontal frequency)
   Feature: AE (Vertical frequency)
   Feature: B2 (Flat panel sub-pixel layout)
   Feature: B6 (Display technology type)
   Feature: C8 (Display controller type)
   Feature: C9 (Display firmware level)
   Feature: CC (OSD Language)
      Values:
         01: Chinese (traditional, Hantai)
         02: English
         03: French
         04: German
         05: Italian
         06: Japanese
         07: Korean
         08: Portuguese (Portugal)
         09: Russian
         0a: Spanish
         0d: Chinese (simplified / Kantai)
         14: Dutch
         1e: Polish
         26: Unrecognized value
   Feature: D6 (Power mode)
      Values:
         01: DPM: On,  DPMS: Off
         04: DPM: Off, DPMS: Off
   Feature: DF (VCP Version)
   Feature: E1 (manufacturer specific feature)
      Values: 00 01 02 (interpretation unavailable)
   Feature: E2 (manufacturer specific feature)
      Values: 01 02 (interpretation unavailable)
   Feature: EF (manufacturer specific feature)
      Values: 00 01 02 03 (interpretation unavailable)
   Feature: F2 (manufacturer specific feature)

I didn't play much with these settings, since I found the factory setup sufficient. I did try out setting the white balance and contrast, and the display responded as expected. For example, to set the white balance to 6500 K:

$ ddcutil --bus=2 setvcp 14 5
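
The current value of a feature can be read back in a similar way with the getvcp subcommand, for example for the color preset:

$ ddcutil --bus=2 getvcp 14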

Behind the USB connection is a hub with two devices connected to it:

$ lsusb -s 1:
Bus 001 Device 017: ID 0403:6014 Future Technology Devices International, Ltd FT232H Single HS USB-UART/FIFO IC
Bus 001 Device 019: ID 056a:0390 Wacom Co., Ltd 
Bus 001 Device 016: ID 056a:0395 Wacom Co., Ltd 
$ dmesg
...
usb 1-6: new high-speed USB device number 20 using ehci-pci
usb 1-6: New USB device found, idVendor=056a, idProduct=0395, bcdDevice= 1.00
usb 1-6: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-6: Product: Cintiq 16 HUB
usb 1-6: Manufacturer: Wacom Co., Ltd.
hub 1-6:1.0: USB hub found
hub 1-6:1.0: 2 ports detected
usb 1-6.2: new high-speed USB device number 21 using ehci-pci
usb 1-6.2: New USB device found, idVendor=0403, idProduct=6014, bcdDevice= 9.00
usb 1-6.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 1-6.2: Product: Single RS232-HS
usb 1-6.2: Manufacturer: FTDI
ftdi_sio 1-6.2:1.0: FTDI USB Serial Device converter detected
usb 1-6.2: Detected FT232H
usb 1-6.2: FTDI USB Serial Device converter now attached to ttyUSB0
usb 1-6.1: new full-speed USB device number 22 using ehci-pci
usb 1-6.1: New USB device found, idVendor=056a, idProduct=0390, bcdDevice= 1.01
usb 1-6.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-6.1: Product: Cintiq 16
usb 1-6.1: Manufacturer: Wacom Co.,Ltd.
usb 1-6.1: SerialNumber: ...
input: Wacom Cintiq 16 Pen as /devices/pci0000:00/0000:00:1a.7/usb1/1-6/1-6.1/1-6.1:1.0/0003:056A:0390.000D/input/input62
on usb-0000:00:1a.7-6.1/input0

056a:0395 is the hub and 056a:0390 is the Human Interface Device class device that provides the actual pen input. When the tablet is off but connected to power, the HID disconnects, but the other two USB devices are still present on the bus. I'm not sure what the UART is for. This thread on GitHub suggests that it offers an alternative way of interfacing with the I2C bus for adjusting the display settings.

On the stock 4.9.130 kernel that comes with Stretch, the pen input wasn't working. However, the 4.19.12 kernel from stretch-backports correctly recognizes the devices and, as far as I can see, works perfectly.

$ cat /proc/version
Linux version 4.19.0-0.bpo.1-amd64 (debian-kernel@lists.debian.org) (gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)) #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30)

I'm using the stock GNOME 3.22 desktop that comes with Stretch. After upgrading the kernel, the tablet correctly showed up in the Wacom Tablet panel in gnome-control-center. Pen settings there also work and I was able to calibrate the input using the Calibrate button. The calibration procedure instructs you to press the pen onto targets shown on the display and looks exactly like the one in the official software on Mac OS.

Update: It seems that GNOME sometimes messes up the tablet-to-screen mapping. If the stylus suddenly starts to move the cursor on some other monitor instead of the cursor appearing underneath the stylus on the Wacom display, check the key /org/gnome/desktop/peripherals/tablets/056a:0390/display in dconf-editor. It should contain something like ['WAC', 'Cintiq 16', '...']. If it doesn't, delete the key and restart gnome-settings-daemon.
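
The same check can be scripted with the dconf command-line tool, for example:

$ dconf read /org/gnome/desktop/peripherals/tablets/056a:0390/display
$ dconf reset /org/gnome/desktop/peripherals/tablets/056a:0390/display

The first command should print the list shown above; the second deletes the key so that it gets regenerated.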

Wacom Tablet panel in the GNOME Control Center.

In GIMP I had to enable the new input devices under Edit, Input Devices and set them to Screen mode, same as with other tablets. You get two devices: one for the pen tip and another for when you turn the pen around and use the eraser-like end. These behave like independent devices in GIMP, so each remembers its own tool settings.

Configure Input Devices dialog in GIMP.

As far as I can see, pen pressure, tilt angle and the two buttons work correctly in GIMP. The only problem I had is that it's impossible to do a right-click on the canvas to open a menu. This was already unreliable on the Intuos and I suspect it has to do with the fact that the pen always moves slightly when you press the button (so GIMP registers a click-and-drag rather than a click). With the higher resolution of the Cintiq it makes sense that it's even harder to hold the pen still enough.

Otherwise, GIMP works mostly fine. I found GNOME to be a bit stubborn about where it wants to place new dialog windows. If I have a normal monitor connected alongside the tablet, file and color chooser dialogs often end up on the monitor instead of the tablet. Since I can't use the pen to click on something on the monitor, it forces me to reach for the mouse, which can be annoying.

I've noticed that GIMP sometimes lags behind the pen, especially when dragging the canvas with a middle click. I didn't notice this before, and I suspect it has to do with the higher pen resolution or update rate of the Cintiq. The display also has a higher resolution than my monitor, so there are more pixels to push around. In any case, my i5-750 desktop computer will soon be 10 years old and is way overdue for an upgrade.


In conclusion, I'm very happy with it, even if it was quite an expensive gadget to buy for an afternoon hobby. After a few weeks I'm still getting used to drawing on the screen and tweaking tool dynamics in GIMP. The range of pressures the pen registers feels much wider than on the Intuos, although that is probably very subjective. To my untrained eyes the display looks just amazing. The screen is actually bigger than I thought, and since it's not easily disconnectable, it's forcing me to rethink how to organize my desk. In the end, my only worry is that my drawing skills are often not on par with owning such a powerful tool.

Posted by Tomaž | Categories: Life | Comments »

Google index coverage

08.02.2019 11:57

It recently came to my attention that Google has a new Search Console where you can see the status of your web site in Google's search index. I checked out what it says for this blog and I was a bit surprised.

Some things I expected, like the number of pages I've blocked in the robots.txt file to prevent crawling (however, I didn't know that blocking a URL there means it can still appear in search results). Other things were weirder, like this old post being flagged as a soft 404 Not Found response. My web server is properly configured and quite capable of sending correct HTTP response codes, so ignoring standards in that regard is just craziness on Google's part. But the thing that caught my eye the most was the number of Excluded pages on the Index Coverage pane:

Screenshot of the Index Coverage pane.

Considering that I have fewer than a thousand published blog posts, this number seemed high. Diving into the details, it turned out that most of the excluded pages were redirects to canonical URLs and Atom feeds for post comments. However, at least 160 URLs were permalink addresses of actual blog posts (there may be more, because the CSV export only contains the first 1000 URLs).

Index coverage of blog posts versus year of publication.

All of these were in the "crawled, not indexed" category. In their usual hand-waving way, Google describes this as:

The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.

I read this as "we know this page exists, there's no technical problem, but we don't consider it useful to show in search results". The older the blog post, the more likely it was to be excluded. Google's index apparently contains only around 60% of my content from 2006, but 100% of what was published in the last couple of years. I've tried searching for some of these excluded blog posts and indeed they don't show up in the results.

I have no intention of complaining about my early writings not being shown to Google's users. As long as my web site complies with generally accepted technical standards, I'm happy. I write about things that I find personally interesting and that I earnestly believe might be useful information in general. I don't feel entitled to be shown in Google's search results, and what they do or don't include in their index is their own business.

That said, it did make me think. I use Google Search almost exclusively to find information on the web. I suspected that they heavily prioritize new content over old, but I never seriously considered that Google might be intentionally excluding parts of the web from their index altogether. I often hear the sentiment that the old web is disappearing, that the long tail of small websites is as good as gone. Some old one-person web sites may indeed be gone for good, but as this anecdote shows, some such content might just not be discoverable through Google.

All this made me switch my default search engine in Firefox to DuckDuckGo. Granted, I don't know what they include in or exclude from their search either. I have yet to see how well it works, but maybe it isn't such a bad idea to go back to the time when trying several search engines for a query was standard practice.

Posted by Tomaž | Categories: Life | Comments »

Notes on MightyWatt electronic load

02.02.2019 14:43

MightyWatt is a small, computer-controlled electronic DC load made by Jakub Polonský. I recently ordered one for work, since I often need to test power supplies and it was significantly cheaper than a similar professional desktop instrument. I was first looking for a Re:load Pro, since I have been very happy with the old analog Re:load, but it turns out they are out of stock and impossible to get. MightyWatt was the closest alternative that fit the budget. After several months of practical use, here is a quick review and some notes on how well it performs.

MightyWatt electronic load with Olimex USB-ISO isolator.

Compared to the Re:load, MightyWatt is not a stand-alone device. It comes in the form of an Arduino shield and you need to mount it on an Arduino Uno (or compatible) microcontroller board. You also need to compile the open source firmware yourself and program the board with it. The ATmega328 on the Arduino then controls the load and communicates with the computer connected to its USB port.

It's also worth mentioning that this design already has quite a history. The first revision apparently shipped in 2014. I am using revision 3 hardware from 2017 and the latest software from GitHub. During these tests I was using an Arduino knock-off called Smduino. To protect the computer from any mishaps with the load, I'm connecting it through an Olimex USB-ISO USB isolator.

Bottom side of the MightyWatt electronic load PCB.

The initial setup went quite smoothly. The instructions on how to program the firmware using the Arduino IDE were nice and clear. After your order, the author commits the calibration file for your serial number to GitHub, which I thought was a nice approach. The control program for Windows has a pre-compiled build in the GitHub repo, so there is no need to install Visual Studio, unless you want to compile it from the C# source yourself.

In contrast to the straightforward software instructions, I could find no illustration showing how to actually mount the device onto the Arduino board, and initially I was a bit baffled. The 100 mil male headers on the MightyWatt are missing pins from the standard Arduino shield layout, so it's possible to fit them into the sockets in several different ways. I'm guessing only one of those doesn't end in disaster. In the end I noticed the (very) tiny pin markings on the MightyWatt silk-screen and matched them with the corresponding pins on the Arduino.

Unfortunately, the headers are also the only thing that keeps the MightyWatt mechanically connected to the Arduino. Beefy measurement cables are commonplace when working with large currents, and the stiffness of the headers alone just isn't enough to keep the MightyWatt securely attached. On several occasions I found that I had pulled it off while messing with the cabling. I was looking into 3D-printing an enclosure, but this MightyWatt PCB doesn't have any spare holes for additional screws, so it's not easy to work around this issue. It seems that an acrylic case existed at one point, but I couldn't find it for sale, and since it doesn't seem to screw onto the PCB, I'm not sure it would help.

MightyWatt current measurement compared to VC220 multimeter.

Comparing MightyWatt current readout to VC220. VC220 was in series with MightyWatt current terminals. The load was in constant current mode and was connected to a 5 V power supply.

MightyWatt voltage measurement compared to VC220 multimeter.

Comparing MightyWatt voltage readout to VC220. The load was in 4-point mode with a lab power supply connected to the voltage terminals. Current terminals were not connected.

As far as accuracy and calibration are concerned, I can't be certain, since I don't have any good reference to compare it to. After some simple experiments with a VC220 multimeter it seems reasonable, as you can see on the graphs above. The current readout on the MightyWatt is within the measurement tolerance of the VC220 for the complete 10 A range. The voltage readout does fall outside of the tolerance of the VC220. I don't know whether that is the fault of the VC220 or the MightyWatt, but in any case the two devices only disagree by about 1% and linearity looks good.

One problem I noticed with the constant current setting is that there seems to be a momentary glitch when changing the set point with the load running (i.e. without stopping it). This seems to trigger some fast over-current protections, even when all the currents involved should be well below the limit. For example, changing the load from 1 A to 2 A sometimes puts a 3 A supply into foldback mode, but doing the same with the load stopped for the change doesn't.

I really like the fact that MightyWatt supports 4-point Kelvin measurements. The software also supports a mode called Simple ammeter, which sets the current inputs to minimum resistance. Combined with the 4-point setting, this turns the MightyWatt into a computer-controlled ampere- and voltmeter pair. I have not tried this mode yet, but it sounds like it might be useful as a simple power meter.

Other than that, I haven't looked much into the electronics design. However the author has a blog with many interesting posts on the design of the MightyWatt if you are interested in the details.

Screenshot of the MightyWatt Windows control program.

The Windows software is functional, if somewhat buggy at times. Unfortunately, as far as I can see, there is currently no way to control the MightyWatt from Linux. I would love to automate my measurements with Python, like I've been doing with everything else. Fortunately, the Windows control program allows you to do some simple scripting, so that is not too much of a pain. Also, the communications protocol seems well documented and I just might write a Python library for it eventually.
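
The skeleton of such a library would be straightforward with pyserial, since the load shows up as an ordinary USB serial device. The sketch below only opens the connection; the port name, baud rate and command bytes are placeholders, and the real values need to come from the protocol documentation:

import serial

# Port name and serial settings are placeholders -- check the documented
# MightyWatt protocol for the actual values.
with serial.Serial("/dev/ttyACM0", 115200, timeout=1.0) as port:
    port.write(b"...")        # a real protocol command would go here
    print(port.readline())    # and the load's response comes back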

My biggest issue with the software is that the Windows control program seems to often lose connection with the load. This isn't handled gracefully and I often find that it will no longer see the MightyWatt's COM port after that. This then requires some ritual of reconnecting the USB cable and restarting the application to get it working again.

I'm not sure what the reason for this is, and I don't know whether it's a software problem on the Windows side or whether the Arduino firmware is crashing. First I blamed electrical interference, since it appeared to often happen when I connected an oscilloscope probe to a supply I was testing. Then I thought the USB isolator was causing it. However, after some more testing I found that this still randomly happens even if I just let the MightyWatt run idle, directly connected to a PC with a USB cable.


In conclusion, it's a nice little instrument for its price, especially considering that similar instruments can easily cost an order of magnitude more. I like that it comes tested and calibrated out of the box and that it's well documented. I really like the open source aspect of it, and I always find it hard to criticize such projects without submitting patches. The Windows control software is pretty powerful and supports a lot of different measurements. The ugly part is the flimsy mechanical setup and the connection reliability problem, which means that I can't leave a measurement running without being constantly present to check for errors.

Posted by Tomaž | Categories: Analog | Comments »

Measuring THD on Denon RCD-M41DAB

26.01.2019 21:53

Around a month ago my old micro Hi-Fi system wouldn't turn on. I got the Sony CMT-CP11 as a gift from my parents way back in 2000 and it had served me well. I really liked the fact that I had a service manual for it with the complete schematic, and over the years it accumulated a few hacks and fixes. It did start to show its age though. For example, the original remote no longer works because its phenolic paper-based PCB had deteriorated.

Unfortunately, it seems that this time the mains transformer died as well. I spent some time searching around the web for a replacement, but couldn't find any. I contemplated rewinding it, but doing that on a mains transformer seemed too risky. Finally, I gave up and just bought a new system (however, if anyone knows a source for Sony part 1-435-386-11 or 1-435-386-21 I would still be interested in buying one).

So anyway, now I'm an owner of a shiny new Denon RCD-M41DAB. In its manual, it states that the output amplifier is rated as:

Denon RCD-M41DAB audio amplifier rating.

Image by D&M Holdings Inc.

This is a bit less than the CMT-CP11, which was rated at 35 W under the same conditions. Not that I care too much about that. I doubt I ever cranked the Sony all the way to full volume and I'm not very sensitive to music quality. Most of my music collection is in lossy compressed files anyway. However, I was still curious whether the Denon meets these specs.

Unfortunately, I didn't have big enough 6 Ω resistors at hand to use as dummy loads. I improvised with a pair of 8 Ω, 60 W banks I borrowed from my father. I connected these across the speaker outputs of the Denon and put a scope probe across the left one.

Setup for measuring THD, with resistors instead of speakers.

To provide the input signal I used the Bluetooth functionality of the RCD-M41DAB. I paired my phone with it and used the Keuwlsoft function generator app to feed in a sine wave at 1 kHz. I set the app to 100% amplitude, 100% volume and also set 100% media volume in the Android settings. I then set the volume by turning the volume knob on the RCD-M41DAB.

The highest RCD-M41DAB volume setting before visible distortion was 33. This produced a peak-to-peak signal of 37.6 V and a power of around 22 W on the 8 Ω load:

Output signal at maximum level before distortion.

Using the FFT function of the oscilloscope it was possible to estimate total harmonic distortion at these settings:

FFT of the signal at maximum level before distortion.

$$U_1 = 12.9\ \mathrm{V} \qquad [22.2\ \mathrm{dBV}]$$

$$U_3 = 41\ \mathrm{mV} \qquad [-27.8\ \mathrm{dBV}]$$

$$THD = \frac{U_3}{U_1} \cdot 100\% = 0.3\%$$
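
The same estimate can be reproduced offline from a captured waveform. Here is a small numpy sketch of what the scope's FFT function boils down to; note that the standard THD definition sums all the harmonics in quadrature, while the quick estimate above only uses the dominant third harmonic:

import numpy as np

def thd(samples, sample_rate, f0=1000.0, n_harmonics=7):
    # Compare harmonic amplitudes from the FFT to the fundamental.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0/sample_rate)

    def amplitude(f):
        # Amplitude of the FFT bin closest to frequency f.
        return spectrum[np.argmin(np.abs(freqs - f))]

    harmonics = [amplitude(n*f0) for n in range(2, n_harmonics + 1)]
    return np.sqrt(sum(h**2 for h in harmonics)) / amplitude(f0)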

For comparison, I also measured the THD at the unloaded line output of a cheap Bluetooth receiver. Using the same app and measurement method, that came in at 0.07% THD.

The next higher volume setting on RCD-M41, 34, was visibly clipped at around 39 V peak-to-peak:

Output signal at one setting past maximum level.

I achieved the rated 30 W (albeit at 8 Ω, not 6 Ω) at volume setting 36. At that point the signal was badly clipped, producing many visible harmonics on the FFT display:

FFT of the signal at 30 W output.

The calculated THD at this output level (including harmonics up to the 7th) was 12.9%.

So what can I conclude from these measurements? First of all, I was measuring the complete signal path from the DAC onward, not only the output stage. Before saturating the output I measured 0.3% THD at 22 W, which I think is excellent. According to this article, 1% THD is around the level detectable by an untrained human ear. I couldn't achieve 30 W at 10% THD, however I wasn't measuring at the specified 6 Ω load.

If I assume that the output stage would saturate at the same peak-to-peak voltage with a 6 Ω load as it did with 8 Ω, it would output 32 W at a similar distortion. This would put it well below the specified 10% THD. Whether this is a fair assumption is debatable. I think the sharply clipped waveforms suggest that most of the distortion happens when the output stage supply voltage is reached, and this is mostly independent of the load.
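
For reference, this is the arithmetic behind the 32 W figure, taking the roughly 39 V peak-to-peak clipping level measured above:

$$U_{RMS} = \frac{U_{pp}}{2\sqrt{2}} \approx \frac{39\ \mathrm{V}}{2\sqrt{2}} \approx 13.8\ \mathrm{V}$$

$$P = \frac{U_{RMS}^2}{R} \approx \frac{(13.8\ \mathrm{V})^2}{6\ \Omega} \approx 32\ \mathrm{W}$$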

That said, the volume setting I found comfortable for listening to music is around 10, so I'm pretty sure I won't ever be reaching the levels where distortion becomes significant.

Posted by Tomaž | Categories: Analog | Comments »