What's inside cheap SMA terminators

29.05.2020 14:16

I've recently ordered a bag of "YWBL-WH" 50 Ω SMA terminators off Amazon along with some other stuff. Considering they were about 3 EUR per piece and I was paying for shipping anyway, they seemed like a good deal. Unsurprisingly, they turned out to be less than stellar in practice.

50 Ω SMA terminators I bought off Amazon.

At the time I bought them, the seller's page listed these specifications, claiming they are usable up to 6 GHz and can dissipate 2 W. There's no real brand listed and identical-looking ones can be found from other sellers:

Specifications for 50 ohm SMA terminators.

Their DC resistances all measured very close to 51 Ω, which is good enough. However, when I tried using them for some RF measurements around 1 GHz, I got some unusual results. I thought the terminators could be to blame, even though I don't currently have equipment to measure their return loss. If I had bothered to scroll down on that Amazon page, I might have seen a review from Dominique saying that they have only 14 dB return loss at 750 MHz and are hence useless at higher frequencies.

I suspected what was going on because I've seen this before in cheap BNC terminators sold for old Ethernet networks, but I still took one apart.

Cheap SMA terminator taken apart.

Indeed, they simply have a standard through-hole axial resistor inside. The center SMA pin is soldered to one lead of the resistor, but the ground lead is just pressed against the inside of the case. According to the resistor's color bands it's rated at 51 Ω, 5% tolerance and 100 ppm/K. I suspect it's a metal film resistor based on the blue casing and the low thermal coefficient (if that's what the fifth color band stands for). It might be rated for 2 W, although judging by the size it looks more like 1/2 W to me. In any case, this kind of resistor is useless at RF because its helical structure acts like an inductor.
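To get a feel for how much that stray inductance hurts, here is a back-of-the-envelope sketch. The 5 nH value below is purely my guess for the resistor's helix and leads, not a measured value:

```python
import math

def return_loss_db(r_ohms, l_henry, f_hz, z0=50.0):
    """Return loss of a series R-L terminator against a z0 reference."""
    z = complex(r_ohms, 2 * math.pi * f_hz * l_henry)
    gamma = (z - z0) / (z + z0)  # reflection coefficient
    return -20 * math.log10(abs(gamma))

# A 51 ohm resistor with an assumed 5 nH of series inductance is still a
# decent termination at 100 MHz, but poor at 1 GHz:
rl_1ghz = return_loss_db(51, 5e-9, 1e9)
rl_100mhz = return_loss_db(51, 5e-9, 100e6)
```

With these assumed numbers the return loss drops from nearly 30 dB at 100 MHz to roughly 10 dB at 1 GHz, which is consistent with the kind of figure mentioned in that Amazon review.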

Again it turned out that cheaping out on lab tooling was just a waste of money.

Posted by Tomaž | Categories: Analog | Comments »

Simple method of measuring small capacitances

22.05.2020 18:17

I stumbled upon this article on Analog Devices' website while looking for something else. It looks like instructions for a student lab session. What I found interesting about it is that it describes a way of measuring small capacitances (around 1 pF) with only a sine-wave generator and an oscilloscope. I don't remember seeing this method before and it seems useful in other situations as well, so I thought I might write a short note about it. I tried it out and indeed it gives reasonable results.

Breadboard capacitance measurement schematic.

Image by Analog Devices, Inc.

I won't go into details - see the original article for a complete explanation and a step-by-step guide. In short, you're using a standard 10x oscilloscope probe and an unknown, small capacitance (C_row in the schematic above) as an AC voltage divider. From the attenuation of the divider and estimated values of the other components it's possible to derive the unknown. Since the capacitance of the probe is usually only around 10 pF, this works reasonably well when the unknown is similarly small. The tricky part is calibrating this measurement by estimating stray capacitances of wires and more accurately characterizing the resistance and capacitance of the probe. This is done by measuring both the gain of the divider and its 3 dB corner frequency.
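The arithmetic behind this can be sketched as follows. This is my own minimal restatement of the method, not code from the article; it models the unknown capacitance in series with the probe's parallel RC to ground:

```python
import math

def unknown_capacitance(gain_plateau, f_corner_hz, r_probe_ohms=10e6):
    """Estimate the unknown series C from the divider's high-frequency
    (plateau) gain and its low-side 3 dB corner frequency.

    Model: source -> C_unknown -> node -> (C_probe || R_probe) -> ground.
    Plateau gain = C_unknown / (C_unknown + C_probe)
    f_corner    = 1 / (2*pi * R_probe * (C_unknown + C_probe))
    """
    c_total = 1.0 / (2 * math.pi * r_probe_ohms * f_corner_hz)
    c_unknown = gain_plateau * c_total
    c_probe = c_total - c_unknown
    return c_unknown, c_probe

# A 1 pF unknown against a 10 pF probe would show roughly 1/11 gain and a
# corner near 1.45 kHz:
c_x, c_p = unknown_capacitance(1 / 11, 1447)
```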

Note that the article is talking about using some kind of instrument that has a network analyzer mode and can directly show a gain vs. frequency plot. This is not necessary and it's perfectly possible to do this measurement with a separate signal generator and a digital oscilloscope. For measuring capacitances of around 1 pF using a 10 pF/10 MΩ probe, a signal generator capable of producing a sine wave of around 100 kHz is sufficient. Determining when the amplitude of the signal displayed on the scope falls by 3 dB probably isn't very accurate, but for a rough measurement it seems to suffice.

The measurement depends on the probe having a DC resistance to ground as well as capacitance. I found that on my TDS 2002B scope you need to set the channel to DC coupled, otherwise there is no DC path to ground from the probe tip. It seems obvious in retrospect, but it did confuse me for a moment why I wasn't getting good results.

I also found that my measured signal was being overwhelmed by the 50 Hz mains noise. The solution was to use external synchronization on the oscilloscope and then use the averaging function. This cancels out the noise and gives much better measurements of the signal amplitude at the frequency that the signal generator is set to. You just need to be careful with the attenuator setting so that noise + signal amplitude still falls inside the scope's ADC range.

Posted by Tomaž | Categories: Analog | Comments »

Another SD card postmortem

16.05.2020 11:28

I was recently restoring a Raspberry Pi at work that was running a Raspbian system off a SanDisk Ultra 8 GB micro SD card. It was powered on continuously and managed to survive almost exactly 6 months since I last set it up. I don't know when this SD card first started showing problems, but when the problem became apparent I couldn't log in and Linux didn't even boot up anymore after a power cycle.

SanDisk Ultra 8 GB micro SD card.

I had a working backup of the system, however I was curious how well ddrescue would be able to recover the contents of the failed card. To my surprise, it did quite well, restoring 99.9% of the data after about 30 hours of run time. I only ran the copy and trim phases (--no-scrape). Approximately 8 MB out of 8 GB of data remained unrecovered.

This was enough that fsck was able to recover the filesystem to a state good enough that it could be mounted. Another interesting thing in the recovered data was the write statistic that is kept in the ext4 superblock. The system only had one partition on the SD card:

$ dumpe2fs /dev/mapper/loop0p2 | grep Lifetime
dumpe2fs 1.43.4 (31-Jan-2017)
Lifetime writes:          823 GB

On one hand, 823 GB of writes after 6 months was more than I was expecting. The system was set up to avoid a lot of writes to the SD card and had a network mount where most of the heavy work was supposed to be done. It did have a running Munin master though and I suspect that was where most of these writes came from.

On the other hand, 823 GB on an 8 GB card is only about 100 write cycles per cell, if the card is any good at doing wear leveling. That's awfully low.

In addition to a raw data file, ddrescue also creates a log of which parts of the device failed. Very likely a controller in the SD card itself is doing a lot of remapping. Hence a logical address visible from Linux has little to do with where the bits are physically stored in silicon. So regardless of what the log says, it's impossible to say whether errors are related to one failed physical area on a flash chip, or if they are individual bit errors spread out over the entire device. Still, I think it's interesting to look at this visualization:

Visualization of the ddrescue map file.

This image shows the distribution of unreadable sectors reported by ddrescue over the address space of the SD card. The address space has been sliced into 4 MB chunks (8192 blocks of 512 bytes). These slices are stacked horizontally, hence address 0 is on the bottom left and increases up and right in a saw-tooth fashion. The highest address is on the top right. Color shows the percentage of unreadable blocks in that region.

You can see that small errors are more or less randomly distributed over the entire address space. Keep in mind that summed up, unrecoverable blocks only cover 0.10% of the space, so this image exaggerates them. There are a few hot spots though, and one 4 MB slice in particular at around 4.5 GB contains many more errors than other regions. Some horizontal patterns can also be seen - the upper half of the image appears more error-free than the bottom part. I've chosen 4 MB slices exactly because of that. While the internal memory organization is a complete black box, it does appear that 4 MB blocks play some role in it.
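The slicing itself is straightforward to reproduce. Here is a minimal sketch, assuming the usual GNU ddrescue map file format (comment lines, one current-position line, then position/size/status triplets, with '-' marking unreadable areas):

```python
def bad_fraction_per_slice(mapfile_text, device_size, slice_size=4 * 1024 * 1024):
    """Fraction of unreadable bytes per fixed-size slice of the address
    space, computed from a GNU ddrescue map file."""
    n_slices = (device_size + slice_size - 1) // slice_size
    bad = [0] * n_slices
    data_lines = [l for l in mapfile_text.splitlines()
                  if l.strip() and not l.lstrip().startswith('#')]
    for line in data_lines[1:]:  # skip the current-position line
        pos_s, size_s, status = line.split()[:3]
        if status != '-':
            continue
        pos = int(pos_s, 0)
        end = pos + int(size_s, 0)
        while pos < end:  # spread the bad range over the slices it touches
            i = pos // slice_size
            chunk = min(end, (i + 1) * slice_size) - pos
            bad[i] += chunk
            pos += chunk
    return [b / slice_size for b in bad]
```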

Just for comparison, here is the same data plotted using a space-filling curve. The black area on the top-left is the part of the graph not covered by the SD card address space (the curve covers 2^24 = 16777216 blocks of 512 bytes while the card only stores 15523840 blocks or 7948206080 bytes). This visualization better shows the grouping of errors, but hides the fact that 4 MB chunks seem to play some role:

Visualization of the ddrescue map file using a Hilbert curve.
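For anyone wanting to reproduce such a plot, here is a sketch of the index-to-coordinate mapping, following the standard iterative Hilbert curve construction (2^24 blocks map onto a 4096 by 4096 grid):

```python
def hilbert_d2xy(n, d):
    """Map index d along a Hilbert curve to (x, y) on an n-by-n grid,
    where n is a power of two. Standard iterative construction."""
    x = y = 0
    s = 1
    while s < n:
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:  # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y
```

The useful property here is that consecutive indices always land on neighboring grid cells, so errors that are close in the address space stay close in the image.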

I also quickly looked into whether failures could be predicted by something like SMART. Even though it appears that some cards do support it, none I tried produced any useful data with smartctl. Interestingly, plugging the SanDisk Ultra into an external USB-connected reader on a laptop does say that the device has SMART capability:

$ smartctl -d scsi -a /dev/sdb
smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.9.0-12-amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               Generic
Product:              STORAGE DEVICE
Revision:             1206
Compliance:           SPC-4
User Capacity:        7 948 206 080 bytes [7,94 GB]
Logical block size:   512 bytes
scsiModePageOffset: response length too short, resp_len=4 offset=4 bd_len=0
Serial number:        000000001206
Device type:          disk
scsiModePageOffset: response length too short, resp_len=4 offset=4 bd_len=0
Local Time is:        Thu May 14 16:36:47 2020 CEST
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Disabled or Not Supported

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
Current Drive Temperature:     0 C
Drive Trip Temperature:        0 C

Error Counter logging not supported

scsiModePageOffset: response length too short, resp_len=4 offset=4 bd_len=0
Device does not support Self Test logging

However I suspect this response comes from the reader, not the SD card. Multiple cards I tried produced the same 1206 serial number. Both a new and a failed card had the "Health Status: OK" line, so that's misleading as well.

This is the second time I've replaced the SD card in this Raspberry Pi. The first time it lasted around a year and a half. It further justifies my opinion that SD cards just aren't suitable for unattended systems or those running continuously. In fact, I suggest avoiding them if at all possible. For example, newer Raspberry Pis support booting from USB-attached storage.

Posted by Tomaž | Categories: Digital | Comments »

On missing IPv6 router advertisements

03.05.2020 16:58

I've been having problems with Internet connectivity for the past week or so. Connections would randomly time out and some things would work very slowly or not at all. In the end it turned out to be a problem with IPv6 routing. It seems my Internet service provider is having problems with sending out periodic Router Advertisements and the default route on my router often times out. I've temporarily worked around it by manually adding a route.

I'm running a simple, dual-stack network setup. There's a router serving a LAN. The router is connected over an optical link to the ISP that's doing Prefix Delegation. The problems were intermittent. A lot of software seems to gracefully fall back onto IPv4 if IPv6 stops working, but there's usually a more or less annoying delay before it does that. On the other hand, some programs don't, and seem to assume that there's global connectivity as long as a host has a globally-routable IPv6 address.

The most apparent and reproducible symptom was that IPv6 pings to hosts outside of LAN often weren't working. At the same time, hosts on the LAN had valid, globally-routable IPv6 addresses, and pings inside the LAN would work fine:

$ ping -6 -n3 host-on-the-internet
connect: Network is unreachable
$ ping -6 -n3 host-on-the-LAN
PING ...(... (2a01:...)) 56 data bytes
64 bytes from ... (2a01:...): icmp_seq=1 ttl=64 time=0.404 ms
64 bytes from ... (2a01:...): icmp_seq=2 ttl=64 time=0.353 ms
64 bytes from ... (2a01:...): icmp_seq=3 ttl=64 time=0.355 ms

--- ... ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2026ms
rtt min/avg/max/mdev = 0.353/0.370/0.404/0.032 ms

Rebooting my router seemed to help for a while, but then the problem would reappear. After some debugging I found out that the immediate cause of the problems was that the default route on my router would disappear approximately 30 minutes after it had been rebooted. It would then randomly re-appear and disappear a few times a day.

On my router, the following command would return empty most of the time:

$ ip -6 route | grep default

But immediately after a reboot, or if I got lucky, I would get a route. I'm not sure why there are two identical entries here, but the only difference is the from field:

$ ip -6 route | grep default
default from 2a01::... via fe80::... dev eth0 proto static metric 512 pref medium
default from 2a01::... via fe80::... dev eth0 proto static metric 512 pref medium

The following graph shows the number of entries returned by the command above over time. You can see that for most of the day the router didn't have a default route:

Number of valid routes obtained from RA over time.
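A graph like this can be made by simply logging the number of default routes at regular intervals. A minimal sketch of such a check (the function name is mine; run it from cron and append the result to a log):

```python
import subprocess

def count_default_routes(route_output=None):
    """Count 'default' entries in `ip -6 route` output. Pass the command's
    output as a string, or None to invoke the command itself."""
    if route_output is None:
        route_output = subprocess.run(["ip", "-6", "route"],
                                      capture_output=True, text=True).stdout
    return sum(1 for line in route_output.splitlines()
               if line.startswith("default"))
```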

The thing that was confusing me the most was the fact that the mechanism for getting the default IPv6 route is distinct from the way prefix delegation is done. This means that every device in the LAN can get a perfectly valid, globally-routable IPv6 address, but at the same time there can be no configured route for packets going outside of the LAN.

The route is automatically configured via Router Advertisement (RA) packets, which are part of the Neighbor Discovery Protocol. When my router first connects to the ISP, it sends out a Router Solicitation (RS). In response to the RS, the ISP sends back an RA. The RA contains the link-local address to which traffic intended for the Internet should be directed, as well as a Router Lifetime, which sets the time interval for which this route is valid. This lifetime appears to be 30 minutes in my case, which is why rebooting the router seemed to fix the problems for a short while.

The trick is that the ISP should later periodically re-send the RA by itself, refreshing the information and the lifetime, hence pushing back the deadline at which the route times out. Normally, a new RA should arrive well before the lifetime of the first one runs out. However, in my case it seemed that for some reason the ISP suddenly started sending out RAs only sporadically. Hence the route would time out in most cases, and my router wouldn't know where to send the packets that were going outside of my LAN.

To monitor RA packets on the router using tcpdump:

$ tcpdump -v -n -i eth0 "icmp6 && ip6[40] == 134"

This should show packets like the following arriving in intervals that should be much shorter than the advertised router lifetime. On a different, correctly working network, I've seen packets arriving roughly once every 10 minutes with a lifetime of 30 minutes:

18:52:01.080280 IP6 (flowlabel 0xb42b9, hlim 255, next-header ICMPv6 (58) payload length: 176)
fe80::... > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 176
	hop limit 64, Flags [managed, other stateful], pref medium, router lifetime 1800s, reachable time 0ms, retrans timer 0ms
	...
19:00:51.599538 IP6 (flowlabel 0xb42b9, hlim 255, next-header ICMPv6 (58) payload length: 176) 
fe80::... > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 176
	hop limit 64, Flags [managed, other stateful], pref medium, router lifetime 1800s, reachable time 0ms, retrans timer 0ms
	...

However in this case this wasn't happening. Similarly to what the graph above shows, these packets only arrive sporadically. As far as I know, this is an indication that something is wrong on the ISP side. Sending a RA in response to RS seems to work, but periodic RA sending doesn't. Strictly speaking there's nothing that can be done to fix this on my end. My understanding of RFC 4861 is that a downstream host should only send out RS once, after connecting to the link.

Once the host sends a Router Solicitation, and receives a valid Router Advertisement with a non-zero Router Lifetime, the host MUST desist from sending additional solicitations on that interface, until the next time one of the above events occurs.

Indeed, as far as I can see, Linux doesn't have any provisions for re-sending an RS in case all routes from previously received RAs time out. This answer argues that it should, but I can find no references that would confirm this. On the other hand, this answer agrees with me that an RS should only be sent when connecting to a link. On that note, I've also found a discussion that mentions blocking multicast packets as a cause of similar problems. I don't believe that is the case here.

In the end I've used an ugly workaround so that things kept working. I've manually added a permanent route that is identical to what is randomly advertised in RA packets:

$ ip -6 route add default via fe80::... dev eth0

Compared to entries originating from RAs, this manual entry in the routing table won't time out - at least not until my router gets rebooted. It also doesn't hurt anything if additional, identical routes get occasionally added via RA. Of course, it still goes completely against the IPv6 neighbor discovery mechanism. If anything changes on the ISP side, for example if the link-local address of the router changes, the entry won't get updated and the network will break again. However, it does seem to fix my issues at the moment. The fact that it's working also seems to confirm my suspicion that something is only wrong with RA transmissions on the ISP side, and that actual routing on their end works correctly. I've reported my findings to the ISP and hopefully things will get fixed on their end, but in the meantime, this will have to do.

Posted by Tomaž | Categories: Code | Comments »

Measuring some Zener diodes

19.04.2020 12:05

I've been stuck working on a problem for the past few days. I need to protect an analog switch in a circuit from expected over-voltage conditions. Zener diodes seemed like a natural solution, but the border between normal operation and over-voltage is very thin in this particular case. I couldn't find components with a characteristic that would fit based solely on the specifications given in the datasheets. I've been burned before by overestimating the performance of Zener diodes so I decided to do some measurements and get a better feel for how they behave. The results were pretty interesting and I thought they might be useful to share.

The following measurements have all been done with my tiny home-brew curve tracer connected to a Tektronix TDS 2002B oscilloscope. Unfortunately this model only has 8-bit vertical resolution. This caused some visible stair-stepping on the vertical parts of the traces below. Nevertheless the measurements should give a pretty good picture of what's going on. Before doing the measurements I've also checked the DC calibration of the whole setup against my new Keysight U1241C multimeter. The error in measured voltage and current values should not be more than ±3%. Measurements were done roughly at room temperature and at a low frequency (100 Hz).

The first measurement is with SZMMBZ5230BLT11G, a 4.7 V Zener diode from ON Semi in a SOT-23 SMT package. I've only measured a single one of these, since soldering leads to the SMT package was time consuming. The figure shows the current vs. voltage characteristic in the reverse direction. The narrow, dark blue graph shows the actual measured values. The black dashed line shows the maximum power dissipation limit from the datasheet. I also made a model for the diode based on the minimum and maximum values for VZ and the single ZZT value given in the datasheet. The light blue area is the range of characteristics I predicted with that model.

Voltage vs. current graph for SZMMBZ5230BLT11G

The relevant part of the datasheet for this diode:

Excerpt from the MMBZ52xxBLT1G datasheet.

Image by ON Semiconductor

This is the same measurement repeated for BZX79C4V7, also a 4.7 V Zener diode from ON Semi, but this time in a sealed glass THT package. I've measured 10 of these. All came shipped in the same bag, which might mean they're from the same production batch, but I can't be sure. All 10 measurements are shown overlapped on the same graph.

Voltage vs. current graph for BZX79C4V7.

The relevant part of the datasheet:

Excerpt from the BZX79Cxx datasheet.

Image by ON Semiconductor

It's interesting to see that both of these parts performed significantly better than what their datasheets suggest. They were both in the allowed voltage range at the specified current (note that one is specified at 20 mA and the other at 5 mA). The differential impedance, however, was much lower. SZMMBZ5230BLT11G is specified at 19 Ω at 20 mA and I measured around 1 Ω. BZX79C4V7 is specified at 80 Ω at 5 mA and I measured 11 Ω. The datasheet for BZX79C4V7 does say that 80 Ω is the maximum, but SZMMBZ5230BLT11G isn't clear on whether that is a typical or a maximum value. It was also surprising to me how the results I got for all 10 BZX79C4V7 measurements were practically indistinguishable from each other.

A note regarding the models: I used the classic diode equation, calculating the parameters a and b to fit the VZ and ZZ (or ZZT) values from the datasheets.

I = a ( e^\frac{U}{b} - 1)

As far as I know, a and b don't have any physical meaning here. This is in contrast to the forward characteristic, where they represent saturation current and thermal voltage. I wasn't able to find any reference that would explain the physics behind this characteristic and most people just seem to use such empirical models. The Art of Electronics does say that the Zener impedance is roughly inversely proportional to the current, which implies an exponential I-U characteristic.
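This fit can be done in closed form. Differentiating the equation above gives dI/dU = (I + a)/b, so the differential impedance is Z = b/(I + a), approximately b/I for currents well above a. Here is a minimal sketch of my approach, using the SZMMBZ5230BLT11G datasheet values as the example:

```python
import math

def fit_zener_model(v_z, i_zt, z_zt):
    """Fit I = a*(exp(U/b) - 1) so the curve passes through (v_z, i_zt)
    with differential impedance z_zt at that point."""
    b = z_zt * i_zt  # from Z ~ b/I, ignoring the small 'a' term
    a = i_zt / (math.exp(v_z / b) - 1.0)
    return a, b

def zener_current(u, a, b):
    """Reverse current at reverse voltage u for the fitted model."""
    return a * (math.exp(u / b) - 1.0)

# 4.7 V at 20 mA with 19 ohm differential impedance:
a, b = fit_zener_model(4.7, 0.020, 19.0)
```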

From my rusty understanding of breakdown physics I was expecting that a junction after breakdown wouldn't have much of a non-linear resistance at all. I was expecting that a good enough model would just be a voltage source (representing the junction in breakdown) and a series resistance (representing ohmic contacts and bulk semiconductor). It seems this is not so, at least for the relatively low current conditions I've measured here. The purely exponential model also fits my measurements perfectly, which seems to confirm that this was a correct choice for the model.

Update: I found Zener and avalanche breakdown in silicon alloyed p-n junctions—I: Analysis of reverse characteristics (unfortunately pay-walled). It contains an overview of the various mechanisms behind junction breakdown. In contrast to all other references I've looked at it actually goes into mathematical models and doesn't just stop at hand-waving qualitative descriptions. The mechanisms are complicated and the exponential characteristic I've used is indeed just an empirical approximation.

Finally, it's interesting to also look at how the forward characteristics compare. Here they are plotted against a common signal diode 1N4148. Both Zener diodes are very similar in this plot, despite a different Zener impedance and a differently specified forward voltage in the datasheet. Compared to the signal diode they have the knee at a slightly higher voltage, but also steeper slopes after the knee:

Comparison of forward characteristics.

In conclusion, it's interesting to see what these things look like in practice, beyond just looking at their specifications. Perhaps the largest takeaway for me was the fact that a purely resistive model obviously isn't a good way of thinking about Zener diodes in relation to large signals. Of course, it's dangerous to base a design around such limited measurements. Another batch might be completely different in terms of ZZ and I've only measured a single instance of the SOT-23 diode. Devices might change with aging and so on. After all, the manufacturer only guarantees what's stated in the datasheet. Still, seeing these measurements was useful for correcting my feel for how these parts behave.

Posted by Tomaž | Categories: Analog | Comments »

How a multimeter measures capacitance

13.03.2020 10:57

I've recently bought a Keysight U1241C multimeter. One of the features it has is a capacitance measurement. Since this is my first multimeter that can do that I was curious what method it uses. I was also wondering what voltage is applied to the capacitor under test and whether the probe polarity matters (for example, when measuring electrolytic capacitors).

Figure 2-16 in the User's Guide seems to imply that polarity is important. The red probe (V terminal) is marked as positive and the black probe (COM terminal) is marked as negative:

Figure 2-16: Measuring capacitance from the U1241C User's Guide.

Image by Keysight Technologies

The description of the measurement method is limited to this note and doesn't say what voltages or frequencies are involved, but does give a rough idea of what is going on:

Note about capacitance measurement from the U1241C User's Guide.

Image by Keysight Technologies

Connecting an oscilloscope to a capacitor while it is being measured by the multimeter reveals a triangle waveform. I made the following screenshot with a 47 μF electrolytic capacitor connected to the multimeter set to the 100 μF range. The oscilloscope was set to DC coupling, so the DC level is correctly shown as 0 V at the center of the screen:

Voltage on the 47 μF capacitor during measurement.

Since the current into a capacitor is proportional to the time derivative of the voltage, a triangle-shaped voltage means that there is a constant current flowing alternately in and out of the capacitor. Connecting different capacitors revealed that the current and the amplitude of the voltage stay constant for each measurement range, while the period of the signal changes. So the multimeter applies a known current I to the probes and measures the time t it takes for the voltage to rise (or fall) by a preset difference Upk-pk. From the measured rise (or fall) time it then calculates the capacitance:

C = \frac{I\cdot t}{U_{pk-pk}}

These are the approximate current and voltages used by the multimeter for each range:

Range [μF]    I [μA]    Upk-pk [mV]
    1           1.5        800
   10            15        800
  100           150        800
 1000           340        200
10000           340        200

Note that 1000 μF and 10000 μF ranges seem identical in this respect. I'm guessing the only change is how the time is measured internally. Perhaps a different clock is used for the counter.
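Working backwards from the formula, the ramp time the multimeter has to measure for a given capacitor and range can be estimated. A quick sketch using the approximate values from the table above:

```python
def ramp_time_s(c_farads, i_amps, u_pkpk_volts):
    """Time for a constant current to charge C across the peak-to-peak
    voltage window; one full triangle period is twice this."""
    return c_farads * u_pkpk_volts / i_amps

# 47 uF on the 100 uF range (approx. 150 uA and 800 mV from the table)
# gives a ramp time of roughly a quarter of a second:
t = ramp_time_s(47e-6, 150e-6, 0.8)
```

This matches the slow triangle wave visible on the oscilloscope screenshot reasonably well.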

If a high range is selected while a small capacitor is connected, the voltage on the capacitor can reach much higher amplitudes. The highest I saw was about 2 V peak-to-peak when I had a 4.7 nF capacitor connected while the instrument was set to 100 μF range.

Voltage on the 4.7 nF capacitor during measurement.

In conclusion, the polarity of the probes isn't important. The signal applied to the capacitor is symmetrical and the capacitor will be alternately polarized in the positive and negative direction regardless of how it is connected to the multimeter. The voltages do seem low enough that they probably won't damage polarized electrolytic capacitors.

Posted by Tomaž | Categories: Analog | Comments »

The tiny curve tracer project

08.03.2020 10:14

About a year ago I got interested in some unusual transistor characteristics. Since I didn't have any suitable instruments at hand I first measured things with an oscilloscope and some improvised circuits on a protoboard. These setups gradually became more elaborate and for some time now I've had a dusty rat's nest of wires on my desk that more or less implemented a simple curve tracer. It soon turned out to be useful in other cases as well, so I thought it would be worth moving the circuit from the protoboard to an actual printed circuit board.

The tiny curve tracer circuit board.

The construction is through-hole on a single layer PCB. I've decided on this slightly vintage style because I could just move the components from the protoboard to the PCB without having to buy their surface-mount equivalents. My Dad offered to etch the board for me using the toner transfer method and it turned out very nice with 20 mil design rules. He made the overlay print on the component side for me as well. I've not etched a board at home in years, ever since cheap on-line prototyping services became available.

It took me quite a while to decide on what kind of contacts to use for connecting the device-under-test (DUT). I've considered a transistor socket (too limited in the pin diameters), a Textool-type ZIF socket (seemed wasteful to use a 16-pin socket when I only needed 6 pins) and just ordinary screw terminals (inconvenient to use). In the end, I went with a WAGO push-button style, 3.5 mm pitch terminal block (type 250-206 to be exact).

This seems to work really well. The only slight problem is that inserting a transistor requires pushing three buttons at the same time, and since the springs are quite stiff, this takes a fair amount of force. If I were doing a second revision of the board I would make some provision for better supporting the PCB around the terminal, since it tends to flex quite a lot.

The tiny curve tracer block diagram

My circuit obviously isn't on the level of something like a Tektronix 575. The signal that will be applied to the DUT, usually a sine or a triangle wave, comes from a signal generator via a BNC connector, shown on the left of the block diagram. I'm using my GW Instek AFG-2005. After the input there are two amplifier stages and a power transistor in an emitter-follower configuration. Total voltage gain from input to DUT terminal is 5. The actual current and voltage on the DUT are amplified and passed to the BNC outputs. I use two 50 Ω coaxial cables to connect the outputs to an oscilloscope in XY mode.

A switch allows the input to be either AC or DC coupled. AC coupling allows me to quickly change the amplitude on the signal generator without having to simultaneously also adjust the offset voltage. In this case the circuit clamps the lowest voltage to 0 V. On the other hand, DC coupling allows me to put the DUT under some DC bias and only measure some small signal characteristic.

The curve tracer is powered by a bench power supply at around 30 V and doesn't need a negative supply. It is capable of providing up to about 25 V and around 200 mA to the DUT. 25 V isn't enough to reach the collector-emitter breakdown voltages of common transistors, but it is plenty to investigate the knee regions of diodes or breakdown in low-voltage Zener and TVS diodes. For example, this is the forward characteristic of a 1N4148 diode I measured:

Measured I-V curve for the 1N4148 diode.

It's also possible to measure base-emitter junction characteristic in bipolar transistors. Here is a BC546, a common small-signal NPN transistor, in forward and reverse directions. Breakdown in the junction is visible in the reverse direction at around 11 V:

Base-emitter junction measurements for BC546 transistor.

There's a protection circuit that turns off the voltage to the DUT, and lights up a warning LED, if the DUT current exceeds 200 mA. The protection is automatically reset when the input voltage goes to 0, which usually means on the next period of the signal generator. This protection is more about saving the curve tracer than the DUT. It's still perfectly possible to obliterate a small transistor that can't handle 5 W of dissipation.

The power delivered to the DUT can, however, be limited by connecting a load resistor in series, which is similar to how the old Tektronix instruments did it (they came with this nice resistor selection graph). I've left two contacts on the DUT terminal for the load resistor connection. In the photo above they are shorted with a wire jumper.
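As a sanity check, the worst-case dissipation in a DUT behind a series resistor follows from the maximum power transfer theorem: the DUT heats up the most when it drops exactly half of the drive voltage. A minimal sketch with illustrative numbers (my own back-of-the-envelope calculation, not taken from the Tektronix resistor selection graph):

```python
# Worst-case DUT dissipation behind a series load resistor.
# With peak drive voltage V and series resistance R, dissipation in
# the DUT peaks when the DUT drops V/2, giving P_max = V^2 / (4 R).

def max_dut_power(v_peak, r_series):
    """Worst-case power (in watts) dissipated in the DUT."""
    return v_peak ** 2 / (4 * r_series)

# Full 25 V drive with a 1 kOhm series resistor (illustrative values):
print(max_dut_power(25.0, 1000.0))  # 0.15625
```

So with a 1 kΩ series resistor, even a short-circuit-prone DUT can never be made to dissipate more than about 156 mW at the full 25 V drive.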

Curve traces for a 100 ohm resistor.

One of the things I aimed for was support for high-frequency signals, since I wanted to observe some dynamic effects. Traditional curve tracers only use 100 Hz or 120 Hz. Above you can see some reference measurements of a 100 ohm carbon-film resistor I did at different frequencies. Ideally, all plots should show a line with a 10 mA/V slope and the same length. However, at around 100 kHz the amplitudes start falling and voltage and current are no longer in phase, which causes the curve to open up into an ellipse.

The performance is mostly limited by the slew rate of the amplifiers, especially when observing fast edges, like in an avalanche breakdown. I'm currently using the excellent Renesas CA3240, a 4.5 MHz part that is one of the fastest operational amplifiers I could get in a DIP package. There's also a limitation on the DUT capacitance. Since I'm using an emitter follower and not a push-pull output stage, my curve tracer can only tolerate about 150 pF of DUT capacitance at 100 kHz.
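The 150 pF figure can be roughly sanity-checked: driving a capacitive DUT with a sine wave requires a peak current of 2πfCV, which the emitter follower's standing current has to sink on the falling half of the wave. A quick sketch with illustrative numbers (the exact limit of course depends on the actual bias current in my output stage):

```python
import math

def peak_cap_current(freq_hz, cap_f, v_peak):
    """Peak current (in amperes) needed to drive a capacitance
    cap_f with a sine wave of frequency freq_hz and amplitude v_peak."""
    return 2 * math.pi * freq_hz * cap_f * v_peak

# 150 pF DUT at 100 kHz, assuming the full 25 V swing:
i_pk = peak_cap_current(100e3, 150e-12, 25.0)
print(f"{i_pk * 1e3:.2f} mA")  # 2.36 mA
```

A couple of milliamps sounds small, but the pull-down resistor of an emitter follower has to sink it at the most negative point of the swing, which is exactly where the resistor current is lowest.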

Tweaking the high frequency performance was the most challenging part of designing this. Matching the delay between voltage and current outputs involved a lot of experimenting by adding small capacitances into the feedback networks. Despite this effort however each measurement still involves some double-checking to be sure that I'm seeing a real effect and not an artifact of the instrument.

Curve tracer on the desk with other instruments.

In conclusion, I'm reasonably happy with how this turned out. Obviously, there are some limitations and in a second iteration of the design I would do some things differently. For example, the decision to go with a 10 mA/V current output wasn't the best. Decreasing the gain would reduce the slew-rate requirements without losing much precision. The clamp circuit also doesn't work very well at high frequencies, due to the opamp going into saturation, and could do with a redesign. In the end, it was a useful refresh of my knowledge about the various details of imperfect amplifiers.

I've also found out that the XY mode on my Tektronix TDS2002B seems to be a bit crude. You can't have cursors and for some reason it's also impossible to save a screenshot. Also, while the scope does support current probes, the 10 mA/V range is missing, so I can't have the proper scale displayed on screen. While it's a bit annoying, it's not too much extra work to save individual traces to CSV in YT mode and then re-plot them with matplotlib or something when I have a measurement I want to save.
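The CSV-to-XY step is simple enough. Here's a rough sketch of the kind of pre-processing I mean, assuming a plain two-column time/value export (the real TDS2002B files have some extra header fields, which the parser below simply skips over):

```python
import csv
import io

def read_trace(csv_text):
    """Parse a simple two-column (time, value) CSV trace from the scope,
    skipping any non-numeric header rows."""
    pairs = []
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2:
            try:
                pairs.append((float(row[0]), float(row[1])))
            except ValueError:
                continue  # header or comment row
    return pairs

def to_xy(voltage_trace, current_trace, ma_per_volt=10.0):
    """Pair up samples from the two YT traces into XY points,
    scaling the current channel by the tracer's 10 mA/V output."""
    return [(v, i * ma_per_volt)
            for (_, v), (_, i) in zip(voltage_trace, current_trace)]

v = read_trace("t,V\n0,0.0\n1e-6,1.0\n")
i = read_trace("t,V\n0,0.0\n1e-6,0.1\n")
print(to_xy(v, i))  # [(0.0, 0.0), (1.0, 1.0)]
```

The resulting list of (volts, milliamps) points can then go straight into matplotlib's plot() with properly labelled axes.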

Another thing is that there's currently no step generator or any other provision for setting the base current (or FET gate voltage). However I did reserve a contact on the DUT terminal for the base/gate pin and there's a header already wired for a possible future expansion board with that capability. I might make that add-on at one point in the future but currently I'm not missing it too much since I've mostly been using the curve tracer with two-terminal devices.

Finally, if you're interested in this sort of thing and maybe designing or building your own curve tracer, I can recommend reading Paul's Building Another Curve Tracer post. He goes into much more detail about the design of his own instrument. Another very useful resource I found is the TekWiki, which contains a staggering amount of information about old Tektronix instruments, including manuals and often also full schematics.

Posted by Tomaž | Categories: Analog | Comments »

Printing .lto_priv symbols in GDB

14.02.2020 16:08

Here's a stupid little GNU debugger detail I've learned recently - you have to quote the names of some variables. When debugging a binary that was compiled with link time optimization, it sometimes appears like you can't inspect certain global variables.

GNU gdb (Debian 7.12-6) 7.12.0.20161007-git
[...]
(gdb) print usable_arenas
No symbol "usable_arenas" in current context.

The general internet wisdom seems to be that if a variable is subject to link time optimization it can't be inspected in the debugger. I guess this comes from the similar problem of inspecting private variables that are subject to compiler optimization. In some cases private variables get assigned to a register and don't appear in memory at all.

However, if it's a global variable, accessed from various places in the code, then its value must be stored somewhere, regardless of what tricks the linker does with its location. It's unlikely it would get assigned to a register, even if it's theoretically possible. So after some mucking about in the disassembly to find the address of the usable_arenas variable I was interested in, I was surprised to find out that gdb does indeed know about it:

(gdb) x 0x5617171d2b80
0x5617171d2b80 <usable_arenas.lto_priv.2074>:	0x17215410
(gdb) info symbol 0x5617171d2b80
usable_arenas.lto_priv in section .bss of /usr/bin/python3.5

This suggests that the name has a .lto_priv or a .lto_priv.2074 suffix (perhaps meaning LTO private variable? It is declared as a static variable in C). However I still can't print it:

(gdb) print usable_arenas.lto_priv
No symbol "usable_arenas" in current context.
(gdb) print usable_arenas.lto_priv.2074
No symbol "usable_arenas" in current context.

The trick is not that this is some kind of a special variable or anything. It just has a tricky name. You have to put it in quotes so that gdb doesn't try to interpret the dot as an operator:

(gdb) print 'usable_arenas.lto_priv.2074'
$3 = (struct arena_object *) 0x561717215410

TAB completion also works against you here, since it happily completes the name without the quotes and without the .2074 at the end, giving the impression that it should work that way. It doesn't. If you use completion, you have to add the quotes and the number suffix manually around the completed name (or only press TAB after inputting the leading quote, which works correctly).

Finally, I don't know what the '2074' means, but it seems you need to find that number in order to use the symbol name in gdb. Every LTO-affected variable seems to get a different number assigned. You can find the one you're interested in via a regexp search through the symbol table like this:

(gdb) info variables usable_arenas
All variables matching regular expression "usable_arenas":

File ../Objects/obmalloc.c:
struct arena_object *usable_arenas.lto_priv.2074;
Posted by Tomaž | Categories: Code | Comments »

Checking Webmention adoption rate

25.01.2020 14:42

Webmention is a standard that attempts to give plain old web pages some of the attractions of big, centralized social media. The idea is that web servers can automatically inform each other about related content and actions. In this way a post on a self-hosted blog, like this one, can display backlinks to a post on another server that mentions it. It also makes it possible to implement gimmicks such as a like counter. Webmention is kind of a successor to pingbacks, which were popularized some time ago by WordPress. Work on standardizing Webmention seems to date back to at least 2014, and it was first published as a working draft by W3C in 2016.

I first read about Webmention on jlelse's blog. I was wondering what the adoption of this standard looks like nowadays. Some searching revealed conflicting amounts of enthusiasm for it, but not much recent information. Glenn Dixon wrote in 2017 about giving up on it due to lack of adoption. On the other hand, Ryan Barrett celebrated 1 million sent Webmentions in 2018.

To get a better feel for the state of things in my local web bubble, I extracted all external links from my blog posts in the last two years (January 2018 to January 2020). That yielded 271 unique URLs on 145 domains from 44 blog posts. I then used Web::Mention to discover any Webmention endpoints for these URLs. Endpoint discovery is the first step in sending a notification to a remote server about related content. If it fails, it likely means that the host doesn't implement the protocol.
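For illustration, the HTML part of endpoint discovery can be sketched in a few lines of Python using only the standard library. This is a simplified stand-in for what Web::Mention does for me; a spec-compliant implementation also has to check the HTTP Link header first and handle a few more corner cases:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class WebmentionLinkFinder(HTMLParser):
    """Find the first <link> or <a> with rel="webmention" in a page.
    Simplified: the W3C Webmention recommendation also requires
    checking the HTTP Link response header before the HTML."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and attrs.get("href") is not None:
            # Endpoint URLs may be relative to the page URL.
            self.endpoint = urljoin(self.base_url, attrs["href"])

page = '<html><head><link rel="webmention" href="/wm-endpoint"></head></html>'
finder = WebmentionLinkFinder("https://example.com/post")
finder.feed(page)
print(finder.endpoint)  # https://example.com/wm-endpoint
```

If a fetch of the URL yields neither a Link header nor such an element, the host almost certainly doesn't accept Webmentions.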

The results weren't encouraging. None of the URLs had discoverable endpoints. That means that even if I had implemented the sending part of the Webmention protocol on my blog, I wouldn't have sent a single mention in the last two years.

Another thing I wanted to check is whether anyone was doing the same in the other direction. Were there any failed incoming attempts to discover an endpoint on my end? Unfortunately there is no good way of determining that from the logs I keep. In theory, endpoint discovery can look just like a normal HTTP request. However, many Webmention implementations seem to include "webmention" in their User-Agent header. According to this heuristic I likely received at least 3 distinct requests for endpoint discovery in the last year. It's likely there were more (for example, I know that my log aggregates don't include requests from WordPress plug-ins due to some filter regexps).
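The heuristic itself amounts to a grep over the access log. A minimal sketch, assuming the common "combined" log format where the User-Agent is the last quoted field (the example log lines are made up):

```python
import re

# The User-Agent is the last double-quoted field in a "combined"
# format access log line.
UA_RE = re.compile(r'"([^"]*)"\s*$')

def looks_like_webmention(log_line):
    """Crude heuristic: does the User-Agent mention "webmention"?"""
    match = UA_RE.search(log_line)
    return bool(match) and "webmention" in match.group(1).lower()

line = ('203.0.113.7 - - [25/Jan/2020:10:00:00 +0000] "GET / HTTP/1.1" '
        '200 1234 "-" "Webmention-Tester/1.0"')
print(looks_like_webmention(line))  # True
```

Of course this misses any implementation that discovers endpoints with a generic User-Agent, so the count is a lower bound at best.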

So implementing this protocol doesn't look particularly inviting from the network-effect standpoint. I also wonder whether Webmentions would become the spam magnet that pingbacks were back in the day if they reached any kind of widespread use. The standard does include a provision for endpoints to verify that the source page indeed links to the destination URL the Webmention request says it does. However, to me that protection seems trivial to circumvent and only creates a little more work for someone wanting to send out millions of spammy mentions across the web.

Posted by Tomaž | Categories: Code | Comments »

On "The Bullet Journal Method" book

17.01.2020 12:02

How can you tell if someone uses a Bullet Journal®? You don't have to, they will immediately tell you themselves.

Some time last year I saw this book in the window of a local bookstore. I was aware of the website, but I didn't know the author also published a book about his method of organizing notebooks. I learned about the Bullet Journal back in 2014 and it motivated me to better organize my daily notes. About 3000 written pages later I'm still using some of the techniques I learned back then. I was curious if the book holds any new useful note-taking ideas, so I bought it on the spot.

The Bullet Journal Method by Ryder Carroll.

The Bullet Journal Method is a 2018 book by Ryder Carroll (by the way, the colophon says my copy was printed in Slovenia). The text is split into four parts: the first gives the motivation for keeping a notebook. That is followed by a description of the actual note-taking methods. The third and longest part of the book, at around 100 pages, is called "The Practice". It's kind of a collection of essays giving advice on life philosophy, with general topics such as meaning, gratitude and so on. The last part explores a few variations of the methods described in the book.

The methods described in the book differ a bit from what I remember. In fact the author does note in a few places that their advice has changed over time. The most surprising to me was the change from using blank squares as a symbol for an unfinished task to simple dots. The squares were in my opinion one of the most useful things I took from the Bullet Journal as they are a very clear visual cue. They really catch the eye among other notes and drawings when browsing for things left undone in a project.

In general, the contents of my notebooks are quite different from the journals the book talks about. I don't have such well-defined page formats (the book calls them "collections"), except perhaps monthly indexes. My notebooks more resemble lab notes, and I also tend to write things in longer form than the really short bullet lists suggested in the book. The author spends a lot of time on migrations and reflection: rewriting things from an old, full notebook to a new one, moving notes between months and so on. I do very little of that and rely more on referencing and looking up things in old notebooks. I do see some value in it, though, and after reading the book I'm starting to do more of it for some parts of my notes. I've also experimented with a few other note-taking methods from the book; some seem to be working for me, while the others I've dropped.

The Bullet Journal Method on Productivity.

I was surprised to see that a large portion of the book is dedicated to this very general motivational and lifestyle advice, including diagrams like the one you see above, much in the style of self-help books. It made me give up on the book halfway through for a few months. I generally have a dislike for this kind of text, but I don't think it's badly written. The section is intertwined with exercises that you can write down in your journal, like the "five whys" and so on. Some were interesting and others not so much. Reading about a suggestion to write your own obituary after a recent death in the family was off-putting, but I can hardly blame the book for that coincidence.

There is certainly some degree of Bullet Journal® brand building in this book. It feels like the author tries quite hard to sell their method in the first part of the book via thankful letters and stories from people who solved various tough life problems by following their advice. Again, this is something I think is commonly found in self-help books, and for me personally it usually has the opposite effect from what was probably intended. I do appreciate that the book doesn't really push the monetary side of things. The author's other businesses (branded notebooks and the mobile app) are each mentioned once towards the end of the book and not much more.

Another pleasant surprise was the tactful acknowledgment from the author that many journals shared on the web and social media don't resemble real things and can be very demotivational or misleading. I've noticed that myself. For example, if you search for "bullet journal" on YouTube you'll find plenty of people sharing their elaborately decorated notebooks that have been meticulously planned and sectioned for a year in advance. That's simply not how things work in my experience and most of all, I strongly believe that writing the notebook with the intention of sharing it on social media defeats the whole purpose.

In conclusion, it's an interesting book and so far I've kept it handy on my desk to occasionally look up some example page layouts that are given throughout it. I do recommend it if you're interested in using physical notebooks or are frustrated with the multitude of digital productivity apps that never tend to quite work out. It's certainly a good starting point, but keep in mind that what's recommended in there might not be what actually works best for you. My advice would be only to keep writing and give it some time until you figure out the useful parts.

Posted by Tomaž | Categories: Life | Comments »