On "ap_pass_brigade failed"

25.05.2016 20:37

Related to my recent rant regarding the broken Apache 2.4 in Debian Jessie, another curious thing was the appearance of the following in /var/log/apache2/error.log after the upgrade:

[fcgid:warn] [pid ...:tid ...] (32)Broken pipe: [client ...] mod_fcgid: ap_pass_brigade failed in handle_request_ipc function, referer: ...

Each such error is also accompanied by a 500 Internal Server Error HTTP response logged in the access log.

There's a lot of misinformation floating around the web about this. Contrary to popular opinion, this is not caused by wrong values of various Fcgid... options or the PHP_FCGI_MAX_REQUESTS variable. Admittedly, I don't know much about PHP (which seems to be the primary use case for FCGI), but I do know how to read the mod_fcgid source code, and this error seems to have a very simple cause: clients that close the connection without waiting for the server to respond.

The error is generated on line 407 of fcgid_bridge.c (mod_fcgid 2.3.9):

/* Now pass any remaining response body data to output filters */
if ((rv = ap_pass_brigade(r->output_filters,
                          brigade_stdout)) != APR_SUCCESS) {
    if (!APR_STATUS_IS_ECONNABORTED(rv)) {
        ap_log_rerror(APLOG_MARK, APLOG_WARNING, rv, r,
                      "mod_fcgid: ap_pass_brigade failed in "
                      "handle_request_ipc function");
    }

    return HTTP_INTERNAL_SERVER_ERROR;
}

The comment at the top already suggests the cause of the error message: failure to send the response generated by the FCGI script. The condition is easy to reproduce with a short Python script that sends a request and immediately closes the socket:

import socket, ssl

HOST = "..."
# path to some document generated by an FCGI script
PATH = "..."

ctx = ssl.create_default_context()
conn = ctx.wrap_socket(socket.socket(socket.AF_INET), server_hostname=HOST)
conn.connect((HOST, 443))
# send the request, then tear down the connection without reading the reply
conn.sendall(("GET " + PATH + " HTTP/1.0\r\nHost: " + HOST + "\r\n\r\n").encode())
conn.close()

Actually, you can do the same with a browser by mashing the refresh and stop buttons. Success somewhat depends on how long the script takes to generate the response - for very fast scripts it's hard to tear down the connection quickly enough.

Probably at some point ap_pass_brigade() returned ECONNABORTED when the client broke the connection, hence the if statement in the code above. It appears that EPIPE is returned these days and mod_fcgid was never updated accordingly. I was testing this on apache2 2.4.10-10+deb8u4.

In any case, this error message is benign. Fiddling with FcgidOutputBufferSize might cause the response to be sent out earlier and reduce the chance that this is triggered by buggy crawlers and such, but in the end there is nothing you can do about it on the server side. The 500 response in the log is also clearly an artifact in this case: it was the client that caused the error, not the server, and no error page was actually delivered.
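For completeness, here is what such fiddling might look like: a hypothetical fragment for mods-available/fcgid.conf that shrinks the buffer from its 64 kB default, so that output is flushed to the client sooner:

# Flush FCGI script output to the client sooner by shrinking the
# 64 kB default output buffer (value in bytes).
FcgidOutputBufferSize 1024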

Posted by Tomaž | Categories: Code | Comments »

Jessie upgrade woes

23.05.2016 19:59

Debian 8 (Jessie) was officially released a bit over a year ago. Back in January last year I mentioned that I was planning to upgrade my CubieTruck soon, which in this case turned out to mean 16 months later. Doesn't time fly when you're not upgrading software? In any case, here are some assorted notes regarding the upgrade from Debian Wheezy. Most of them are not CubieTruck specific, so someone else might find them useful. Or entertaining.

Jessie armhf comes with kernel 3.16, which supports CubieTruck's Allwinner SoC and most of the peripherals I care about. However, it seems you can't use the built-in NAND flash for booting. It would be nice to get away from the sunxi 3.4 kernel and enjoy kernel updates through apt, but I don't want to go back to messing with SD cards. Daniel Andersen keeps the 3.4 branch reasonably up-to-date and Jessie doesn't seem to have problems with it, so I'll stick with that for the time being.

CubieTruck

The dreaded migration to systemd didn't cause any problems, apart from having to migrate a couple of custom init.d scripts. The most noticeable change is a significant increase in the number of mounted tmpfs filesystems, which makes df output somewhat unwieldy and, as a consequence, Munin's disk usage graphs a mess.

SpeedyCGI was a way of making dynamic web pages back in the olden days. In the best Perl tradition it tweaked some low-level parts of the language in order to avoid restarting the interpreter for each HTTP request - like automagically persisting global state and making exit() not actually exit. From a standpoint of a lazy web developer it was an incredibly convenient way to increase performance of plain old CGI scripts. But alas, it remained unmaintained for many years and was finally removed in Jessie.

FCGI and Apache's mod_fcgid (not to be confused with mod_fastcgi, its non-free and slightly more broken cousin) seemed like natural replacements. While FCGI makes persistence explicit, the programming model is more or less the same, and hence the migration required only minor changes to my scripts - plus working around various cases of FCGI's brain damage. Like, for instance, its intentional ignorance of Perl's built-in Unicode support. Or the fact that gracefully stopping worker processes is more or less unsupported. In fact, FCGI's process management seems to be broken on multiple levels, as mod_fcgid has problems maintaining a stand-by pool of workers.

Perl despair

In any case, the new Apache 2.4 is a barrel of fun by itself. It changes the syntax for access control in such a way that config files need to be updated manually. It now also ignores all config files that don't end in .conf. Incidentally, Apache will serve files from /var/www/html if it has no VirtualHosts defined. This seems to be a hard-coded default, so you can't find out why it's doing that by grepping through /etc/apache2.
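For example, the old access control directives have to be translated to the new Require syntax. A typical before-and-after (assuming mod_access_compat is not loaded):

# Apache 2.2 style:
#   Order allow,deny
#   Allow from all
# Apache 2.4 equivalent:
Require all granted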

The default config in Jessie frequently warns about deadlocks in various places:

(35)Resource deadlock avoided: [client ...] mod_fcgid: can't lock process table in pid ...
(35)Resource deadlock avoided: AH00273: apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.
(35)Resource deadlock avoided: AH01948: Failed to acquire OCSP stapling lock

I'm currently using the following in apache2.conf, which so far seems to work around this problem:

# was: Mutex file:${APACHE_LOCK_DIR} default
Mutex sem default

Apache 2.4 in Jessie breaks the HTTP ETag caching mechanism. If you're using mod_deflate (it's enabled by default to compress text-based content like HTML, CSS and RSS), browsers won't be getting 304 Not Modified responses, which means longer load times and higher bandwidth use. The workaround I'm using is the following in mods-available/deflate.conf (you also need to enable mod_headers):

Header edit "Etag" '^"(.*)-gzip"$' '"$1"'

This differs somewhat from the solution proposed in Apache's Bugzilla, but as far as I can see restores the old and tested behavior of Apache 2.2, even if it's not exactly up to HTTP specification.
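A quick way to verify that the workaround took effect is to replay the ETag by hand and check for a 304. The host and the ETag value below are made up; substitute whatever the first request returns:

$ curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://example.org/ | grep -i etag
ETag: "2a37-5317abe40f4c1"
$ curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' \
    -H 'If-None-Match: "2a37-5317abe40f4c1"' http://example.org/ | head -n 1
HTTP/1.1 304 Not Modified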

I wonder whether this state of affairs means that everyone has moved on to nginx, or whether these are just typical problems of a new major release. Anyway, to conclude on a more positive note, Apache now supports OCSP stapling, which is pretty simple to enable.

Finally, rsyslog is slightly broken in Jessie on headless machines that don't have an X server running. It spams the log with lines like:

rsyslogd-2007: action 'action 17' suspended, next retry is Sat May 21 18:12:53 2016 [try http://www.rsyslog.com/e/2007 ]

This can be worked around by commenting out the following lines in rsyslog.conf:

#daemon.*;mail.*;\
#       news.err;\
#       *.=debug;*.=info;\
#       *.=notice;*.=warn       |/dev/xconsole

Posted by Tomaž | Categories: Code | Comments »

Materialized Munin display

15.05.2016 21:25

Speaking of Munin, here's a thing that I've made recently: a small stand-alone display that cycles through a set of measurements from a Munin installation.

Munin display

(Click to watch Munin display video)

Back when the ESP8266 chip was the big new thing, I ordered a bag of them from eBay. Said bag then proceeded to gather dust in a corner of my desk for a year or so, as such things unfortunately tend to do these days. I also had a really nice white transflective display left over from another project (suffice to say, it cost around £20, compared to ones you can get for a tenth of the price with free shipping on DealExtreme). So something like this looked like a natural thing to make.

The hardware is not worth wasting too many words on: an ESP8266 module handles the radio and networking part. The display is a 2-line LCD panel using the common 16-pin interface. An Arduino Pro Mini acts as glue between the display and the ESP8266. There are also 3.3 V (for the ESP8266) and 5 V (for the LCD and Arduino) power supplies and a transistor level shifter for the serial line between the ESP8266 and the Arduino.

The ESP8266 runs the stock firmware that exposes a modem-like AT-command interface on a serial line. I could have omitted the Arduino and run the whole thing from the ESP8266 alone; however, the lack of GPIO lines on the module I was using meant I would have needed some kind of GPIO extender or multiplexer to drive the 16-pin LCD interface. An Arduino with the WeeESP8266 library just seemed like less of a hassle.
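For illustration, the setup conversation with the stock AT firmware goes roughly like this (from memory, so the exact commands may differ between firmware versions, and the port number is made up). CWJAP joins the wireless network, CIPMUX enables multiple connections, which server mode requires, and CIPSERVER starts listening:

AT+CWMODE=1
AT+CWJAP="myssid","mypassword"
AT+CIPMUX=1
AT+CIPSERVER=1,333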

Top side of the circuit in the Munin display.

On the software side, the device basically acts as a dumb display. The ESP8266 listens on a TCP socket and the Arduino pushes everything received on that socket to the LCD. All the complexity is hidden in a Python daemon that runs on my CubieTruck. The daemon uses PyMunin to periodically query Munin nodes, renders the strings to be displayed and sends them to the display.
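A minimal sketch of such a daemon (the hostname, port and measurements shown are made-up stand-ins; the real thing queries Munin nodes through PyMunin):

import socket, time

DISPLAY_ADDR = ("munin-display.lan", 333)  # hypothetical address of the ESP8266

def render(label, value):
    # Format one measurement into two 16-character lines for the LCD.
    return ("%-16s" % label) + ("%-16s" % value)

while True:
    for label, value in [("load average", "0.15"), ("root fs used", "42 %")]:
        # Push the rendered text to the display; the Arduino writes
        # whatever arrives on the socket straight to the LCD.
        with socket.create_connection(DISPLAY_ADDR) as conn:
            conn.sendall(render(label, value).encode("ascii"))
        time.sleep(5)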

Speaking of the ESP8266, my main complaint would be that there is basically zero official documentation about it. Just getting it to boot meant reconciling conflicting information from different blog and forum posts (for me, both CH_PD and RST/GPIO16 needed to be pulled high). No one mentioned that the RX pin has an internal pull-up. I also way underestimated the current consumption (the datasheet does say 1 mA stand-by, after all, and the radio is mostly idle in my case). It turns out that a linear regulator is out of the question and a 3.3 V switch-mode power supply is a must.

My module came with firmware that was very unreliable. Getting official firmware updates from a sticky forum post felt kind of shady and it took some time to find an image that worked with the 512 kB flash on my module. That said, the module has now been working without resets or hangs for a couple of weeks, which is nice and not something all similar radio modules are capable of.

Inside the Munin display.

Finally, this is also my first 3D-printed project and I learned several important lessons. It's better to leave too much clearance than too little between parts that are supposed to fit together. This box took about four hours of careful sanding and cutting before the top part could be inserted into the bottom, since the 3D printer randomly decided to make some walls 1 mm thicker than planned. Also, self-tapping screws and automagically hollowed-out plastic parts don't play nice together.

With all the careful measuring and planning required to come up with a CAD drawing, I'm not sure 3D printing saved me any time compared to a simple plywood box which I could make and fit on the fly. Also, relying on the flexibility and precision of a 3D print made me kind of forget about the mechanical design of the circuit. I'm not particularly proud of the way things fit together and how it looks inside, but most of it is hidden away from view anyway and I guess it works well enough for a quick one-off project.

Posted by Tomaž | Categories: Life | Comments »

Power supply voltage shifts

02.05.2016 20:16

I'm a pretty heavy Munin user. In recent years I've developed a habit of adding a graph or two (or ten) for every service that I maintain. I also tend to monitor as many aspects of computer hardware as I can conveniently write a plugin for. At the latest count, my Munin master tracks a bit over 600 variables (not including a separate instance that monitors 50-odd VESNA sensor nodes deployed by IJS).

Monitoring everything and keeping a long history allows you to notice subtle changes that would otherwise be easy to miss. One of the things that I found interesting is the long-term behavior of power supplies. Pretty much every computer these days comes with software-accessible voltmeters on various power supply rails, so this is easy to do (using lm-sensors, for instance).
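Getting at these readings is just a matter of scraping lm-sensors output. A rough sketch; the "+5V" label is an assumption, since names vary between motherboards and monitoring chips:

import re, subprocess

# Pull the +5 V rail reading out of lm-sensors' text output.
out = subprocess.check_output(["sensors"], universal_newlines=True)
match = re.search(r"^\+5V:\s*\+?([0-9.]+)\s*V", out, re.MULTILINE)
if match:
    print("+5 V rail: %.3f V" % float(match.group(1)))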

Take for example voltage on the +5 V rail of an old 500 watt HKC USP5550 ATX power supply during the last months of its operation:

Voltage on ATX +5 V rail versus time.

From the start, this power supply showed a slight downward trend of around -2 mV/month. Then for some reason the voltage jumped up by around 20 mV, was stable for a while, then sharply dropped and started drifting at around -20 mV/month. At that point I replaced it, fearing that it might soon endanger the machine it was powering.

The slow drift looks like aging of some sort - perhaps of a voltage reference or a voltage divider before the error amplifier. Considering that it disappeared after the PSU was changed, it seems the drift was indeed caused by the PSU and not by a drifting ADC reference on the motherboard or some other artifact in the measurements. The abrupt shifts are harder to explain. As far as I can tell, nothing important happened at those times. An application note from Linear mentions that leakage currents due to dirt and residues on the PCB can cause output voltage shifts.

It's also interesting that the +12 V rail on the same power supply showed a somewhat different pattern. The final voltage drop is not apparent there, so whatever caused the drop on the +5 V line seems to have happened after the point where the regulation circuit measures the voltage. The +12 V line isn't separately regulated in this device, so if the regulation circuit were involved, some change should have been apparent on +12 V as well.

Perhaps it was just a bad solder joint somewhere down the line, or oxidation building up on connectors. At 10 A, a 50 mV step corresponds to only around 5 mΩ of change in resistance.

Voltage on ATX +12 V rail versus time.

This sort of voltage jump seems to be quite common though. For instance, here is another one I recently recorded on the 5 V, 2.5 A external power supply that came with the CubieTruck. Again, as far as I can tell, there was no external reason (for instance, the power supply current shows no similar change at that time).

Voltage on CubieTruck power supply versus time.

I have the offending HKC power supply opened up on my bench at the moment and nothing looks obviously out of place except copious amounts of dust. While it would be interesting to know what the exact reasons were behind these voltage changes, I don't think I'll bother looking any deeper into this.

Posted by Tomaž | Categories: Analog | Comments »

Measuring interrupt response times, part 2

27.04.2016 11:40

Last week I wrote about some typical interrupt response times you get from an Arduino and a Raspberry Pi if you follow basic examples from the documentation or whatever comes up on Google. I got some quite unexpected results, like a Python script that responds faster than a compiled C program. To check some of my guesses as to what caused those results, I did another set of measurements.

For the Arduino, most response times were grouped around 9 microseconds, but there were a few outliers. I checked the Arduino library source and it indeed always enables the AVR timer/counter0 overflow interrupt. If the timer interrupt happens at the same time as the GPIO interrupt I was measuring, the GPIO interrupt gets delayed. Performing the measurement with the timer interrupt masked out indeed removes the outliers:

Effect of timer interrupt on Arduino response time.

With the timer off, all measured response times fall between 8.9485 and 9.1986 μs, an interval 0.2501 μs long. This fits perfectly with theory: at a 16 MHz CPU clock and an instruction length between 1 and 5 cycles, the uncertainty in interrupt latency is 4 clock cycles, or 0.25 μs.

The second weird thing was the aforementioned discrepancy between Python and C on the Raspberry Pi. The default Python library uses an ugly hack to bypass the kernel GPIO driver and control GPIO lines directly from user space: it mmaps a range of physical memory containing the GPIO registers into its own process memory space using /dev/mem. This is similar to how X servers on Linux (used to?) access graphics hardware from user space. While this approach is very unportable, it's also much faster, since you don't need a context switch into the kernel for every operation.
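A rough Python rendering of the same trick (this is not RPi.GPIO's actual code: the register offsets are the BCM2835's GPSET0 and GPCLR0, and I'm using /dev/gpiomem, the newer unprivileged alternative to /dev/mem):

import mmap, os, struct

# Map the GPIO register block into this process. After this, changing
# a pin state is a plain memory write, with no syscall involved.
fd = os.open("/dev/gpiomem", os.O_RDWR | os.O_SYNC)
regs = mmap.mmap(fd, 4096)

def gpio_set(pin):
    # Write to GPSET0; the pin must already be configured as an output.
    struct.pack_into("<I", regs, 0x1C, 1 << pin)

def gpio_clear(pin):
    # Write to GPCLR0.
    struct.pack_into("<I", regs, 0x28, 1 << pin)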

To check just how much faster mmap method is on Raspberry Pi, I copied the GPIO access code from the RPi.GPIO library into my test C program:

Response times using sysfs and mmap methods on Raspberry Pi.

As you can see, the native program is now faster than the interpreted Python script. This also demonstrates just how costly context switches are: the sysfs version is more than two times slower on average. It's also worth noting that both RPi.GPIO and my C program still use epoll() or select() on a sysfs file to wait for the interrupt; only the output pin change is done with direct memory accesses.

Finally, the Raspberry Pi was faster when the CPU was loaded, which seemed counterintuitive. I tracked this down to automatic CPU frequency scaling. By default, the Raspberry Pi Zero seems to be set to run between 700 MHz and 1000 MHz using the ondemand governor. If I switch to the performance governor, it keeps the CPU running at 1 GHz at all times. In that case, as expected, CPU load increases the average response time:

Effect of cpufreq governor on Raspberry Pi response time.
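For reference, switching the governor is just a write to the standard cpufreq sysfs interface (as root):

# Pin the CPU at its top clock by selecting the performance governor.
with open("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w") as f:
    f.write("performance")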

It's interesting to note that the Linux kernel comes with pluggable idle loop implementations (CONFIG_CPU_IDLE). The idle loop can be selected through /sys/devices/system/cpu/cpuidle in a similar way to the CPU frequency governor. The Raspbian Jessie release however has that disabled and uses the default idle loop for ARMv6 processors. The assembly code has been patched though: the ARM Wait For Interrupt (WFI) instruction in the vanilla kernel has been replaced with some mcreq (write to coprocessor?) instructions. I can't find any info on the JIRA ticket referenced in the comment, and the change was added among other BCM-specific changes in a single 6400-line commit. The idle loop implementation is interesting because if it puts the CPU into a power saving mode, it can affect interrupt latency as well.

As before, source code and raw data are on GitHub.

Posted by Tomaž | Categories: Digital | Comments »

Measuring interrupt response times

18.04.2016 15:13

Embedded systems were traditionally the domain of microcontrollers. You programmed them in C on bare metal, directly poking values into registers and hooking into interrupt vectors. Only if it was really necessary would you include some kind of light-weight operating system. Times are changing though. These days it's becoming more and more common to see full Linux systems and high-level languages in this area. It's not surprising: if I can just pop open a shell, see what exceptions my Python script is throwing and fix them on the fly, I'm not going to bother with microcontrollers and the whole in-circuit debugger thing. Some even say it won't be long before we are all just running web browsers on our devices.

It seems to be common knowledge that the traditional approach really excels at latency. If you're moderately careful with your code, you can get your system to react very quickly and consistently to events. Common embedded Linux systems don't have real-time features. They seem to address this deficiency with some combination of "don't care", "it's good enough" and throwing raw CPU power at the problem. Or as the author of RPi.GPIO library puts it:

If you are after true real-time performance and predictability, buy yourself an Arduino.

I was wondering what kind of performance you could expect from these modern systems. I tend to be very conservative in my work: I have a pile of embedded Linux-running boards, but they are mostly gathering dust while I stick to old-fashioned Cortex M3s and AVRs. So I thought it would be interesting to do some experiments and get some real data about these things.

Measuring interrupt response times on Arduino.

To test how fast a program can respond to an event, I chose a very simple task: raise an output digital line whenever a rising edge happens on an input digital line. This allowed me to measure response times very simply, in an automated fashion, using a USB-connected oscilloscope and a signal generator.

I tested two devices: an Arduino Uno using a 16 MHz ATmega328 microcontroller, and a Raspberry Pi Zero using a 1 GHz ARM-based CPU running Raspbian Jessie. I tried several approaches to implementing the task. On the Arduino, I implemented it with an interrupt and with a polling loop. On the Raspberry Pi, I tried a kernel module, a native binary written in C and a Python program. You can see the exact source code on GitHub.

Measuring interrupt response times on Raspberry Pi.

For all of these, I chose the most obvious approach possible. My implementations were based as much as possible on the preferred libraries mentioned in the documentation or whatever came up on top of my web searches. This meant that for Arduino, I was using the Arduino IDE and the library that comes with it. For Raspberry Pi, I used the RPi.GPIO Python library, the GPIO sysfs interface for native code in user space and the GPIO consumer interface for the kernel module (based on examples from Stefan Wendler). Definitely many of these could be further hand-optimized, but I was mostly interested here in out-of-the-box performance you could get in the first try.

Here is a histogram of 500 measurements for the five implementations:

Histogram of response time measurements.

As expected, the Arduino and the Raspberry Pi kernel module were both significantly faster and more consistent than the two Raspberry Pi user space implementations. Somewhat shockingly though, the interpreted Python program was considerably faster than my C program compiled to native code.

If you check the source, the RPi.GPIO library maps the hardware registers directly into its process memory. This means that it does not need any syscalls for controlling the GPIO lines. On the other hand, my C implementation uses the kernel's sysfs interface. This is arguably a cleaner and safer way to do it, but it requires calls into the kernel to change GPIO states, and these require expensive context switches. This difference is likely the reason why Python was faster.
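For reference, the sysfs way of waiting for an edge looks roughly like this. The pin number is a made-up example, and the pin must first be exported with "rising" written to its edge file:

import select

PIN = 17  # hypothetical GPIO number

with open("/sys/class/gpio/gpio%d/value" % PIN) as f:
    poller = select.poll()
    # The kernel signals an edge as an exceptional condition (POLLPRI).
    poller.register(f, select.POLLPRI | select.POLLERR)
    f.read()        # consume the current value first
    poller.poll()   # blocks in the kernel until the edge arrives
    f.seek(0)
    print("edge, value is now", f.read().strip())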

Histogram of response time measurements (zoomed)

Here is the zoomed-in left part of the histogram. The Raspberry Pi kernel module can be just as fast as the Arduino, but is less consistent. That's not surprising, since the kernel has many other interrupts to service, and not that impressive considering the 60-times-faster CPU clock.

The Arduino itself is not that consistent out of the box. While most interrupts are served in around 9 microseconds (so around 140 CPU cycles), occasionally they take as long as 15 microseconds. Probably the Arduino library is to blame here, since it uses the timer interrupt for its delay functions. This interrupt seems to be always enabled, even when a delay function is not running, and hence competes with the GPIO interrupt I am using.

Also, this again shows that polling on Arduino can sometimes be faster than interrupts.

Effect of CPU load on response time.

Another interesting result was the effect of CPU load on Raspberry Pi response times. Somewhat counterintuitively, response times are smaller on average when some other process is consuming CPU cycles. This happens even with the kernel module, which makes me think it has something to do with power saving features. Perhaps it is due to CPU frequency scaling, or maybe the kernel puts an idle CPU into some sleep mode from which it takes longer to wake up.

In conclusion, I was a bit impressed by how well Python scores on this test. While it's an order of magnitude slower than the Arduino, 200 microseconds on average is not bad. Of course, there's no hard upper limit on that. In my test some responses took twice as long, and things really start falling apart if you increase the interrupt load (for instance, with a process that does something with the SD card or network adapter). Some of the results on the Raspberry Pi were quite surprising, and they show once again that intuition can be pretty wrong when it comes to software performance.

I will likely be looking into more details regarding some of these results. If you would like to reproduce my measurements, I've put source code, raw data and a notebook with analysis on GitHub.

Posted by Tomaž | Categories: Digital | Comments »

Clockwork, part 2

10.04.2016 19:39

I hate to leave a good puzzle unsolved. Last week I was writing about a cheap quartz mechanism I got from an old clock that stopped working. I said that I could not figure out why its rotor only turns in one direction given a seemingly symmetrical construction of the coil that drives it.

There are quite a number of teardowns and descriptions of how such mechanisms work on the web. However, very few seem to address the question of the direction of rotation, and those that do don't give a very convincing argument. Some mention that the direction has something to do with the asymmetric shape of the coil's core. This forum post mentions that the direction can be reversed if a different pulse width is used.

So, first of all I had a closer look at the core. It's made of three identical iron sheets, each 0.4 mm thick. Here is one of them on the scanner with the coil and the rotor locations drawn over it:

Coil location and direction of rotation.

It turns out there is in fact a slight asymmetry. The edges of the cut-out for the rotor are 0.4 mm closer together on one diagonal than on the other. It's hard to make that out with the unaided eye. It's possible that the curved edge on the other side makes it less error-prone to assemble the core with all three sheets in the same orientation.

Dimension drawing of the magnetic core.

The forum post about pulse lengths and my initial thought about shaded-pole motors made me think that some subtle transient effect was in play that would make the rotor prefer one direction over the other. With just a single coil, core asymmetry cannot result in a rotating magnetic field if you assume linear conditions (e.g. no part of the core gets saturated) and no delay due to eddy currents. Shaded-pole motors overcome this by delaying the magnetization of one part of the core through a shorted auxiliary winding, but no such arrangement is present here.

I did some measurements and back-of-the-envelope calculations. The coil has approximately 5000 turns and a resistance of 215 Ω. The field strength is nowhere near saturation for iron. The current through the coil settles within milliseconds (I measured a time constant of 250 μs without the core in place). It seems unlikely that any transients in magnetization can affect the movement of the rotor.

After a bit more research, I found out that this type of motor is called a Lavet-type stepping motor. In fact, its operation can be explained completely using static fields; transients don't play any significant role. The rotor has four stable points: two when the coil drives the rotor in one or the other direction, and two when the rotor's own permanent magnetization attracts it to the ferromagnetic core. The core asymmetry creates a slight offset between the former and the latter two points. Wikipedia describes the principle quite nicely.

To test this principle, I connected the coil to an Arduino and slowly stepped this clockwork motor through its four states. The LED on the Arduino board above shows when the coil is energized. The black dot on the rotor roughly marks the position of one of its poles. You can see that when the coil turns off, the rotor turns slightly forward as its permanent magnet aligns it with the diagonal of the core that has the smaller air gap (one step is a bit more pronounced than the other in the video above). This slight forward advancement from the neutral position then makes the rotor prefer forward over backward motion when the coil is energized with the other polarity.

It's always fascinating to see how a mundane thing like a clock still manages to have parts whose principle of operation is not at all obvious at first glance.

Posted by Tomaž | Categories: Life | Comments »

Clockwork

28.03.2016 15:17

Recently one of the clocks in my apartment stopped. It had been here since before I moved in and is probably more than 10 years old. The housing more or less crumbled away as I opened it. On the other hand, the movement inside looked like it was still in good condition, so I had a look to see if there was anything in it that I could fix.

Back side of the quartz clock movement.

This is a standard 56 mm quartz wall clock movement. It's pretty much the same as in any other cheap clock I've seen. In this case, its makers were quick to dispel any myths about its quality: no jewels in watchmaker's parlance means no quality bearings, and I'm guessing unadjusted means that the frequency of its quartz oscillator can't be adjusted.

Circuit board in the clock movement.

As far as electronics is concerned, there's not much to see in there. There's a single integrated circuit, bonded to a tiny PCB and covered with a blob of epoxy. It uses a small tuning-fork quartz resonator to keep time. As the cover promised, there's no sign of a trimmer for adjusting the quartz load capacitance. Two exposed pads on the top press against metallic strips that connect to the single AA battery. The lifetime of the battery was probably more than a year, since I don't remember the last time I had to change it.

Coil from the clock movement.

The circuit is connected to a coil on the other side of the circuit board. It drives the coil with 30 ms pulses once per second, with alternating polarity. The oscilloscope screenshot below shows the voltage on the coil terminals.

Voltage waveform on the two coil terminals.

When the mechanism is assembled, there's a small toroidal permanent magnet sitting in the gap in the coil's core with the first plastic gear on top of it. The toroid is laterally magnetized and works as a rotor in a simple stepper motor.

Permanent magnet used as a rotor in clock movement.

The rotor turns half a turn every second, and this is what gives off the audible tick-tock sound. I'm a bit puzzled as to what makes it turn in only one direction. I could see nothing that would work as a shaded pole or something like that. The core also looks perfectly symmetrical, with no features that would make it prefer one direction of rotation over the other. Maybe the unusual cutouts on the gear for the second hand have something to do with it.

Update: my follow-up post explains what determines direction of rotation.

Top side of movement with gears in place.

This is what the mechanism looks like with the gears in place. The whole construction is very finicky and a monument to material cost reduction. There's no way to run it without the cover in place, since the gears fall over and the impulses in the coil actually eject the rotor if there's nothing on top holding it in place (it's definitely not as well behaved as the one in this video). In fact, I see no trace of the rotor magnet ever being permanently bonded to the first gear. It seems to just kind of jump around in the magnetic field and drive the mechanism by rubbing against the inside of the gear.

In the end, I couldn't find anything obviously wrong with this thing. The electronics seem to work correctly. The gears also look and turn fine. When I put it back together, it would sometimes run, sometimes it would just jump one step back and forth, and sometimes it would stand still. Maybe some part wore down mechanically, increasing friction. Or maybe the magnet lost some of its magnetization and no longer produces enough torque to reliably turn the mechanism. In any case, it's going into the scrap box.

Posted by Tomaž | Categories: Life | Comments »

The problem with gmail.co

14.03.2016 19:41

At this moment, the gmail.co domain (note the missing m) is registered by Google. This is not surprising. It's common practice these days for owners of popular internet services to buy up domains that are similar to their own. It might be to fight phishing attacks (e.g. go-to-this-totally-legit-gmail.co-login-form type affairs), to prevent typosquatting, or purely for convenience, to redirect users who mistyped the URL to the correct address.

$ whois gmail.co
(...)
Registrant Organization:                     Google Inc.
Registrant City:                             Mountain View
Registrant State/Province:                   CA
Registrant Country:                          United States

gmail.co currently serves a plain 404 Not Found page on the HTTP port. Not really user friendly, but I guess it's good enough to prevent web-based phishing attacks.

Now, with half of the world using ...@gmail.com email addresses, it's not uncommon to mistakenly send an email to a ...@gmail.co address. Normally, if you mistype the domain part of an email address, your MTA will see the DNS resolution fail and you will immediately get either an SMTP error at the time of submission, or a bounced mail shortly after.

Unfortunately, the gmail.co domain actually exists, which means that MTAs will in fact attempt to deliver mail to it. There's no MX DNS record; however, SMTP specifies that MTAs must in that case use the address in the A or AAAA records for delivery. Those do exist (they allow the previously mentioned HTTP error page to be served to a browser).

To further complicate the situation, the SMTP port 25 on the IPs referenced by those A and AAAA records is blackholed. This means that an MTA will attempt to connect to it, hang while the remote host eats up the SYN packets, and fail only after the TCP handshake times out. A timeout looks to the MTA like an unresponsive mail server, which means it will continue to retry the delivery for a considerable amount of time. RFC 5321 says that it should take at least 4-5 days before it gives up and sends a bounce:

Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days. It MAY be appropriate to set a shorter maximum number of retries for non-delivery notifications and equivalent error messages than for standard messages. The parameters to the retry algorithm MUST be configurable.

In a nutshell, what all of this means is that if you make a typo and send mail to @gmail.co, it will take around a week before you receive any indication that your mail was not delivered. Needless to say, this is bad. Especially if the message you were sending was time-critical in nature.
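The blackholed port is easy to check first-hand with a few lines of Python (note that this hangs for the full ten seconds if the SYNs are still being dropped):

import socket

try:
    socket.create_connection(("gmail.co", 25), timeout=10)
    print("connected")
except socket.timeout:
    print("timed out - no response to the TCP handshake")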

Update: Exim will warn you when a message has been delayed for more than 24 hours, so you'll likely notice this error before the default 6-day retry timeout. Still, it's annoying, and not all MTAs are that friendly.

The lesson here is that if you register your own typosquatting domains, do make sure that mail sent to them is immediately bounced. One way is to simply set an invalid MX record (this causes an immediate error on the SMTP level). You can also run an SMTP server that actively rejects all incoming mail (possibly with a friendly error message reminding the user of the mistyped address), but that requires some more effort.
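The cleanest form of the invalid MX record is the null MX from RFC 7505. A hypothetical zone file entry:

; "Null MX" (RFC 7505): senders get an immediate permanent error
; instead of falling back to the A/AAAA records.
typo.example.    IN    MX    0    .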

As for this particular blunder of Google's, a workaround is to put a special retry rule for gmail.co in your MTA so that it gives up faster (e.g. see Exim's retry configuration).
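A hypothetical rule for the retry section of an Exim configuration, bouncing after a day instead of nearly a week:

# Retry gmail.co every 30 minutes, give up (and bounce) after 24 hours.
gmail.co    *    F,24h,30m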

Posted by Tomaž | Categories: Life | Comments »

Some SPF statistics

21.02.2016 19:04

Some people tend their backyard gardens. I host my own mail server. Recently, there has been a push towards more stringent mail server authentication to fight spam and abuse. One of the simple ways of controlling which server is allowed to send mail for a domain is the Sender Policy Framework. Zakir Durumeric explained it nicely in his Neither Snow Nor Rain Nor MITM talk at the 32C3.

The effort towards more authentication seems to be headed by Google. That is not surprising: Google Mail is e-mail for most people nowadays. If anyone can push for changes in the infrastructure, it's them. A while ago Google published some statistics regarding the adoption of different standards for their inbound mail. Just recently, they also added visible warnings for their users when mail they received was not sent from an authenticated server. Just how much an average user can do about that (except perhaps pressure their correspondents into using Google Mail) seems questionable though.

Anyway, I implemented an SPF check for inbound mail on my server some time ago. I never explicitly rejected mail based on it, however; my MTA just adds a header to incoming messages. My guess was that the added header might get picked up by the Bayesian spam filter if it ever became a significant signal. After reading about Google's efforts I was wondering what the situation regarding SPF checks looks like for me. Obviously, I see a very different sample of the world's e-mail traffic than Google's servers do.

For this experiment I took a three-month sample of inbound e-mail received by my server between November 2015 and January 2016. The mail was classified by Bogofilter into spam and non-spam, mostly based on textual content. SPF records were evaluated by spf-tools-perl upon reception. An explanation of the results (what softfail, permerror, etc. mean) is here.

SPF evaluation results for non-spam messages.

SPF evaluation results for spam messages.

As you can see, the situation in this little corner of the Internet is much less optimistic than the 95.3% SPF adoption rate that Google sees. More than half of the mail I see doesn't have an SPF record. A successful SPF validation doesn't look like that strong a signal for spam filtering either, with 22% of spam mail passing the check.

It's nice that I saw no hard SPF failures for non-spam mail. I checked my inbox for mail that had softfails and permerrors. Some of it was borderline spammy, and some was legitimate, where the failure appeared to be due to the sender having a misconfigured SPF record.

Another interesting thing I noticed is that some sneaky spam comes with its own headers claiming SPF evaluation. This might be a problem if the MTA just adds another Received-SPF header at the bottom and doesn't remove the existing ones. If you then have a simple filter on Received-SPF: pass somewhere later in the pipeline, it's likely the filter will hit the spammer's header first instead of the header your MTA added.
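A defensive fix is to strip any Received-SPF headers that arrive with the message, so the only one left standing is the one your own MTA adds. A sketch of that step in Python:

import email

def strip_forged_spf(raw_message):
    msg = email.message_from_string(raw_message)
    # Deleting a header on a Message removes all its occurrences.
    del msg["Received-SPF"]
    return msg.as_string()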

Posted by Tomaž | Categories: Life | Comments »