Moving Dovecot indexes and control files

23.12.2016 22:06

The Dovecot IMAP server can use a standard Maildir for storage of messages inside users' home directories. The default in that case is to store search indexes and control files in the same directory structure, alongside the mail files. That can be convenient, since no special setup is needed and everything is stored in the same place.

However, this doesn't work very well if you have disk quotas enabled on the filesystem that stores the Maildirs. If a user reaches their quota, Dovecot will not be able to write to its own files, which can lead to problems. Hence, the documentation recommends that you configure a separate location for Dovecot's files in that case. This is done with the INDEX and CONTROL options in the mail_location specification.

For example, after setting up an appropriate directory structure and permissions under /var/lib/dovecot:

mail_location = maildir:%h/Maildir:INDEX=/var/lib/dovecot/index/%u:CONTROL=/var/lib/dovecot/control/%u

You can just set this up and leave the old index and control files in place. In that case, Dovecot will automatically regenerate them. However, this is not ideal. It can take significant time to regenerate indexes if you have a lot of mail. You also lose some IMAP-related metadata, like message flags and unique IDs, which will confuse IMAP clients. It would be better to move the existing files to the new location; however, the documentation doesn't say how to do that.

I found that the following script works with Dovecot 2.2.13 on Debian Jessie. As always, be careful when dealing with other people's mail and double check that the script does what you want. I had my share of problems when coming up with this. Make backups.


#!/bin/bash
set -ue

# Run with "user path-to-users-maildir" arguments, once for each user.
USERNAME="$1"
MAILDIR="$2"

DOVECOTDIR=/var/lib/dovecot

# Make sure that Dovecot isn't running or that this specific IMAP user isn't
# connected (and can't connect) while this script runs!
cd "$MAILDIR"

# Change to MV="mv -i" after double-checking that this script does what
# you want.
MV="echo mv -i"

# Index files like dovecot.index, dovecot.index.cache, etc. go under the
# INDEX directory. The directory structure should be preserved. For example,
# ~/Maildir/.Foo/dovecot.index should go to index/.Foo/dovecot.index.

# The exception is index files in the root of the Maildir. Those should go
# under index/.INBOX.
b="$DOVECOTDIR/index/$USERNAME/.INBOX"
mkdir -p "$b"
$MV *index* "$b"

find . -name "*index*" | while read a; do
	b="$DOVECOTDIR/index/$USERNAME/`dirname "$a"`"
	mkdir -p "$b"
	$MV "$a" "$b"
done

# dovecot-uidlist and dovecot-keywords files should go under CONTROL, in a
# similar way to the indexes. There is the same exception for .INBOX.
b="$DOVECOTDIR/control/$USERNAME/.INBOX"
mkdir -p "$b"
$MV dovecot-uidlist dovecot-keywords "$b"

find . -name "*dovecot*" | while read a; do
	b="$DOVECOTDIR/control/$USERNAME/`dirname "$a"`"
	mkdir -p "$b"
	$MV "$a" "$b"
done

# The subscriptions file should go to the root of the control directory.

# Note that the commands above also move some dovecot-* files into the root
# of the control directory. This seems to be fine.
$MV subscriptions "$DOVECOTDIR/control/$USERNAME"
Posted by Tomaž | Categories: Code | Comments »

About the Wire loop probe

15.12.2016 21:08

Recently I was writing about how my father and I were checking a HiFiBerry board for a source of Wi-Fi interference. For want of better equipment we used a crude near-field probe that consisted of a loop of stripped coaxial cable and a trimmer capacitor. We attempted to tune this probe to around 2.4 GHz using the trimmer to get more sensitivity. However, we didn't see any effect of capacitance changes on the response in that band.

The probe was made very much by gut feeling, so it wasn't that surprising that it didn't work as expected. We got some interesting results nonetheless. Still, I thought I might do some follow-up calculations to see how far off we were in our estimates of the resonance frequency.

Our probe looked approximately like the following schematic (photograph). The loop diameter was around 25 mm and the wire diameter was around 1 mm. The trimmer capacitor was around 10 pF:

Wire loop at the end of a coaxial cable.

Inductance of a single, circular loop of wire in air is:

L = \mu_0 \frac{D}{2} \left( \ln \frac{8D}{d} - 2 \right) \approx 50 \mathrm{nH}

The wire loop and the capacitor form a series LC circuit. If we ignore the effect of the coaxial cable connection, the resonant frequency of this circuit is:

f = \frac{1}{2 \pi \sqrt{LC}} \approx 200 \mathrm{MHz}

So it appears that we were off by an order of magnitude. In fact, this result is close to the low frequency peak we saw on the spectrum analyzer at around 360 MHz:

Emissions from the HiFiBerry board from DC to 5 GHz.

Working backwards from the equations above, we would need capacitance below 1 pF or loop diameter on the order of millimeters to get resonance at 2.4 GHz. These are very small values. Below 1 pF, stray capacitance of the loop itself would start to become significant and a millimeter-sized loop seems too small to be approximated with lumped elements.
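These back-of-the-envelope numbers are easy to reproduce. A quick sketch using the values from the text (25 mm loop, 1 mm wire, 10 pF trimmer):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability [H/m]
D = 25e-3              # loop diameter [m]
d = 1e-3               # wire diameter [m]
C = 10e-12             # trimmer capacitance [F]

# Inductance of a single circular loop of wire in air.
L = MU_0 * (D / 2) * (math.log(8 * D / d) - 2)

# Resonant frequency of the series LC circuit.
f = 1 / (2 * math.pi * math.sqrt(L * C))

# Working backwards: capacitance needed for resonance at 2.4 GHz
# with the same loop.
C_24 = 1 / ((2 * math.pi * 2.4e9) ** 2 * L)

print("L = %.0f nH" % (L / 1e-9))                  # roughly 50 nH
print("f = %.0f MHz" % (f / 1e6))                  # roughly 200 MHz
print("C for 2.4 GHz = %.3f pF" % (C_24 / 1e-12))  # well below 1 pF
```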

Posted by Tomaž | Categories: Analog | Comments »

HiFiBerry and Wi-Fi interference

01.12.2016 11:43

HiFiBerry is a series of audio output cards designed to sit on the Raspberry Pi 40-pin GPIO connector. I've recently bought the DAC+ pro version for my father to use with a Raspberry Pi 3. He is making a custom box to use as an Internet radio and music player. I picked HiFiBerry because it seemed the simplest, with the fewest things that could go wrong (the Cirrus Logic board for instance has many other features in addition to an audio output). It's also well supported out-of-the-box in various Raspberry Pi Linux distributions.

Unfortunately, my father soon found out that the internal wireless LAN adapter on the Raspberry Pi 3 stopped working when HiFiBerry was plugged in. Apparently other people have noticed that as well, as there is an open ticket about it at the Raspberry Pi fork of the Linux kernel.

Several possible causes were discussed on the thread on GitHub, from hardware issues to kernel driver bugs. Of those, I found electromagnetic interference the most likely explanation - reports say that the issue isn't always there and depends on the DAC sampling rate and the Wi-Fi channel and signal strength. I thought I might help resolve the issue by offering to make a few measurements with a spectrum analyzer (also, when you have RF equipment on the desk, everything looks like EMI).

HiFiBerry board with a near-field probe over the resonators.

I don't have any near-field probes handy, so we used an ad-hoc probe made from a small wire loop on the end of a coaxial cable. We attempted to tune the loop using a trimmer capacitor to get better sensitivity around 2.4 GHz, but the capacitor didn't have any noticeable effect. We swept this loop over the surface of the HiFiBerry board as well as the Raspberry Pi 3 board underneath.

During these tests, the wireless LAN and Bluetooth interfaces on-board Raspberry Pi were disabled by blacklisting brcmfmac, brcmutil, btbcm and hci_uart kernel modules in /etc/modprobe.d. Apart from this, the Raspberry Pi was booted from an unmodified Volumio SD card image. Unfortunately we don't know what kind of ALSA device settings the Volumio music player used.
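For reference, blacklisting those modules amounts to a file along these lines (the filename is arbitrary; the module list is the one given above):

```
# /etc/modprobe.d/blacklist-wifi.conf
blacklist brcmfmac
blacklist brcmutil
blacklist btbcm
blacklist hci_uart
```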

What we noticed is that the HiFiBerry board seemed to radiate a lot of RF energy all over the spectrum. The most worrying are spikes approximately 22.6 MHz apart in the 2.4 GHz band that is used by IEEE 802.11 wireless LAN. Note that the peaks on the screenshot below almost perfectly match the center frequencies of channels 1 (2.412 GHz) and 6 (2.437 GHz). The peaks continue to higher frequencies beyond the right edge of the screen and the next two match channels 11 and 14. This seems to approximately match the report from Hyperjett about which channels seem to be most affected.
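As a rough cross-check, harmonics of the observed ~22.6 MHz spacing do land inside the channels mentioned. A quick sketch (channel center frequencies are the standard 802.11 values; the 22 MHz channel width is approximate):

```python
# 2.4 GHz band 802.11 channel centers in MHz.
channels = {1: 2412, 6: 2437, 11: 2462, 14: 2484}

spacing = 22.6  # MHz, spacing between the observed spikes

def nearest_harmonic(center):
    """Harmonic number and frequency of the spike closest to a channel center."""
    n = round(center / spacing)
    return n, n * spacing

for ch, center in channels.items():
    n, harmonic = nearest_harmonic(center)
    inside = abs(harmonic - center) <= 11  # within the ~22 MHz channel width
    print("channel %2d: harmonic %3d at %.1f MHz, %s" %
          (ch, n, harmonic, "inside" if inside else "outside"))
```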

Emissions from the HiFiBerry board in the 2.4 GHz band.

The spikes were highest when the probe was centered over the crystal resonators. This position is shown on the photograph above. This suggests that the oscillators on the HiFiBerry are the source of this interference. Phil Elwell mentions some possible I2S bus harmonics, but the frequencies we saw don't seem to match those.

Emissions from the HiFiBerry board down to 1 GHz.

Scanning lower frequencies shows that the highest peak is around 360 MHz, but that is likely because of the sensitivity of our probe and not due to something related to the HiFiBerry board.

Emissions from the HiFiBerry board from DC to 5 GHz.

I'm pretty sure these emissions are indeed connected with the HiFiBerry itself. With the probe on the Raspberry Pi board underneath the HiFiBerry, the spectrum analyzer barely registered any activity. Unfortunately, I forgot to take some measurements with a 2.4 GHz antenna to see how much of this is radiated out into the far-field. I'm guessing not much, since it doesn't seem to affect nearby wireless devices.

Related to that, another experiment points towards this being an EMI issue. If you connect a Wi-Fi dongle to the Raspberry Pi via a USB cable, it will work reliably as long as the dongle is kept away from the HiFiBerry board. However, if you put it a centimeter above the HiFiBerry board, it will lose the connection to the access point.

In conclusion, everything I saw seems to suggest that this is a hardware issue. Unfortunately the design of the HiFiBerry board is not open, so it's hard to be more specific or suggest a possible solution. The obvious workaround is to use an external wireless adapter on a USB extension cable, located as far from the board as feasible.

I should stress though that the measurements we did here are limited by our probe, which was very crude, even compared to a proper home-made one. While the frequencies of the peaks are surely correct, the measured amplitudes don't have much meaning. Real EMI testing is done with proper tools in an anechoic chamber, but that is somewhat out of my league at the moment.

Posted by Tomaž | Categories: Analog | Comments »

AFG-2005 noise generation bug

09.10.2016 13:49

The noise output function in the GW Instek AFG-2005 arbitrary function generator appears to have a bug. If you set up the generator to output a sine wave at 10 kHz, and then switch to noise output using the FUNC key, you get output like this:

Noise output in frequency domain.

Noise output in time domain.

The red trace shows the spectrum of the signal from the signal generator using the FFT function of the oscilloscope. The yellow trace shows the signal in the time domain.

The noise spectrum looks flat and starts to roll off somewhere beyond 10 MHz. This is what you would expect for an instrument that is supposed to have a 20 MHz DAC. However, if you set the output to a sine wave at 1 kHz before switching to noise output, the signal looks significantly different:

Noise output in frequency domain, low sampling rate.

Noise output in time domain, low sampling rate.

These two latter oscilloscope screenshots were made using the same settings as the pair above.

This is obviously an error on the part of the signal generator. The setting for the sine wave output shouldn't affect the noise output. It looks like the DAC is now running at only around a 4 MHz sampling rate. It was probably switched to a lower rate for the low-frequency sine wave output, and the firmware forgot to switch it back for the noise function.

This behavior is perfectly reproducible. If you switch back to sine wave output, increase the frequency to 10 kHz or more and switch to noise output, the DAC sampling rate is increased again. Similarly, if you set a 100 Hz sine wave, the DAC sampling rate is set to 400 kHz. As far as I can see there is no mention of this in the manual and you cannot set the sampling rate manually. The FREQ button is disabled in Noise mode and there is no indication on the front panel about which sampling rate is currently used.
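The observations above are consistent with a simple rule of thumb: the DAC sampling rate appears to be about 4000 times the last-used sine frequency, capped at the instrument's 20 MHz maximum. Note that this is only my guess fitted to three data points, not documented behavior:

```python
# Hypothetical model of the apparent sampling rate selection, inferred
# from the observations in the text. Not documented by GW Instek.
def apparent_sampling_rate(sine_freq_hz):
    return min(4000 * sine_freq_hz, 20e6)

print(apparent_sampling_rate(100))    # 400 kHz, as observed
print(apparent_sampling_rate(1e3))    # 4 MHz, as observed
print(apparent_sampling_rate(10e3))   # capped at the 20 MHz maximum
```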

I've written about the AFG-2005 before. It's a useful instrument, but things like this make it absolutely necessary to always have an oscilloscope at hand to verify that the generator is actually doing what you think it should be doing.

Posted by Tomaž | Categories: Life | Comments »

A naive approach to rain prediction

02.10.2016 19:49

Ever since the Slovenian environment agency started publishing Doppler weather radar imagery, I've been a regular visitor to their web site. For the last few years I've tried to use my bicycle for my daily commute as much as possible. However, I'm not such a hard-core fan of biking that I would do it in any weather. The animated map with the recent rainfall estimate history helps very much with that: it's quite easy to judge by eye whether there will be rain in the next 30 minutes or so, and hence whether to seek alternative modes of transportation.

Some time ago I had a more serious (or should I say scientific) use for weather data, related to one of the projects at the Institute. Gašper helpfully provided me with some historical archives then and I also started collecting images myself straight from ARSO website. That project came and went, but the amount of data on my disk kept growing. I've been continuously tempted to do something interesting with it. I've previously written what the data seems to reveal about the radar itself.

Most of all, I've been playing with the idea of doing that should-I-take-the-bus prediction automatically. Obviously I'm not a weather expert, so my experiments were very naive from that perspective. For instance, a while ago I tried estimating an optical flow field from the apparent movement of regions with rain and then using that to predict their movement in the near future. That didn't really work. Another idea I had was to simply dump the data into a naive Bayesian model. While that also didn't work to any useful degree as far as prediction is concerned, it did produce some interesting results worth sharing.

What I did was model rain at any point in the radar image (x, y) and time t as a random event:

R_{x,y,t}
To determine whether the event happened or not from the historical data, I simply checked whether the corresponding pixel was clear or not - I ignored the various rates of rainfall reported. To calculate the prior probability of rain on any point on the map, I did a maximum-likelihood estimation:

P(R_{x,y}) = \frac{n_{x,y}}{N}

Here, n_{x,y} is the number of images where the point shows rain and N is the total number of images in the dataset.

I was interested in predicting rain at one specific target point (x0, y0) based on recent history of images. Hence I estimated the following conditional probability:

P(R_{x_0,y_0,t+\Delta t}|R_{x,y,t})

This can be estimated from historical data in a similar way as the prior probability. In other words, I was interested in how strongly rain at some arbitrary point (x, y) on the map at time t is related to the fact that it will rain at the target point at some future time t + Δt. Are there any points on the map that, for instance, always show rain 30 minutes before it rains in Ljubljana?
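A minimal sketch of this estimation, assuming the radar history is held in a boolean NumPy array rain of shape (images, height, width) with one image per time step. The names and data layout here are illustrative, not the actual code I used:

```python
import numpy as np

def conditional_rain_map(rain, x0, y0, steps_ahead):
    """Estimate P(rain at (x0, y0) at t+dt | rain at (x, y) at t) per pixel.

    rain: boolean array of shape (num_images, height, width),
    steps_ahead: time difference dt expressed in image time steps.
    """
    past = rain[:len(rain) - steps_ahead]        # images at time t
    future_target = rain[steps_ahead:, y0, x0]   # target pixel at t + dt

    # Count joint occurrences (rain here now AND rain at target later)
    # and marginal occurrences (rain here now) over the whole history.
    joint = (past & future_target[:, None, None]).sum(axis=0)
    marginal = past.sum(axis=0)

    # Maximum-likelihood estimate; NaN where it never rained at a pixel.
    with np.errstate(invalid="ignore"):
        return joint / marginal
```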

The video below shows this conditional probability for a few chosen target points around Slovenia (marked by small white X). Brighter colors show higher conditional probability and hence stronger relation. The prior probability is shown in lower-right corner. In the video, the time difference runs in reverse, from 4 hours to 0 in 10 minute steps. For each point, the animation is repeated 4 times before moving to the next point.

The estimates shown are calculated from 62573 radar images recorded between July 2015 and September 2016. Since the format of the images has been changing over time it's hard to integrate older data.

As you might expect, when looking several hours into the future, there is very little relation between points on the map. All points are dull red. If it rains now in the east, it might rain later in Ljubljana. Similarly, if it rains in the west. There's no real difference - the only information you get is that it's generally rainy weather in the whole region.

When you decrease the time difference, you can see that nearby points are starting to matter more than those far away. Brighter colors concentrate around the target. Obviously, if it rains somewhere around Ljubljana, there's a higher chance it will shortly rain in Ljubljana as well. If you note the color scale though, it's not a particularly strong relation unless you're well below one hour.

What's interesting is that for some cities you can see that rain more often comes from a certain direction. Around the coast and in Notranjska region, the rain clouds seem to mostly move in from the Adriatic sea (lower-left corner of the map). This seems to fit the local saying you can sometimes hear, that the "weather comes in from the west". In the mountains (top-left), it seems to come from the north. All this is just based on one year of historical data though, so it might not be generally true over longer periods.

Of course, such simple Bayesian models are horribly out-of-fashion these days. A deep learning convolutional neural network might work better (or not), but alas, I'm more or less just playing with data on a rainy weekend and trying to remember machine learning topics I used to know. There's also the fact that ARSO themselves now provide a short-term rain prediction through an application on their website. It's not the easiest thing to find (Parameter Selection - Precipitation in the menu and then Forecast on the slider below). I'm sure their models are way better than anything an amateur like me can come up with, so I doubt I'll spend any more time on this. I might try to add the forecast to ARSO API at one point though.

Posted by Tomaž | Categories: Life | Comments »

Blacklisting users for inbound mail in Exim

18.09.2016 12:00

You can prevent existing local users from receiving mail by redirecting them to :fail: in /etc/aliases. For example, to make SMTP delivery to list@... fail with 550 Unrouteable address:

list: :fail:Unrouteable address

See special items section in the Redirect router documentation.

By default, Exim on Debian will attempt to deliver mail for all user accounts, even non-human system users. System users (like list above) typically don't have a traditional home directory set in /etc/passwd. This means that mail for them will get stuck in the queue as Exim tries and fails to write to their mailbox. Because spam also gets sent to such addresses, the mail queue will grow and various things will start to complain. Traditionally, mail for system accounts is redirected to root in /etc/aliases, but some accounts just receive a ton of spam and it's better to simply reject mail sent to them.
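As an illustration, an /etc/aliases mixing both conventions might look like this (the entries other than list are hypothetical examples):

```
# Redirect most system accounts to root, but reject mail for "list" outright.
daemon: root
www-data: root
list: :fail:Unrouteable address
```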

Another thing worth pointing out is the Handling incoming mail for local accounts with low UID section in README.Debian.gz in case you want to reject mail sent to all system accounts.

This took way too much time to figure out. There's a ton of guides on how to blacklist individual users for outgoing mail, but none I could find for cases like this. I was half-way into writing a custom router before I stumbled upon this feature.

Posted by Tomaž | Categories: Code | Comments »

Script for setting up multiple terminals

16.09.2016 19:04

Sometimes I'm working on software that requires running a lot of different inter-dependent processes (are microservices still a thing?). Using systemd or some other init system to start up such systems is fine for production. While debugging something on my laptop, however, it's useful to have each process running in its own X terminal. This allows me to inspect any debug output and to occasionally restart something. I used to have scripts that would run commands in individual GNU screen sessions, but that had a number of annoying problems.

I recently came up with the following:


#!/bin/bash
set -ue

if [ "$#" -ne 1 ]; then
	echo "USAGE: $0 path_to_file"
	echo "File contains one command per line to be started in a terminal window."
	exit 1
fi

cat "$1" | while read CMD; do
	# Skip empty lines and comments.
	if [ -z "$CMD" -o "${CMD:0:1}" = "#" ]; then
		continue
	fi

	RCFILE=`mktemp`

	cat > "$RCFILE" <<END
source ~/.bashrc
history -s $CMD
echo $CMD
$CMD
END

	gnome-terminal -e "/bin/bash --rcfile \"$RCFILE\""
	rm "$RCFILE"
done
This script reads a file that contains one command per line. Empty lines and lines starting with a hash sign are ignored. For each line it opens a new gnome-terminal (adjust as needed - most terminal emulators support the -e argument) and runs the command in a way that:

  • The terminal doesn't immediately close after the command exits. Instead it drops back to bash. This allows you to inspect any output that got printed right before the process died.
  • The command is printed on top of the terminal before the command runs. This allows you to identify the terminal running a particular process in case that is not obvious from the command output. For some reason, gnome-terminal's --title doesn't work.
  • The command is appended to the top of bash's history list. This allows you to easily restart the process that died (or that you killed with Ctrl-C) by simply pressing the up cursor and enter keys.
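For illustration, a commands file might look like this (the contents are hypothetical; blank lines and lines starting with a hash sign are skipped, as described above):

```
# backend
python3 -m http.server 8000

# logs
tail -f /var/log/syslog
```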
Posted by Tomaž | Categories: Code | Comments »

The NumPy min and max trap

30.08.2016 20:21

There's an interesting trap that I've managed to fall into a few times when doing calculations with Python. NumPy provides several functions with the same names as functions built into Python. These replacements typically provide better integration with array types. Among them are min() and max(). In the vast majority of cases, the NumPy versions are a drop-in replacement for the built-ins. In a few, however, they can cause some very hard-to-spot bugs. Consider the following:

import numpy as np

print(max(-1, 0))
print(np.max(-1, 0))

This prints (at least in NumPy 1.11.1 and earlier):

0
-1
Where is the catch? The built-in max() can be used in two distinct ways: you can either pass it an iterable as the single argument (in which case the largest element of the iterable is returned), or you can pass multiple arguments (in which case the largest argument is returned). In NumPy, max() is an alias for amax(), and that only supports the former convention. The second argument in the example above is interpreted as the array axis along which to compute the maximum. It appears that NumPy thinks axis zero is a reasonable choice for a zero-dimensional input and doesn't complain.

Yes, recent versions of NumPy will complain if you have anything other than 0 or -1 in the axis argument. Having max(x, 0) in code is not that unusual though. I use it a lot as a shorthand when I need to clip negative values to 0. When moving code around between scripts that use NumPy, scripts that don't, and IPython Notebooks (which do "from numpy import *" by default), it's easy to mess things up.

I guess both sides are to blame here. I find that flexible functions that interpret arguments in multiple ways are usually bad practice and I try to leave them out of interfaces I design. Yes, they are convenient, but they also often lead to bugs. On the other hand, I would also expect NumPy to complain about the nonsensical axis argument. Axis -1 makes sense for a zero-dimensional input, axis 0 doesn't. The alias from max() to amax() is dangerous (and as far as I can see undocumented). A possible way to prevent such mistakes would be to support only the named version of the axis argument.
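When the goal is clipping negative values to zero, the element-wise NumPy functions avoid the ambiguity entirely:

```python
import numpy as np

# np.maximum is the element-wise, two-argument maximum; it works on
# scalars and arrays alike and has no axis argument to trip over.
print(np.maximum(-1, 0))     # 0, as intended
print(np.clip(-1, 0, None))  # 0, same result via clipping

# The built-in max still works when neither argument is an array:
print(max(-1, 0))            # 0
```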

Posted by Tomaž | Categories: Code | Comments »

Linux loader for DOS-like .com files

25.08.2016 18:09

Back in the olden days of DOS, there was a thing called a .com executable. They were obsolete even then, being basically inherited from CP/M and replaced by DOS MZ executables. Compared to modern binary formats like ELF, .com files were exceedingly simple. There was no file header, no metadata, no division between code and data sections. The operating system would load the entire file into a 16-bit memory segment at offset 0x100, set the stack and the program segment prefix and jump to the first instruction. It was more of a convention than a file format.

While this simplicity created many limitations, it also gave rise to many fun little hacks. You could write a binary executable directly in a text editor. The absence of headers meant that you could make extremely small programs. A minimal executable in Linux these days is somewhere in the vicinity of 10 kB. With some clever hacking you might get it down to a hundred bytes or so. A small .com executable can be on the order of bytes.

A comment on Hacker News recently caught my attention: one could write a .com loader for Linux pretty easily. How easy would that be? I had to try it out.

/* Map address space from 0x00000 to 0x10000. */

void *p = mmap(	(void*)0x00000, 0x10000,
		PROT_READ|PROT_WRITE|PROT_EXEC,
		MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED,
		-1, 0);

/* Load file at address 0x100. */

/* (a boring loop here reading from a file to memory.) */

/* Set stack pointer to end of allocated area, jump to 0x100. */

asm(
	"mov    $0x10000, %rsp\n"
	"jmp    0x100\n"
);

This is the gist of it for Linux on x86_64. First we have to map 64 kB of memory at the bottom of our virtual address space. This is where NULL pointers and other such beasts roam, so it's unallocated by default on Linux. In fact, as a security feature the kernel will actively prevent such calls to mmap(). We have to disable this protection first with:

$ sysctl vm.mmap_min_addr=0

Memory mappings have to be aligned with page boundaries, so we can't simply map the file at address 0x100. To work around this we use an anonymous map starting at address 0 and then fill it in manually with the contents of the .com file at the correct offset. Since we have the whole 64 bit linear address space to play with, choosing 0x100 is a bit silly (code section usually lives somewhere around 0x400000 on Linux), but then again, so is this entire exercise.

Once we have everything set up, we set the stack pointer to the top of our 64 kB and jump to the code using some in-line assembly. If we were being pedantic, we could set up the PSP as well. Most of it doesn't make much sense on Linux though (except for command-line parameters maybe). I didn't bother.

Now that we have the loader ready, how do we create a .com binary that will work with it? We have to turn to assembly:

section .text

_start:
    call   greet

    ; exit(0)
    mov    rax, 60
    mov    rdi, 0
    syscall

; A function call, just to show off our working stack.
greet:
    ; write(1, msg, 14)
    mov     rax, 1
    mov     rdi, 1
    mov     rsi, msg
    mov     rdx, 14
    syscall
    ret

section .data
    msg db      "Hello, World!", 10

This will compile nicely into an x86_64 ELF object file with NASM. We then have to link it into an ELF executable using a custom linker script that tells the linker that all sections will be placed in a chunk of memory starting at 0x100:

MEMORY
{
	RAM (rwx) : ORIGIN = 0x0100, LENGTH = 0xff00
}

The linker will create an executable which contains our code with all the proper offsets, but still has all the ELF cruft around it (it will segfault nicely if you try to run it with the kernel's default ELF loader). As the final step, we must dump its contents into a bare binary file using objcopy:

$ objcopy -S --output-target=binary hello.elf

Finally, we can run our .com file with our loader:

$ ./loader
Hello, World!

As an extra convenience, we can register our new loader with the kernel, so that it will be invoked each time you try to execute a file with the .com extension (update-binfmts is part of the binfmt-support package on Debian):

$ update-binfmts --install com /path/to/loader --extension com
$ ./
Hello, World!

And there you have it, a nice 59 byte "Hello, World" binary for Linux. If you want to play with it yourself, see the complete example on GitHub that has a Makefile for your convenience. If you make something small and fun with it, please drop me a note.

One more thing. In case it's not clear at this point, this loader will not work with DOS executables (or CP/M ones, for that matter). Those expect to be run in 16-bit real mode and rely on DOS services. Needless to say, my loader makes no attempt to provide those. Code will run in the same environment as other Linux user space processes (albeit at a weird address) and must use the usual kernel syscalls. If you want to run old DOS stuff, use DOSBox or something similar.

Posted by Tomaž | Categories: Code | Comments »

A thousand pages of bullet journal

12.08.2016 17:27

A few weeks ago I filled the 1000th page in my Bullet Journal. Actually, I don't think I can call it that, since it's not really oriented around bullet points. It's just a series of notebooks with consistently numbered, dated and titled pages for easy referencing, monthly indexes and easy-to-see square bullet points for denoting tasks. Most of the things I said two years ago still hold, so I'll try not to repeat myself too much here.

A thousand pages worth of notebooks.

Almost everything I do these days goes into this journal. Lab notes, sketches of ideas, random thoughts, doodles of happy ponies, to-do lists, pages worth of crossed-out mathematical derivations, interesting quotes, meeting notes and so on. Writing things down in a notebook often significantly clears them up. Once I have a concise and articulated definition of a problem, the solution usually isn't far away. Pen and paper helps me keep focus at talks and meetings, much like an open laptop does the opposite.

Going back through past notes gives a good perspective on how much new ideas depend on the context and mindset that created them. An idea for some random side-project that seems interesting and fun at first invariably looks much less shiny and worthy of attention after reading through the written version a few days or weeks later. I can't decide though whether it's better to leave such a thing on paper or hack together some half-baked prototype before the initial enthusiasm fades away.

The number of pages I write per month appears to be increasing. That might be because I settled on using cheap school notebooks. I find that I'm much more relaxed scribbling into a 50 cent notebook than ruining a 20€ Moleskine. Leaving lots of whitespace is wasteful, but helps a lot with readability and later corrections. Yes, whipping out a colorful children's notebook at an important meeting doesn't look very professional. Then again, most people at such meetings are too busy typing emails into their laptops to notice.

Number of pages written per month.

As much as it might look like a waste of time, I grew to like the monthly ritual of making an index page. I like the sense of achievement it gives me when I look back at what I've accomplished the previous month. It's also an opportunity for reflection. If the index gets hard to put all on one page, that's a good sign that the previous month was all too fragmented and that too many things wanted to happen at once.

The physical nature of the journal means that I can't carry the whole history with me at all times. That is also sometimes a problem. It is an unfortunate feature of my line of work that it is not uncommon for people to want to have unannounced meetings about a topic that was last discussed half a year ago. On the other hand, saying that I don't have my notes on me at that moment does present an honest excuse.

Indexes help, but finding things can be problematic. Then again, digital content (that's not publicly on the web) often isn't much better. I commonly find myself frustratingly searching for some piece of code or a document I know exists somewhere on my hard disk but can't remember any exact keyword that would help me find it. I considered making a digital version of monthly indexes at one point. I don't think it would be worth the effort and it would destroy some of the off-line quality of the notebook.

As I mentioned previously, gratuitous cross-referencing between notebook pages, IPython notebooks and other things does help. I tend not to copy tasks between pages, like in the original Bullet Journal idea. For projects that are primarily electronics related though, I'm used to keeping a separate folder with all the calculations and schematics, a habit I picked up long ago. There are not many such projects these days, but I did on one occasion photocopy pages from the notebook. I admit that made me feel absolutely archaic.

Posted by Tomaž | Categories: Life | Comments »