Reversible computing

10.04.2009 19:35

I've stumbled upon a Wikipedia page about reversible computing recently. It's a concept I haven't seen before.

The idea is that there's a theoretical limit to the efficiency of computation. For every irreversible bit operation (for example AND, OR or XOR, which map two input bits onto one output bit, but not NOT), the number of possible microscopic states of the thermodynamic system doing the computation is thought to double, increasing its entropy by:

\Delta S = k\ln{2}

If the temperature T of the computer remains constant, an amount of heat must be released to the environment:

\Delta Q = kT\ln{2}

That's called the von Neumann-Landauer limit, and it comes to 33 picowatts at a 10 GHz clock and 75°C.
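The figure is easy to sanity-check. Here's a minimal Python sketch, assuming one bit is irreversibly erased per clock cycle:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_power(clock_hz, temp_c):
    """Minimum power dissipated when one bit is irreversibly
    erased per clock cycle: P = f * k * T * ln(2)."""
    temp_k = temp_c + 273.15
    return clock_hz * K_B * temp_k * math.log(2)

print(landauer_power(10e9, 75))  # ~3.3e-11 W, i.e. about 33 picowatts
```

The limit scales linearly with both the clock frequency and the absolute temperature, so cooling the computer only buys you a modest factor.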

As its name suggests, reversible computing explores the possibility of doing computation with only reversible operations, that is, logic gates that have a one-to-one mapping between input and output states. In contrast to irreversible operations, there is no known lower limit to the amount of work such a gate has to do.

Such computers would be very different from what we have today. For example, a mere variable assignment in procedural languages is irreversible, since the previous content of the variable is lost. On the other hand, you could run a program backwards starting with the result and ending with the input parameters. That would be an interesting feature for a debugger.

As I see it, this kind of research fits in the same box as Dyson spheres and warp drives. These limits won't be reached in practice any time soon (though they may bug the Universal AC eventually). But it is still fascinating to see just how far you can push a concept before the most basic physics starts to interfere with you.

Posted by Tomaž | Categories: Ideas | Comments »

Three rollers and a ruler

25.12.2008 11:04

I've seen a lot of discussion recently (e.g. BoingBoing, xkcd) on the Internet about the possibility of a vehicle moving directly downwind faster than the wind. I don't want to go into that debate, but what did catch my attention was the following cute video that demonstrates the behavior of a cart constructed out of three rollers under a moving ruler:

Under the ruler faster than the ruler

(Click to watch Under the ruler faster than the ruler video)

The author's theatrical skills aside, what I found interesting is that very few people had any doubts about the validity of the experiment and the explanation given in this video. When I watched it for the first time, I was pretty sure the little plush monkey got it right. But after some back-of-the-envelope calculations I thought he had been tricked: the cart shouldn't be able to move at all.

So I did an experiment of my own and it just confirmed my thoughts. The experiment is easily duplicated and I encourage you to try it yourself.

However, just before posting my notes I realized there is one minor, but important difference between the geometry of the cart I analyzed and the geometry used in the video. I'm now confident that the video is genuine and I'll be posting the corrected calculations after I get back from Berlin.

Meanwhile, below is my original analytical solution, followed by a video recording of my version of the same experiment (minus the furry spectators). Mind that it's still correct; it just answers a slightly different question (see if you can spot the difference).

Let's start with a simpler case of a single roller on a flat surface:

Single roller

Here vc is the velocity of the center of the roller relative to the ground, while vr1 and vr2 are the velocities of points on the roller's surface relative to the center. Remember, the magnitude of vr is constant around the circumference, while its direction changes and is always tangential to the surface.

Obviously, the velocity of the roller's surface at the point where it touches the ground must be 0 relative to the ground or the roller would be slipping. So for that point the following equation is true (subtracting magnitudes, since vectors are parallel and pointed in opposite directions):

v_r - v_c = 0
v_r = v_c

Now that we know vr, we can calculate the velocity of the top point of the roller (adding magnitudes, since vectors are again parallel, but pointed in the same direction):

v_t = v_r + v_c = 2 \cdot v_c

Single roller with a ruler

So, in this case the top point is moving in the same direction as the center and at twice its speed relative to the ground. And this is of course also the speed of any non-slipping ruler that rests on the top of the roller.

This result is expected: if you're moving a large rock by rolling it on tree trunks, you put trunks in front of the rock and pick them up behind it.

Ok, so let's now go on to the cart. The situation is very similar to the previous example. The centers of all rollers are moving with the same velocity, vc. At the points where the top roller touches the bottom two rollers, the surfaces must have identical velocities, as there is no slipping. From this it follows that the magnitude of vr is equal for all rollers.

Cart with a ruler

Again, we can write equations for the points where the cart touches the top and bottom surfaces:

v_r + v_c = v_t
v_r + v_c = 0

And therefore:

v_t = 0

So the ruler at the top cannot move relative to the ground as long as it is not slipping. The crucial difference here is that the second roller rotates in the opposite direction to the top one. This changes the sign in the second equation, since the vectors vc and vr now point in the same direction at the bottom.

As you can see, the radii of the rollers don't even enter this calculation, so the final result is the same for arbitrary roller dimensions.
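Both cases can be checked numerically. Below is a small Python sketch of the same bookkeeping, using signed one-dimensional velocities (positive in the direction of motion); the function names and structure are mine, not part of the original derivation:

```python
def single_roller_top(v_c):
    """Top-surface velocity of a roller rolling without slipping
    on the ground, with its center moving at v_c."""
    v_r = v_c            # from v_r - v_c = 0 at the ground contact
    return v_c + v_r     # center velocity plus surface velocity at the top

def cart_ruler_velocity(v_c):
    """Velocity of a ruler resting on the top roller of the
    three-roller cart, all roller centers moving at v_c."""
    # A bottom roller rolls on the ground, so its top point moves at 2*v_c.
    bottom_top = single_roller_top(v_c)
    # The top roller's lowest point must match that (no slipping), so
    # relative to its own center it moves forward at bottom_top - v_c.
    v_r_top = bottom_top - v_c
    # Its top point moves backward by the same amount relative to the center.
    return v_c - v_r_top

print(single_roller_top(1.0))    # 2.0: a ruler on one roller moves at twice v_c
print(cart_ruler_velocity(1.0))  # 0.0: a ruler on the cart stays put
```

Note that no roller radius appears anywhere, matching the observation in the text.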

The conclusion therefore is that the ruler is either stationary with respect to the ground or two surfaces are slipping somewhere. It's impossible to move the cart by applying a horizontal force only to the ruler, since the bottom rollers apply exactly the same torque to the top roller, but in the opposite direction.

I've made a series of simple experiments that confirm the theory above. You can see them on video below:

Experiments with a three-roller cart

(Click to watch Experiments with a three-roller cart video)

You can see that moving the ruler in the 4th experiment didn't move the cart - it only caused the ruler to slip along the top wheel.

The only way to move the cart is to apply the force to it directly as in the 3rd experiment, or as the last experiment in the video shows, by resting the ruler on the cart at an angle, so that the force of the ruler is no longer parallel to the force of the ground. The force and torque diagram in that case is left as an exercise for the reader.

Again, you don't have to believe everything I said, but do try it yourself if you have any doubts. Experiments are fun and this one really just takes some cardboard and a couple of minutes (or seconds if you have Legos handy).

Posted by Tomaž | Categories: Ideas | Comments »

Alpha-Centaurians get a free movie

13.12.2008 0:54

NewScientist reports that a Hollywood studio has arranged for their latest movie to be transmitted towards Alpha Centauri.

I'm sure any modestly intelligent life out there is already bored to death with Hollywood and its recycled ideas. What's interesting, though, is that the people doing the transmission had to assure the movie makers that their precious intellectual property could not be intercepted by any resident of Earth. Funny, because as I understand it, the whole idea of this exercise is to provide the video for free to anyone listening on the other end. Which either means that studios don't believe the planets around our neighboring star are inhabited, or that they've already shipped a rocket full of DRM-encumbered receivers to them.

The report also says that the transmission will be done by Deep Space Communications Network, a company with a cheesy web site that will send any message to the stars for a price. Not surprisingly, they don't provide any technical details of the broadcast, but they do seem to have their own interesting idea of the prime directive. They say that they will only send an NTSC or PAL signal (for which you must prove copyright ownership, of course). I guess any aliens with SECAM sets are out of luck then. Oh, and forget about telling those Klingon p'taks to beat it, because they will not send any offensive materials either.

Finally, note that they aren't NASA's DSN. Those guys have more serious things to do than provide marketing campaigns for movie producers.

Posted by Tomaž | Categories: Ideas | Comments »

Computers want to learn too

28.11.2008 21:01

Wikipedia is a wonderful learning resource. It provides a wealth of easily browsable articles on just about every topic. An article on English Wikipedia is a great starting point when you're either merely curious about a specific topic or you're just beginning a more serious study of a subject. Indeed, the ease of access to that much knowledge even poses a problem for some.

XKCD: The problem with Wikipedia

Image by Randall Munroe CC BY-NC 2.5

This is all the realization of the dreams the original creators had of Wikipedia becoming a mainstream, freely accessible and editable encyclopedia. What they probably didn't envisage, however, is that their site would also become an invaluable resource for computers to learn about the world. English Wikipedia, as one of the largest freely accessible corpora, has become an important resource in machine learning research. A lot of work in natural language processing, search algorithms, text classification and similar fields is based on data gathered from Wikipedia. The results of this research are now used by a number of companies and non-profit projects - some directly, like Wikia, Powerset, Tagaroo, FreeBase, DBPedia and, last but not least, Zemanta. Many more use them indirectly, perhaps even unknowingly, by employing methods and algorithms developed from research that was based on, or evaluated against, data from Wikipedia.

What makes Wikipedia inviting for research is that it's the best real-life approximation of a very large repository of structured information. Why is this structure important? After all the promises of past decades, artificial intelligence research has failed to come up with a system that can understand natural language to a degree comparable with an average human. With hopes that a computer could ever learn directly from plain text dashed, it was realized that in order to make computer systems smarter, people must help them understand the important pieces of text. This means that concepts in the text must be clearly marked as having some specific meaning. Only then can the current state-of-the-art algorithms start learning from it, giving rise to intelligent systems that know how to suggest what book you might want to read next, or can directly answer your questions instead of just pointing you to a semi-relevant webpage and leaving the tough part of extracting the information from its text to you.

While Wikipedia isn't properly semantically tagged, it is a good approximation. What makes this possible is its use of templates - an editing tool originally designed to ease the input of data and standardize the layout of specific classes of topics. Since text is entered into templates through a standardized set of parameters, the template gives that text structure that can be used for more than just page layout. For example, text entered for the birthdate parameter of the Infobox People template suddenly becomes a piece of information with a definite meaning: the person described by the article was born on the date described by that piece of text. Even the mere presence of Infobox People on a page classifies it as a biographical page.
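To illustrate, here is a rough Python sketch of pulling a parameter out of infobox wikitext. The snippet, template name and parameter name are only illustrative; real Wikipedia markup is far messier and real extractors (such as DBpedia's) are correspondingly more involved:

```python
import re

# A toy wikitext snippet; the template and parameter names are
# illustrative, not taken from any particular Wikipedia page.
wikitext = """
{{Infobox person
| name      = Ada Lovelace
| birthdate = 10 December 1815
}}
Ada Lovelace was an English mathematician...
"""

def infobox_param(text, name):
    """Pull a single '| name = value' parameter out of wikitext."""
    m = re.search(r"\|\s*%s\s*=\s*(.+)" % re.escape(name), text)
    return m.group(1).strip() if m else None

print(infobox_param(wikitext, "birthdate"))  # 10 December 1815
# The mere presence of the template already classifies the page:
print("{{Infobox person" in wikitext)        # True
```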

DBpedia links between databases

Image by DBpedia

However, not all templates are created equal. Wikipedia, as a collaboratively edited project, has a curious property: a technical feature (like templates) will only be used properly when misuse of the feature is blatantly obvious to ordinary (human) visitors of Wikipedia.

Take for example the category hierarchy. The MediaWiki software that powers Wikipedia supports assigning articles to a hierarchy of categories. By themselves these categories seem like a more natural way of classifying articles than checking which page uses which Infobox template. A closer look however reveals that the category system is wonderfully abused: a lot of pages are put in completely wrong categories, and the hierarchy is full of cycles and nonsensical relationships. The reason is that only a minority of Wikipedia visitors know that the category system exists. Even fewer actually use it to find pages. On the other hand, a Botanical Infobox on a biographical page is so striking to most users that sooner or later somebody will replace it with a more fitting Infobox.

Interlocking by Paul Goyette

Image by Paul Goyette CC BY-SA 2.0

Recently a movement seems to have arisen in the Wikipedia community against adding more specific fields to Infobox templates, voting instead for smaller, more specialized templates dispersed throughout the page. Take for example the decision to move external links to IMDB out of Infobox Film and into smaller templates specialized for linking to IMDB. Or the refusal to add official home page fields to several other templates.

While in theory smaller templates give as much structure to text as larger Infoboxes, in practice they are much more easily abused. An IMDB field in the Infobox can only be used to point to the Internet Movie Database entry for the movie that is the subject of the article the Infobox appears on. If it isn't, it will be very noticeable to anyone who follows that link, and there is a good chance it will be fixed soon. On the other hand, smaller templates can be (and are) used to link to IMDB entries that have only some weak relationship with the subject - for example, a page about an actor can have multiple smaller templates providing links to movies she has acted in. It will not be obvious to the average user that a template which should only point to the IMDB entry equivalent to the Wikipedia page it appears on has been misused. Since a computer cannot understand the text surrounding the links the way a human reader does, it will learn that the concept of the actor is equivalent to the concepts of her movies. Suddenly, a pretty reliable way to link Wikipedia entries to another large database has been made a lot more noisy.

I understand there are (some) good reasons these decisions have been made in the Wikipedia community. Pages with large Infoboxes do become less convenient for human readers and can be time-consuming to keep up-to-date. However Wikipedia editors should acknowledge that Wikipedia has also become an important resource outside their original human audience.

Both goals, an easily readable and editable encyclopedia and a good-quality machine learning resource, are not necessarily incompatible. There are many minor changes that could be made to enhance Wikipedia for machine learners without sacrificing human usability. If some piece of information really cannot be put inside an Infobox, then at least the specialized templates should be designed in a way that makes them hard to abuse. For example, the currently recommended way to link to an IMDB entry is a template that looks like this to a visitor of a page:

TITLE at the Internet Movie Database

Where TITLE is a movie title chosen by the editor who inserted the template. A better, more robust way to build that template would be, for example, to make TITLE always show the name of the current page. This approach, well within MediaWiki's current capabilities, would make it immediately obvious when the template has been misused.

This is a pretty minor change, but it would probably go a long way toward making Wikipedia easier to reliably connect to other databases. If not sooner, this is a problem Wikipedia will have to face when it makes the transition to a semantic MediaWiki, as distant as that seems right now. It's clear that such a change to the IMDB template is no longer possible now that thousands of pages use it, but I do hope that more thought will be given to this problem when the interfaces of new templates are debated.

Posted by Tomaž | Categories: Ideas | Comments »

The sound of hot tea

23.11.2008 16:25

One thing I've wondered about for some time is why, when I pour boiling water from a kettle into a cup, the bubbling sound seems different than when I fill it with cold water. I found it curious, but I never gave it much thought. I always guessed that if I wasn't just imagining the difference, it was more likely an effect of the container I was pouring from, not the temperature of the water itself. I never tried filling a cup with cold water from a kettle to check this assumption.

Teakettle by Mr. T in DC

Photo by Mr. T in DC CC BY-ND 2.0

That is, until the coffee machine in the office broke down. That forced me to use the office water cooler to make tea. You see, this particular water cooler has two identical faucets: one for chilled and one for hot water. And this time it occurred to me that the sound was still different, even though the two faucets are identical in shape. So I went ahead and did an experiment under controlled circumstances to get to the bottom of this.

I made a simple replica of the relevant parts of the water cooler: a funnel with an empty ceramic cup below it, so that when the funnel was quickly filled, the water trickled down into the cup over a period of around 20 seconds (the diameter of the opening was 3 mm, the volume of water 200 ml). I recorded the sound with a microphone placed over the top of the cup. The funnel was high enough that the flow became turbulent.

I did 10 measurements, 5 with water at room temperature and 5 with freshly boiled water just below 100°C. For each measurement I cut out a 10 s long segment starting 3 s into the recording, to ignore any transient effects of filling the funnel. On those cut-outs I computed a discrete Fourier transform.
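The processing step can be sketched in a few lines of Python with NumPy. Here a synthetic 3 kHz tone buried in noise stands in for the actual recording, and the sample rate is an assumption:

```python
import numpy as np

fs = 44100             # sample rate in Hz (assumed)
start, length = 3, 10  # cut out 10 s starting 3 s in, as in the text

# Stand-in for the recording: a 3 kHz tone plus noise, 20 s long.
rng = np.random.default_rng(0)
t = np.arange(20 * fs) / fs
recording = np.sin(2 * np.pi * 3000 * t) + 0.5 * rng.standard_normal(t.size)

# Cut out the segment, window it, and take the magnitude spectrum.
segment = recording[start * fs:(start + length) * fs]
spectrum = np.abs(np.fft.rfft(segment * np.hanning(segment.size)))
freqs = np.fft.rfftfreq(segment.size, 1 / fs)

print(freqs[np.argmax(spectrum)])  # ~3000.0, the tone frequency
```

Superimposing the ten spectra produced this way (red for hot, blue for cold) gives the figure below.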

Sound spectrum of a cup being filled with water

This figure shows all 10 measured spectra superimposed. Measurements with hot water are red, while those with cold water are blue.

The most obvious difference is the nicely defined peak at 3000 Hz: it rises in frequency by almost 500 Hz with hot water. Also noticeable is that the hot water spectra are on average weaker than the cold water ones between 6000 and 12000 Hz.

So it looks like there is a noticeable difference. The question remains what mechanism causes it.

One factor that contributes to the sound is the ringing of the ceramic cup, excited by the falling water. To get the resonant frequencies of an empty cup I did an impulse response test without any water in it (i.e., I hit the cup and recorded the spectrum of the 'ding' sound):

Impulse response spectrum of a ceramic cup

As you can see, this particular cup resonates most strongly at around 2500 Hz, so I'm confident that the similar peak in the spectra in the previous figure has the same cause - the resonant frequency is probably higher when the cup is partly filled with water. I'm not sure why the peak moves with temperature, though. Mechanical resonant frequencies of solid objects do change with temperature, but the shift observed here seems a bit excessive. It's also possible that the difference in water viscosity caused the hot cups to fill up faster and so resonate at higher frequencies during the measurement interval I used. Some more measurements of the response of an empty cup at different temperatures might clear this up.

The change in the higher frequencies is a bit trickier to explain. After a bit of browsing it turned out I'm not the only one asking such silly questions. For example, according to this post on Yahoo Answers, the change can be attributed to tiny droplets of condensed steam in the air. It seems a plausible explanation to me, although I can't think of a simple way to test it.

Posted by Tomaž | Categories: Ideas | Comments »

Best photoshopped pilot ever

13.11.2008 17:13

Recently a video has begun circulating of an airplane that loses a wing during a snap roll. Despite this problem the pilot miraculously manages to save himself and what is left of the airplane. Even a major Slovenian news site picked up the story, attributing the maneuver to James Andresson.

As many have noted, the video is undoubtedly fake. While the basic aerodynamics of the flight appear to be correct, there are glitches: the direction of the initial uncontrolled spin of the aircraft, and the hard landing that unrealistically does not even bend the landing gear. A more careful look at the video also reveals plenty of other clues supporting the theory that it was assembled from several different sources.


However, if you forget for a minute that it's fake, the video actually shows a really good trick. I have some experience piloting model aircraft and I know that aerobatic airplanes are capable of flying with wings in the vertical position ("knife edge" flight). In this position the body of the airplane provides the lift instead of the wings. So, in theory a controlled flight with a missing wing is possible, provided you manage to pull out of the initial spin.

Now, there are always people wanting to show off their skills at RC meetings and competitions. Why doesn't somebody try to replicate this, Mythbusters-style? The wing could be constructed so that one half comes off in mid-flight by remote control. And the model airplane can be one of those modern, lightweight Depron foam types, so that it would survive more than one failed attempt.

It would take one hell of a pilot to do it, but I'm sure that with a trick like this you would be the star of the event. Anyone up for the challenge?

Posted by Tomaž | Categories: Ideas | Comments »

The not so great zero challenge

13.10.2008 21:27

The Great Zero Challenge tries to dispel the myth that you can recover any data from a hard drive that has been intentionally erased by overwriting its contents once with a single stream of zeros.

It's a nice idea with a problem. If it is possible to recover the data, it's probably hugely expensive, and it's highly unlikely that any company capable of doing it would take the challenge for a (recently increased) prize of $500. The amount itself would probably be irrelevant if a large media company were backing the challenge, so that there would be some guarantee of positive media coverage. Still, I congratulate the organizers for betting their money in order to "dispel untruths", as they put it.

Anyway, when I first heard of the challenge some time ago, I noticed something interesting. The challenge says that you must identify the name of at least one of the files or folders on the disk. What if it were possible to win the challenge without even touching the disk? Here's an enlarged portion of one of the censored screenshots available on the challenge website:

Sloppy censor, exhibit 1

Notice the dotted line? The censor was a bit sloppy: that's one side of the selection box. So it looks like the folder was selected in Explorer when the screenshot was made. This gives you a pretty good idea of the length of the folder name. More specifically, since the line is dotted, you know the width of the box is 62 or 63 pixels.

Windows uses a proportional font for file names (and by the looks of the screenshot they used the default Windows XP theme), which further reduces the number of possible filenames. With a couple of trial screenshots I measured the width of every English letter, and a short C program tried all combinations consisting only of letters.

The result? From a dictionary of English words, 2091 names matched. Far too many to be useful for guessing the correct name.
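The idea behind that C program is simple enough to sketch in Python. The per-letter widths below are made up for illustration (the real values came from measuring trial screenshots of the XP theme), and so is the tiny word list:

```python
# Hypothetical per-letter pixel widths; the real values were measured
# from trial screenshots of the Windows XP default theme.
WIDTHS = {"a": 7, "c": 6, "d": 7, "e": 7, "i": 3, "l": 3, "m": 11,
          "n": 7, "o": 7, "p": 7, "r": 4, "s": 6, "t": 4, "u": 7, "w": 10}

def rendered_width(word):
    """Width of a word rendered in the (hypothetical) proportional font."""
    return sum(WIDTHS[ch] for ch in word)

def candidates(words, lo=62, hi=63):
    """Dictionary words whose rendered width matches the selection box."""
    return [w for w in words
            if all(ch in WIDTHS for ch in w)
            and lo <= rendered_width(w) <= hi]

words = ["documents", "downloads", "music", "source", "wallpapers"]
print(candidates(words))  # ['documents'] with these made-up widths
```

With a real dictionary and real widths this filter still left 2091 matches, which is the point: the width alone is not selective enough without context.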

So knowing the length of a string rendered in a proportional font isn't enough without some kind of context. Is there any more information in the screenshot to narrow the possibilities further?

Sloppy censor, exhibit 2

Take a look at what else hasn't been censored in the screenshot. There is one file with a .gz extension, one with a .tar extension, and a directory. This suggests an unpacked distribution of one of the open source programs (for example something like "linux-2.6.23.tar.gz", "linux-2.6.23.tar" and "linux-2.6.24"). The size of the .tar file confirms this, since it's approximately twice the size of the .gz file - a typical compression ratio for ASCII text or source code. The modification date of the .tar file suggests this is a fairly old release, from late 2006.

So, now all I need to find is an open source program that had a tar.gz release in November or December 2006, was 4.862 kB in size and had around 10 characters in its filename. Is there an easily searchable database of open source software out there that has this information? I haven't found one yet, but the Great Zero Challenge sure looks much less formidable when you look at it this way. And you don't even need to dust off that old electron microscope.

Posted by Tomaž | Categories: Ideas | Comments »

Pillow talk

18.09.2008 19:12

Software is broken. No, really.

Ask any self-respecting software engineer and he'll tell you that software never breaks. It can't wear out, in the same way that a mathematical equation never degrades over years of use. The same person is usually quick to add how much he hates the hardware his perfect programs run on: hard drives always fail when you need them, CPUs overheat and fan bearings seize.

Why is it, then, that software failure has become so ubiquitous in our lives that a catastrophic failure in most systems does not even fall under warranty terms, while hardware is guaranteed to work for at least a year without errors or your money back? Why must basically every device today have a little button that says "reset" (or, failing that, you curse it because the same common operation involves removing a battery or pulling the plug)? Watchdog timers are common - a mechanism where imperfect hardware helps infallible algorithms do their job. I'm sure the probability of data loss due to some software bug is several orders of magnitude higher than that of hardware failure.

IBM 402 plugboard by Chris Shrigley

Photo by Chris Shrigley CC BY 2.5

The software itself may indeed be immune to wear and tear (although even that could be debated), but its human authors are anything but perfect, especially when faced with the immense complexity that is common in software engineering today. In contrast to physical products, software is usually just as broken when it's brand new as when it's of a ripe old age.

Complexity is causing all of these problems. The vast majority of production software today should fall under the label of crude prototype. Engineering means understanding what you are doing, and software engineers do not understand their creations - not with all the layers of abstraction, from high-level programming languages, to underlying operating systems, to complex CPU instruction sets. Even if you're writing low-level assembly, chances are you can't predict exactly how your code will execute on a user's PC. And given the reliability of embedded software in consumer electronics, it looks like that's impossible even when you know exactly what hardware the program will run on.

High-level programming languages have made this problem worse: they give the programmer a false sense of security. It was way too easy for a C program to outgrow its creator's capacity to comprehend all its possible execution paths; it's stupidly easy in Python. The latest trend towards web applications sounds like a bad joke in this respect: an industry that isn't capable of creating reliable consumer software that runs on a single computer wants to move to systems that span thousands of interconnected processes.

Physical systems do not tend to grow that large because production costs rise quickly with complexity. Software has no production costs, only design and prototyping - and even then, the majority of the design is usually skipped in favor of getting a semi-working prototype out on the market as soon as possible. The lack of documentation and write-only code is a running joke that comes true way too often.

Code reuse is seen by some as a holy grail that will solve this problem. The theory is that you use a library of well-checked, proven code instead of rolling your own, probably flawed, solution to a common problem. In practice, however, such code is usually used without understanding its full behavior and side effects, even when they are properly documented. It also makes it easier to blindly assume that someone else did their homework properly so you don't have to. In short, it makes the software author think he's actually in control.

This is not a technological problem and as such cannot be solved purely by technological means. Software is still a novelty. Most users will fall for the shiniest, best-advertised product, not for the one that will serve them best. Sadly, the shiniest is usually the one with the most features and hence the most complex and unreliable. Hopefully this will slowly correct itself as the market gets more educated and computers stop being magic dust to most people. It's shockingly apparent that today, in a lot of cases, the final users are the ones with the worst ideas about what functionality a product should have.

The software industry should also get its act together. It should have the courage to resist the vocal minority of users demanding thousands of new features and focus on providing simpler software that will work for the silent majority. Bugs should not be seen as an unavoidable problem. The engineering community should learn to respect simple, reliable solutions, not the most clever ones. Engineers should get a firm grasp of the complexity of the systems they work on, even beyond the lines of code they themselves have written.

And finally new developer tools that aim to help this situation should focus on revealing the underlying complexity, not hiding it. They should help writing better software, not help writing more software. Rapid application development should become a thing of the past.

Posted by Tomaž | Categories: Ideas | Comments »

Reblog icon

13.06.2008 10:26

Last week Zemanta released Re-blog - a feature that allows easy and proper quotation by clicking a small icon at the bottom of a blog post.

I found the previous Zemanta icon visually annoying and did not want to include it in posts on my blog. However, in the spirit of "don't just complain, but suggest a solution", I started up GIMP and came up with these in an hour or so:

Compare this to the old icon:

When I got back from Tenerife I was surprised to learn that my "Traditional" version of the icon is now one of the icons you can choose in the preferences page.

Posted by Tomaž | Categories: Ideas | Comments »

Early summer math

22.05.2008 20:02

Getting caught in an early summer shower a while ago got me thinking: if you have to cover a certain distance through the rain and want to stay as dry as possible, is it better to walk slowly or to run as fast as possible? It's a popular question, and I remember seeing Mythbusters and Brainiac episodes on a similar topic. But since I didn't feel like experimenting, I tried to come up with a theoretical solution.

First, there are some things that need to be defined:

I defined rain as a homogeneous mixture of air and water, moving downwards with a constant velocity (since raindrops reach their terminal velocity well before hitting the ground).

The measure of wetness is the amount of water accumulated on you during the exercise, and that amount is proportional to the volume of the air/water mixture you displace during movement. The rationale is that the air will move around you as you move through the rain, while the water droplets will stick to you, since their inertia prevents them from following the air flow.

To make the calculation simpler I also presumed that the person is of a rectangular shape (the following calculation is done in two dimensions, but accounting for the depth is trivial). You can think of side a as your projection onto the horizontal plane (what the rain sees from above) and side b as your projection onto the vertical plane (your frontal outline).

Now with these things defined, it's pretty simple to get to a result. You basically have to calculate the volume of the hole you bore through the rain. Here is the situation with the ground as the frame of reference, where va is your velocity, vb is the velocity of rain droplets and d is the distance you have to cross in rain.

It may be simpler to think with the water droplets as the frame of reference. In that case the rain is stationary and you are moving up and right through it with the relative velocity va - vb (the vector difference of your velocity and that of the rain).

The displaced volume is then the sum of volumes of three parallelograms:

\mathcal{P} = \mathcal{P}_1 + \mathcal{P}_2 + \mathcal{P}_3
\mathcal{P} = a \cdot b + a \cdot h + b \cdot d
\mathcal{P} = a \cdot b + a \cdot \frac{v_b \cdot d}{v_a} + b \cdot d

Now if you look at these three terms: the first and the third one are constant. Only the second one depends on your velocity va and it's an inverse relationship.

So the conclusion of this purely theoretical endeavor is that the faster you run, the drier you'll be, though even if your speed goes to infinity, you're still going to get wet. Also note that the amount of water accumulated on your front side is the same regardless of your speed; it's only the amount that falls on the top of your head that varies.
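The result is easy to check numerically. Here is a quick sketch in Python, where the dimensions a, b, d and the rain velocity vb are made-up example values (a taken as the top projection and b as the front one, matching the velocity-dependent term in the formula above):

```python
# Swept volume per unit depth, following the derivation above:
# P = a*b + a*(v_b*d/v_a) + b*d

a = 0.3    # m, top projection of the "rectangular person" (example value)
b = 1.7    # m, front projection (example value)
d = 100.0  # m, distance to cross in the rain
v_b = 9.0  # m/s, terminal velocity of the raindrops (example value)

def displaced_volume(v_a):
    """Volume of rain/air mixture displaced while crossing at speed v_a."""
    return a * b + a * (v_b * d / v_a) + b * d

for v_a in (1.0, 3.0, 10.0, 100.0):
    print("%6.1f m/s -> %8.2f" % (v_a, displaced_volume(v_a)))
```

The printed volume falls monotonically with va but approaches the constant a⋅b + b⋅d, so no matter how fast you run, the front-side term never goes away.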

Posted by Tomaž | Categories: Ideas | Comments »

Mutual inductance problem

05.04.2008 17:02

A few days ago I was browsing through my notes from the first year of study and I stumbled upon this interesting problem I've never managed to solve. I've discussed this a couple of times with the late professor Valenčič and we couldn't find a flaw in my line of reasoning. So, if you know what's wrong, please drop me a mail.

Describing mutual inductance

In the professor's book, there is an introduction to the principles of mutual inductance that goes like this:

Imagine two coils (designated 1 and 2) in some arbitrary relative position to each other. Current i1 that flows through coil 1 will cause a magnetic flux through coil 1 Φ11 = i1 ⋅ L1 (according to the definition of inductance). However some of the flux will also flow through coil 2, designated Φ21.

Mutual inductance between coils 1 and 2 is then by definition:

M21 = Φ21 / i1

Obviously, the magnetic flux Φ21 is less than or equal to Φ11, so we define a coupling coefficient k ≤ 1 such that:

Φ21 = k ⋅ Φ11

Now, due to the principle of reciprocity, the same holds true if the current flows through coil 2 and we calculate the flux through coil 1. The coupling coefficient stays the same:

Φ12 = k ⋅ Φ22

Now multiply both mutual inductances:

M21 ⋅ M12 = k² ⋅ (Φ11 / i1) ⋅ (Φ22 / i2)

Use the definition of inductance:

M² = k² ⋅ L1 ⋅ L2
M = k ⋅ √(L1 ⋅ L2)

Now, this final formula appears in a lot of literature and it's certainly correct. It's also certainly true that the principle of reciprocity gives M21 = M12. However, this way of deriving the formula seems dubious: from the steps above you can also see that:

M21 = k ⋅ Φ11 / i1 = k ⋅ L1
M12 = k ⋅ Φ22 / i2 = k ⋅ L2

And since M12 = M21:

M = k ⋅ L1 = k ⋅ L2

L1 = L2

Since the coils were arbitrary, this would mean that every pair of coils has equal self-inductances, which is certainly false: we imposed no restriction on the geometry of the two coils.
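Purely as a numeric illustration of the algebra above (all inductance values are made up), here is what happens if we keep M12 = M21 while L1 ≠ L2: the two flux ratios Φ21/Φ11 and Φ12/Φ22 come out different, even though their geometric mean still recovers the k from the final formula. This doesn't resolve the paradox, it just puts numbers on it:

```python
import math

# Example values -- any pair with L1 != L2 will do
L1 = 10e-6  # H, self-inductance of coil 1
L2 = 40e-6  # H, self-inductance of coil 2
k = 0.5     # coupling coefficient in the textbook formula

M = k * math.sqrt(L1 * L2)  # M = k * sqrt(L1*L2), as derived above

i1 = i2 = 1.0               # A, test currents

phi11 = L1 * i1             # flux through coil 1 from its own current
phi22 = L2 * i2             # flux through coil 2 from its own current
phi21 = M * i1              # flux reaching coil 2 (using M21 = M)
phi12 = M * i2              # flux reaching coil 1 (using M12 = M)

k21 = phi21 / phi11         # flux ratio in one direction
k12 = phi12 / phi22         # flux ratio in the other direction

print(k21, k12)              # the two ratios differ
print(math.sqrt(k21 * k12))  # their geometric mean equals k
```

With these values k21 = 1.0 while k12 = 0.25, so a single k cannot describe both directions at once here.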

If I had to guess, I would say something is wrong with the first application of reciprocity to the fluxes through the coils; however, from what I know, that step should be correct.

So yeah, if you are fluent in electromagnetic theory, I would love to hear your opinion.

Update: see this follow-up post for a solution to this problem

Posted by Tomaž | Categories: Ideas | Comments »

Being a bit evil

17.03.2008 19:20

One thing that I learned at Zemanta is never to underestimate processing power and memory needed to do anything non-trivial (and also a lot of trivial things) with English Wikipedia dumps. After you spend some time dealing with these huge XML files you gradually learn from your mistakes and accept that as a fact.

The resources needed to process Wikipedia also became kind of a recurring inside joke at the office, especially when this needs to be explained to someone new in this field:

It is certainly not
a bug

(this is, of course, a completely unofficial imitation of a xkcd comic)

Posted by Tomaž | Categories: Ideas | Comments »


22.11.2007 3:54

Last Friday I went with the rest of Zemanta's team to see the famous crack in the floor of the Tate Modern gallery. Considering that I had recently read Stephen Baxter's Moonseed, the timing couldn't have been better.

The gallery occupies the building of a former power station and this particular art installation is placed in its massive turbine hall.

The sheer size of the hall is impressive. You can still see some large steam pipes that were cut off at the walls. I can't stop myself thinking that this place probably looked more impressive when it was filled with machines than how it is now, serving as a gallery for modern art.

On the other hand, Shibboleth (the formal name of the installation) looks equally impressive. It spans the whole length of the hall - it starts at the entrance and disappears below the far wall - and looks very realistic, down to the smallest detail. Both edges of the crevice really look like they once fit together. I couldn't find any clues about how it was made - even the hairline cracks at the edges look like they formed in the material of the floor itself. I expected to see marks where a larger groove had been dug out and then filled with concrete, but now I have no idea how they managed to cut such deep and narrow grooves into the concrete floor.

The realism breaks down only when you look closely at the inner walls of the crack which are too smooth to be natural and where you can see the iron mesh that reinforces the concrete.

I failed to see how this installation addresses a long legacy of racism and colonialism that underlies the modern world which is, as I learned from a sign on a wall, the message that the artist wanted to convey with her work.

Posted by Tomaž | Categories: Ideas | Comments »

Cool switches

12.11.2007 18:24

The next electronic thing I build will definitely have one of these things on it.

(found on CNET's list of Top ten off switches)

Posted by Tomaž | Categories: Ideas | Comments »

Gimping Galaksija

07.10.2007 13:24

Some time ago (probably while I was waiting to get my thesis approved) I didn't have anything better to do, so I played with the printed circuit board masks for the Galaksija motherboard in GIMP. I found these two images again today while doing some hard disk clean-up.

This is how a professionally made Galaksija PCB would look, with a green solder mask, white silk-screen print and gold-plated contacts. I got the idea for doing this from a post on the gEDA mailing list. Maybe I'll eventually hack up a GIMP script that will do this automatically from a PCB file. A picture like this is useful for a last sanity check on a board before sending it to the manufacturer.

This one is a bit weirder and shows how the motherboard would look on an X-ray machine.

Posted by Tomaž | Categories: Ideas | Comments »