On ICT Seminars at IPS

05.06.2014 1:04

Today was the last day of seminars on Information and Communication Technologies for this year at the Jožef Stefan International Postgraduate School. I believe I managed to attend 77.7% of the 18 talks given by other PhD students and gave one short presentation on spectrum sensing. That should bring me above the mandatory attendance with a sufficient margin of error.

With a few exceptions, most of these talks were wrist-cuttingly boring. Considering that the final conclusion was that this year's presentations were better than usual I dread to think what previous years had to endure.

It was suggested to me that I might not understand all the finer nuances of the pedagogic process. However, I still believe that if students are giving these talks to learn how to present their research topic, more realistic feedback on their performance might be helpful. So here is mine.

I think two mistakes were prevalent:

The first was a gross mismatch between the audience and the basic knowledge required to follow the presentations. The ICT field of study at this school is so broad that you will have someone parsing tweets about celebrities on one side of the classroom, someone working on protein databases in the center and someone doing motor bearing fault prediction on the other side. As an electronics engineer I had the misfortune of acquainting myself with sentiment analysis before. Still, it doesn't make sense to talk to me about Segmented Discourse Representation Theory if you can't introduce it in a way that someone who has not been studying it for the past 6 months can understand.

Personally, I like best the format where a presentation is roughly split into three parts: the first can be followed by anyone with an elementary education, the second by someone familiar with the field and the third by your supervisor and colleagues from your research group. I find that I can follow such presentations with interest even when the topic is completely outside of my specialties.

I realize that with the suggested 15 - 30 minute length of these ICT seminars this might not be possible. But I think it's better to drop the last third than the first. Your supervisor already knows what you are doing and will skip the talk anyway. Others can read your paper.

"Because Magic" by Egophiliac

"Because Magic" by Egophiliac

And that brings me to the other problem I noticed. I know that it is the de facto standard that academic slides are walls of text and are expected to stand alone relatively well without the accompanying speech (as much as I like LaTeX, Beamer was a terrible invention). I also know I enjoy every talk that breaks this mold. Don't recite the table of contents of your talk at the beginning. Make slides colorful - you're not paying Springer's color print fees here. Explain the topic using examples, plots, photographs. Don't fill up slides with 10 pt text that I can't even read from the back row. Use slide notes if the slide deck must be usable as a stand-alone publication (by the way, it doesn't have to be in these seminars - the school doesn't even publish the slides on the web).

Presentations in an academic setting certainly must be on a much higher level than your usual marketing pitch with stock photos, cute kittens and a stevejobsian act, but that doesn't mean they must make me sorry I didn't rather spend the afternoon debugging some PHP code.


In conclusion, I think everyone would be better off if these seminars were given a bit more exposure. Obviously there is a lot of interesting research being done in the offices at our Institute. But as long as presenting it is viewed strictly as a formality and there is no push towards teaching people how to talk about their work in an understandable way, it will stay behind those closed lab doors. Presenting scientific discoveries to the public can be done by postgraduate students at informal seminars as well as by big-name professors with 3-hour extravaganzas in the large auditorium.

Maybe if that were the case, the school wouldn't need mandatory attendance and wouldn't have to enforce one (1) mandatory insightful question per student after the talk to give an impression of an academic debate. We might even get some of that much-sought-after cooperation between departments.

Posted by Tomaž | Categories: Ideas | Comments »

Universe in a cup

01.03.2014 16:42

This topic came up in an IRC channel the other day. How does the universe compare to a cup of tea? How long after the big bang did it take the universe to cool through the temperatures characteristic of the popular hot beverage?

Fortunately both tea and the early universe are fairly well understood these days.

In the case of tea, the cup cools down approximately exponentially. The time constant depends on the thermal resistance between the drink and the ambient air, and on the drink's mass and specific heat. Fortunately, a dedicated research unit provides empirical data on the subject.

As far as the universe is concerned, that part of its cooling is described by the Friedmann equation for the matter-dominated era.
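
For the curious, here is a minimal sketch of how the two curves can be compared numerically. It assumes Newton's law of cooling for the tea (the starting temperature, ambient temperature and the roughly ten-minute time constant are made-up illustrative values, not the empirical data mentioned above) and the matter-dominated scaling T ∝ t^(-2/3) for the universe, anchored at recombination at roughly 3000 K and 380,000 years:

# A rough comparison of the two cooling curves (illustrative values only).
import numpy as np

# Tea: Newton's law of cooling, T(t) = T_amb + (T_0 - T_amb) * exp(-t / tau).
# The ten-minute time constant is an assumption, not a measurement.
T_amb, T_0, tau = 298.0, 368.0, 600.0       # kelvin, kelvin, seconds

def tea_temperature(t):
    return T_amb + (T_0 - T_amb) * np.exp(-t / tau)

# Universe: in the matter-dominated era the photon temperature scales as
# T ~ t**(-2/3), since T ~ 1/a and a ~ t**(2/3). Anchor the curve at
# recombination: roughly 3000 K at about 380,000 years.
t_rec = 380e3 * 3.156e7                     # seconds
T_rec = 3000.0                              # kelvin

def universe_temperature(t):
    return T_rec * (t / t_rec) ** (-2.0 / 3.0)

# When does each of them pass through a drinkable 60 degrees C (333 K)?
target = 333.0
t_tea = -tau * np.log((target - T_amb) / (T_0 - T_amb))
t_universe = t_rec * (target / T_rec) ** (-3.0 / 2.0)
print("tea reaches 333 K after %.0f s" % t_tea)
print("universe reaches 333 K after %.1e years" % (t_universe / 3.156e7))

With these numbers the tea gets there in a few minutes, while the universe needed on the order of ten million years.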

Cooling of a cup of tea and the universe.

As you can see from the graph above, the universe took slightly longer than your five o'clock tea to cool down.

However, unlike the cup of tea, the universe at that age wasn't terribly interesting. All the fun sub-atomic business had finished long ago and darkness was filled more or less evenly with neutral hydrogen and helium.

It will take more than 100 million years for the first stars to light up and another 10-something billion years before an odd biped gets the idea of throwing a fistful of leaves into boiling water.

Posted by Tomaž | Categories: Ideas | Comments »

Origin of frequency division

17.11.2013 18:35

The most basic feature of radio communication, practically since its invention, is the division of the electromagnetic spectrum between different users on the basis of different sine wave frequencies. In fact, the term radio spectrum is basically synonymous with this division and the first question about any kind of radio is usually what frequency it operates on.

After working in the Department of Communication Systems for most of the past two years, I began to wonder what the original reason behind frequency division actually is. It's one of those fundamental questions that sometimes pop into your head to keep your mind off more depressing topics.

The color spectrum rendered into the sRGB color space.

Image by Spigget CC BY-SA 3.0

Classical electromagnetic field theory gives a wave equation in empty space that does not favor the sine wave over any other kind of wave function. Similarly, a wave shared between transmitters can be decomposed into multiple independent channels based on any one out of an infinite set of orthogonal function families. Again, there is no preference for the sine and cosine functions and the Fourier decomposition that is ubiquitous in radio communication theory.

In fact, a lot of recent technologies, for example UMTS, the third-generation successor to GSM, sub-divide their channels using orthogonal functions other than sine waves. However, this is done only after first filtering the radio signal based on sine wave frequencies.
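
As an illustration of such a decomposition (a toy example, not any particular standard), two channels can share a medium and be separated again by correlating against orthogonal Walsh codes instead of sine waves - exactly the same trick that works with sines and cosines:

# Two channels sharing a medium, separated by correlating against
# orthogonal Walsh codes instead of sine waves (illustration only).
import numpy as np

# Two orthogonal spreading sequences (rows of a 4x4 Hadamard matrix).
code_a = np.array([ 1,  1,  1,  1])
code_b = np.array([ 1, -1,  1, -1])
assert np.dot(code_a, code_b) == 0   # orthogonality

bits_a = np.array([ 1, -1,  1])      # data for channel A (+-1 symbols)
bits_b = np.array([-1, -1,  1])      # data for channel B

# Each transmitter multiplies its symbols by its code; the medium sums them.
signal = np.kron(bits_a, code_a) + np.kron(bits_b, code_b)

# A receiver recovers its channel by correlating with its own code.
rx_a = signal.reshape(-1, len(code_a)).dot(code_a) / len(code_a)
rx_b = signal.reshape(-1, len(code_b)).dot(code_b) / len(code_b)

print(rx_a)   # [ 1. -1.  1.] -> bits_a
print(rx_b)   # [-1. -1.  1.] -> bits_b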

The electromagnetic field in a practical, Earth-based environment, however, does favor a division of signals based on sine waves. One classical reason is that objects that appear in the path of radio waves only come in a certain range of sizes. Diffraction and other such phenomena are mostly governed by the relationship between the wavelength and the obstacle size. This means that sine waves of certain frequencies will have more favorable propagation properties than others. Hence it makes sense, for instance, to use a frequency band that has better propagation for longer-range applications.

Another reason why it is natural to treat electromagnetic waves as a sum of sine functions comes from quantum mechanics and the fact that the frequency determines the photon energy. The size of the energy quanta determines how the field can interact with matter in its path and this again affects atmospheric path loss in different frequency bands.

Early radio receiver.

While the physics of radio propagation gives valid reasons why a transmission should be limited to a particular part of the electromagnetic spectrum, it doesn't explain why relatively narrow band transmissions are used. The radio spectrum generally spans from 3 kHz to 300 GHz, while most communication technologies currently top out in the range of 100 MHz per channel.

The historical reason why frequency division was originally used is that the natural response of most electromagnetic and mechanical systems is a harmonic oscillation. Such oscillators can be conveniently used as signal filters to extract a channel from a shared medium.

Modern systems that use other kinds of orthogonal functions for multiplexing rely on digital processing for signal filtering. Only in recent history has digital processing been able to handle signals with significant bandwidth. That left analog filters, and hence frequency division, as the only option for multiplexing in the early days of radio. We are still a long way off before 300 GHz of radio spectrum could be processed digitally.

Another problem with purely digital processing is that passive analog filters can have a much higher dynamic range compared to A/D converters or even active analog filters. The range between the noise floor and coils melting or quartz crystals shattering is significantly wider than the linear range of transistors. The ability to extract a weak signal in the presence of another transmission with much higher power is crucial in radio technology, where it's not unusual to see power differences well over 10 orders of magnitude. That is why even state-of-the-art software defined radios have front-end signal conditioning implemented in analog technology.
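
A rough back-of-the-envelope figure illustrates the gap. Ten orders of magnitude in power is 100 dB, and the textbook signal-to-quantization-noise ratio of an ideal N-bit converter (for a full-scale sine input) is about 6.02 N + 1.76 dB:

10\log{10^{10}} = 100\mathrm{dB}

SNR_{ideal} \approx 6.02\cdot N + 1.76\mathrm{dB} \qquad N \approx \frac{100 - 1.76}{6.02} \approx 16.3

So even an ideal converter would need around 17 bits just to span such a range, before any real-world impairments are taken into account.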

The only current technology I know of that largely does away with frequency multiplexing is ultra-wideband (UWB). It is of course still frequency band limited, partly because of the propagation physics mentioned above and partly artificially, to minimize interference with other technologies that share the same frequency band. However, with effective bandwidths in the range of gigahertz it depends on frequency division much less than conventional technologies. Unfortunately I don't know the details of UWB implementations, so I don't know how they manage to overcome the dynamic range problem and other technological limitations.

Posted by Tomaž | Categories: Ideas | Comments »

On the phenomenon of paddling in place

24.08.2013 22:30

I recently came across the On the Phenomenon of Bullshit Jobs article. In it the author observes that technology has made it possible to produce goods with minimal manual labor, yet work hours in general have not fallen accordingly. He attributes this to the invention of pointless managerial jobs that seemingly don't contribute anything of value. I don't really subscribe to the notion that this was a conscious design by some entity to keep everyone nice and busy and submissive, but the article does show western society in an interesting light (and the ensuing Hacker News discussion offers some better theories of how this situation came to be).

While this topic is fertile ground for pointing fingers at various middle managers and telephone sanitizers, I think a lot of such useless work is also done right here in the engineering community that is doing the said finger pointing.

Take for instance how a lot of popular technologies evolve in the mature part of their life cycle. Once some product has reached a stable point where it solves users' problems well enough and no one in the industry has any technological advantage, things often start going downhill. Manufacturers start adding useless superficial features, like "revolutionary" user interfaces, all in a futile attempt to be better than the competition. Think of microwaves with elaborate functions no one ever uses, "feature phones" from recent history with tons of useless fluff when everyone just wanted to use them to hear the voice on the other side, and so on.

The situation in the commercial software world is often similar, except the competing product is commonly its own older version. And with forced upgrades, planned obsolescence and users' data held for ransom through incompatible formats, the software world has even more leverage to force such useless changes on those users who find that the older version fits their needs perfectly fine. As the most extreme example, see almost all changes in Microsoft Office beyond circa 1995.

In the commercial world it's easy to blame ignorant management, greedy shareholders and anomalous market pressures for this situation. But it's baffling why similar things happen with free software projects that are mainly based on voluntary work. With a recent Debian upgrade this has become most apparent. Projects that have reached the point where they have more or less implemented all the features one would want from them have started implementing trivial changes that make no sense. I'll leave GNOME for some other flamewar, but take GIMP for example: the new version changes the semantics of windows versus application, replaces traditional sliders with something I have never seen before and shuffles around keyboard shortcuts. As far as my use of this software is concerned, there is no functional improvement and any benefits are completely offset by yanking my muscle memory from under me.

There seems to be some kind of a stigma attached to applications that stop having any visible changes. But I believe that for a mature application that does its job this is a good sign. Granted, it's not easy to discern such a properly maintained project that only accepts an occasional bugfix from an abandoned one. The Unix and GNU authors of old got this right though. You don't see sed renaming the s command to r on a whim, but after all these years you do still see an odd bug fix release.

So to connect this back to the initial thought: it seems there's a notion that even once you've arrived at something that genuinely works, you have to keep paddling in place, even though just maintaining the existing state with minimal effort would have better results for everyone involved. Someone with a social science background can now try to explain whether free software projects follow this pattern because people bring the habit in from their day jobs, or whether there is some other (market) pressure involved.

Anyway, from the technical standpoint I would just like to conclude by saying: please, acknowledge at some point that your project does everything that its users want from it and stick to bug fixes. Understand that the user interface is an interface. You don't randomly change mature APIs without a very good reason, do you? Rather than constantly trying to marginally improve it and making everyone miserable in the process, you can then use your remaining resources to start something excitingly new from scratch.

Posted by Tomaž | Categories: Ideas | Comments »

Unreasonable effectiveness of JPEG

28.05.2013 20:41

A colleague at the Institute is working on compressive sensing and its applications in the sensing of the radio frequency spectrum. Since whatever he comes up with is ultimately meant to be implemented on VESNA and the radio hardware I'm working on, I did a bit of reading on this topic as well. And as occasionally happens, it led me to a completely different line of thought.

The basic idea of compressive sensing is similar to lossy compression of images. You make use of the fact that images (and many other real-world signals) have a very sparse representation in some transform domains. For images in particular the best known are the discrete cosine and wavelet transforms. For instance, you can ignore all but a few percent of the largest coefficients and a photograph will be unnoticeably different from the original. That fact is exploited with great success in common image formats like JPEG.
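
A quick way to convince yourself of this is to do the experiment. The sketch below uses SciPy's DCT on a synthetic smooth image (swap in a real photograph loaded with PIL to make it meaningful), keeps only the largest 2% of the coefficients and measures how far the reconstruction strays from the original:

# Keep only a small fraction of the largest DCT coefficients of an image
# and see how close the reconstruction stays to the original.
import numpy as np
from scipy.fft import dctn, idctn

# A synthetic smooth "photograph-like" image; replace with real image data
# (e.g. loaded with PIL) to try this on an actual photo.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
image = np.sin(3 * x) + np.cos(5 * y) + 0.5 * x * y

coef = dctn(image, norm='ortho')

# Zero out everything except the largest 2% of coefficients by magnitude.
keep = 0.02
threshold = np.quantile(np.abs(coef), 1.0 - keep)
sparse_coef = np.where(np.abs(coef) >= threshold, coef, 0.0)

reconstruction = idctn(sparse_coef, norm='ortho')

rms_error = np.sqrt(np.mean((image - reconstruction) ** 2))
rms_signal = np.sqrt(np.mean(image ** 2))
print("relative RMS error with %.0f%% of coefficients: %.4f"
      % (keep * 100, rms_error / rms_signal))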

Gradual decrease of JPEG quality from right to left.

Image by Michael Gäbler CC BY 3.0

What caught my attention though is that even after some searching around, I couldn't find any good explanation of why common images have this nice property. Every text on the topic I saw simply seemed to state that if you transform an image so-and-so, you will find out that a lot of coefficients are near zero and you can ignore them.

As Dušan says, images that make sense to humans only form a very small subset of all possible images. But it's not clear to me why this subset should be so special in this regard.

This discussion is about photographs, which are ultimately two-dimensional projections of light reflected from objects around the camera. Is there some kind of a succinct physical explanation why such projections should, in the common case, have sparse representations in the frequency domain? It must be that this somehow follows from macroscopic properties of the world around us. I don't think it has a physiological background as Wikipedia implies - the mathematical definition of sparseness doesn't seem to be based on the way the human eye or visual processing works.

I say common case, because that certainly doesn't hold for all photographs. I can quite simply take a photograph of a sheet of paper where I printed some random noise and that won't really compress that well.

It's interesting that if you think about compressing video, the most common tricks I know of can easily be connected to the physical properties of the world around us. For instance, differential encoding works well because there are distinct objects and in most scenes only a few objects move at a time. This means only a part of the image changes against an unmoving background, so storing just the differences between frames will obviously be more efficient than working on individual frames. The same goes, for example, for motion compensation, where camera movement to a first approximation just shifts the image in some direction without affecting shapes too much.
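
A toy example of the first trick: in a scene where only one small object moves against a static background, the frame-to-frame difference touches only a tiny fraction of the pixels (the frames below are obviously synthetic):

# Differences between consecutive frames of a mostly static scene are sparse.
import numpy as np

frame1 = np.zeros((120, 160))
frame1[40:60, 40:60] = 1.0          # a stationary "object"
frame1[20:30, 100:110] = 0.5        # a small moving "object"

frame2 = frame1.copy()
frame2[20:30, 100:110] = 0.0        # the small object moved...
frame2[20:30, 105:115] = 0.5        # ...five pixels to the right

diff = frame2 - frame1
total = frame1.size
changed = np.count_nonzero(diff)
print("%d of %d pixels changed (%.1f%%)" % (changed, total, 100.0 * changed / total))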

However, I see no such easy connection between the physical world and the sparseness in the frequency domain, which is arguably much more basic to compressing images and video than the tricks above (after all, as far as I know most video formats still use some form of frequency domain encoding underneath all the other processing).

Posted by Tomaž | Categories: Ideas | Comments »

On the death of hardware

23.09.2012 19:11

A few days ago I came across this article with the eye-catching title Hardware is dead. Two arguments are made there: first, that consumer electronics designed and manufactured in Asia is becoming so cheap, while maintaining a reasonable level of quality, that western hardware companies don't stand a chance against it. And second, that this will lead to a situation where content producers will be able to subsidize the whole cost of a device, giving hardware away for free in the hope that profits from content will more than make up for the loss.

The first argument is nothing new. I remember how strongly it was voiced by the dean of the Faculty of Economics at last year's round table on the role of engineers in society. Yet somehow there is still electronics design and manufacturing going on in Europe. I am quite confident that the cost of eastern products will go up further as soon as their environmental and social standards catch up with the west, but this is merely my mostly uneducated opinion.

I'm more worried about the second prospect. It would create an incredible situation where you can get a physical object for free, something that necessarily requires a certain amount of raw materials and work to construct per copy, while on the other hand you pay for bits that are essentially free to copy. You may muddle up the issue by saying how selling hardware has low profit margins while content allows for a high margin. Still that doesn't change the fact that for the foreseeable future the cost of rearranging electrons in the RAM of your device into the recording of the latest blockbuster will be insignificant compared to the cost of rearranging atoms to make the said device itself.

I will conveniently skip the question of fixed costs, e.g. cost of design for hardware and production for content. We're talking here about complex devices produced by the millions where the situation regarding the cost of design is probably not all that different from the cost of producing some content for mass consumption. Hardware can still be surprisingly expensive for niche products though.

Subsidizing a product from the sale of consumables is nothing new, of course. It's been done for all sorts of things, from printers, mobile phones and coffee machines to paper towels. The problem invariably appears when someone else tries to make consumables compatible with yours, only cheaper, since they didn't have to subsidize your device. To prevent this kind of competition, this business model always depends on some kind of intellectual-property, monopoly-enabling legislation. In the past there have been some pretty creative uses of such laws in this context, for example when the DMCA was applied to physical objects in the case of the famous Lexmark toner cartridges.

The activities in this room are not welcome in your country.

What happens in the case where consumers themselves can make the consumable you depend on for your profit? Content shown on that give-away tablet is exactly such a case. Everyone can write text, make a program, record audio or video. Not to mention that pesky copy command every computer has, which is just as effective as the content producer's. Suddenly a product's users also become its enemies. This leads to nonsensical solutions like DRM and walled gardens. In some markets this practice has already been generally recognized as a bad idea. In the European Union, for instance, mobile operators can no longer lock the phones they sell to their network, which means that at least in this country you can no longer get free mobile phones. Many are still subsidized of course, but whoever is selling them has to account for the subscriber's freedom to switch to another network. As regulators in other fields realize that such practices are harmful, I'm sure other kinds of give-away hardware will fail in a similar way.

Actually, I hope hardware subsidizing will never even reach such a level. It might be great in the short term for hackers (hey, you get a free toy which you can jailbreak) and ordinary users (hey, you get a free toy) alike. But in the long term it will do more harm than good. For most people it basically exploits a psychological trick where people are bad at accounting for future costs, meaning you are tricked into buying things you don't actually need. Not to mention the throw-away culture such a practice would enable (supposedly we are working towards a sustainable economy) and the escalation in the war over general purpose computing as companies more eagerly start to fight jailbreaks.

Posted by Tomaž | Categories: Ideas | Comments »

A sense of science fiction

03.09.2012 18:52

You've probably seen this vintage job advertisement for the Jet Propulsion Laboratory from the height of the space race. It did a few rounds on the web a day or two ago.

When you were a kid, science fiction gave you a sense of wonder.

Image by fdecomite CC BY 2.0

While it was printed well over a decade before I was born, I was still touched by it when I first saw it in the web browser. I have always been a fan of science fiction of all kinds and this form of art certainly played a big role in shaping my decision to pursue this meandering career in engineering and science.

Of course, I can't be sure about what the authors of the poster originally had in mind with the sense of wonder. They might have been simply thinking about curiosity and the pleasure of finding things out. Science fiction can certainly spur both, even though the answers it provides to what-ifs must usually be taken with a grain of salt. That's why the word fiction is there after all.

What started this train of thought was rather the possibility that they had another feeling in mind: the one you get when you realize you're doing or witnessing something that you have already done or seen a number of times, except now it's for real and the other times were while daydreaming or exercising imagination fueled by works of fiction.

In any case, I'm sure readers of the advertisement that actually got involved in the space program weren't disappointed in that respect, regardless of how they understood the phrase.

Thinking back, this kind of déjà-vu feeling is actually quite rare. As mundane as it might sound, the only experience of it I can currently recall happened sometime after I got a Kindle, the electronic book reader. Reading a book one evening in my bed, it dawned on me that here I had a device which has been present in the background of so many science fiction stories. I don't think I had charged its battery even once at that point, which added to the feeling of novelty. It was just there, day after day, a few shelves' worth of books, casually in my hand. A device I would have placed firmly beside a light saber or a food replicator a decade and a half ago.

I guess the rarity of the feeling can be attributed to the fact that science fiction rarely manages to accurately predict future technological advances. Everyday computing, ubiquitous communications and the Internet we have today were completely missed by the stories I used to read as a child. While the capabilities of the devices we carry in our pockets are certainly impressive by the standards of several decades ago, they are also completely different from the computers that appeared in those stories.

Posted by Tomaž | Categories: Ideas | Comments »

Decibels per hertz

05.04.2012 20:10

I promise this isn't turning into a series of math rants. But since I have lately been studying spectrum analyzers and similar machinery, let me tell you about a small annoyance that has been bothering me in texts and datasheets covering this topic.

Often when discussing noise that has constant power spectral density over a range of frequencies, the level of this noise is given on a logarithmic scale in units of decibels per hertz (for instance, thermal noise is often said to be -174 dBm/Hz). This is wrong, as it implies that you need to multiply this value by the bandwidth (in hertz) to get the power in dBm, when in reality you need to add the logarithm of the bandwidth. Of course, everyone dealing with these equations just knows that. Logarithms turn multiplication into addition, right? But then you end up with equations where the two sides of the equals sign have different units and that is just plain sloppy writing.

Here's how you properly convert a formula for Johnson-Nyquist noise into logarithmic units:

P = kT\Delta f

Apply definition of dBm:

P_{dBm} = 10\log{\frac{kT\Delta f}{1\mathrm{mW}}}

See how the value inside the logarithm has no dimension? The value in the numerator is in units of power and that cancels with the milliwatt in the denominator. If you are doing things correctly there should never be any physical unit inside a logarithm or exponential function.

To split off the bandwidth term, multiply and divide by one hertz.

P_{dBm} = 10\log{\frac{kT\Delta f\cdot 1\mathrm{Hz}}{1\mathrm{mW}\cdot 1\mathrm{Hz}}}
P_{dBm} = 10\log{\frac{kT\cdot1\mathrm{Hz}}{1\mathrm{mW}}\cdot\frac{\Delta f}{1\mathrm{Hz}}}
P_{dBm} = 10\log{\frac{kT\cdot1\mathrm{Hz}}{1\mathrm{mW}}}+10\log{\frac{\Delta f}{1\mathrm{Hz}}}

Note that you still have dimensionless values inside logarithms. There is no need to invent magic multiplication factors "because of milliwatts" or fix the units of the variables in the equation. The division by 1 Hz in the bandwidth part also nicely solves the confusion that happens when you have bandwidth defined in units other than hertz.
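
As a sanity check, plugging the Boltzmann constant and room temperature (T ≈ 290 K) into the first term reproduces the familiar figure:

10\log{\frac{kT\cdot 1\mathrm{Hz}}{1\mathrm{mW}}} = 10\log{\frac{1.38\cdot 10^{-23}\mathrm{\frac{J}{K}}\cdot 290\mathrm{K}\cdot 1\mathrm{Hz}}{1\mathrm{mW}}} \approx -174\mathrm{dBm}

P_{dBm} \approx -174\mathrm{dBm} + 10\log{\frac{\Delta f}{1\mathrm{Hz}}}

For a 1 MHz wide channel, for example, the second term adds 60 dB and the thermal noise power comes out at roughly -114 dBm.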

So how do you concisely write this value? There is no short notation that I'm aware of that conveys the proper meaning. I would simply write out that the noise level is -174 dBm at 1 Hz bandwidth and leave it at that.

Now let the flames come that this is school nonsense, that real engineers with real deadlines don't do any of this fancy dimensional analysis stuff, and that back-of-the-envelope calculations just work since you just know that this number here is in watts and that one there is in kilohertz.

Posted by Tomaž | Categories: Ideas | Comments »

Pre-school mathematics

29.03.2012 20:02

Recently the following question has been circling the Internet and has also ended up in my inbox a few times. It's yet another one of those silly exercises that ask you to continue a sequence based on a number of examples. They are a pet peeve of mine and I'll explain why I hate them here once and for all instead of replying to everyone who sent a copy my way.

This problem is said to be solved by pre-school children in minutes

The logic of this specimen is particularly broken, even putting aside the questionable statement about pre-school children being able to solve it in mere minutes. It's obvious that whoever wrote it is not familiar with the meaning of the equals sign, as 8809 isn't equal to 6. So strictly read, this exercise gives you 21 false statements. From that it can simply be inferred that the author is looking for another false statement, so ??? can be replaced with any number not equal to 2581.

With that out of the way, let's look at what was probably meant by this notation. You are given N=21 pairs of values (x_i, y_i) such that:

f(x_i) = y_i \qquad i\in[0 \dots N-1]

And you are asked to find y_N for a given x_N:

f(x_N) = y_N

From a purely mathematical standpoint, this problem has an infinite number of solutions. To demonstrate, assume that f is a function that satisfies the condition set above. In that case:

f(x) + g(x)

where

g(x_i) = 0 \qquad i \in [0 \dots N-1]

also satisfies this condition. Note that g is a function that can have an arbitrary value at x_N and hence this new solution will give a different value of y_N.

Just to make this a little bit more practical, here's an example of a polynomial that goes through N points:

f(x) = \sum_{i=0}^{N-1} (x - a_i) \prod_{j=0; j \neq i}^{N-1} (x - x_j)
a_i = x_i - \frac{y_i}{\prod_{j=0;j \neq i}^{N-1}(x_i - x_j)}

And here is what its graph looks like if you plug in the given 21 values:

Polynomial function that goes through given 21 points

At x=2581 it gives the value of around 5.8·10^73.

Note that using the formula above and N=22, you can get a polynomial that goes through the required 21 points plus the 22nd that you can choose to your liking.
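
For anyone who wants to play with this, here is a small sketch of that construction in Python. Apart from the pair 8809 → 6 and the point x = 2581, the values below are made up, since the 21 pairs only appear in the puzzle image and not in this text. Each term of the sum is algebraically the same as above, just rearranged slightly so that the large x values don't cause rounding problems:

# The construction from the formulas above: a polynomial through N given
# points, with an extra point whose value we choose freely.
import numpy as np

def through_points(xs, ys):
    """Polynomial f with f(xs[i]) = ys[i], following the sum-of-products
    construction above."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)

    def f(x):
        total = 0.0
        for i in range(len(xs)):
            others = np.delete(xs, i)
            # Each term is (x - a_i) * prod(x - x_j) with
            # a_i = x_i - y_i / prod(x_i - x_j), expanded into two parts.
            total += ((x - xs[i]) * np.prod(x - others)
                      + ys[i] * np.prod(x - others) / np.prod(xs[i] - others))
        return total

    return f

xs = [8809, 1234, 5678, 2581]
ys = [   6,    1,    2,   42]    # 2581 can map to anything we like
f = through_points(xs, ys)

for x, y in zip(xs, ys):
    print(x, f(x), y)            # f hits every chosen value exactly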

This is just one example. You could construct functions that meet the requirements from trigonometric or exponential functions in a similar way.

Sure, this is probably not the solution the author of this question was looking for. You could say that I'm overcomplicating things and that the rule that pre-school children would come up with, based on the shapes of the digits the author arbitrarily chose to represent these numbers with, is nicer. But there is no such thing as niceness in mathematics (see the interesting number paradox).

This is the big difference between mathematics and physics. Our universe has physical laws that for some reason can often be represented in certain mathematical forms. If a liter of water has a mass of one kilogram and two liters a mass of two kilograms it's quite likely that three liters will have a mass of three kilograms and so on. But this is purely based on our experience and beliefs about our environment and not on any purely mathematical basis. Such a prediction, just like the one above and many like it, when stripped of any physical context, does not make any sense.

Posted by Tomaž | Categories: Ideas | Comments »

Meta meta talk

22.01.2012 18:12

I noticed that a lot of talks and presentations I attend, especially in more academic circles, begin with the speaker showing a slide with the outline of the talk in the form of a bulleted list and talking through it, explaining what he will talk about. With short talks this can actually take up a significant part of the total time, especially if the speaker returns to that slide after each part to show how far into the talk he has progressed.

What is the point of that, short of giving the audience an opportunity for one last glance at their email client? The purpose of such an overview in printed articles is to give the reader some idea of the contents. This means that you can skip reading the article altogether and move on to the next one if the table of contents doesn't give you enough confidence that it will be of use to you. Via page numbering it also allows you to skip directly to the interesting part, should only a part of the document be useful.

None of these uses apply to the spoken word though. You can't fast forward to the interesting bit, and if you find after the introduction that the talk won't be worth your time, it's usually considered quite rude to walk out at that point. As is browsing the web while waiting for the part that interests you.

Some may argue that the table of contents makes the slide stack more readable in stand-alone form. I don't buy that - slides are part of the presentation and I don't think any consideration should be given to how useful they might be without the accompanying speech. It's 2012 after all and making at least an audio recording with cheap equipment shouldn't be science fiction anymore.

Posted by Tomaž | Categories: Ideas | Comments »

On black magic

27.09.2011 17:42

A while ago, during a lunch break at a BarCamp, I sat at a table with some of my colleagues from Zemanta. There was an iPhone 4 on the table and the conversation touched on its infamous antenna issue. Gašper said "You know, I heard antenna design is kind of a black magic" to which I replied that somewhere an RF engineer is probably saying to her colleagues "You know, I heard software development is kind of a black magic".

While I was only half serious at the time, it did get me thinking. I believe that design in all fields of engineering has parts that can be calculated analytically - parts where the laws of nature, either directly or through simplification and abstraction, permit parameters to be calculated with formulas and algorithms to some useful engineering precision. There are also other parts where the only solution is an experiment. Those are the parts that involve difficult-to-predict parameters, chaotic systems or simply systems so complex that an analytical solution either doesn't exist or costs more to obtain than doing a series of experiments that will lead to the same result.

I understand black magic as a problem that favors an extremely experiment-oriented approach. A problem where a solution can only be obtained through a heuristic process. One that requires the intuition of a designer who has over the years accumulated and internalized a large number of failed and successful experiments. A designer with enough experience that when presented with a problem he can intuitively recall what approach worked well in a similar case, even if he can't explain why.

The two approaches, the purely analytical and the purely experimental, overlap to a large degree. For example, when designing analog circuits you can go pretty far calculating exact values down to the last resistor in the system, accounting for stray capacitances and so on. Or you can take a more experimental approach: calculate some basic parameters with wide enough error margins, build a prototype circuit or a simulation and tweak it until it works to your satisfaction. I would argue that the sweet spot lies somewhere between both extremes, both in terms of least effort and quality of the solution. Of course, hitting that sweet spot, knowing when you have gone too deep into theory, is also where intuition and experience come into play. It's a different kind of intuition though and arguably one that is more broadly applicable than the black magic one mentioned above.

Moving work from design to implementation.

Original by Paul A. Hoadley CC BY-SA 2.5

It's quite the same with software engineering. Just replace pages upon pages of differential equations with design documentation in a waterfall model. Or incremental building and optimization with the coveted agile approach.

In fact, my growing dissatisfaction with the software development world is that the mindset is moving ever more towards extremely experimental design. This is most apparent in the fresh field of web application programming. There is little to no strict documentation for much of the modern frameworks that hold the web together. The few well defined interfaces have specifications that are too complex to fully understand and are riddled with implementation bugs. It's a field that favors the magicians who have enough mileage to recite individual web browser CSS bugs by heart in the middle of the night, ordered backwards by issue number. With continuous deployment and the increasing hatred of version numbers and release cycles, this bias is only growing more prevalent throughout the software industry.

It's an interesting contrast to the inherently analytical nature of computer programs and it certainly doesn't look like the fictional RF engineer's statement would be any less well grounded than my colleague's.

Posted by Tomaž | Categories: Ideas | Comments »

The most average blog

17.06.2011 21:49

Here is another interesting result extracted from the dataset of 150,000 blog screenshots I mentioned in my previous post. I summed the pixel values of all images and created the screenshot of an average blog:

Browser window averaged over 150 thousand blogs

Actually I made this on a whim after remembering a beautiful average face that decorated a NewScientist cover a while back. It took only around 40 lines of Python code using NumPy and the Python Imaging Library, and a few hours of processing time. I wouldn't say the result is cover-page material, but it is interesting nonetheless.
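
The core of it is short enough to sketch from memory. Something along these lines (the screenshot directory is hypothetical and all images are assumed to have the same dimensions) accumulates the pixel values and writes out the mean image:

# Average a directory of equally-sized screenshots into one image.
# The path is hypothetical; this assumes all screenshots share dimensions.
import glob
import numpy as np
from PIL import Image

paths = glob.glob('screenshots/*.png')

total = None
for path in paths:
    pixels = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64)
    total = pixels if total is None else total + pixels

average = (total / len(paths)).astype(np.uint8)
Image.fromarray(average).save('average_blog.png')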

I guess everyone can draw their own conclusions from it. The most prominent feature is the Firefox notification bar, which is an artifact of my screenshotting method - the browser I used didn't have Adobe Flash installed. There are methods suggested in the comments to my original post on how to take web page screenshots properly, which I will definitely use should I want to repeat a survey like this.

In hindsight, this bar might have affected the HSV histograms a bit. It is quite possible that on pages with a patterned background the yellow background of the notification bar would be picked up as the dominant color on the page. However I think the effect isn't significant, since this would have resulted in a single-color spike on the dominant color histogram in the yellow part and the spike observed there covers at least two histogram bars.

Posted by Tomaž | Categories: Ideas | Comments »

United colors of the blogosphere

10.06.2011 21:57

Several months ago we had a discussion in the office about the icons that Zemanta automatically adds to the footer of blog posts that contain suggested content. The conversation mostly revolved around how aesthetically pleasing they are when combined with the various web site designs out there.

What bothered me was that most of the arguments there were based on guesses and anecdotal evidence. It made me curious about the actual prevailing colors used on web sites out there. So I dumped the list of blogs Zemanta knows about, threw together a bunch of really simple shell scripts and let a machine crawl the blogs around the world. Of course it wasn't that simple, and it wasted a week making screenshots of a Firefox error window before I noticed and fixed the bug. The whole machinery grew to be pretty complex towards the end, mostly because it turns out that modern desktop software just isn't up to such a task (and I refused to go through the process of embedding an HTML rendering engine into some custom software). When you are visiting tens of thousands of pages, a browser instance is good for at best one page load and an X server instance survives maybe a thousand browser restarts.

Collage of screen shots of a few blogs.

After around two months and a bit over 150,000 visited blogs I ended up with 50 GB of screen shots, which hopefully make a representative sample of the world's blogger population.

So far I have extracted two values from each of those files: the average color (the mean red, green and blue values for each page) and the dominant color (the red, green and blue values of the color that is present in the most pixels on the page). The idea is that the dominant color should generally be equal to the background color (except for pages that use a patterned background), while the average color is also affected by the content of the page.
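
For reference, here is a minimal sketch of how those two values can be pulled out of a single screenshot with NumPy and PIL (not the exact scripts used for the survey). The dominant color here is simply the most frequent exact RGB triple, which is also what makes patterned backgrounds problematic:

# Average and dominant color of one screenshot, plus their HSV equivalents.
import colorsys
import numpy as np
from PIL import Image

def to_hsv(rgb):
    r, g, b = np.asarray(rgb, dtype=float) / 255.0
    return colorsys.rgb_to_hsv(r, g, b)

def page_colors(path):
    pixels = np.asarray(Image.open(path).convert('RGB')).reshape(-1, 3)

    # Average color: mean red, green and blue over all pixels.
    average = pixels.mean(axis=0)

    # Dominant color: the exact RGB triple shared by the most pixels.
    triples, counts = np.unique(pixels, axis=0, return_counts=True)
    dominant = triples[np.argmax(counts)]

    return to_hsv(average), to_hsv(dominant)

# Example use (hypothetical file name):
# print(page_colors('screenshots/example.png'))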

Here is how the histograms of those values look when converted to the HSV color model. Let's start with the dominant colors:

Histogram of dominant color hue used in blog themes.

You can see pretty well defined peaks around orange and blue, and a curious sharp peak around green. Note that this graph only shows hue, so the orange peak also includes pages with, for instance, a light brown background.

I excluded pages where the dominant color had zero saturation (meaning shades of gray from black to white) and as such had an undefined hue.

Histogram of dominant color saturation used in blog themes.

The saturation histogram is weighted heavily towards unsaturated colors (note that the peak at zero is much higher and is cut off in this picture). This is pretty reasonable. Saturated backgrounds are a bad choice for blogs, which mainly publish written content and should focus on the legibility of the text.

Histogram of dominant color value used in blog themes.

Again this result is pretty much what I expected. Peaks at very light colors and very dark ones. Backgrounds in the middle of the scale don't leave much space for text contrast.

Moving on to histograms of average colors:

Histogram of average hues used in blog themes.

Average color hues are pretty much equivalent to dominant color hues, which increases my confidence in these distributions. Still we have high peaks around orange and blue, although they are a bit more spread out. That is expected, since average colors are affected by content on the site and different blogs using the same theme but publishing different content will have a slightly different average color.

Histogram of average color saturation used in blog themes.

Again, weighted strongly towards unsaturated colors.

Histogram of average color value used in blog themes.

Now this is interesting. The peak around black has disappeared completely! This suggests that the black peak in dominant colors was an artifact, probably due to the black color of the text being dominant over any single background color (say in a patterned background). The white peak is again very spread out, probably due to light background colors mixing with dark text in the foreground.

The conclusions at this point would be that light backgrounds are in the majority over dark backgrounds, the most popular colors are based on orange and blue, and most bloggers have the common sense to use desaturated colors in their designs.

I'm sure there are loads of other interesting metrics that can be extracted from this dataset, so any suggestions and comments are welcome as always. I also spent this Zemanta Hack Day working on a fancy interactive visualization, which will be a subject of a future blog post.

Posted by Tomaž | Categories: Ideas | Comments »

Aiming for the center

11.04.2011 21:08

Software that exposes its user interface over the web offers limitless opportunities for monitoring the behavior of the people using it. You can literally watch each move of the mouse and each key press. And it's so tempting to gather all this information and try to make sense out of it that some are pushing the idea of data-centered design, where each decision must come from a bump in a graph or two. From my point of view as an engineer (UX is not in my verbal vocabulary) I find that kind of approach shortsighted at best.

You are invariably optimizing for an average user, even if you divide people into neat little numbered bins. Consider an analogy from the physical world: the center of gravity is the mean location of mass in an object. In many objects, like in the ring pictured here, that location is itself devoid of any mass. Optimizing something for the average user can mean you simply make no one happy and everyone equally miserable.

Ring with a center of gravity mark

A/B testing and similar ideas are just hill-climbing optimization algorithms for some variable that ultimately depends on processes in a wet human brain. Such simple approaches fall short even in the real world, where the laws of physics are infinitely less complex. How can they be the main guide for development in user interface design, where a linear law is replaced by a chaotic mess of firing neurons? I don't doubt that some time in the future we'll be able to understand and model it (and that model certainly won't be a two dimensional graph with vaguely defined axes). Until then it will take a human to understand a human.

Some may argue that at large scales groups of people are surprisingly predictable. My answer is that it's also well known that entropy increases in the universe. That doesn't bring you any closer to designing an internal combustion engine.

I'll stop myself here and won't go into the impossibility of doing controlled experiments with people or the moral problem I see in choosing profit as the universal optimization variable for all design questions. I'll offend enough liberal arts students as it is.

Measurements are important in evaluation, but anything involving a human in the loop is good to one significant figure at best. Modifying things based on a second-decimal increase in some metric instead of going with the gut feeling of what is best just takes everyone a little further from happiness.

Even if you reap mad profits from that bouncing blinking purchase button.

Posted by Tomaž | Categories: Ideas | Comments »

Treacherous waters

09.02.2011 21:44

These days (or rather months) my daily work at Zemanta often takes me to parts of the web I would not normally visit. Its shadier parts, so to speak. And it turns out that those are surprisingly crowded these days.

I'm sure you've been there. Probably when you were searching for some useful piece of information and such sites cluttered the top of the result list and you had to sort through piles and piles of fluff before you found what you were looking for. Or maybe it was recommended by a friend through one of the many channels such recommendations travel in the age of social web. Perhaps you even had to deal with a bug report because a piece of your web-facing software, while compliant to all relevant standards, didn't perform up to some user's satisfaction when dealing with such a web site.

Dark tunnel that is HTML 5

Imagine for a second the stereotypical web site of this class: a fixed width design, unreadably small gray type on a white background. A top-left logo in saturated colors and gray gradients, courtesy of web-two-point-oh. Probably the definitive destination for some wildly popular topic right off the first page of your typical yellow press (celebrities, health, cars, shopping) or emerging interests (say Android development). At most two paragraphs of actual content and the rest filled with ads, user comments and everything in between. And of course at least 10 ways to share this page on all the social networks you know about plus 10 more that you don't.

Considering that serving those abominations of the web is the only thing the companies behind them do, they are surprisingly incompetent at it. Pages won't validate or will throw a hundred Javascript errors from tens of different scripts that load behind the curtains. The little content there is was scraped from Wikipedia, or it looks like someone from a less fortunate country was hired to copy-and-paste a few statements on a prescribed topic from all around the Internet. Everything under a CC license is considered free-for-all (but don't you dare break their lengthy personal-use-only terms of use!). Nobody cared about the fact that there is anything other than ASCII encoding, or the subtleties of XML parsing, or for that matter even that the description of a software product and an image that shows a porn star of the same name do not refer to the same thing. As long as half a dopamine-starved human brain is able to decode it, it's good enough.

What's puzzling at first is that such sites seem to be getting a shocking amount of traffic (and probably revenue as well). Of course, the opinions about the quality differ. Even among my colleagues some consider such sites valuable destinations. They have comment buttons and you can share them on Twitter! They're way more fun to visit than some tired old Wikipedia that doesn't even have Facebook integration. Regardless of the fact that any user-contributed discussion is as devoid of actual content as the site itself.

What I see in such pages is an evolution of link farming. Social farming if you will. Search engines have gotten better at detecting content that has just been blatantly and automatically copied from somewhere. So an up-to-date spammer, er, vertical influencer has switched from a website copying bot to a few mechanical turks producing syntactically unique but semantically carbon-copied content. The network effect of modern social networks brings more and more people to the site, producing worthless comments that again give the appearance of a respectable site. At this point they are trying to trick an algorithm by introducing living people into the content copying process.

Therefore you can hear a lot about how the traditional search engines will in time be completely replaced by your social network, introducing wetware on the other side as well. The idea is that natural language processing and information retrieval won't be able to distinguish between what you would consider a reputable site and a link-farmed site that approximately copied content from that reputable site. But your friends in a social network will. First because they are (hopefully) human and can understand things AI can't and second because you share their interests and trust what they trust. They can therefore in theory push more useful information in your direction than some algorithm, even when it is intimately familiar with your click- and search history.

However, I think the sites I described above are a perfect example of why this scheme won't reduce the amount of fluff you will need to go through before getting to the information you want. At this point you don't even have to fool search engines anymore, because once you've got people clicking those little "share" buttons they will bring more visitors to your site and push your coupon codes and endorsements and whatnot regardless. In the end it's just as easy to subvert the human social network into passing around unsolicited advertisements as it is a software algorithm. You just need a different kind of engineering.

Posted by Tomaž | Categories: Ideas | Comments »