Some SPF statistics

21.02.2016 19:04

Some people tend their backyard gardens. I host my own mail server. Recently, there has been a push towards more stringent mail server authentication to fight spam and abuse. One of the simple ways of controlling which server is allowed to send mail for a domain is the Sender Policy Framework. Zakir Durumeric explained it nicely in his Neither Snow Nor Rain Nor MITM talk at the 32C3.
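
To illustrate, an SPF policy is just a specially formatted TXT record published in the sender domain's DNS; a typical record like "v=spf1 mx -all" permits only the domain's MX hosts to send its mail and tells receivers to hard-fail everything else. A minimal sketch of looking one up in Python, assuming the dnspython package (example.com is a placeholder, not a real policy):

    # Look up a domain's SPF policy: it is just a TXT record in DNS.
    # Assumes the dnspython package is installed; example.com is a placeholder.
    import dns.resolver

    for rdata in dns.resolver.resolve('example.com', 'TXT'):
        txt = b''.join(rdata.strings).decode()
        if txt.startswith('v=spf1'):
            print(txt)  # e.g. "v=spf1 mx -all"

The receiving server fetches this record for the envelope sender's domain and checks whether the connecting IP address matches one of the listed mechanisms.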

The push for more authentication seems to be led by Google. That is not surprising: Google Mail is e-mail for most people nowadays, and if anyone can push for changes in the infrastructure, it's them. A while ago Google published some statistics on the adoption of different standards for their inbound mail. Just recently they also added visible warnings for their users when a received message was not sent from an authenticated server. How much an average user can do about such a warning (except perhaps pressure their correspondents to start using Google Mail) seems questionable, though.

Anyway, I implemented an SPF check for inbound mail on my server some time ago. I never explicitly rejected mail based on it, however; my MTA just adds a header to incoming messages. My thinking was that the added header would be picked up by the Bayesian spam filter if it ever became a significant signal. After reading about Google's efforts I was wondering what the situation regarding SPF checks looks like for me. Obviously, I see a very different sample of the world's e-mail traffic than Google's servers do.
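
My setup uses spf-tools-perl, but the core of such a check is easy to sketch in Python with the pyspf module (the addresses below are documentation placeholders, not my real configuration):

    # Evaluate the sender's SPF policy and format a header recording the result.
    # Assumes the pyspf module is installed; all addresses are placeholders.
    import spf

    def spf_header(client_ip, mail_from, helo):
        # check2() implements the RFC 4408 check_host() function and returns
        # a (result, explanation) tuple, e.g. ('pass', 'sender SPF authorized').
        result, explanation = spf.check2(i=client_ip, s=mail_from, h=helo)
        return 'Received-SPF: %s (%s)' % (result, explanation)

    print(spf_header('192.0.2.1', 'alice@example.com', 'mail.example.com'))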

For this experiment I took a three-month sample of inbound e-mail received by my server between November 2015 and January 2016. The mail was classified by Bogofilter into spam and non-spam, mostly based on textual content. SPF records were evaluated by spf-tools-perl upon reception. An explanation of the result codes (what softfail, permerror, etc. mean) is here.
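
Gathering the statistics then amounts to walking both mail archives and tallying the result keyword from the Received-SPF header my MTA added, roughly like this sketch (the maildir paths are hypothetical):

    # Tally SPF results recorded in the Received-SPF header for two maildirs.
    # Paths are hypothetical; messages without the header count as 'none'.
    import mailbox
    from collections import Counter

    for label, path in [('non-spam', '/home/user/Maildir'),
                        ('spam', '/home/user/Maildir/.Spam')]:
        counts = Counter()
        for msg in mailbox.Maildir(path, create=False):
            value = msg.get('Received-SPF', 'none')
            counts[value.split()[0].lower()] += 1  # keep just the result keyword
        print(label, dict(counts))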

[Figure: SPF evaluation results for non-spam messages.]

[Figure: SPF evaluation results for spam messages.]

As you can see, the situation in this little corner of the Internet is much less optimistic than the 95.3% SPF adoption rate that Google sees. More than half of the mail I receive has no SPF record at all. A successful SPF validation doesn't look like a particularly strong signal for spam filtering either, with 22% of spam messages passing the check.

It's nice that I saw no hard SPF failures for non-spam mail. I checked my inbox for messages with softfails and permerrors: some were borderline-spammy, and some were legitimate and appeared to come from senders with misconfigured SPF records.

Another interesting point I noticed is that some sneaky spam comes with its own headers claiming SPF evaluation. This can be a problem if the MTA just adds another Received-SPF header at the bottom and doesn't remove the existing ones. If you then have a simple filter on Received-SPF: pass somewhere later in the pipeline, it will likely hit the spammer's header first instead of the one your MTA added.
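
A minimal defensive sketch using Python's standard email module: delete any Received-SPF headers that arrived with the message before recording your own result (the header values here are made up):

    # Strip pre-existing Received-SPF headers so a later, naive filter
    # can only ever see the value our own MTA recorded.
    import email

    def add_spf_header(raw_message, result):
        msg = email.message_from_string(raw_message)
        del msg['Received-SPF']       # deletes all existing copies, if any
        msg['Received-SPF'] = result  # append our own, trusted value
        return msg.as_string()

    forged = 'Received-SPF: pass (forged by the sender)\nSubject: hi\n\nbody\n'
    print(add_spf_header(forged, 'softfail (transitioning)'))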

Posted by Tomaž | Categories: Life
