
A Fuzzy Border: Malvertising

Posted on February 1, 2013 - 10:12 by imeister

Earlier this week, my colleague Caitlin blogged at some length about the way Web users perceive malware warnings, and how those warnings are couched in the media. I wholeheartedly concur with her that good Internet hygiene dictates that, when in doubt, you should respect your search provider’s and/or browser vendor’s malware warnings. It’s interesting to feel out some of the contours of that doubt by looking at one of the more ‘controversial’ blacklisting practices out there today: blacklisting websites that have been shown to serve a malvertisement. (A malvertisement here means a Web advertisement that contains malicious code, with our standard definition of ‘badware’ supplying what counts as ‘malice’.)

Most popular websites incorporate code received from advertising networks. From time to time, these advertising networks find themselves compromised, whether by an enterprising attacker or by incompletely vetted ad sourcing. Websites blacklisted for malvertising have not themselves been compromised; instead, they incorporate third-party code that has been flagged as suspicious. This, as the argument runs, makes them guilty of a venial sin rather than a mortal one, and means they do not merit the sternly worded warnings shown to their visitors.

There are some appealing elements to this argument: that sites should not be punished for displaying, in good faith, content that is not directly of their making; that modern ad networks so carefully tailor and so frequently rotate the content they show to consumers that the infection surface created by a single bad ad is small enough to be outweighed by the reputational damage of a blacklisting; and that it is sufficient simply to blacklist the URLs or domains associated with the advertiser, not the site on which the adverts actually run.

For the reasons Caitlin so eloquently stated, I don’t think this argument stands up to scrutiny as a matter of public policy, in large part because it is difficult for the automated systems that support and define many large blacklisting efforts to weigh, in a content-neutral way, the equities of displaying a warning. But there is something to the idea of blacklist operators trying to distinguish programmatically between maliciousness that is resident on a site itself and maliciousness that arrives as a visitor from an ad network. Another way to frame the question: are there characteristics of advertisements that deliver malware, other than the observed delivery of malware itself, that clearly distinguish them from ordinary ad content?

The distinction isn’t quite as clear cut as it may appear. In the course of testing websites as part of our reviews process, we frequently find code that is difficult to class as malware-distributing or not. (Bear in mind that most modern malware delivery code is heavily obfuscated and is designed to evade execution in a controlled environment, so sometimes the puzzling code is all we have to go on.) Caitlin referred me to this code sample on Pastebin, which may have been the code flagged as malicious by an automated malware detection system, and digging into it, it’s easy to see why. Take just two examples:

line 4: var m3_r = Math.floor(Math.random()*99999999999); 

Using Math.random to generate a value and stash it in a variable with a junk name is a very common pattern in malware distribution code.
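For contrast, the same pattern shows up constantly in perfectly ordinary ad tags, where the random value serves as a cache-buster. The sketch below is hypothetical and not drawn from the flagged script; the variable names and URL are illustrative only.

// Hypothetical sketch: the Math.random pattern as it appears in a typical,
// benign ad tag. The random value is a cache-buster that forces the browser
// (and any intermediate caches) to request a fresh ad on every page view.
var cacheBuster = Math.floor(Math.random() * 99999999999);
var adUrl = 'http://ads.example.com/serve?slot=leaderboard&cb=' + cacheBuster;
document.write('<img src="' + adUrl + '" width="728" height="90" alt="">');

The code is identical in spirit to the flagged line; only the surrounding context tells you whether the randomness is busting a cache or randomizing a payload.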

line 6: document.write ("<scr"+"ipt type='text/javascript 

This looks like an attempt to keep the script invocation from being detected by the browser, perhaps to confuse an ad-blocking extension. That is a goal advertisers and malware distributors frequently share: both want their content to be consumed, and in neither case is the visitor seeking out that content as a primary goal.
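To see why the trick is effective, consider a simplistic scanner that searches the page source for the literal string '<script'. The sketch below is hypothetical; it illustrates only the string-splitting idea and is not the detection logic of any particular blacklist or ad blocker.

// Hypothetical sketch: splitting "<script" hides the tag from a naive
// source-level string match. The raw source never contains the literal tag,
// but the browser receives it once the pieces are concatenated at runtime.
var rawSource = 'document.write("<scr" + "ipt src=\'//ads.example.com/tag.js\'></scr" + "ipt>");';
console.log(rawSource.indexOf("<script") !== -1);   // false: a naive scanner misses it

var assembled = "<scr" + "ipt src='//ads.example.com/tag.js'></scr" + "ipt>";
console.log(assembled.indexOf("<script") !== -1);   // true: this is what the page actually executes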

None of this is to cast aspersions on the motivations of the programmers who wrote that ad code; I suspect this script serves a legitimate and legal function, which malware distribution never does. It does underscore, however, that the boundary between legal, legitimate, and potentially unwanted code on the one hand, and illegal, illegitimate, and certainly unwanted code on the other, is not easily drawn at human scale, much less at Web scale. Given the very real threat malware distribution poses to the health of the Internet and its users, and the difficulty of divining a script’s intent from ‘obvious’ characteristics in its code, it is an understandable choice for blacklist operators to alert users about a site that has actually distributed malware and still carries suspicious-looking code.

At present, the conventional wisdom (for good reason) is that website security remains the responsibility of site owners, including responsibility for advertising and other third-party content. But there is an opportunity for those with much more technical savvy (the ad networks) to take steps to assist those with much less. It would be a very positive development for the Web ecosystem as a whole if advertisers took voluntary steps to disclose compromises in their networks, and wrote their code to a set of identifiable, specific standards that would help it steer clear of these gray areas.
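As a sketch of what such a standard might look like in practice, an ad tag could load its script through an explicitly created, labeled element rather than through obfuscated document.write calls. Everything below is hypothetical: the attribute names are not part of any existing specification, just an illustration of ad code written so that an automated reviewer can recognize it for what it is.

// Hypothetical sketch of a more transparent ad-loading convention: no string
// splitting, no document.write, and explicit machine-readable labels saying
// who served the ad and where to report a bad creative. The data-* attribute
// names are illustrative, not an existing standard.
var ad = document.createElement('script');
ad.src = 'https://ads.example.com/tag.js';
ad.async = true;
ad.setAttribute('data-ad-network', 'ExampleAds');
ad.setAttribute('data-ad-contact', 'abuse@example.com');
(document.head || document.getElementsByTagName('head')[0]).appendChild(ad);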
