Visualizing eight years of independent reviews

Posted on August 14, 2015 - 12:24 by ccondon

StopBadware has been performing independent reviews of websites blacklisted by our data providers for more than eight years. As we've explained in the past, a manual review by our staff is not always necessary: if a webmaster requests a StopBadware review of a site on Google's Safe Browsing blacklist, the first step in our review process is an automated request for Google to rescan the site in search of malicious code. If Google's automated systems don't find anything suspicious, the site comes off Google's blacklist without our ever having to touch it. When Google still finds malware, or when one of our other data providers is the blacklisting party, a member of our website testing team uses a variety of tools to scour the site for malicious code and other bad behavior.
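
As a rough illustration of that triage flow, here's a minimal Python sketch. Every function name below is a hypothetical stand-in for illustration; none of it is StopBadware's actual tooling or a real Google API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the review triage described above. These
# helpers are illustrative stand-ins, not real StopBadware or Google APIs.

@dataclass
class ScanResult:
    clean: bool

def request_google_rescan(site: str) -> ScanResult:
    # Stand-in for the automated request asking Google to rescan
    # the site for malicious code.
    return ScanResult(clean=True)

def manual_review(site: str) -> str:
    # Stand-in for a staff tester scouring the site with a variety
    # of tools for malicious code and other bad behavior.
    return "manual_review_completed"

def handle_review_request(site: str, provider: str) -> str:
    if provider == "google":
        result = request_google_rescan(site)
        if result.clean:
            # Google found nothing suspicious: the site comes off the
            # blacklist with no human involvement.
            return "delisted_automatically"
    # Google still sees malware, or another data provider
    # (e.g., ThreatTrack Security, NSFocus) did the blacklisting.
    return manual_review(site)

print(handle_review_request("http://example.com", "google"))
```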

As our home page proclaims in red, we've helped de-blacklist more than 171,000 websites since 2007. Before we shutter operations as an independent nonprofit next month, we want to give our community a better idea of what goes into that number. 

Since we started collaborating with Google, and later ThreatTrack Security and NSFocus, we've performed 53,167 manual reviews. We've also processed an additional 188,149 review requests that were resolved automatically thanks to our automated integration with Google. Those aren't all unique requests, so simply adding the two figures together wouldn't yield an accurate total. Here's what all those review requests look like over time:

Why the decline? 

You'll undoubtedly notice that we received many more review requests early on than we do today. Better security awareness, the wide availability of relatively low-cost security tools, and default use of services like Google Webmaster Tools have all contributed to the decline in review requests. We also have better ways of detecting and weeding out abusive requests than we used to.

Unfortunately, another contributor to the decline in review requests is malware distributors' widespread use of stealthier, more targeted methods like malvertising. When a resource is compromised only very briefly (e.g., through an infected ad network), even when blacklist operators are able to detect the infection and warn users away, the compromise is often resolved too quickly for StopBadware's Clearinghouse to reflect that the resource was ever blacklisted. Generally speaking, if something is blacklisted for fewer than six hours, we won't have a record of it in our Clearinghouse. On the one hand, this is good news: we want blacklists to operate as narrowly as possible, maximizing user protection while minimizing the penalty to site owners. On the other hand, it's bad news: malicious actors can effectively use powerful technologies to spread malware in ways that are difficult to detect and counter.
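
To make the six-hour cutoff concrete, here's a minimal sketch; the event records and field names are invented for illustration and don't reflect the Clearinghouse's actual schema:

```python
from datetime import datetime, timedelta

# Illustration of the six-hour rule described above, using made-up
# blacklist events as (url, listed_at, delisted_at) tuples.
RETENTION_THRESHOLD = timedelta(hours=6)

events = [
    # A brief malvertising compromise, resolved in two hours: never recorded.
    ("http://example.com/landing", datetime(2015, 8, 1, 9, 0), datetime(2015, 8, 1, 11, 0)),
    # A day-long compromise: long enough to appear in the Clearinghouse.
    ("http://example.org/infected", datetime(2015, 8, 1, 9, 0), datetime(2015, 8, 2, 9, 0)),
]

recorded = [
    (url, delisted - listed)
    for url, listed, delisted in events
    if delisted - listed >= RETENTION_THRESHOLD
]

for url, duration in recorded:
    print(url, duration)  # only the 24-hour listing survives
```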

What's not included in this data? 

What you don't see in this chart are the tens of thousands of URLs we've reviewed in bulk for web hosting providers, AS operators, and other network providers over the years. We've worked with everyone from dynamic DNS companies and bulk subdomain providers to small resellers and abuse departments at big companies to clean up malicious resources on their networks and help remove them from blacklists. The majority of this process is manual, and because it's initiated based on trust and human communication instead of by clicking a button, bulk review data isn't reflected in our public review data.

StopBadware's review process will continue to operate normally during and after our operations transfer to our research team at the University of Tulsa. Thanks to our research scientist, Marie Vasek, for putting this data together!

Community news and analysis: May 2015

Posted on June 9, 2015 - 12:54 by ccondon

Featured news

  • How effective are the security questions—and answers—used to protect sensitive accounts and information? Not very, according to new Google research. Read about how easy it is for hackers and bots to guess answers to common questions, and what users can do about it.
  • Google also published research last month on the ad injection economy (key findings here, full report here).
  • Mozilla sent a communication to CAs with root certificates included in its program, asking them, in the best interest of users, to respond to five action items. Mozilla has stated it intends to publish the responses this month.
  • WordPress users: The Automattic team released WordPress 4.2.2, featuring critical security fixes, the first week of May. Please make sure you’re updated!
  • DomainTools put together their first report profiling malicious domains by delving into domain registration attributes and overlaying these with data on malicious activity. Their summary links to the full report.

Malware news + analysis

  • ESET: Whitepaper on CPL malware in Brazil
  • Sophos: “PolloCrypt” ransomware sounds as ridiculous as its mascots look—but it’s a real thing targeting Aussie users. Also from Sophos: Can Rombertik malware really destroy your computer? Nope.
  • Fortinet analyses of Rombertik malware and Tinba botnet malware
  • Sucuri: Hacked websites redirect to...Bitcoin?

Other security news

  • SiteLock: Who else is reading your email? A guide to PGP encryption
  • Fortinet: Should new WHO disease-naming guidelines also be applied to malware?