Channel: Terry Zink: Security Talk

The Top Spamming Countries


A little over a week ago, Sophos published a blog post about the countries that sent the most spam in the third quarter of 2012. They found that India was number one on the list with 16% of the spam, followed by Italy at number two with 9% and the US at number three with 7%.

As usual, I’m a little late in publishing my own analysis. But mine is different: rather than looking at the countries that send the most spam, I break it down by the countries that send the most spam as a proportion of the total mail they send, as well as how much total spam they send to us. The US is the country from which we receive the most spam, but its traffic isn’t nearly as spammy as Brazil’s. My numbers reflect this.

A couple of things, though:

  1. Sophos’s numbers and ordering are not going to be the same as mine because we see different email streams. Our customer base skews heavier towards the United States, Canada and the UK. Our corporate customers are also different from theirs. Thus, the trends will be similar but not identical.

  2. My numbers do not account for all of our IP blocklist rejections. We do not keep individual statistics on those. My belief is that they would be pretty much the same as what we catch in our content filter, but I have not verified this.

In my analysis, I have decided to split it into the top five “good” countries that send us spam, and the top five “bad” countries that send us spam.

The “good” countries all have a total spam percentage (after IP blocks) of less than 10% for the 3rd quarter of 2012. But instead of showing you a flat numbers chart, I’m going to show you an animation. These countries are the US (US), Britain (GB), France (FR), Singapore (SG), and Japan (JP). I’m surprised that Singapore is on that list, but it is what it is. How do these numbers change over time? Take a look at the motion chart* below:

Top Spamming “Good” Countries Spam Percentage; 3Q, 2012

 

Note: when I post this, the chart reverts to its default state instead of the view I describe below. I apologize profusely for this, but I can’t figure out how to “anchor” these settings. So, for best results, please do the following:

  1. Change the left, vertical axis to Total
  2. Change the bottom, horizontal axis to Spam Percentage
  3. Change the Color in the top right to Unique Colors
  4. Change the Size in the top right(ish) to Total
  5. Select all the checkboxes
  6. Unselect Trails
  7. To slow down the animation, use the second, smaller triangle beside the Play button; slide it down a little to make the chart easier to watch.

In the chart above, the size of the dot corresponds to how much total mail it sent. The total mail is also the vertical axis. The horizontal axis is the percent of mail that was marked as spam for that particular day. The further to the right the dot is, the more spammy it was on that date. Countries want to be as far to the left as possible.

What about the “bad” countries? To get these, I took the top countries that sent us more than 20% spam over the same period. These are Brazil (BR), China (CN), India (IN), Italy (IT), and Turkey (TR). How do their numbers change over time? Let’s take a look:

Top Spamming “Bad” Countries Spam Percentage; 3Q, 2012

Remember, the further right on the horizontal axis, the more spammy the country was on that day. The vertical axis and the size of the dot (which use the same scale) show how much total mail the country sent that day.

When we compare our numbers to Sophos’s, India, Italy, Brazil, China and Turkey all make our list, though not in quite the same order. This shows that the trends are similar although not necessarily identical. I prefer breaking it out into “good” and “bad” countries so as to differentiate the rates of spam, not just pure volume.

And that’s the comparison of the top spamming countries between Microsoft Forefront Online and Sophos.


* I credit my manager for giving me the idea to organize and format data in this way. I think it’s a lot better than showing stuff in a static chart. Instead, you get to view things over time and also show more than two dimensions. The above chart shows time, spam percent and total mail.


Is the term “cyberwarfare” overstating the case?


At the Virus Bulletin conference last month, Andrew Lee from ESET gave a talk entitled “Cyberwar: Reality or Weapon of Mass Distraction?”

In it, Lee talked about how the term “cyberwar” is thrown around a lot these days. He disagreed with the use of the term because it uses inflationary language and overstates the case; today’s “cyberwar” is not the same as a conventional war. We read in the newspapers things like “Stuxnet is the new face of 21st-century warfare: invisible, anonymous, and devastating” and “Very respected scientists have compared nuclear arms race to cyber arms race.”

Really? Is it really a cyber arms race?

The path to cyberwar started with Kosovo in the late 1990s. It was the first war where information and disinformation over the Internet became very important, and NATO forces were often fooled by it. They were so reliant on aerial surveillance that the Serbs put up fake tanks and fake heat sources to divert bombing campaigns.

More instances:

  • In 2007, Estonia came under attack, although later analysis showed it to be more of a cyber riot by patriotic hackers (the Russian Nashi youth group).
  • A similar instance occurred in Georgia in 2008, and again in Kyrgyzstan.
  • However, Stuxnet, discovered in Iran in 2010, was the first instance where there was some kind of destructive element to the attacks.
  • China is interesting; although they are building a lot of infrastructure, they are trying to develop by getting secrets from other places.

But are these examples of cyber warfare?

While Stuxnet was called a “Digital Apocalypse,” it was really “just” a denial-of-service attack. What Stuxnet damaged were the centrifuges in Iran’s uranium enrichment program. No people were injured. It was not even close to a digital Hiroshima; the fallout of nuclear weapons is much, much worse than that of cyber weapons. Terms like these seriously devalue what real war looks like. A real act of war has to be violent, purposeful and political. Stuxnet does not meet these criteria.

We in the security industry have been talking for years about viruses that could destroy hard drives. Now that we finally got one, we cry “APT!”

Below is what real warfare looks like:

(The aftermath of Hiroshima, Japan in 1945)


(Fallujah during the War in Iraq)

All of this matters for multiple reasons:

  • Use of resources - Cyberwar isn’t just about malware; it involves the militarization of civilians and civilian resources, and these things may provoke a military response.

  • Politics - Furthermore, there is a possible politicization of public anti-malware efforts (e.g., should US companies issue malware signatures for US gov’t malware?).

  • Special interests - In addition, cyberwar is being defined almost exclusively by and within the civilian sphere. You don’t hear the military talking on and on about cyberwar; they go to great pains to reduce that kind of hype. The term is pushed mostly by those who have a vested interest in selling something to the government or the public.

Who are the possible targets in “cyberwar”?

The US has more to lose than anyone else because of the way its economy is linked to the online world.  If you have the widest attack surface, your opponent’s strength lies in your weakness. People with no reliance on cyber are the biggest threats because they don't need to worry about defense. They also don't worry about the threat of retaliation because they don't care about the loss of human life.

There is also the problem of “attribution pollution.”

What happens when you don't know who the enemy is? Is it civilian? Military? False flag (i.e., a diversion to make it look like it came from someone else)?  Furthermore, there is implausible deniability - if you did it, why would you ever admit it? Unless you are declaring war?

Ultimately, we must reduce the hype and increase our knowledge, and take responsibility for our own cyber hygiene: harden and strengthen defenses, include code review and test processes, educate people to the risks they face but with a practical slant that they can use.

Those are my notes from Lee’s session at VB. I thought it was a good talk with plenty to think about.

How to measure False Positive rates


As someone who is in charge of our spam filtering here in Microsoft Forefront (i.e., I’m on the spam team and one of my tasks is to improve the service, but it’s not me all by myself), there are two critical pieces of information I need:

  1. What’s our spam catch rate?
  2. What’s our false positive rate?

I’ll talk about measuring spam catch rates in a future post. But today, I’d like to look at false positives. How do you measure how much good mail you catch that you should have allowed through?

There are a few ways to do this. Hotmail has a bunch of graders who, every so often, receive a copy of a message from their own mail stream asking “Is this mail spam or non-spam?” The graders then give the verdict and Hotmail compares it to the filter’s actual verdict. If the grader says “non-spam” but Hotmail’s filter said “spam”, then they have a false positive. With a big enough set of graders, this process provides a reasonably effective number.

This doesn’t work in an enterprise environment like the one we filter because (a) professional workers don’t want to be bothered to do this every day, and (b) there are privacy issues. What works in Hotmail doesn’t work for us.

A second way to do it is with an independent test. There are organizations like Virus Bulletin that will test the filtering effectiveness of spam filters. To do this, they run honeypot spam traffic through filters and they also run legitimate mail through the filter. The problem with this is that the mail volumes are not very large and the process requires some manual validation afterwards.

My requirements are that a measurement must:

  1. Be automatic
  2. Be repeatable
  3. Have high volume
  4. Not depend upon examining the mail afterwards

It’s hard to get all of these, especially #3. Numbers only mean something when you have lots of them.

One idea that I have is to use IP whitelists. Every IP on this whitelist is supposed to be a good sender who never sends spam – that’s why they are on the whitelist. If you get mail from this IP and it is marked as spam, then you have a false positive.

To do this, record the spam/non-spam stats for each IP address that sends you email each day, and then take the intersection of those IPs with the ones on the whitelist. If any whitelisted IPs, according to your stats, have messages marked as spam, then those are the FPs. Use that as the FP rate. For example:

Date: Oct 31, 2012

Whitelist_IP_1: Spam 0  Non-spam 68
Whitelist_IP_2: Spam 2  Non-spam 97
Whitelist_IP_3: Spam 38 Non-spam 122
Whitelist_Total: Spam 40 Non-spam 287

FP rate: 13.9%
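The intersection-and-count procedure above can be sketched in a few lines. The IPs and counts below are invented for illustration, and the rate follows the post’s arithmetic of dividing spam verdicts on whitelisted mail by the non-spam verdicts:

```python
# Hypothetical per-IP daily verdict counts: IP -> (spam, non-spam).
daily_stats = {
    "203.0.113.10": (0, 68),
    "203.0.113.20": (2, 97),
    "203.0.113.30": (38, 122),
    "198.51.100.7": (500, 3),   # not on the whitelist, so ignored below
}

whitelist = {"203.0.113.10", "203.0.113.20", "203.0.113.30"}

def whitelist_fp_rate(stats, whitelist):
    """Intersect the day's senders with the whitelist; spam verdicts on
    whitelisted mail are counted as false positives."""
    fp = nonspam = 0
    for ip in stats.keys() & whitelist:
        spam_count, nonspam_count = stats[ip]
        fp += spam_count
        nonspam += nonspam_count
    return fp / nonspam if nonspam else 0.0

print(f"FP rate: {whitelist_fp_rate(daily_stats, whitelist):.1%}")  # FP rate: 13.9%
```

Because both inputs are just sets and counters, this runs unattended against each day’s logs, which is what makes the method automatic and repeatable.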

This approach satisfies all of my requirements:

  1. It can be automated. It is easy to automate the intersection and analysis of two different lists; no human needs to be involved in the process.

  2. It’s repeatable. Simply pull the list whenever you wish, check it against the total stats, and your results pop out. The methodology never changes and is consistent throughout.

  3. It has high volume. Manual analysis limits volume, and it’s hard to generate that much mail by hand. However, if your whitelist is big enough, it sees a statistically significant amount of traffic.

  4. It’s (reasonably) reliable. Since someone else has populated the known whitelist of good senders and is vouching for its cleanliness, there is no need to check afterwards whether the mail really is legitimate or the feed is polluted.

Of course, there are some drawbacks to this approach as well:

  1. Is the list really clean? Even though the IPs are on the whitelist, are they actually sending good mail? What if the list is polluted?

    The way to get around this is to pull multiple different independently generated whitelists. If one of the lists is an outlier, then the list can be weighted differently or excluded altogether. But if multiple lists are saying the same thing, then you can be reasonably sure that the data you are gathering is reflective of reality.

  2. Are the lists representative of real life? Not every good piece of email comes from a good IP address. In fact, there are lots of IP addresses that cannot be whitelisted that send out good mail.

    This is alleviated by taking IP lists that are big enough to generate mail in large volumes, as well as multiple lists that are populated independently. If you have enough data points, they average out. Outliers can be excluded or weighted lower.

Using whitelists in this manner is a quick-and-dirty way to measure the effectiveness of a spam filter (assuming that you don’t give mail on the whitelist a free pass to the inbox; you should filter it in parallel). It’s not a perfect way to do it, but it’s fast and efficient. For internal purposes, it’s probably the best method I can think up for a ballpark estimate of how good a filter is.

U.S. potentially looking to establish a cyber “army” national reserve


<My fist slams down on the desk in a satisfied act of self-congratulations>

I knew it!

A couple of weeks ago on my blog, I wrote a post entitled Will cyberwar create new rules of engagement? In it, I mused about whether the government would ever draft people from the civilian space into the military if they had skills in cyber security and hacking. As technology evolves and the military requires skillsets that are in short supply, it might need a new class of soldier. Rather than drafting people to fight with conventional weapons, it may draft people to fight with cyber weapons. But to do that, you need to start with people who already have some skills.

That was just my random speculation.

Well, today Reuters published the following article: US seeks patriotic computer geeks for help in cyber crisis. It’s not about the military (or DHS) setting up a draft, but instead is about setting up a “Cyber Reserve.” I was wrong about a draft, but not too far off (I should have mused about the setting up of a Reserve; why didn’t I? I could have claimed 100% foresight!).

Anyhow, from the article:

The Department of Homeland Security is considering setting up a "Cyber Reserve" of computer security experts who could be called upon in the event of a crippling cyber attack.

The idea came from a task force the agency set up to address what has long been a weak spot - recruiting and retaining skilled cyber professionals who feel they can get better jobs and earn higher salaries, in the private sector.



[The DHS] said they hope to have a working model for a Cyber Reserve within a year, with the first members drawn from retired government employees now working for private companies. The reserve corps might later look to experts outside of government.

Experts outside the government? That’s regular people who work in the industry who have backgrounds in hacking or cyber security.

The article continues that computer geeks want cool jobs (which is true). It’s not about the money:

The Department of Homeland Security has had trouble attracting and retaining top cyber talent since it was created after 9/11 in a massive merger of 22 agencies in 2002. In its early days, the DHS farmed out cyber work to contractors so it could quickly get systems running to improve national security.

As a result, the agency tends to award the most coveted cyber jobs to outside contractors. Those positions include forensics investigators, posts on "flyaway teams" that probe suspected cyber attacks and intelligence liaisons.

"It's not the money that makes people go to the contractors. It's the cool jobs," said Alan Paller, co-chair of the DHS task force. "People want the excitement."

I decided to look up how the National Guard Reserve works, and its exact obligations, on Yahoo! Answers:

The Army National Guard is part of the United States Army, comprising approximately one half of its available combat forces and approximately one third of its support organization.

Army National Guard units are trained and equipped as part of the U.S. Army and are expected to adhere to the same moral and physical standards as their "full-time" Federal counterparts.

National Guard units can be mobilized at any time by presidential order to supplement regular armed forces, and upon declaration of a state of emergency by the governor of the state or territory in which they serve (in the case of Washington DC, the Commanding General).

Traditionally, most National Guard personnel serve "One weekend a month, two weeks a year", although a significant number serve in a full-time capacity, in a role called Active Guard and Reserve, or AGR. AGR's basically take care of things during the week while the "One weekend a month, two weeks a year" personnel are working at their civilian jobs.

I expect that this Cyber Reserve would work the same way. People would receive training and be expected to adhere to the standards set up by the military.

However, rather than only physical requirements, I think that they’d be expected to maintain their skills in cyber security and keep up their education in addition to staying physically healthy. But what would this mean? Would the military provide this training? Or would it come from private industry since that’s where the expertise is? If it comes from industry, are the reserve members expected to maintain it themselves? Or would the military pay for this training?

What would the “two-weeks a year” look like? Training exercises? Basic familiarity with important infrastructure like the electrical or water grid? How to create cyber weapons?

Interesting stuff.

The relationship between economics, malware and piracy


Today, I read a report released by the Legatum Institute where they published their 2012 Prosperity Index. In their research, they surveyed 142 countries and ranked them against eight variables: their relative Economies, Entrepreneurship & Opportunity, Governance, Education, Health, (personal) Safety & Security, Personal Freedom and Social Capital. You can read about their methodology at the link I provided. Basically, it’s a way of ranking how good it is to live in a country by ranking a number of factors based upon statistical data as well as surveys.

Here are the twelve best countries:

  1. Norway
  2. Denmark
  3. Sweden
  4. Australia
  5. New Zealand
  6. Canada
  7. Finland
  8. The Netherlands
  9. Switzerland
  10. Ireland
  11. Luxembourg
  12. United States

Looking over that list, while some people might think their country should be higher than others, for the most part anyone would say, “Oh, that’s a pretty good list, and each of those countries deserves to be near the top.”

Here are the bottom twelve countries:

  131. Iraq
  132. Pakistan
  133. Ethiopia
  134. Yemen
  135. Zimbabwe
  136. Togo
  137. Burundi
  138. Haiti
  139. Chad
  140. Afghanistan
  141. Republic of Congo
  142. Central African Republic

Looking over that list, unless you lived there and wanted to dispute it, it probably doesn’t surprise you to see any of those countries that far down the list.

I decided to run a quick correlation analysis. Are countries with a lower Prosperity Index at a higher risk for malware infections? And are they at a higher risk for software piracy?

To determine this, I downloaded a copy of the 2011 BSA Global Software Piracy Study. Then I went to Microsoft’s latest Security Intelligence Report (SIR), volume 12, and looked at the Worldwide Threat Assessment. In the SIR, Microsoft has a measurement that it calls CCM, or Computers Cleaned per thousand (Mille) executions of the Malicious Software Removal Tool. They also include some telemetry from the Microsoft Security Essentials software. One removal by the MSRT corresponds to a malware infection.

I then did a correlation analysis. I discarded the countries for which I had no data and then ran against each of the eight factors that make up the Prosperity Index.

The results are:

  1. There was no wide disparity in any of the variables.

    Every single one of the eight variables has a statistically significant correlation. That is to say, we never had a case where a lower Economy factor was relevant but Education was not.

  2. Every single one of the factors has a strong statistical correlation with the rate of software piracy.

    If you score poorly on Economy, you have a high rate of software piracy. If you score poorly on Health, you have a high rate of software piracy.

    The strongest correlation was between Entrepreneurship & Opportunity and software piracy. The weakest (yet still strong) was between Personal Safety and software piracy.

  3. Every single one of the factors has a medium statistical correlation with the rate of malware infection.

    The relationship was medium strength for all variables (correlation between 0.3 and 0.6). The strongest was once again between malware infections and Personal Safety, while the weakest (but still medium strength) was between malware and Economy, which surprised me.
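For illustration, the kind of correlation I ran can be reproduced with a plain Pearson calculation. The country scores below are invented stand-ins, not the actual Legatum or BSA data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: higher prosperity sub-index score, lower piracy rate.
economy_score = [75, 68, 60, 44, 30, 22]
piracy_rate = [20, 25, 40, 62, 80, 88]

r = pearson(economy_score, piracy_rate)
print(f"r = {r:.2f}")  # strongly negative: better economy, less piracy
```

With real data you would repeat this for each of the eight sub-indices against both the piracy rate and the CCM figure, discarding countries missing from either dataset first.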

What can we make of this?

I think that the better the country you live in, the less chance there is that you will pirate software and therefore the less chance that you will experience a malware infection. Furthermore, these variables are all linked. Poor personal safety leads to poor governance, which leads to weak economies and entrepreneurship. Worse yet, there is a multiplication factor. Poor countries are mired in non-progress while wealthier countries build on what they already have.

The result is that the poor countries get further behind because they go nowhere while the wealthy countries make incremental progress. It’s tough to get out of that rut and close the gap.

Everything is linked.

A promising new antispam technique – does it deliver what it promises?


I’m always skeptical when I read about new antispam techniques, especially ones coming out of academia. Today, while browsing news stories, I came across an article entitled Scientists devise new technique to get rid of spam mail. Here are some excerpts:

Researchers have proposed a new statistical framework for spam filtering that can quickly and efficiently block unwanted messages in your email inbox.

When I first read this, I was like “Oh, a new technique using statistics! Please, tell me more!” After all, using statistics to fight spam is one of my specialties.

Scientists from the Concordia University have conducted a comprehensive study of several spam filters in the process of developing a new and efficient one.

“Our new method for spam filtering is able to adapt to the dynamic nature of spam emails and accurately handle spammers’ tricks by carefully identifying informative patterns, which are automatically extracted from both text and images content of spam emails,” said Researcher Ola Amayri in a statement.

Until now, the majority of research in the domain of email spam filtering has focused on the automatic extraction and analysis of the textual content of spam emails and has ignored the rich nature of image-based content.

My curiosity quickly turned to disappointment. What is “new” in this technique is that the filter extracts the textual content from the image and then runs patterns against it. For example, suppose you got a spam message with the following image:

(Image of a spam message containing the URL www.my.example.com)

This filter could extract the URL www.my.example.com and then feed it into other parts of the spam engine. The article continues:

When these tricks are used in combination, traditional spam filters are powerless to stop the messages, because they normally focus on either text or images but rarely both, the study found.

“The majority of previous research has focused on the textual content of spam emails, ignoring visual content found in multimedia content, such as images. By considering patterns from text and images simultaneously, we’ve been able to propose a new method for filtering out spam,” said researcher Ola Amayri.

Amayri explained that new spam messages often employ sophisticated tricks, such as deliberately obscuring text, obfuscating words with symbols, and using batches of the same images with different backgrounds and colours that might contain random text from the web.

By conducting extensive experiments on traditional spam filtering methods that were general and limited to patterns found in texts or images, the new method is much stronger, based on techniques used in pattern recognition and data mining, to filter out unwanted emails.

These assertions are not true. While this technique might be new as a research method, it’s years behind modern spam filters, which are quite capable of extracting different parts of a message and considering them when they occur together. As I say in my post Combating Phishing, there are numerous techniques that filters use:

  1. IP Reputation

    This is the most common technique and all good filters use it. Filters maintain lists of IPs that are malicious or sending spam and block them at the SMTP level before the message has even been accepted for content scanning (some accept the message and use the listing as a weight in the filter).


  2. URL reputation

    Similar to IP reputation, modern filters extract URLs and examine them against reputation lists, or even use forward resolution to examine the IP space the URL points to. This is the main gap this technique aims to fill.


  3. Sender authentication

    Most filters use checks including SenderID, SPF, DKIM and DMARC to make spam decisions.


  4. Content filtering

    The last piece of this puzzle is content filtering. These are rules – keywords, tokens, phrases or regular expressions – that operate on the various parts of a message including the message body, headers and attachments. These pieces are considered together, assigned a weight, and then added up to make a spam or non-spam decision.
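As a toy illustration of the weighted-rules idea in content filtering: each matching rule contributes a weight, and the sum is compared against a threshold. The rules, weights and threshold here are all invented; real filters carry thousands of rules across every part of the message:

```python
import re

# Invented rules: (pattern, weight). Negative weights push toward non-spam.
RULES = [
    (re.compile(r"free money", re.I), 3.0),
    (re.compile(r"click here", re.I), 1.5),
    (re.compile(r"meeting agenda", re.I), -2.0),  # businessy phrase lowers the score
]
SPAM_THRESHOLD = 4.0

def score(body: str) -> float:
    """Sum the weights of every rule that matches the message body."""
    return sum(weight for pattern, weight in RULES if pattern.search(body))

def is_spam(body: str) -> bool:
    return score(body) >= SPAM_THRESHOLD

print(is_spam("FREE MONEY!!! Click here now"))    # True  (3.0 + 1.5 = 4.5)
print(is_spam("Attached is the meeting agenda"))  # False
```

The point is that no single rule decides the verdict; it’s the combination of weak signals, from the body, headers and attachments together, that adds up to a spam decision.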

This “new” method aims to fill in the gaps in #2 and #4. While it is true that URLs cannot be extracted out of images very easily, it is not true that content filtering cannot catch this type of spam. What’s the issue?

  1. There are other properties of messages besides the content within an image

    Every image has MIME properties. Many spammers name their files with predictable patterns, and content filters can match those: the file names, encoding and file sizes. Put together, these can be an indicator of spam (for example, if a message has no content other than an image, and the image has a certain file name, and it comes from an IP that’s never appeared before, that is suspicious).


  2. There are ways to catch images beyond text extraction

    Many filters create signatures or fingerprints based upon spam messages going to spam traps. They then create signatures of the images within the message (since they are attached and encoded in base64). You don’t need to extract the content to match the spammy content an image is hiding; you can just compare the image’s signature with your database of known bad signatures.


  3. Its unique catch rate is limited

    Way back in 2007 and 2008, Hotmail’s SmartScreen filter did use image extraction and analysis to catch certain campaigns. It ended up moving away from it because the unique incremental catch rate was negligible: everything that the filter caught with image extraction could be caught with other antispam techniques. This is especially important – existing methods are very good at catching image spam without doing image content extraction.


  4. It is computationally expensive

    Why wouldn’t you want to do image content extraction if it helps you catch a little more? Because image content extraction is very expensive. Filters scan millions of messages per day, and this type of processing incurs a major CPU hit. Filters are fast, but they are not that fast. Large-scale filters must scale to the volumes they are accustomed to seeing.


  5. The dynamics work against spammers

    Image spam is not the huge problem it was years ago. The reason spammers send image spam is to stop spam filters from examining the URLs within a message. However, if a URL is inside an image, the user can’t click on it, either. They must manually type the URL into their browser, and this drops the click-through rate. It is much more effective to have a one-click solution.

    Furthermore, sending spam with images eats up the spammer’s bandwidth. You can only send so many messages depending on their size. To make up for the lower click-through, you need to send more messages. But because you are constrained by how much mail you can actually send, your spam campaign needs to last longer (i.e., it will take longer to send ten thousand 50 kb messages with an image than ten thousand 10 kb short messages). And if your campaign runs that long, IP blocklists detect it and update, and you’re blocked before you’ve even gotten your entire campaign out.

    Which means you’re out of luck.
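The spam-trap fingerprinting approach from point 2 above can be sketched like this. It’s a minimal exact-hash version; production filters generally use fuzzy hashes so small pixel tweaks don’t break the match, and the attachment bytes here are invented:

```python
import base64
import hashlib

# Hypothetical database of signatures of images seen in spam-trap mail.
known_bad_signatures = set()

def image_signature(image_bytes: bytes) -> str:
    # No content extraction needed: fingerprint the raw image bytes.
    return hashlib.sha256(image_bytes).hexdigest()

def record_trap_attachment(b64_attachment: str) -> None:
    """Learn a signature from an image attached to spam-trap mail
    (attachments arrive base64-encoded in the MIME body)."""
    known_bad_signatures.add(image_signature(base64.b64decode(b64_attachment)))

def is_known_spam_image(b64_attachment: str) -> bool:
    return image_signature(base64.b64decode(b64_attachment)) in known_bad_signatures

# A spam-trap message and a later inbound message carry the same image:
trap_image = base64.b64encode(b"\x89PNG fake image bytes").decode()
record_trap_attachment(trap_image)
print(is_known_spam_image(trap_image))  # True
```

This is why you don’t need to read the text inside the image: the same campaign reuses the same image bytes, and matching the fingerprint is orders of magnitude cheaper than extracting and analyzing the image’s content.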

Because of all of these, this “new” research method isn’t new at all and isn’t something I would implement. The idea has floated around in the industry for years but it hasn’t caught on.

On to the next technique.

Israel also looking to a cyber army national reserve


The same day I wrote my blog post US potentially looking to establish a cyber army national reserve, I stumbled across another article in the Telegraph: Israel invests millions in drive for elite cyber warriors. But unlike the US cyber reserve, the Israeli version is more about fighting on offense than establishing a defense.

From the article:

The Jewish state is facing a dire shortage of "cyber-combat troops" and is scouring the Jewish Diaspora for exceptional, teenage computer minds to recruit to its cyber unit Intelligence Corps Unit 8200, a leading Israeli newspaper reported on Thursday.

"It has become clear that the demand for soldiers in this field is growing, which is why we're searching for solutions not only in Israel but abroad as well," a top officer in the Manpower Directorate told the Yedioth Aranoth.

The Israeli military has made cyber warfare a dominant priority as it looks ahead to the next five years. Military Intelligence Chief Major General Aviv Kochavi is reported to have allocated £320 million to his cyber programme.

"Cyber readiness is one of the new pillars in our plan, including both defence and offence," Major General Isaac Ben Israel confirmed. However, he described any suggestion that Israel is scouting abroad for cyber-warriors as "far fetched", pointing out that the army has the pick of every Israeli 18 year-old obligated to three years military service.

The way I read it is that they are looking for Israeli nationals, so any regular ham-and-egger can’t just join up. However, scouting abroad is not “far-fetched”: there aren’t enough people with the advanced skills they are looking for within the state of Israel. People may be able to turn on a computer, create a Word document and navigate the Internet, but that’s a far cry from the abilities that a cyber army needs.

People with advanced hacking skills are what they need, and there aren’t that many of them around in general, let alone in a country of only 8 million people.

Hacking skills are what they’re looking for, right?

Major General Ben Israel is Israel's leading cyber warfare expert, widely acknowledged to be the architect of the air strike that decimated a Syrian nuclear facility in 2007. The attack was possible only because Syrian air defence systems were hacked and disabled minutes before.

Yep.

There were reports of a similar communications blackout in Khartoum shortly before the explosion at the Sudanese arms factory last month.

Yep, again.

Advanced cyber hacking is an emerging field. Computer geeks with these types of abilities should have an easy time getting into the military because demand for them is growing.

Oh, Microsoft, where art thou?


In its recent Q3 2012 Threat Evolution, Kaspersky reported on the Top Ten Threats that it saw during the previous three months. Here they are with the percentage of users on whose computer the vulnerability was detected:

  1. 35% – Oracle Java
  2. 22% – Oracle Java again
  3. 19% – Adobe Flash Player
  4. 19% – Adobe Flash Player again
  5. 15% – Adobe Acrobat/Reader
  6. 14% – Apple QuickTime
  7. 12% – Apple iTunes
  8. 11% – Winamp
  9. 11% – Adobe Shockwave
  10. 10% – Adobe Flash Player yet again

Microsoft products no longer feature among the Top 10 products with vulnerabilities. This is because the automatic updates mechanism has now been well developed in recent versions of Windows OS.

The big story here is that Microsoft is no longer part of the top ten list for vulnerabilities. For years Microsoft’s products have been the main target but that has shifted over time as criminals have moved to other targets.

I think that this is the result of multiple factors:

  1. Microsoft’s Secure Windows Initiative

    All new code developed at Microsoft has to go through a security review. We have to model possible threats and how they are mitigated (e.g., tampering, information disclosure, unauthorized access). This forces developers to think about security.

    This isn’t perfect; security holes will still be found (such as this story where hackers claim to have defeated Windows 8’s security measures), but it forces hackers to expend more effort and reduces the window of insecurity.

  2. Automatic updates

    As the note from Kaspersky explains, Microsoft has built automatic updates into its software. When software updates itself, vulnerabilities are closed and the window for attack shrinks. Not all users have automatic updates, but as users move from OSes where updating is not on by default (Windows XP) to ones where it is (Windows 7 and Windows 8), security improves.

  3. A change in the market place

    While some of the things Microsoft has done have helped, the market has also shifted. Hackers and malware writers have moved on to developing for other platforms because that’s where the user base has gone (not a move from Windows to other devices, but an expansion from Windows alone to Windows plus other devices). Thus, Microsoft products have dropped from the top ten list in part because criminals don’t find them as enticing as they once did.

Still, as someone who works for Microsoft, it’s nice to see a validation of some of our efforts to develop secure software.


Cyber security conference in Asia


I was contacted by a reader of mine about an upcoming conference in 2013 in Asia – the 3rd Annual Cyber Security for Government Asia 2013, to be held in Kuala Lumpur.

I’m always interested in conferences over in Asia because I have so much less visibility into that part of the world. It feels like the American Wild West to me. By that I mean that we have a pretty good handle on English-language spam originating in the United States and the UK. Europe is okay, but there are still a lot of banks that don’t sign their mail with either DKIM or SPF. By contrast, I have no idea what’s going on in Asia. Many of the worst spamming countries are over there – Indonesia, Vietnam and South Korea (which bounces around from good to bad).

My belief is that IT professionals in Asia want to do the right thing but the security expertise just doesn’t exist in the region. Most of the conferences are located in Europe or North America. Nearly all of my contacts in the security industry are in North America or Europe, with a couple in Australia.

Anyhow, given how Asia is an up-and-coming region economically, it makes sense to start focusing on it. If lack of education is the main driver of spam and malware, then we need to engage with the region to close these security gaps.

I took a scan through the schedule for the first day and found Panel Discussion: Spotlight on Malaysia. I enjoy round panel discussions and here’s the summary:

The Malaysian Government has recently been stepping up its efforts in promoting and enhancing cyber security for its government agencies and ministries. Understand what these policies actually entail from various ministries in Malaysia and how they have been faring thus far:

  • Analysing cyber-attacks at various government agencies and departments: What are the current and latest threats and what are some of the solutions to overcome them?

  • Human Expertise vs Technology: Is there a right blend? How do you achieve the right blend?

  • Analysing the ISMS guidelines for the Malaysian government: How effective has it been so far?

  • Highlighting the importance of interagency collaboration between various ministries and agencies in Malaysia

I think what would be useful is for representatives from Malaysia to talk to representatives from countries that have good cyber hygiene. According to my statistics, Malaysia:

  • Ranks 59th (out of 127) for rates of malware infection in 2011 (higher numbers are worse)
  • Ranks 49th (out of 127) for rate of software piracy in 2011
  • Ranks 69th (out of 129) for the amount of spam sent since July 1, 2012 (though in absolute terms this number isn’t that bad)
  • Ranks 45th (out of 142) on the Prosperity Index

A good country is Finland. According to my stats, Finland:

  • Ranks 3rd for rates of malware infection in 2011 (behind only Japan and China, but I think China’s #1 ranking is skewed by collection methodology, so Finland is really #2)
  • Ranks 3rd for rates of software piracy
  • Ranks 4th for the amount of spam sent
  • Ranks 7th on the Prosperity Index

What does Finland do that makes it so special? It’s not the overall economy, because Finland ranks 16th while Malaysia ranks 15th. Maybe it’s factors like Entrepreneurship, Governance, Education and Health that are decisive between the two of them.

image

If we’re serious about getting countries to improve their security, then technical solutions may only be part of the answer. It’s not enough to have a strong economy alone; a multitude of factors is needed.

10 Simple Things you should do to Protect your Privacy


A couple of months ago, Kashmir Hill over at Forbes published an editorial on 10 simple things you should do to protect your privacy. I thought I would repost them here. I’m not going to go into detail about the specifics because they are available in the article (and I am too lazy to retype them).

  1. Password protect your devices: your smartphone, your iPad, your computer, your tablet, etc.

  2. Put a Google Alert on your name.

  3. Sign out of Facebook, Twitter, Gmail, etc. when you’re done with your emailing, social networking, tweeting, and other forms of time-wasting.

  4. Don’t give out your email address, phone number, or zip code when asked.

  5. Encrypt your computer.

  6. Gmailers, turn on 2-step authentication in Gmail.

  7. Pay in cash for embarrassing items.

  8. Change Your Facebook settings to “Friends Only.”

  9. Clear your browser history and cookies on a regular basis.

  10. Use an IP masker.

How many of these do you do?

Are spammers just like high frequency traders? Or is it the other way around?


A couple of weeks ago, we had a problem wherein a spammer signed up for our service tens of thousands of times and started sending out low-volume spam. He would send a small blast and then discard the account, move on to the next one, and send out the same spam campaign. He did this over and over again, so much that he managed to create a backlog for legitimate trials.

The reason a spammer can do this is technology. He obviously did some work ahead of time to figure out the forms he had to fill in and to break the CAPTCHA (the reverse Turing test). As soon as he found something that worked, he scripted it and proceeded to send spam, continuously discarding accounts and re-signing up. He didn’t need to be personally involved. He couldn’t be; a human cannot sign up that many times in so short a period.

Aside: it’s not all bad, I guess. Being abused in this manner means that we are worth abusing. It means we have name recognition.

Technology allows the spammer to scale out in order to accomplish things that would otherwise be impossible. By breaking down a massive task (sending out a huge spam campaign) into a series of smaller tasks (sending out small bursts many, many times from many, many accounts), he can evade detection but also keep his costs down. Human effort is only required up front to program the algorithm to (a) sign up, and (b) send out spam. After that, it’s auto-pilot.

Compare this to high frequency trading (HFT). HFT is when large institutions rapidly buy and sell small chunks of stock in the financial markets, hoping to make a small profit on each trade. They cannot simply buy or sell in large chunks because that moves the price too much: buying in bulk increases demand (which drives up their purchase price), and selling in bulk increases supply (which drives down their sale price and lowers their profit). Thus, rather than selling 1 million shares in one transaction, they will sell 1,000 shares 1,000 times.

It’s more than this, though. It’s not just about moving 1 million shares, it’s about trading hundreds of thousands or even millions of times per day. This is in order to capture small spreads.
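The order slicing described above can be sketched in a few lines of Python. This is a toy illustration, not a real execution algorithm; the function name and numbers are my own:

```python
def slice_order(total_shares, child_size=1000):
    """Break one large parent order into many small child orders,
    so that no single trade is large enough to move the price."""
    children = [child_size] * (total_shares // child_size)
    remainder = total_shares % child_size
    if remainder:
        children.append(remainder)
    return children

# Instead of one 1,000,000-share order, send 1,000 orders of 1,000 shares
children = slice_order(1_000_000)
print(len(children), sum(children))  # 1000 1000000
```

The same total volume moves through the market, but in pieces small enough to avoid shifting the price against the seller.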

In stock trading terms, the spread is the difference between the bid and the ask. When you buy stock, you pay the ask, and when you sell, you receive the bid. The difference between the two is the spread. This means that if you bought stock and immediately turned around and sold it, even if the price of the stock hadn’t moved, you would lose money.

For example, below is a quote of Microsoft stock. The current closing price is 27.36, but look at the bid and ask:

image

The spread between the bid and ask is 4 cents. If you bought at the ask and immediately sold at the bid, even if the price didn’t change (27.36), you’d still lose 4 cents per share. Lousy deal, eh? The market is currently closed, but because Microsoft is such a big company, the spread during market hours is frequently 1 cent.
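The round-trip loss can be computed directly. The bid/ask figures below are assumed for illustration (they bracket the 27.36 close mentioned above), and prices are in integer cents to avoid floating-point noise:

```python
def round_trip_loss_cents(bid_cents, ask_cents, shares=1):
    """Loss from buying at the ask and immediately selling at the bid,
    with prices expressed in integer cents."""
    return (ask_cents - bid_cents) * shares

# Hypothetical quote around the 27.36 close: bid 27.34, ask 27.38
print(round_trip_loss_cents(2734, 2738))              # 4 cents per share
print(round_trip_loss_cents(2734, 2738, shares=100))  # 400 cents on 100 shares
```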

But that bid/ask spread is not the same everywhere, because computer networks have inefficiencies. The bid/ask on the stock exchange in New York might be 27.35/27.39, while the exchange in San Francisco (if an exchange existed there) might show 27.41/27.43. Because the markets are always moving, the price is always changing, and because it takes time for prices to replicate everywhere in the world, markets are sometimes out of sync. Eventually everything syncs up, but there is a period of time where these differentials exist.

image

This condition doesn’t exist for long. It may be a few minutes, a few seconds, a few milliseconds, or a few microseconds. But during that time, there’s opportunity. From our example, suppose we have the following condition:

New York: Bid=27.35, Ask=27.39
San Francisco: Bid=27.41, Ask=27.43

If you buy at the ask in New York (27.39) and sell at the bid in San Francisco (27.41), you make two cents per share. This process is called arbitrage – looking for and exploiting inefficiencies in the market. Two cents per share doesn’t sound like much, unless you do it hundreds of thousands of times per day. Then it starts to add up. This is HFT.
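Using the New York and San Francisco quotes above, the arbitrage math can be sketched as follows. The 200,000 executions per day is a made-up figure, included only to show how a tiny edge compounds:

```python
def arbitrage_profit_cents(buy_ask_cents, sell_bid_cents, shares=1):
    """Profit from buying at one venue's ask and selling at another
    venue's bid (prices in integer cents). Negative means no
    arbitrage opportunity exists."""
    return (sell_bid_cents - buy_ask_cents) * shares

# Buy at New York's ask (27.39), sell at San Francisco's bid (27.41)
per_lot = arbitrage_profit_cents(2739, 2741, shares=1000)  # 2,000 cents per 1,000-share lot
daily_dollars = per_lot * 200_000 / 100                    # at an assumed 200,000 lots/day
print(per_lot, daily_dollars)  # 2000 4000000.0
```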

High Frequency Trading is more prevalent today for two reasons (in my opinion):

  1. As a byproduct of protecting small investors.

    During the 1990s, SEC Chairman Arthur Levitt pushed for penny bid/ask spreads. Prior to that, stocks were always quoted in eighths of a dollar – 27 1/8, 27 3/8, and so forth. This meant that instead of getting shafted on pennies per share (due to the spread), investors were getting shafted on at least 12.5 cents per share, because the minimum bid/ask spread was always 1/8 of a dollar (12.5 cents).

    Levitt fought to bring that down to penny bid/ask spreads because the big spreads were so unfair to small investors (the difference was going to the market makers). However, because spreads are now so small, big institutions trade far more frequently than they used to, because they are trying to multiply a much smaller number.

  2. As a byproduct of the improvement of technology.

    Technology has improved the speed of communications. Frequent trading works because institutions can use all that bandwidth to query multiple data sources simultaneously and make decisions quickly. If you weren’t sure that the quote you were getting over a 56k modem was accurate, you might not be so inclined to trade often, for fear of making a purchase decision at one price and having the order execute at another.

    In addition, people have gotten better at programming over time, and machines have gotten better at doing sophisticated numerical calculations in real time. These complicated algorithms look for inefficiencies and make trades automatically. The processing power to make these decisions, and the availability of programmers, makes high frequency trading accessible to most firms who have decent, but not outlandish, capital to invest.
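The effect of the decimalization described in point 1 above can be quantified with a quick sketch. The per-share spreads come from the text; the 1,000-share round trip is my own example:

```python
EIGHTH_CENTS = 12.5  # minimum spread pre-decimalization: 1/8 of a dollar
PENNY_CENTS = 1.0    # minimum spread after Levitt's reform

def min_spread_cost_cents(shares, spread_cents):
    """Minimum amount (in cents) a round trip loses to the spread."""
    return shares * spread_cents

# Round trip on 1,000 shares under each regime
print(min_spread_cost_cents(1000, EIGHTH_CENTS))  # 12500.0 cents ($125)
print(min_spread_cost_cents(1000, PENNY_CENTS))   # 1000.0 cents ($10)
```

With the cost of each round trip cut by more than a factor of ten, trading many more times becomes economical.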



HFT is a problem because it adds very little value to the market. People are not buying because of their perception of the value of the underlying security, whether stocks, bonds or commodities; they are acting only to extract profit from inefficiencies in the markets. It’s as if the only reason you went into a restaurant was to enjoy the air conditioning: you sat at a table and ordered nothing. You’re getting value, but to the restaurant, all you’re doing is taking up space.

This might not be so bad except that HFT has some serious drawbacks. When spammers create botnet algorithms to sign up and spam, each has its own idiosyncrasies. Some target Hotmail, others target Yahoo, and others target Office 365. But they are all pretty similar, and they all do more or less the same thing – sign up and spam. The main differences are in the up-front work of developing the algorithms to do the spamming.

HFT is the same. The algorithms are all fairly similar: each looks for certain patterns and, on seeing them, buys. The financial managers all go to the same schools, and the programmers all study similar curricula. They are all looking for, and acting on, the same patterns. This means the machines can all see the same patterns and drive up prices in tandem – not because of perceived value, but because of machines acting on heuristics.

The flip side is that when machines start to sell, they all see the same thing and act in concert. One firm sees the price drop and sells. Then another sees it drop and it, too, sells. And so forth. This causes a snowball effect: massive run-ups and crashes in value (e.g., the 2000 Internet bubble, the 2008 oil run-up, the housing run-up). This matters because when things crash, people’s retirement accounts and pensions lose huge value, and they end up working longer than they wanted to because they no longer have enough money.
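The snowball effect can be illustrated with a toy simulation: each firm sells once the price has fallen past its own trigger, and each sale knocks the price down further, which can trip the next firm's trigger. All numbers here are made up purely for illustration:

```python
def cascade(price, triggers, impact=0.02):
    """Toy selling cascade: a firm sells when the drop from the
    starting price exceeds its trigger; each sale knocks the price
    down by `impact` (2% here), which can trip the next trigger."""
    start = price
    sold = set()
    changed = True
    while changed:
        changed = False
        for i, trigger in enumerate(triggers):
            drop = (start - price) / start
            if i not in sold and drop >= trigger:
                sold.add(i)
                price *= 1 - impact  # this sale moves the market
                changed = True
    return price, len(sold)

# One firm with a 0% trigger starts the slide; the rest follow in turn
final_price, sellers = cascade(100.0, triggers=[0.0, 0.01, 0.03, 0.05, 0.07])
print(sellers)  # 5 -- every firm ends up selling
```

No firm intends a crash; each is just following its own rule, yet the rules feed each other.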

And this brings me back to the title of this post. Are spammers like High Frequency Traders?

  1. Both break down a big task (sending out a huge spam run vs. selling large blocks of stock) into a series of smaller tasks.

  2. Both use machines to automate the tasks without human interference.

  3. Both bring little value to the medium they are exploiting – spammers borrow the infrastructure of the services they abuse without using it for legitimate purposes (sending personal or wanted communication), and HFTs use the financial markets to squeeze out micro-value (whereas markets are supposed to provide liquidity and act as a rational arbiter of the value of the underlying security).

  4. Both have been made possible through the drop in the costs of technology.

  5. Both need huge numbers of small transactions in order to be profitable.

  6. Both need to stay small in order to survive. Spammers need small campaigns to stay under throttles and avoid detection, and HFTs need to stay small to avoid moving the market too much from their profit/loss price point.

  7. Both destabilize the mediums they are exploiting and ruin it for legitimate users. Spammers create backlogs and degrade the outbound IP reputation of the service. HFTs create bubbles and crashes.

  8. However, the one key difference is that spammers bring no value whatsoever to the landscape.

    You could make the case that HFT does bring some value in that it generates profits for the financial firms and their shareholders, as well as for the people whose money they invest. Employees at these companies pay taxes and are not using social safety nets; they provide for themselves and their families. Furthermore, bubbles and crashes are not necessary parts of HFT; you could argue that it could be regulated somehow, and that the people who program these systems should put in fail-safes to prevent dumb-assery.

    In this regard, HFT is not like spamming.

So I guess the answer to my question is that spammers are not like high frequency traders. But they’re close.

Do you know this guy? The troll? Internet Explorer fights back.


For years, Internet Explorer has been maligned as the browser that trails the others (Firefox and Chrome). To its credit, however, IE 9 and 10 have performed very well in security tests, beating their rivals. Competition has resulted in three good browsers.

And yet, Internet Explorer gets no love. Microsoft is aware of this and has produced the following video; it challenges the company’s critics while humorously touting some of its latest milestones.

Enjoy!

A whole slew of security reports


If you’re looking for something to read, say, the latest trends on Internet threats, I have a whole bunch of them here for your online perusal. I’ve gone through them and I have a highlight from each of them:

  1. Microsoft’s Security Intelligence Report, Volume 13 (3 MB)
    Microsoft’s semi-annual security report. It covers threats and data across all of its services, including spam, malware and drive-by downloads, and breaks out the data by geography.

  2. M-Trends 2012: An Evolving Threat (Mandiant) (4.3 MB) (registration required)
    A short (15 page) report that highlights some of the threats seen in the industry: how hackers get in, how malware is delivered, and so forth. Lots of pictures and easy on the eyes.

  3. Commtouch’s Internet Threats Report, October 2012 (1.1 MB)
    An embedded slideshow about the latest threats that Commtouch sees: Malware, Spam, Web trends and Zombies.

  4. Trustwave’s 2012 Global Security Report (7.2 MB) (registration required)
    Similar to Mandiant’s report above, but more data driven.

  5. Verizon 2012 Data Breach Report (3.3 MB)
    A data-heavy description of data breaches and threats.

  6. APWG Global Antiphishing Survey, 2Q 2012 (1.5 MB)
    The Antiphishing Working Group’s quarterly report on phishing trends.

  7. Sophos’s Security Threat Report 2013 (2.3 MB)
    Getting a jump on next year, Sophos covers Java exploits, OS X threats and the long tail of targeted attacks.

  8. Panda Labs Quarterly Report, June – Sept 2012 (621 kb)
    A quick summary of the various threats – mobile, messaging and malware.

  9. Internet Identity’s eCrime Trends Report, 3rd Quarter 2012 (808 kb)
    One of this report’s highlights is its discussion of the DNS Changer Working Group.


There is a lot of overlap between them: Mobile malware is growing; traditional malware is still a threat; and as consumers move to other platforms, criminals follow them.

Google, Apple, Microsoft… why is there such fanboy-ism in tech?


I’m going to depart from my typical security related topics to discuss another issue: fanboy-ism.

You all reading this know what I mean – it’s when people have such devotion to a certain product that they will defend, to the death, their preferred device or product, and attack, to the death, the competing one. Mac vs. PC. iOS vs. Android. PS3 vs. Xbox. Go to any article about any device on the Internet and you will see lots of comments reflecting this phenomenon.

Why does it exist?

I recently purchased the book You Are Not So Smart by David McRaney, which looks at the various behavioral biases that we humans have. As it turns out, we all have tons of them; the fact that we get anything done is a miracle. We like to think that we are logical, rational actors most of the time and act irrationally only occasionally. It’s actually the other way around.

The reason why fanboys exist with such blind devotion is because of something called Choice Supportive Bias. This occurs when we make a decision to invest a significant amount of time, energy, money or a combination thereof into a product. In order to justify to ourselves that such a purchase was worth it, we make up reasons why it was a good idea.

From the You Are Not So Smart blog post: Fanboyism and Brand Loyalty:

… if the product is unnecessary, like an iPad, there is a great chance the customer will become a fanboy because they had to choose to spend a big chunk of money on it. It’s the choosing one thing over another which leads to narratives about why you did it.

If you have to rationalize why you bought a luxury item, you will probably find ways to see how it fits in with your self-image.


Apple advertising, for instance, doesn’t mention how good their computers are. Instead, they give you examples of the sort of people who purchase those computers. The idea is to encourage you to say, “Yeah, I’m not some stuffy, conservative nerd. I have taste and talent and took art classes in college.”

Are Apple computers better than Microsoft-based computers? Is one better than the other when looked at empirically, based on data and analysis and testing and objective comparisons?

It doesn’t matter.

Those considerations come after a person has begun to see themselves as the sort of person who would own one. If you see yourself as the kind of person who owns Apple computers, or who drives hybrids, or who smokes Camels, you’ve been branded.

Once a person is branded, they will defend their brand by finding flaws in the alternative choice and pointing out benefits in their own.

This type of irrational behavior doesn’t occur when you buy something where the choice doesn’t matter. Nobody cares where they buy their gasoline – Shell, Exxon or 76. Nobody cares where they get their box of Kleenex. You don’t care that much which supermarket you go to.

I think this explains why people throw so much hate at Microsoft but not at Apple or Google. For years, Microsoft’s OS was the only game in town and you had to buy it. It was a successful model for the company, but you didn’t develop any sort of brand loyalty.

By contrast, optional devices like phones or tablets do develop loyalty because of Choice Supportive Bias. You look at all the various options and finally settle on one. After you decide, you look back and rationalize your choice by believing the TV you bought was the best one. If the choice didn’t matter, there would be nothing to rationalize. But personal devices do matter, because you have options.

As the blog post puts it:

To combat post-decisional dissonance, the feeling you have committed to one option when the other option may have been better, you make yourself feel justified in what you selected to lower the anxiety brought on by questioning yourself.

All of this forms a giant neurological cluster of associations, emotions, details of self-image and biases around the things you own. This is why all over the Internet there are people in word fights over video games and sports teams, cell phones and TV shows.

Many people in my generation grew up with only Microsoft OSes to choose from and didn’t develop that loyalty. But the people coming up after me, who are younger and have many options – Google, Amazon, Facebook, Apple and Microsoft – won’t have that same indifference. Microsoft will be just another option, and if they have to sink a lot of money into it, they’ll develop blind loyalty to their devices. But if only one product or company from that list were dominant, it wouldn’t inspire brand loyalty either.

So all you lovers-and-haters out there:

  1. Our decisions about why we like the things we do are irrational.

  2. Why do we defend these things so fervently? Unless you own shares in the company you love so dearly, your loyalty increases their bottom line, not yours.

After I read this book, I realized, “Man, maybe I shouldn’t care so much about the things I like, and maybe I shouldn’t pay much attention to the things others like, either.”

Because we are not so smart.

Large scale spoofing campaign


Over the past week or so we have seen a lot of spoofing going on with campaigns that look like the following:

image

These campaigns have the following characteristics:

  1. They are high-volume zero-day campaigns.

    The IPs typically end up on IP blocklists, but they succeed in emitting huge blasts of spam before they are caught. This pattern is more typical of 2005–2009, when spammers favored large spam campaigns before scaling back their efforts in an attempt to stay under the radar.

  2. The URLs within the message are compromised.

    In the above example, the spam URL is a legitimate site that has been compromised and is hosting malware. The top-level site has since been disabled (I checked), which means that whoever owns the site noticed they were hacked and took it down.

  3. There isn’t a lot of other content to filter on.

    The call-to-action above is a URL, and the text nudging the reader toward it contains grammatical errors. It is language that might be used in a real outage notice. The content does contain spammy parts, but it is difficult to create content rules that are predictive of this sort of thing.


Spam campaigns like these require speed and co-ordination of efforts. If it’s a single spammer behind it, he needs a very complex infrastructure:

  1. Spamming IPs - He has to maintain or acquire a botnet of spamming IPs.

  2. Compromised hosts - He has to maintain or acquire a collection of compromised URLs; he may have one but probably prefers many more for redundancy. This requires breaking into hosts and uploading content.

  3. Payload – He has to maintain a payload. If the payload is a drive-by download, wherein the user clicks the link and gets infected, then he needs skills in writing malware and exploiting browser and OS vulnerabilities. If the payload is spam, then he has to maintain an advertising, payment-processing and (most likely) pharmaceutical distribution mechanism.

  4. Speed – The key cog in this wheel is speed. Good spammers have to do this quickly because they know their window of opportunity is small before spam filters catch up. They will set all of this up, send test emails and if they get through, crank up the speed. Their time limit only lasts maybe a few hours.

This is a lot for a single spammer to handle because it requires too many skills. More likely, it is the result of an underground economy where each of the above parts (1–3) is maintained by separate players who buy and sell services from each other.


Why people keep proposing a Final Ultimate Solution to the Spam Problem (FUSSP)


In the antispam world, from time to time somebody new likes to come in and propose a solution that will wipe out spam: Email authentication! Statistical classifiers! Blacklists! User education!

These proposals are derisively referred to as the Final Ultimate Solution to the Spam Problem. It’s a term that industry veterans apply to ideas that have been considered but abandoned because they are unworkable, or because they don’t address the full problem and leave large gaps for spammers to exploit. This is summed up in this blog post (click the link for the full list; I have pruned it):

Your idea advocates a

(x) technical ( ) legislative ( ) market-based ( ) vigilante

approach to fighting spam.

Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)

( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks

Specifically, your plan fails to account for

( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses

and the following philosophical objections may also apply:

( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck

Furthermore, this is what I think about you:

( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!

Of course, this is tongue in cheek, but it is not far off from the truth. As anyone who has been fighting spam for a long time knows, there is no simple solution. It is only people who haven’t been doing this for very long who propose these ideas, without understanding that there is a huge investment in the existing email protocol, and that people won’t move off it quickly if the replacement is less convenient than their current, insecure system.

So why do people keep proposing it?

It turns out that this is explained by science: the Dunning-Kruger effect, published in 1999 by David Dunning and Justin Kruger of Cornell University. It is a cognitive bias whereby unskilled people overestimate their ability at something. Not only do they overestimate their own abilities, they fail to recognize how poor they are at it, nor can they recognize genuine skill in others who really do have it.

For example, suppose I went and took a couple of badminton lessons and learned the basics. I then went and played against all my friends and beat them soundly. Feeling pretty confident about myself, I enter a tournament and get destroyed by all of my opponents. My basic knowledge after a few lessons greatly increased my confidence but I was still a terrible player compared to people who were very good at what they do.

The Dunning-Kruger effect also finds that people with genuine skill tend to underestimate their abilities. They think that if they find something easy, others do, too, and that they therefore have no advantage. This is not true; they really are good, but skilled people don’t think so.

Finally, the Dunning-Kruger effect is present only when people have some ability in an area. In our example above, the beginner badminton player may overestimate his abilities at badminton, but at horseback riding he knows he is an amateur and is unlikely to do well in competition. Thus, a little bit of knowledge goes a long way toward giving you false confidence in your abilities.

This brings us back to the FUSSP.

Relative newbies to the industry know a little bit about fighting spam and online abuse. They know about filters and blacklists but then falsely extrapolate that the problem is much simpler than it really is. This is wrong: spam filtering is very complex, but because of Dunning-Kruger, newbies think they know more than they do and fail to recognize how little they know. Furthermore, they fail to recognize that others with far more experience have never proposed nor implemented what they think will solve the problem. It is not the complete outsiders (like friends or relatives in different professions) who make these proposals, but industry newcomers with a little experience.

This also explains why experts never propose an actual FUSSP, they only propose managing the spam problem. Experts know that spammers are actively trying to subvert filters; they also understand how this can be done and if they can do it, then so can spammers. Therefore, they are far less assertive in what they do and do not claim.

So why do people keep proposing Final Ultimate Solutions to the Spam Problem?

Because of cognitive bias.

Another day, another phish campaign


Today we are seeing another high volume spam campaign. It is very similar to the one I wrote about yesterday:

  1. The IPs are all compromised (i.e., the spam is coming directly from botnets).

  2. The URLs point mostly to compromised web hosts; that is, the sites are legitimate but have been broken into and are either serving malware or hosting phishing pages. Not all of them are compromised, though; some look like they were created exclusively for the purpose of spamming.

  3. The content contains legitimate words and phrasing, but to the trained eye (or even the untrained one) it is clearly phishing, spoofing in an attempt to infect you with malware, or redirecting to a pharmaceutical page, which is kind of weird (confirm your identity leads to an advertising landing page?).

  4. Many of the sending domains do not contain SPF records, meaning that the spammers can spoof them without negatively impacting delivery.
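The SPF point above can be sketched in code. This is a toy check, not a real mail filter: given the TXT records a domain publishes (the sample records here are made up), it decides whether the domain has an SPF policy at all, and whether that policy actually constrains senders. A domain with no record, or with a permissive "all" mechanism, gives spoofers a free pass.

```python
def find_spf_record(txt_records):
    """Return the first SPF record in a list of TXT record strings, or None."""
    for record in txt_records:
        if record.lower().startswith("v=spf1"):
            return record
    return None

def spoofable(txt_records):
    """A domain is trivially spoofable (for SPF purposes) if it publishes no
    SPF record at all, or one that ends in a permissive 'all' mechanism."""
    spf = find_spf_record(txt_records)
    if spf is None:
        return True
    last = spf.split()[-1].lower()
    # 'all' with no qualifier defaults to '+all' (pass everything)
    return last in ("+all", "?all", "all")

# A domain that locks down its senders (hypothetical records):
protected = ["v=spf1 ip4:192.0.2.0/24 -all", "some-verification-token"]
# A domain with no SPF record, like many in this spam run:
unprotected = ["some-verification-token"]

print(spoofable(protected))    # False
print(spoofable(unprotected))  # True
```

In the real world the TXT records would come from a DNS query, and a full SPF evaluation involves includes, redirects and IP matching; the sketch only captures the point made above, that a missing record costs the spoofer nothing.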

My guess is that this is the same spammer that was doing it yesterday. After getting blocked he just updated his campaign: he rotated his spamming IPs, compromised URLs and message content.

My sources indicate that this is the darkmailer botnet. Looking back over my historical data, darkmailer sends in waves. The past couple of days have seen an increase in activity after a “quiet” period of a couple of weeks. This would lend credence to my theory of a spammer renting the botnet since most spammers don’t do it continuously but instead rent the equipment for a period of time.

My stats also indicate that most of the spamming IPs over the past couple of days originate in China. This is unusual for a botnet these days because the most commonly occurring botnets are in the US, Russia, India and Southeast Asia (and parts of Europe). China used to be a major spam source but has cleaned up its act significantly.

 


IT Gangnam Style parody from F5 Networks

My behavior when answering my phone has changed due to my suspicion of unsolicited email


Nowadays, whenever I get email from someone I don’t recognize, I am instantly suspicious of it. To be sure, there are people I’ve never heard from that I want to hear from, but I am always wary whenever I see an email address that is unfamiliar to me. I never open any attachments, I don’t click on links, I barely even want to read the message. I am instantly suspicious.

This, of course, has been driven by spammers. The odds that email from someone I’ve never heard from before is from a spammer are higher than the odds that it’s from a legitimate person. I’m just not that special in real life that all sorts of people want to talk to me.

But also, if I ever get email from someone I do know and the message looks suspicious – all upper case SUBJECT LINE, a short internal message, broken grammar, etc., then I make the leap that the person’s account is compromised. If you’re using poor grammar, then I think you’re a spammer. Hey, that rhymes!
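Those gut checks can be sketched as a toy scoring function. The signals and the threshold here are made up for illustration; a real filter weighs hundreds of features, but the shape of the reasoning is the same:

```python
def looks_compromised(subject, body):
    """Toy version of the gut checks in the text: an all-caps subject,
    a very short message, and link-only content each add suspicion.
    Two or more signals and we suspect the sender's account is compromised."""
    score = 0
    letters = [c for c in subject if c.isalpha()]
    if letters and all(c.isupper() for c in letters):
        score += 1                      # SHOUTING subject line
    if len(body.split()) < 10:
        score += 1                      # suspiciously short message
    if "http://" in body or "https://" in body:
        score += 1                      # message built around a link
    return score >= 2

print(looks_compromised("CHECK THIS OUT!!!", "http://example.com/amazing"))  # True
print(looks_compromised("Lunch on Friday?",
                        "Want to grab lunch Friday at noon? The usual place "
                        "works for me if that is good for you."))  # False
```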

I’m surprised at how this has carried over to my cell phone.

I’m even less special when it comes to phone calls than I am with email. Unknown people contact me by email fairly regularly, but they rarely do on my phone (except for that person who keeps calling up and asking for “Chris”). Therefore, when I get a phone call from a number I don’t recognize, I’m usually content to let it go to voice mail. Afterwards, I check my voice mail, and if it’s from a friend of mine or a service I want to call back (like a massage therapist, the bank, or something not personal but important), I will add them as a contact on my smart phone (as an aside, I am biased, but I really like my Windows Phone).

In other words, I treat unknown phone callers as suspected spammers.

It never used to be this way. Way back when I was growing up, every phone call was from an unknown caller. There was no way to predict who was on the other end of the line unless you had a good idea of who was going to call you that day or evening (some of my sister’s friends were so predictable we literally could predict them). But so what? The phone rang, you answered it. That’s how it works.

Many of you reading this can remember when call display was introduced. It wasn’t even that long ago. It would give you the phone number of who was dialing and it was a big deal. We were all like “Hey, this is cool! We now can see who is calling before we pick up the phone!”

Today, in the age of cell phones and smart phones, at least to me, to not have call display is bizarre. Why wouldn’t I want to know who is calling before I answered the phone? (Services like Google Voice, Skype and even the corporate line here at Microsoft mask the caller identity by sending from a general number but you still get a number).

Why do I treat unknown callers like potential spammers?

I know that phone calls are unlikely to be spammers, but sometimes they are. Sometimes I get a call from the Symphony asking me to buy tickets to some symphony because I purchased tickets to the video game symphony two years ago. Ugh. I don’t need to spend 5 minutes arguing with you. I also don’t want to be bothered when the gym I thought about signing up to keeps calling me every day with a special offer. Double ugh.

So I guess it’s all about not wanting to deal with the hassle of marketing.

Just like in email, where I am suspicious of unknown senders because they are trying to sell me something, this has translated into phone calls and the suspicion that callers, too, are trying to sell me something. Because sometimes, they are! But the cost is higher: whereas with spammers I can hit delete so long as there aren’t too many of their messages, with a phone call I have to politely sit and listen to their spiel, and it wastes 2-10 minutes of my time.

It’s just easier to screen the call, add them to my address book, and then, if they call back, intentionally choose to ignore them.

What about any of you? Does anyone else do the same thing?

Practical Cybersecurity, Part 1 – The problem of Education


I thought I’d close out the year by presenting my 2011 Virus Bulletin presentation. It builds upon my 2010 presentation about why we fall for scams which I blogged about earlier this year in my series The Psychology of Spamming:

Part 1 - How our brains work
Part 2 - The Limbic system, cognition and affect
Part 3 - External factors that influence our decisions
Part 4 - Why we fall for scams
Part 5 - Solutions
Part 6 - The Flynn Effect

What follows is the solution to the problem.


Practical Cybersecurity – An Introduction

The cybersecurity industry has a problem.

For years we have been preaching to users that they need to practice better cyber security awareness – don’t click on links in spam, hover your mouse over a link to see where it goes, don’t click on suspicious videos in your Facebook account. But the message never gets through; people fall for hacker tricks every day.

The security industry then moans, “Oh, users cannot be taught simple concepts! It’s hopeless!” But is the situation really hopeless? Is the problem the general public’s inability to grasp the message? Or is the problem the message itself? For example, take some standard password advice that the computer industry routinely gives: use a strong password, one that consists of a long string of random letters and numbers, and use a different one for every website. Yet countless studies demonstrate that humans are only capable of memorizing 7-10 random digits at a time. How are we supposed to memorize 10 random characters, and do this multiple times for the many websites that we use?

 

The advice that the computer industry gives is impractical; it is like giving people the secret formula to becoming a millionaire: first, get a million dollars…
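To make the impracticality concrete, here is a sketch (using Python’s secrets module) of the kind of password that advice demands, one per site. The site names are placeholders; the point is that nothing in the output gives memory anything to hang on to:

```python
import secrets
import string

def random_password(length=12):
    """Generate the kind of password the advice demands: a string of
    random letters and digits with no pattern to aid memorization."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per site -- now imagine memorizing all of these.
for site in ("bank", "email", "shopping", "work"):
    print(site, random_password())
```

Multiply this by the dozens of accounts a typical user has, and the gap between the advice and human memory is obvious.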

It’s not that we in the cybersecurity industry don’t have a valuable message to get across to people. We do. However, we need to learn how to give good, practical advice that people can use in real life, and we need to learn how to teach it so people will retain it. To do that, we need to look at successful educational techniques and use them when we evangelize our own message.

Background

At the 2010 Virus Bulletin conference, I presented a paper titled The Psychology of Spamming. In it, I examined why people fall for scams in email. The reason is that the amount of change in technology has outpaced our biological capacity to absorb it. For example, our bodies evolved to seek out fats, salts and sugars. We need those in order to survive. But today, we can mass produce donuts, salad dressing and yummy French fries. We know that these are not healthy for us, but our brains tell us that they are very tasty and they masquerade as food. We can’t yet tell the difference between good-for-us and not-good-for-us.

Similarly, when it comes to technology, we fall for scams when they involve money, food, sex or revenge.

When a scam hits us and hides behind any of these masks – a phishing scam that threatens to cut off your source of income, or a fake Viagra scam that promises you more sex – the logical part of our brains, the neo-cortex, stops executing and the limbic part of our brain, the part designed to react, takes over. If the correct emotions are triggered, we behave in ways that are contrary to our own best interests. Thus, while technology has helped our lives immensely, it does not replace our basic biological needs and drives. You can’t eat an iPad.

The solution to combating scams is through education. Researchers have determined that over time, people are becoming more intelligent. Educational test scores have not improved, but IQ test scores have. People are better at abstract reasoning now than they were before. For example: what animal is that? A cow. What sounds does a cow make? Moo. How many legs does a cow have? Four. What else has four legs? A dog. How are they similar? They are both mammals. And so forth.

Because people are better at abstract reasoning, they are better at transferring concepts from one topic to another. People today understand moral concepts like theft and robbery and the need to protect your property. If we already teach people to protect their physical property and to recognize physical danger, then through good education techniques we should be able to teach them to recognize cyber danger and protect their online property.

Transfer

The key to educating people about cyber security is through “transfer”; it is the ability to take what you have learned and transfer it to a new situation. When we are in school, we transfer basic addition to learning our multiplication tables, and transfer multiplication to calculus. We transfer our knowledge gained from walking to running to navigating while driving. We transfer cooking a single food item to preparing complex meals. The learning that we have acquired previously is reused for – transferred to – other situations, and then built upon.

When security experts complain about users’ lack of security awareness, they are really complaining about users’ inability to transfer common sense in real life to the fake Viagra scam in their junk mail folder. Users might consider themselves savvy at recognizing real-life scams, but this vigilance does not transfer to computer scams. Instead, they revert to believing that a deal too good to be true really is true, without thinking through the possibility that it is most likely a scam.

Why is there this lack of transfer?

Research into learning techniques and education has uncovered methods that support transfer. In order to make our message stick with the general public, we need to use these methods when we are distributing our message.



Part 1 - Introduction 
Part 2 – Expertise
Part 3 – Experience 
Part 4 – Metacognition 
Part 5 – What should we teach?
Part 6 – Bringing it all together  
