
The Psychology of Spamming, part 2 - The Limbic system, cognition and affect


The Limbic System

The limbic system is the center of emotion in the brain and it governs much of our non-conscious behavior.

We know from psychological studies that people will sometimes engage in behavior counter to their own best interests in order to satisfy short-term desires. This is the work of our limbic system. When we make decisions, the chemicals in our brain push us to either attain pleasure or avoid pain. The avoidance-of-pain response is especially powerful and more often than not trumps the desire to attain pleasure.

There are a number of basic survival mechanisms that are especially prone to limbic persuasion:

  • Money (financial gain)
  • Sex (keep the species going)
  • Food (make the hunger go away)
  • Revenge (yep)

Our brains took millions of years to evolve, and they did so out of necessity, in order to stay alive. Yet the scope of technological change has far outpaced the rate at which our brains can adapt. That’s why spamming works – the emotional responses it elicits speak to our limbic system, which is hardwired into our brain, and our brains are not trained to recognize the potential damage that spam can do. We may know, using our neocortex, that spam is bad, but it is our limbic brain that seeks to either pacify an emotional response or attain something. That is what advertising does; spam is no different.

Cognition and Affect

When we make a decision, such as buying something or clicking a link in a suspicious message, a concept known as “affect” comes into play. Affect is the quality and quantity of goodness or badness that we feel in response to a stimulus, whether conscious or not. For example, if we were in a dark alley and saw a stranger approaching, we would feel negative affect. If we were in a well-lit hall and an attractive member of the opposite sex were walking towards us, we would feel positive affect.

Affect has a significant impact on how humans judge risk. Whereas risk and benefit tend to be positively correlated in the world (as in investing), they are negatively correlated in people’s minds. If people can be made to feel negative affect (such as anger or fear), their impressions of a situation can be influenced in a negative way. For example, a group of test subjects was told all about the downside risks of nuclear power – it is expensive to build, it could contaminate water supplies, and so forth. When asked for their impressions, they judged nuclear power negatively. Yet when people were made to experience positive affect and then judge nuclear power, they judged it positively.

People also do better with actual numbers than they do with percentages, and sometimes it is counterintuitive. For example, consider the two statements:

  • 12 out of 100 people who do not wear seatbelts die in car accidents each day
  • 12% of people who do not wear seatbelts die in car accidents each day

Even though both statements are numerically equivalent, the first is more likely to make an impact on people and affect their behavior. Similar results have been found when subjects rated a disease that kills 1,286 people out of every 10,000 as more dangerous than one that kills 24.14% of the population, even though, mathematically speaking, the second is nearly twice as dangerous as the first.

People do better still when affect is produced using imagery, story and narrative. Warnings are more effective when, rather than being expressed in terms of percentages, they are presented in the form of affect-laden scenarios and stories. Respondents who were shown stories estimated and recalled information more accurately than those who were shown bar graphs or tables.

Consider the following advance-fee fraud scam:

Dear Respected One,

GREETINGS,

I am writing to you for the chance to make some money. I am currently locked out of my account of 12 million dollars. If you would be willing to put some money to unlock the funds, we would be willing to offer you 5% of the sum as compensation for effort input after the successful transfer of this fund to your designate account overseas.

Anticipating to hear from you soon.

Thanks and God Bless.

Contrast that with the following:

Dear Respected One,

GREETINGS,

I am Wumi Abdul; the only Daughter of late Mr and Mrs George Abdul. My father was a very wealthy cocoa merchant in Abidjan, the economic capital of Ivory Coast before he was poisoned to death by his business associates on one of their outing to discus on a business deal. Before the death of my father on 30th June 2002 in a private hospital here in Abidjan. He secretly called me on his bedside and told me that he has a sum of 2 million left in a suspense account in a local Bank here in Abidjan.

He also explained to me that it was because of this wealth and some huge amount of money his business associates supposed to balance his from the deal they had that he was poisoned by his business associates, that I should seek for a God fearing foreign partner in a country of my choice where I will transfer this money and use it for investment purpose. Sir, we are honourably seeking your assistance in the following ways.

Moreover Sir, we are willing to offer you $7000 of the sum as compensation for effort input after the successful transfer of this fund to your designate account overseas.

Anticipating to hear from you soon.

Thanks and God Bless.

The second scam offers the victim less money ($7,000 vs. 5% of 12 million = $600,000), but because it contains more concrete numbers, frames them within a story, and appeals to the emotion of sadness (the death of a father), it has a greater chance of swindling victims.
 

Part 1 - How our brains work
Part 2 - The Limbic system, cognition and affect
Part 3 - External factors that influence our decisions
Part 4 - Why we fall for scams
Part 5 - Solutions
Part 6 - The Flynn Effect


The Psychology of Spamming, part 3 - External factors that influence our decisions


Spam, Emotion and Decision

Researchers distinguish between two types of emotions – expected emotions and immediate emotions. Expected emotions are predictions about how we will feel if certain decision outcomes occur. They are forward-looking, and their benefit is to help us determine the optimal course of action to maximize our long-term well-being. They are functions of our neocortex.

For the purposes of spamming, it is not expected emotions that spammers (unknowingly) prey upon. Instead, it is our immediate emotions that kick in. Immediate emotions are the affect that we experience at the time of making a decision. These are heuristic judgments that we make to prioritize information and introduce important considerations that are not captured by expected emotions.

Immediate emotions save cognitive processing by triggering time-tested responses to universal experiences. For example, anger triggers aggression, and fear triggers flight. This action tendency lingers for some time if it is not discharged. In one study, subjects were told about a perpetrator who committed a crime. They were divided into two groups: one group was told that the offender was not punished, and the other was told that he was. Only the group that was told the offender got away with it experienced anger and demanded harsher penalties in unrelated legal cases. In other words, their emotion persisted and was not “defused.” In the other group, the offender was punished, and through that outlet their anger had a mechanism of release.

Immediate emotions do not guide or influence our actions completely, however. At low and moderate levels of affect, immediate emotions play more of an advisory role. Thus, in order to trigger action, more intense levels of affect must be invoked.

Probability

One thing that researchers have discovered is that affect is correlated with the intensity of anticipated outcome, not the probability of it occurring (except in the case of zero probability of outcome). For example, one study demonstrated that when subjects were given an electric shock in response to certain outcomes, they would experience physiological responses (sweating, nervousness, tension) that increased with their perceived intensity of that shock rather than with how likely they were to receive it.

We can see how this plays out in phishing scams, and why, by this analysis, some phishing scams are more effective than others. When we get a message saying that we will be locked out of our account, the probability that the bank would actually take that action is less relevant to us than the intensity of losing access to our finances.

[Image: Wells Fargo phishing email]


Wells Fargo is constantly working to ensure security by regularly screening the accounts in our system. We recently upgraded our security services on your account, and until we can collect this information, you will be unable to access your account. We would like to restore your access as soon as possible, and we apologize for the inconvenience.

Why is my account access limited?

Your account access has been limited for the following reason(s):

October 1, 2010:
We have reason to believe that your account was accessed by a third party. Because protecting the security of your account is our primary concern, we have limited access to sensitive Wells Fargo account features. We understand that this may be an inconvenience but please understand that this temporary limitation is for your protection.

Regards,
Wells Fargo Security Department


In the above example, it is the intensity of the outcome – someone breaking into our account and our access being restricted – that drives our decision to take action, not the probability that the bank would actually follow through.

Time

Another factor that affects immediate emotions is time. The closer something is approaching in time, the more intensely we will experience positive or negative affect. This is independent of the probability of occurrence.

A modern example of this is the “chickening out” factor. If you ask a group of people to tell a joke in front of their peers in a week’s time, you’ll likely get several volunteers. Both the volunteers and the non-volunteers can change their minds at any time. Yet as the moment approaches, right before the joke is to be told, many of those who volunteered will change their minds (a flight to safety to alleviate the fear response), while none of those who declined will suddenly decide to tell a joke after all.

In our phishing example, spammers who state in their messages that a user must take action within a certain time frame – a short one – will, whether or not they do it intentionally, get greater compliance than those who give no time frame or a longer one.

Control

Yet another factor that influences our decisions is our perception of how much control we have over a situation. People who believe that they have the power to reduce or increase a stressful situation report fewer panic symptoms and less distress.

Returning to phishing, it is important for phishers to give their victims an “out”. The victim must act soon or be locked out of their account. Money is important to people – we need it to live – and that is what goes through our minds: without our money, how will we live? Yet luckily, all the victim has to do is click on the web link and fill out a few simple details, and everything will be okay. The negative affect can be released with a couple of actions. Giving control to the victim helps alleviate the negative immediate emotion – it pacifies the flight response.


Part 1 - How our brains work
Part 2 - The Limbic system, cognition and affect
Part 3 - External factors that influence our decisions
Part 4 - Why we fall for scams
Part 5 - Solutions
Part 6 - The Flynn Effect

The Psychology of Spamming, part 4 - Why we fall for scams


Impact

Our ancestors lived in the Stone Age for a long time, and our limbic systems kept us alive; when we feel afraid of something, say snakes, it is because our brains are hardwired to avoid things that could harm us. The fear response is actually a good thing. Eventually, however, our neocortex evolved, and when it did, civilization advanced. Unfortunately, our limbic system was ill-prepared to deal with all of the change. There is a mismatch between the evolutionary adaptiveness of emotions and the environments in which we now make decisions.

We used to eat in order to survive, and survival was difficult. Now we have all sorts of temptations – chocolate, fatty fried foods, excess sugar, and the like. Our brains tell us that food is good and we need it to survive, and our emotions of desire tell us that it will taste good. Similarly, long before modern humans emerged, species had to engage in sexual behavior in order to keep the species alive. Sexual desire is built into our brains because it was a survival necessity. Yet now the population is in no need of propagation. Better nutrition and medical advances have prolonged our lives, but the desire for sexual contact is still there (and has to remain there, otherwise we would all disappear within a generation). Our limbic systems have not caught up to the scope of change, and we, as a civilization, are in future shock.

Unfortunately, the same scope of technological change that has introduced future shock has also made it possible for malicious actors to exploit the rest of us for nefarious purposes.

Spamming

The availability of technology in modern society has enabled scammers to move to a new platform. I say “scammers” and not “spammers” because scams have been around for a very long time. Our Stone Age brains are still hardwired to protect us and alleviate the fear response. However, when we see something that we trust and are given a way to protect ourselves from harm, we have not yet adapted to automatically consider the possibility that someone might be attempting to deceive us.


These scams work because of the following:

  • They speak to our limbic brain that drives our basic biological instincts required to survive
  • They use techniques such as time urgency, control, and serious impact associated with doing or ignoring them
  • They invoke positive or negative affect which has been demonstrated to influence people’s decisions

The advantage of email is that it is cheap: the sender does not bear the cost of processing it. If the sender had to pay postage, the economics would collapse – otherwise scammers would have been doing the same thing ever since the US Postal Service was created. If only 1 in 10,000 phishing scams works on its victims, and if the average theft from a phishing scam is $900 [1], then to make a respectable gross wage of $50,000 a spammer need only send out a little over half a million spams. Spam filters make delivery more difficult, so a spammer would need to send at least 10 times that amount – say 5 million spam messages. However, as the cost of technology has fallen, the regrettable by-product is that 5 million is a small number of emails to send over the course of a year.
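The arithmetic is easy to check. Below is a quick back-of-the-envelope sketch in Python; the success rate, theft amount, and filter effectiveness are the figures assumed above, not measured values.

```python
# Back-of-the-envelope spammer economics, using the figures cited above.
success_rate = 1 / 10_000   # fraction of phishing messages that pay off (assumed)
avg_theft = 900             # dollars per successful phish (approx., per Gartner)
target_income = 50_000      # desired gross "wage" in dollars

spams_needed = target_income / (success_rate * avg_theft)
print(f"{spams_needed:,.0f} messages needed")        # ~555,556

# If spam filters stop roughly 9 in 10 deliveries, scale up by 10x:
print(f"{spams_needed * 10:,.0f} with filtering")    # ~5.6 million
```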

Our Stone Age brains have not yet caught up to the reality of malicious intent, particularly with technology.

Part 1 - How our brains work
Part 2 - The Limbic system, cognition and affect
Part 3 - External factors that influence our decisions
Part 4 - Why we fall for scams
Part 5 - Solutions
Part 6 - The Flynn Effect


[1] Technically $886 according to Gartner in 2007, http://www.gartner.com/it/page.jsp?id=565125. Consumer Reports claimed it was $395 per incident in 2005, see http://www.hancockbank.com/personal/per_identity/per_identity_phish.asp

The Psychology of Spamming, part 5 - Solutions


Solutions

So how do we get people to stop falling for scams?

Will Cognition Save Us?

  1. Thinking about it - Thinking about a decision often changes people’s minds. When people are asked to justify why they like a particular choice, they end up less happy with their choices. Analyzing our reasons “cognitivizes” our preferences and makes salient certain features of the target that may have nothing to do with why we liked or disliked it in the first place.

    In other words, thinking about the emotion removes the effects of immediate emotion. The problem is that reducing emotion makes it difficult for us, as humans, to make actionable decisions. Without emotions, we would never do anything at all.
  2. Anger - Ironically, if people got angry when they were spammed, they would be less likely to fall for scams. People who are angry are more likely to overlook mitigating conditions before assigning blame (this is a scammer trying to fleece me vs I feel sorry for his dead uncle), to perceive ambiguous behavior as hostile (this person is up to no good), and to discount the role of uncontrollable factors (I don’t care that someone broke into my account).
     
  3. Learning and education - Increasing decision makers’ level of vigilance is often sufficient to attenuate the impact of weak-to-moderate immediate emotions. For higher levels, the realization that the message is a scam relieves the emotions that have been invoked. In other words, when it comes to spamming, education does work.

    People learn to recognize scams, and over time they become less susceptible to the emotional reactions those scams raise. This is similar to the chess player who learns pattern recognition – they can recognize good outcomes vs bad ones because they have played so many games.

Can Technology Save Us?

On the other hand, for those of us in the tech industry, or the financial industry, or law enforcement, we can’t wait around for our brains to evolve. Education is too slow, and nobody is going to spend hours upon hours learning how to recognize phishing scams or 419 scams. While we are waiting for evolution to catch up, people are being defrauded now.

Everything is achievable through technology.

- Howard Stark

Technology has been used to bridge biological gaps. Through medical science, human life spans have been extended. Automobiles and improved agricultural techniques have increased our capacity for food production and distribution. But can technology save us from malicious intent?

There are a few techniques that software vendors employ to combat the problem of financially-motivated spam.

 

Visual Identification

One of the techniques that legitimate websites use is certificates, which allow browsers to identify a web site with an actual identifying certificate that has been verified by a trusted third party. Users can then see the padlock in the corner of the browser, or in the address bar, and in some browsers (such as newer versions of Internet Explorer) the address bar is also color-coded green. Websites that are untrusted do not have the lock, and ones that are suspicious have the address bar color-coded red.

This makes use of human heuristics; from an early age we know that green means “Go” and red means “Stop.” Responsible brands make use of every legitimate identifier they can think of so that when consumers go to their page, they know that they can trust it.

Unfortunately, compliance using digital certificates isn’t uniform across many phished brands. And whether or not users actually pay attention to the color coding is up for debate when it comes to phishing.

Note that visual identification is a tool to help the end-user tell the difference between something that is real and something that is not. The end user still needs to know the difference.
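For the technically inclined, the certificate behind the padlock can be inspected directly. Here is a minimal sketch using Python's standard ssl module; www.example.com is a placeholder hostname, and a browser performs this same verification (and more) automatically:

```python
import socket
import ssl

hostname = "www.example.com"  # placeholder hostname

context = ssl.create_default_context()  # verifies against trusted CAs
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()

# The subject says who the certificate was issued to; the issuer is
# the trusted third party that vouched for it.
print("Subject:", dict(x[0] for x in cert["subject"]))
print("Issuer: ", dict(x[0] for x in cert["issuer"]))
print("Valid until:", cert["notAfter"])
```

The browser translates the result of this verification into the padlock and color cues described above.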


 

Sender Authentication

Sender authentication is a mechanism, in email, that allows a brand to publish policies giving receivers a way to identify mail that truly comes from that brand. If a message is trying to appear as if it comes from the brand but the sending server is not on the brand’s official list of permitted senders, the brand can specify what to do with that email. Thus, the receiver has instructions about what to do with spoofed email, and if it positively identifies the sender, it can flag to the end user that the sender is trusted. Two technologies for doing this are SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail).
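As an illustration, here is a minimal sketch of the first step of an SPF check – fetching a domain's published policy – using the third-party dnspython library. A real SPF implementation must also evaluate the mechanisms in the record (ip4:, include:, and so on) against the connecting IP; this sketch only retrieves the policy.

```python
import dns.resolver  # third-party package: dnspython

def get_spf_record(domain):
    """Return the SPF policy published in a domain's TXT records, if any."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("ascii", "replace")
        if txt.startswith("v=spf1"):
            return txt
    return None

# e.g. prints something like "v=spf1 -all" for example.com
print(get_spf_record("example.com"))
```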


One of the main weaknesses of sender authentication is that it can only identify spoofing of the exact sending domain. If a spammer registers a domain that looks and sounds similar, like faceb00k.com, and uses visually identifying pictures, then the end user can still be fooled. In fact, this is something that phishers regularly do. The actual spoofed brand is normally further down in the URL, like http://www.fakedomain.com/facebook.com/signin.aspx.

Another weakness is that sender authentication does nothing for the case where no identification is required. For example, in a 419 or lottery scam, it doesn’t matter if the sender sends from a Yahoo or Hotmail alias. Identifying the sender is irrelevant because the scam does not rely upon visual recognition of a trusted brand; rather, it tries to invoke an emotional response from the end user, tricking them into performing some action – usually sending money to the spammer.

URL Inspection

One reactive technique is URL inspection, or reputation analysis. Browsers today make use of lists of URLs: if a URL points to a malicious site, a message is displayed to the end user indicating that the site is malicious.
URL lists are populated by a collection of volunteers. When a new link is discovered, trusted users are eligible to submit it to a central repository, where it is made available for others to use. Browser vendors then download the lists and distribute them to individual users.

URL inspection takes the decision away from the end user, who may not be able to tell the difference between a malicious site and a legitimate one, by telling them, in big red text, that they are going somewhere dangerous. It’s similar to automating your bill payments by having the bank pay the bills for you: if you can’t trust yourself to pay your bills on time, have someone else do it so you don’t get into trouble.
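At its core, the lookup reduces to a set-membership test. The sketch below is a simplification – real services such as Google Safe Browsing distribute hashed list chunks rather than plain hostnames, and the hostnames here are made up:

```python
from urllib.parse import urlparse

# Hypothetical downloaded blocklist of known-bad hostnames.
malicious_hosts = {"phish.example.net", "faceb00k-login.example.org"}

def is_malicious(url):
    host = urlparse(url).hostname or ""
    return host in malicious_hosts

if is_malicious("http://phish.example.net/signin"):
    print("WARNING: this site has been reported as dangerous")
```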

The drawback of URL inspection is three-fold:

  1. It takes much effort to maintain it. These URL lists require human and physical resources. Hardware goes down constantly. It’s not free to maintain.
  2. It is prone to false positives. Because of the way submissions are done, there is some automation. With automation comes false positives, and with false positives comes the blocking of legitimate sites. This degrades the user experience.
  3. It is reactive. Somebody has to detect and submit the malicious URL. Miscreant actors are endlessly creating new domains and in the window of time between submission, verification, download and distribution, people can still get scammed. The time delay prevents URL inspection from being a panacea to the problem of phishing.
DNS Takedowns

DNS takedowns are the next logical step after URL filtering. Once a site has been identified as malicious, brand owners can take action to have it removed from DNS. The problems associated with DNS takedowns are the same as for URL inspection, with one more added: the human effort involved in taking down a lot of domains doesn’t scale. Humans usually have to get involved to take a domain out of DNS, and that takes a lot of time and effort. Many organizations just don’t have the resources.


Part 1 - How our brains work
Part 2 - The Limbic system, cognition and affect
Part 3 - External factors that influence our decisions
Part 4 - Why we fall for scams
Part 5 - Solutions
Part 6 - The Flynn Effect

The Psychology of Spamming, Part 6 - The Flynn Effect


The Flynn Effect

Some of the most phished brands are PayPal, HSBC, Bank of America, Facebook, and eBay. All of these sites have security pages set up, and they are all fairly similar – they use education to inform their user base about which techniques they will never use to contact customers and how to recognize phishing scams. While they all engage in some of the techniques above to counter phishing, they all treat user education as a central focus.

During the 1980s and 1990s, psychological researcher James Flynn discovered that the IQ scores of the general population were increasing over time – about 3 IQ points per decade. This was consistent across decades and across demographics, in countries throughout both the developed and the developing world. To Flynn, it looked like the world was getting smarter.

Flynn was puzzled by this effect. Was the world truly getting smarter? If so, then by working backwards it would suggest that people 60 years ago, when compared to today, were incredibly unintelligent and hardly even functional.

There are various theories for the Flynn Effect.

  1. Better education – One theory is that through better education programs, people have gotten smarter. However, this is contradicted by evidence showing that academic progress has not improved steadily at the same rate as intelligence scores.

  2. Improved nutrition - Improved nutrition is another explanation. Today's average adult from an industrialized nation is taller than a comparable adult of a century ago. Available data suggest that these gains have been accompanied by analogous increases of head size, and presumably by an increase of the average size of the brain. However, groups who tend to be of smaller overall body size (e.g. women, people of Asian ancestry) do not show lower average IQs. It is also unlikely that the speed of increase across everyone is explained by nutrition alone.

  3. Flynn’s theory - Flynn put forth his own theory to explain the effect, likening it to shifting priorities in society and using a sports analogy. If people decide to excel at sports, train more intensely, and put themselves on better nutritional plans, then they will become better at sports than their ancestors were.

    Yet if we also decide that the 100-m sprint is the most exciting sport, and more training and resources are directed at excelling in it, then the biggest advances will be made in sprinting. To be sure, there will be advances in other sports, but sprinting is where the biggest gains will appear.

And so today, as technology has evolved, so have we as a society. Our mental priorities have changed over time. Our problem solving skills have become less attached to the concrete and more applied to finding abstract similarities (Question: how are dogs and rabbits alike? Answer: they are both mammals). The reason we do better on IQ tests is because we are getting better at thinking about abstract concepts.

This was not a priority for our ancestors. They needed to work in factories to earn a living, and running machines, not higher education, was the priority. Yet as society evolved and technology shifted, we, too, had to shift. Perhaps we haven’t shifted as fast as technology, but the effect is observable and measurable. Because of this, we understand computers better. Children born today have advantages they didn’t have 50 years ago; abstract concepts that children grasp today could not have been grasped nearly as easily by children 50 years ago, because societal priorities were different.

Flynn’s theory represents the hope in the fight against financial scams. As people become familiar with financial websites, phishing, 419 scams, and so forth, recognizing them will eventually become ingrained in us. We will learn to recognize these scams with greater ease because societal priorities will put things like online banking and email at the center of our daily lives.

Greater intelligence allows us to recognize patterns more easily, and that includes patterns of abuse. As we become more used to technology, we will also become better at detecting scams and abnormal behavior – the education efforts that phished brands have been investing in will eventually start to pay off. Furthermore, technology will speed up the process by which we distribute that education, which will help us grasp abstract concepts. We will still have to deal with the dopamine released by our limbic system, but we will get better at controlling our response.

Technology is what allowed miscreants to make abuse widespread, but it is also technology that will eventually lead to these techniques becoming obsolete.

A Word of Caution

Lest we all start to think that the future looks bright, we must be mindful that technology will continue to evolve and progress. As it progresses, there will be new avenues of attack that spammers and their ilk will continue to exploit. In response, the security community will have to come up with new mitigations to counter the threats. As time passes, the general public will start to catch up.

New trends are where fortunes are made. Human evolution has built a strong infrastructure upon which malicious advertisers can prey, giving them an inherent advantage. Whether or not technology can combat this give-and-take model effectively is still undecided; we can be certain, however, that it will continue long into the future.

Conclusion

Spam works. Financially related spam also works, and the reason it works is that the emotional responses it invokes are hardwired into our brains. We are biologically predisposed to respond in certain ways to certain stimuli, and when malicious actors prey on those emotional responses, they can get us to act in ways that we normally would not. It’s not that we lack intelligence; rather, the environment in which our brains evolved is quite different from the one we live in today. We are not optimized to live in a world where threats can be disguised as things that we trust.

Fortunately, we do have some tools in place to lessen the impact of these threats. We will never be able to eradicate them completely. However, through the use of technology, we can impede spammers’ efforts to deceive us. And over time, as the population becomes better adapted to the environment in which we live, we will also become better at recognizing malicious players. The types of scams that work today have a shelf life, and that clock is ticking. Time moves slowly, too slowly for many of us, but it marches inexorably onwards.

The End.

Part 1 - How our brains work
Part 2 - The Limbic system, cognition and affect
Part 3 - External factors that influence our decisions
Part 4 - Why we fall for scams
Part 5 - Solutions
Part 6 - The Flynn Effect

Mail from legitimate webmail sources


For many years, I have tracked spam from botnets and reported on it on this blog. I have analyzed those botnets’ distribution patterns by number of IPs, number of messages per email envelope and geographical distribution.

While spam from botnets is interesting, and botnets are the main source of spam, they are not the only source. What about spam that originates from the MAGY sources?

MAGY stands for Microsoft (Hotmail/Outlook.com), AOL, Google (Gmail) and Yahoo. Spammers create botnets that go out, sign up for accounts on these services and then send spam from them. This continues until the service shuts them down.

Spammers also compromise legitimate MAGY users’ accounts. However they acquire the passwords to these accounts, they subsequently log in and send spam until the user notices and changes their password.

In either case, this is known as reputation hijacking. Spammers are betting that spam filters will not IP block these accounts because it would cause too many false positives.

I’ve tracked mail from these four sources using the same scripts I use to track mail from botnets: I take the IPs in each service’s SPF record and then record how much mail comes from those IPs. Below are some graphs of the total mail (not just spam) from these services. Is there anything we can determine from these mailing patterns?
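For the curious, the counting logic is roughly as follows. This is a simplified sketch: the network below is illustrative, not any service's actual SPF data, and my real scripts read from our mail logs.

```python
import ipaddress

# Networks taken from a service's SPF record (the ip4:/ip6: mechanisms).
# This /24 is a documentation range used purely for illustration.
service_networks = [ipaddress.ip_network("203.0.113.0/24")]

def from_service(connecting_ip):
    addr = ipaddress.ip_address(connecting_ip)
    return any(addr in net for net in service_networks)

# Stand-in for one day's connection log:
todays_connections = ["203.0.113.26", "198.51.100.7", "203.0.113.40"]
daily_count = sum(1 for ip in todays_connections if from_service(ip))
print("Messages from this service today:", daily_count)
```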

Before we continue, there are some things I must point out:

  1. In August, my script that counts these things up crashed and died for a few days. I don’t know why this is, but it mysteriously fixed itself without any intervention on my part.

  2. I have not included the spam percentage in these figures. My goal is to only look at volume patterns.

  3. I have only included six months’ worth of data – March through August 2012.

With that out of the way, what can we say about mail from MAGY?  First up is Hotmail.

[Graph: daily mail volume from Hotmail IPs, March–August 2012]

We can see that Hotmail shows a weekend sawtooth pattern – during the week we see plenty of mail, but volume drops over the weekend. This means that most users are sending mail from Hotmail during the week but not on weekends.

Why is this?

It looks like people are sending from Hotmail at work but not from home on the weekends. Or possibly they do it at home but for some reason don’t send that much mail from Hotmail on the weekend.

Do people have better things to do than send email on weekends?

Next up is Yahoo; the same caveats (#1-3) apply here, too.

[Graph: daily mail volume from Yahoo IPs, March–August 2012]

Yahoo has the same sawtooth pattern as Hotmail but we see a spike at the end of March that was not present with Hotmail, and a huge spike in early July.  These correspond to spam outbreaks (both in Yahoo and Hotmail). Whereas Hotmail had the spike near the end of the month, Yahoo’s was near the beginning.

However, just like Hotmail, people aren’t sending as much mail on the weekend.

Next up is Gmail. Below is their mail distribution sending to us:

[Graph: daily mail volume from Gmail IPs, March–August 2012]

Just like Hotmail and Yahoo, Gmail has the same sawtooth pattern. But unlike Hotmail and Yahoo, there are no spiky blips aside from my script crashing. We haven’t seen any major spam campaigns from Gmail during this time.

Next is AOL:

[Graph: daily mail volume from AOL IPs, March–August 2012]

As with the other three, there is the same sawtooth pattern, plus a spiky blip in the middle of the Yahoo and Hotmail campaigns. This is evidence that spammers were rotating through those three services in July but skipped Gmail. Interestingly, mail from AOL dropped off at the end of July and through the start of August, but has since recovered.

So far, everyone pretty much looks the same. People send plenty of mail during the week but not so much on weekends. Weekends are roughly 35-40% the volume of weekdays.
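That 35-40% figure is straightforward to compute from the daily tallies; here is a sketch with made-up numbers:

```python
# Weekday/weekend ratio from a day-indexed series of message counts.
# The volumes here are invented for illustration.
daily_volume = {"Mon": 100, "Tue": 104, "Wed": 101, "Thu": 98, "Fri": 95,
                "Sat": 38, "Sun": 36}

weekday = [v for d, v in daily_volume.items() if d not in ("Sat", "Sun")]
weekend = [v for d, v in daily_volume.items() if d in ("Sat", "Sun")]
ratio = (sum(weekend) / len(weekend)) / (sum(weekday) / len(weekday))
print(f"Weekend volume is {ratio:.0%} of weekday volume")  # ~37%
```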

But there is one exception to this pattern: Facebook. I collect statistics on mail from the IPs in Facebook’s TXT record. Below is what Facebook looks like:

[Graph: daily mail volume from Facebook IPs, March–August 2012]

Aha!

The sawtooth pattern does not exist here. Instead, the volume is erratic but gradually trending upward (that blip at the end looks ugly, doesn’t it?). The summer months are where we saw the largest gains, which corresponds to school being out for that part of the year.

Unlike the sawtooth pattern of MAGY, Facebook doesn’t care about weekends very much. However, Facebook is not just about sending personal mail like Hotmail or Yahoo. Instead, Facebook sends you all sorts of notifications depending on your settings:

  • Someone sent you a private message on Facebook
  • Someone tagged you in a photo
  • Someone invited you to FarmVille, or you have some other action to take
  • And a bunch of others

But it doesn’t really matter what people are doing: their friends are logged onto Facebook every day of the week and doing stuff, and people are getting alerts about it. Whether or not they read all those alerts is another question.

But it does go to show that people use Facebook differently than they use their email accounts. Email is for certain times of the day, Facebook is for whenever.

What do consumers know about Antivirus?


I’ve been at the Virus Bulletin 2012 conference in Dallas, Texas this week and there have been a lot of good presentations. I took notes on over 20 of them and thought I’d write about some of the more memorable ones.

One of them was a presentation entitled Malware and Mrs Malaprop: what do consumers really know about AV? by Stephen Cobb of ESET. Mrs. Malaprop is a character in an 18th-century play who used malapropisms – using the wrong word to describe something.

He told the story of a massive fireworks malfunction in San Diego this past 4th of July. The co-owner of the fireworks company blamed it on a virus – somehow, a piece of malware must have gotten into the program that controlled the display. The moral of the story is that when anything goes wrong with a computer program, users attribute it to a virus.

In my experience, this rings true many times. When my parents can’t get a computer program to run properly, it’s because of a virus. But in my parents’ case and in the case of this fireworks malfunction, it wasn’t malware at all. The causes of glitches are often far more complex and much less malicious.

So why don’t people know better?

The reality is that most users (68%) haven’t had any security training from their employer. Of the 32% who have, only 1 in 10 has had it in the past 12 months. Security training must be refreshed; this means that only about 3% of people have had security training from their employers in the past 12 months!

But does this really matter?

Well, even in spite of all this ignorance, the Internet has survived and hasn’t fallen apart. What we find is that 83% of people have heard of phishing, although only 58% correctly identified the definition of phishing. However, the more education people have, the better they do at identification.

How do people view the security of various platforms? Well, going from least secure to most secure, the order is the following:

  1. Windows PCs
  2. Windows tablets
  3. Android smartphones
  4. Windows smartphones
  5. Android tablets
  6. iPhones
  7. iPads
  8. Macs

This corresponds to most of the press coverage we see and read when it comes to security. Yet in spite of these perceived insecurities, the majority of people access the Internet from home using Windows PCs – the very platform they believe is the most insecure. Clearly, there is a gap between what people believe and what they actually do.

Not only that, but there is another behavior/belief gap when it comes to social networks. About a quarter of people think that their private information on social networks is unsafe. However, the doubt about safety is greater among those who spend more time on social networks. So, the more you use it, the more unsafe you think it is… but you still use it.

This is reminiscent of the time Homer ate the rotten sandwich that made him sick but he kept on eating it.

What about security practices?

It turns out people are pretty good at assessing how strong a password is, but they don’t necessarily use strong passwords themselves. I believe the reason people use weak passwords is that strong passwords are too difficult to remember, and therefore they fall back on heuristic shortcuts.

Also, 91% of consumers use some sort of security software – usually they mean A/V. Those who don’t say that they can’t afford it, can’t figure out how to install it, find that it slows down the computer, or find that it conflicts with other software. Sometimes they say that because they use a Mac, a tablet or Linux, they don’t need it.

To conclude the presentation, Cobb made the following five observations:

  1. Even without security training, users are good at figuring things out.

  2. More educated users = more security-aware.

  3. There is an ongoing cost to consumers due to security failures.

  4. A/V software could be improved.

  5. Educating the market makes a lot of sense for security vendors (some would dispute this because it disrupts the business model).


Those were my takeaways from this presentation.

A Plan for Email over IPv6, part 1 – Introduction, and How Filters Work in IPv4


Last week, a colleague from work and I gave a presentation at the Virus Bulletin conference entitled “A Plan for Email Over IPv6.” I have written about this previously on this blog, but this paper contains updates to my previous plan and goes into further detail beyond what is in my IETF draft.

Our presentation went well – much better than it did at the IETF this past summer in Vancouver – and the feedback I got about the idea was much better than anything I ever received on discussion lists. So without further ado, here it is.


A Plan For Email Over IPv6

“There’s a storm coming, Mr. Wayne.”

- Anne Hathaway as Catwoman in The Dark Knight Rises

Introduction

As the available IPv4 address space is depleted and the number of Internet-connected devices continues to increase, the world is moving to IPv6. Slowly but surely, it is coming.

But not for email.

Amongst email receivers, there is no agreement on how to handle email over IPv6 in the short term, although there is agreement that eventually it will have to be figured out.

The reason for the lack of consensus on transmitting email over IPv6 is spam filtering:

  • Some email experts believe that spam filtering will be done the same way in IPv6 as we do it in IPv4: with the use of IP address [1] blocklists [2].

  • Others believe that IP blocklists will be ineffective in IPv6.

  • Some believe that email will never move to IPv6.

  • Still others think that all email will eventually be sent over IPv6, but not anytime soon; in the meantime, just use IPv4 to send mail.

With so many opinions on the topic, why is email over IPv6 such a problem, and why are email receivers so reluctant to do it?

Background

Before we get started, let’s define what we mean by email over IPv6. It does not mean transmitting email to the user’s mail server over IPv4, with the user then accessing that mail server over IPv6. What we mean is that email travels across the public Internet over IPv6. How the user connects to their mail server is irrelevant to the discussion.

Not this:

[Diagram: mail delivered to the mail server over IPv4; the user merely reads it over IPv6]

But this:

[Diagram: mail transmitted between mail servers across the public Internet over IPv6]

The biggest reason why no email providers are eager to transmit email over IPv6 is that there is currently no way to deal with the problem of abuse. Today, spammers make extensive use of botnets. Each day they compromise new machines and start using them to spew out spam. Each of these bots uses a different IP address, and those IP addresses change all the time. If you had 10,000 IP addresses sending out spam today, tomorrow there would be 10,000 again, but at least 9,700 of them would be different from the ones used today [3].

The reason there is so much rotation in IP addresses is that modern spam filters make use of IP blocklists. When a blocklist service detects that an IP is sending spam, it adds it to the blocklist, and receivers reject all mail from it. There are exceptions to this listing process, such as a legitimate IP that sends a majority of good mail (such as a Hotmail or Gmail IP address), but in general, mail servers reject all mail from blocklisted IPs. They do this for the following reasons:

  1. Resource Optimization - 70% of all email flowing across the Internet (not including internal mail within an organization) is spam. If a sending IP is on a blocklist, a mail server can reject it in the SMTP transaction and save on all of the processing costs associated with accepting the message and filtering it in the content filter.

    Most mail servers today would topple over and crash if they had to handle all of the mail coming from blocklisted IPs – they could not keep up with a load that increased the total number of spam messages to process by a factor of 10.

  2. Storage - Spam filters that mark messages as spam in the content filter must store it for the end user. This mail is either stored in a user’s junk mail folder or in a cloud-based spam quarantine. By rejecting email up front, mail servers can reduce the amount of hardware required to store email. This reduces the costs associated with filtering mail.

  3. Spam Effectiveness - Spam filters achieve better antispam metrics, and better user experiences, by using IP blocklists. Modern content filters are good, but rejecting 100% of mail from a spamming IP address means that there is no possibility of a false negative from that IP address.

    By contrast, if a spam filter does not use an IP blocklist, the content filter has to learn to recognize the spam coming from that IP address, update the filter and then replicate out the changes. This method is slower than pulling down a blocklist and then using it as the first line of defense. Without an IP blocklist, a spam filter will catch between 80% and 99% of the mail coming from a blocklisted IP. While many spam filters get close to that 99% range, it’s still not 100%.

  4. Reduced Risk - Fewer spams arriving in users’ inboxes reduces risks for organizations and for individual users. If spam and other malicious messages, such as messages with links to malware, are rejected up front, a user cannot later go digging through their spam quarantine or junk mail folders, retrieve the message and click on the malicious link. Rejecting this mail precludes this possibility from ever occurring since the user will never see it.

All of these reasons make the use of IP blocklists indispensable.
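To make the mechanics concrete: the standard DNSBL lookup convention reverses the IP's octets and queries them under the blocklist's DNS zone, and any answer means the IP is listed. Below is a minimal sketch; production MTAs also inspect the specific 127.0.0.x return code, and querying a list like Spamhaus is subject to its usage terms.

```python
import socket

def on_blocklist(connecting_ip, zone="zen.spamhaus.org"):
    """DNSBL convention: reverse the octets, append the zone,
    and treat any A-record answer as a listing."""
    reversed_ip = ".".join(reversed(connecting_ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_ip}.{zone}")
        return True   # query resolved: the IP is listed
    except socket.gaierror:
        return False  # NXDOMAIN: not listed

# Reject at connect time, before any message content is accepted:
if on_blocklist("192.0.2.1"):
    print("554 5.7.1 Connection rejected: sending IP is on a blocklist")
```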


Posts in this series:

- A Plan for Email over IPv6, part 1 – Introduction, and How Filters Work in IPv4
- A Plan for Email over IPv6, part 2 - Why we use IP blocklists in IPv4 and why we can't in IPv6
- A Plan for Email over IPv6, part 3 - A solution
- A Plan for Email over IPv6, part 4 - Population of the whitelists
- A Plan for Email over IPv6, part 5 – Removals, key differences and standards

 


[1] In this paper, whenever I use the term “IP,” I mean “IP address.”

[2] The terms “blocklist” and “blacklist” are synonymous.

[3] I have confirmed this by digging through our own IP statistics and checking the uniqueness of abusive IP addresses.


A Plan for Email over IPv6, part 2 – Why we use IP blocklists in IPv4, and why we can’t in IPv6


IP Blocklists

Blocklists are populated in a number of different ways. Some use spam traps to capture email sent to addresses that have never been used publicly; others use statistical algorithms to judge that a sender is malicious or compromised. Once the data is acquired, blocklist operators populate their lists in two ways:

  1. They list individual IP addresses, one by one, of all the servers that are sending mail.

  2. They make use of CIDR (rhymes with spider) notation. CIDR notation, or Classless Inter-Domain Routing, is a way to group large blocks of IP addresses. A provider lists a larger group of IP addresses in CIDR notation to save space in the file or database, so they don’t have to list the addresses one by one (see the sketch after this list).

    For example, the IP addresses in the range 127.0.0.0 – 127.0.0.255 can be listed as 127.0.0.0/24. Rather than using 256 lines in a file, only 1 line need be used.

    The XBL from Spamhaus is about 7 million entries (lines of text) and around 100 megs in size. By contrast, the PBL contains 200,000 lines of text (without exceptions in ! notation) and is 6 megs. However, the PBL is represented mostly in CIDR notation. If all of these ranges were expanded, it would be over 650 million individual IP addresses. That’s a whole heck of a lot more IPs in the PBL for a whole lot less file size!
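Python's standard ipaddress module illustrates the compression nicely – one CIDR line stands in for hundreds of individual entries, and membership tests never require expanding the range:

```python
import ipaddress

block = ipaddress.ip_network("127.0.0.0/24")  # one line in a blocklist file
print(block.num_addresses)                    # 256 addresses covered

# Checking whether an IP falls inside the range is a direct test:
print(ipaddress.ip_address("127.0.0.17") in block)  # True
print(ipaddress.ip_address("128.0.0.17") in block)  # False
```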

When implementing the IP blocklists from Spamhaus in a real organization, running the XBL in front of the PBL blocks about 4 times as much mail as PBL [1]. The XBL is better at catching individual bots that are sending out spam but are not listed anywhere (they are new IPs) whereas the PBL is better at pre-emptively catching mail servers that should never send out spam (probable bots but it doesn’t matter because they shouldn’t be sending mail anyhow).

However, if every single PBL IP had to be listed individually instead of being compressed into CIDR ranges, the PBL would be 9.4 gigs in total size. 9.4 gigs is a large file. It isn’t completely unmanageable, but it goes from being a minor inconvenience to a major one: it takes a long time to download, upload, and process a 9.4 gig file. It’s also easier to store the file entries in a database when there are only 500,000 entries (or even 7 million) vs 650 million of them. Databases that large run into problems of scale.

The PBL and XBL are examples of why different styles of IP blocklists are required. The PBL lists 650 million IPs and we still have over 7 million IPs on the XBL that aren’t on the PBL. Clearly, spamming bots can move around such that they are not published on the lists that have large address spaces listed. Bots are very good at hiding in places that are not blocked yet. Given enough space to hide, spammers will hide in that space because if they didn’t they would not be able to stay in business. The problem that the industry faces is that as soon as we find a spammer’s hiding space, we can block it for a while but the spammer will vacate it, relocate elsewhere and continue to spam [2].


 

And therein lies the problem with IPv6. An IPv4 address consists of 4 octets, and each octet is a number running from 0-255. This means that there are 256 x 256 x 256 x 256 possible IP addresses, which is about 4.2 billion (in reality there are fewer, because many ranges of IPs are reserved and not for public consumption). If you had to list every single IP address individually in a file, the size of the file would be 61 gigs. That is a very large file, and very few pieces of hardware can handle a file of that size in memory (whether you are doing IP blocklist lookups in rbldnsd or some other in-memory solution on the mail server). Processing the file and cleaning it up would take a very long time; you simply couldn’t do it in real time, and IP blocklists need to be updated frequently (once per hour at a bare minimum).

IPv6 multiplies this problem. We have seen that spammers already possess the ability to hop between IP addresses quickly; they do this because once an IP gets blocked, it is no longer useful to them. However, there are only so many places they can hide – 4.2 billion. In IPv6, if they copy the same pattern of sending spam and hopping between addresses, there is virtually unlimited space to hide in. To put it one way: about 250 billion spam messages are sent per day. Under IPv6, spammers could send 1 piece of spam per IPv6 address, discard it, and move on to the next address for the next 10,000 years [3] without ever needing to re-use a previous address. A mail server could never load a file big enough for even one day’s IPv6 blocklist if spammers sent every single spam from a unique IPv6 address. Because spammers could hop around so much, IP blocklists would encounter the following problems:

  1. They would get to be too large for anyone to download, process and upload. 
     
  2. Even if blocklist maintainers listed only the IP addresses that were spamming, a spammer could send spam from an IP address, let it get listed on a blocklist, then discard it and move on to the next address. By rotating through IP addresses quickly, a spammer would always be one step ahead of the blocklists, and the lists would lose their effectiveness.

How do we know spammers will do this?

Because they are already doing it! The biggest shift in spammer behavior over the past year and a half is not the move to infected bots, but the move to compromised accounts. By compromising accounts, spammers gain virtually unlimited resources from which to spam. Spam filters cannot block those IP addresses without creating many false positives. Thus, from the spammers’ perspective, they have defeated IP reputation filtering and can send from randomized email accounts. It is difficult for spam filters to create proactive rule sets when the population of potential email addresses is nearly unlimited, since an email address can be almost any combination of letters.

Similarly, the population of potential IPv6 addresses is nearly unlimited. Spammers have already generated the capacity to shift their tactics to mechanisms that evade spam filters. Once they learn that IPv6 gives them another way around the filter, they will start using this technique en masse.

It’s probably true that for the first little while until IPv6 becomes more common, spammers will not use it. However, it is only a matter of time before the cost/benefit ratio shifts to their favor and when it does, they will do it. Time and experience has shown that better spammers always evolve. There truly is a storm coming.

This is why no email receivers are eager to send and receive email over IPv6 [4]. Performing spam filtering in IPv6 the same way as in IPv4 will not work. We have to allow for the worst-case scenario, in which spammers overwhelm mail servers and drain their processing power by forcing them to deal with a 10x increase in traffic.
 


Posts in this series:  

- A Plan for Email over IPv6, part 1 – Introduction, and How Filters Work in IPv4
- A Plan for Email over IPv6, part 2 - Why we use IP blocklists in IPv4 and why we can't in IPv6
- A Plan for Email over IPv6, part 3 - A solution
- A Plan for Email over IPv6, part 4 - Population of the whitelists
- A Plan for Email over IPv6, part 5 – Removals, key differences and standards   


[1] Confirmed by independent research by antispam companies.

[2] This is the origin of the term “whack-a-mole”, a term the antispam industry borrowed from the carnival game. As soon as you whack one mole, it hides and another pops up.

[3] Coincidentally, this is the same amount of time it will take before the Toronto Maple Leafs win another NHL Stanley Cup.

[4] Other readers will point out that the major reason it won’t work is because a server could never cache that many IP addresses. While true, not every mail server looks up IPs on a blocklist via a DNS query.

A Plan for Email over IPv6, part 3 – A solution


A solution

How do we deal with it?

Eventually, the Internet community will come up with a permanent solution for email over IPv6 but in the meantime, a transition model is required. The use of IPv6 whitelists is an interim solution.

Rather than using IP blocklists to reject mail from known bad IP addresses, email receivers who wish to receive email over IPv6 should use whitelists to accept mail only from known good IP addresses, rejecting all email from IPv6 addresses that are not on the list.

This IPv6 whitelist is a "Do not reject all mail from this IP address" list; email from these IP addresses may still go through traditional content filtering. IP addresses on this whitelist are there because they send email over IPv6 intentionally; they are not sending email without the computer owner's consent, as part of a botnet.

It is not unusual for email receivers running modern spam filters to use whitelists, or “do not block” lists, while still filtering the mail by content. For example, many large email receivers do not block the IP address ranges of large webmail providers but still apply content filtering. Other email receivers implement whitelists wherein a small set of IP addresses undergoes no spam filtering.

A flowchart of the process is below:

[Flowchart: incoming connection → IPv6 sender on whitelist? → yes: content filter; no: reject or throttle]
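In code, the accept/reject decision might look like the sketch below. The whitelist entry uses the IPv6 documentation prefix as a stand-in for a real trusted sender's range:

```python
import ipaddress

# Stand-in whitelist; 2001:db8::/32 is the IPv6 documentation prefix.
ipv6_whitelist = [ipaddress.ip_network("2001:db8::/32")]

def connection_policy(sender_ip):
    addr = ipaddress.ip_address(sender_ip)
    if addr.version == 4:
        return "IPv4 path: check blocklists, then content filter"
    if any(addr in net for net in ipv6_whitelist):
        return "whitelisted IPv6: accept and run content filter"
    return "non-whitelisted IPv6: reject (or throttle)"

print(connection_policy("2001:db8::25"))  # whitelisted IPv6 sender
print(connection_policy("fd00::1"))       # unknown IPv6 sender
```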

Using an IPv6 whitelist has the following advantages:

  1. It allows email communication between those Internet users who need to do it over IPv6 instead of IPv4.

  2. It does not permit widespread abuse of email over IPv6 since senders must make an effort to get onto the whitelist.

  3. The lists will not take up much memory or bandwidth, since the total number of legitimate senders over IPv6 is projected to be substantially smaller than the total number of Internet users or devices. There simply are not that many senders who require sending email over IPv6 – fewer than 20 million, which is smaller than many IPv4 blocklists.

It is not unusual to put restrictions on IP addresses that are newly sending email. Today (2012) on IPv4, Internet users cannot simply start sending email out of a new IP address without encountering problems; most spam filters will view mail from a new IP address as abusive and either block it or throttle it. Therefore, representatives of sending organizations contact receivers to tell them to expect mail from their dormant IP addresses in the near future, or else they ask for pre-emptive whitelisting. Thus, an IPv6 whitelist already has precedent: just as new senders in IPv4 request pre-emptive whitelisting as a courtesy, in IPv6 they will have to request pre-emptive whitelisting as a requirement.

A further refinement is that receivers of email over IPv6 do not need to reject non-whitelisted anonymous senders over IPv6. Instead, they can throttle the senders by limiting the amount of mail they can send. As time passes, the IPv6 senders can build up a good reputation and move from the throttle list, where the amount of mail they can send per IP is limited, to the whitelist – where the amount of mail they can send per IP is nearly unlimited.

Thus, the key characteristic of the whitelist solution is not to treat everyone as potentially good until they show otherwise, but instead to treat everyone as suspicious until they prove otherwise. The decision to throttle or reject mail from untrusted senders is up to the recipient.
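
As a rough sketch of the throttle-then-promote idea described above – the reputation scale, thresholds and limits here are all made up for illustration:

```python
# Sketch of a sliding-scale whitelist. The reputation score (0.0 = unknown,
# 1.0 = fully trusted) and all thresholds here are hypothetical.
def hourly_message_limit(reputation: float) -> int:
    """Map a sender's reputation onto a per-IP hourly sending limit."""
    if reputation < 0.2:
        return 0                       # untrusted: rejected outright
    if reputation < 0.8:
        return int(1000 * reputation)  # throttled: limit grows with trust
    return 10**9                       # whitelisted: practically unlimited

# A new IPv6 sender starts near zero and earns a higher limit over time.
for score in (0.1, 0.5, 0.9):
    print(score, hourly_message_limit(score))
```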

Email receivers may continue to run a whitelisted sender’s messages through a content filter and either store them in the user's spam quarantine or reject them based upon spam content, but they must not block or throttle messages from those IP addresses simply because the sending IP address is IPv6.

IP addresses in the whitelist can be either single IP addresses or IP address ranges; it is up to the receiver to decide which format to use.

Other types of Whitelists, not just IP Addresses

It is not necessary to restrict whitelists to use only IP addresses. Email receivers can whitelist based upon domain and combine it with an SPF (see RFC 4408) or DKIM (see RFC 4871) validation, or by using certificates such as those exchanged in TLS (see RFC 5246).

The advantage of these authentication methods is that a domain whitelist is more stable and easier for humans to read than a list of IP addresses. The drawback is that additional DNS queries or certificate exchanges must be performed during the initial SMTP conversation.
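
Here is a sketch of the domain-plus-SPF variant, using the third-party pyspf library; the trusted-domain list is hypothetical.

```python
# Sketch of domain-based whitelisting combined with an SPF check, using
# the third-party pyspf library (pip install pyspf). Domains are made up.
import spf

TRUSTED_DOMAINS = {"example.com", "example.net"}

def accept_by_domain(client_ip: str, mail_from: str, helo: str) -> bool:
    """Accept if the MAIL FROM domain is trusted AND its SPF check passes.

    Note the cost described in the text: this cannot run until the MAIL
    FROM command arrives, and it adds DNS queries to the SMTP session.
    """
    domain = mail_from.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        return False
    result, _explanation = spf.check2(i=client_ip, s=mail_from, h=helo)
    return result == "pass"
```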

Large email providers prefer to make the decision about whether to accept or reject the mail as quickly as possible. The longer it takes to decide – which includes waiting for DNS responses, and DNS timeouts are frequent – the more load the mail filters are under and the more hardware is required. If an email receiver only uses an IP whitelist, it can perform an IP lookup as soon as the sending IP connects and reject as soon as it gets a verdict back. If the email receiver uses SPF, it must wait until the MAIL FROM command in the SMTP conversation to perform the whitelist lookup, and then wait for a DNS query to return. If the email receiver uses DKIM, it must wait until it receives the entire message and then wait for the DNS query that looks up the public key. This slows down the email transaction and increases load on the email infrastructure.

Thus, other types of authentication are more flexible but do not scale for larger email infrastructures without corresponding increases in hardware.


Posts in this series:

- A Plan for Email over IPv6, part 1 – Introduction, and How Filters Work in IPv6
- A Plan for Email over IPv6, part 2 - Why we use IP blocklists in IPv4 and why we can't in IPv6
- A Plan for Email over IPv6, part 3 - A solution
- A Plan for Email over IPv6, part 4 - Population of the whitelists
- A Plan for Email over IPv6, part 5 – Removals, key differences and standards

A Plan for Email over IPv6, part 4 – Population of the whitelists


Population of the whitelists

How do email receivers go about populating whitelists?

The whole strength of email is that you can hear from people you’ve never heard from before; new people outside your normal circle can talk to you. But the whole weakness of email is that you can hear from people that you’ve never heard from before; spammers can send you junk.

The weakness of using whitelists – and blocking the rest of the world – is the “introduction problem.” How do you hear from new people? They have important things to say to you, yet you aren’t listening to them and that is by design.

This problem has analogies in real life. Think back to your own experiences: when you first started your working career, nobody would hire you because they wanted people with experience. But how are you supposed to get experience if nobody will hire you? Another example: when Homer Simpson wanted to join the Stonecutters, he was stonewalled because to get into the club you had to either be related to an existing member or save the life of an existing member. Since the Stonecutters wouldn’t reveal their membership, and the odds of saving anyone’s life are extremely small, Homer initially couldn’t get into the club.

Fortunately, there are ways to get around the introduction problem, but all of these ways have their own degrees of difficulty. Below are some possible mechanisms to accomplish this:

  1. Manually - Administrators may contact each other by email over IPv4, by telephone, by regular mail, by word-of-mouth, or any other form of communication. Both parties may agree to whitelist each other, or one party may whitelist the other without the other doing the same. 

    Difficulty: High (easy to implement but doesn’t scale) 
     
  2. Use a 3rd party - Administrators may rely on a third party reputation service that provides lists of IP addresses of known good senders of email over IPv6. An administrator may acquire this list and proactively whitelist all IP addresses on this list, or a subset of them. 

    Difficulty: High at first (nobody has such a list) but Low to Medium thereafter 
     
  3. Pre-populate your list yourself - Administrators can create their own lists. One way to do this is to maintain IP reputation statistics of senders over IPv4. By combining that with sending domains and SPF records, receivers can “guess” which IP addresses senders will use to transmit over IPv6 and use that to pre-populate a whitelist. They do this by looking up the SPF records of trusted domains and proactively adding any IPv6 addresses to the whitelist (see the sketch after this list). Receivers who plan to use SPF or DKIM acceptance do not need to do anything; they simply take the list of trusted domains they were using for IPv4 and reuse them for IPv6. 

    Difficulty: High (but not in the case of DKIM or SPF whitelists) 
     
  4. Provide an easy way for senders to get onto the whitelist - Another way to populate the whitelists is to give email senders a way to do it themselves with as little human interaction as possible. 

    a) If a receiver rejects a sender, the bounce message might contain instructions on how to add themselves to the whitelist: 

    550 Access Denied. The sending IP [121::1] is not permitted to send email over IPv6. To attain permission to send over IPv6, please see the following web page: http://...

    b) The web page contains a form that the sender can fill out and send to the receiver. The form would have standard sign-up security checks including a CAPTCHA plus another form of identification, whether it is SMS validation or sending an email to another email address requiring a second call to action (e.g., click this link). 

    c) Once the sender has passed multiple validations (filling in the CAPTCHA, responding to the text message to their phone or clicking on the link in the email), their IPv6 address is added to the receiver’s whitelist. This puts the responsibility on the sender to whitelist themselves, and at the same time it scales for the receiver, who is not stuck endlessly managing whitelists by hand. If a sender truly needs to send email over IPv6, they will take the time to do it.  

    Difficulty: Medium for the receiver, Easy for the sender 
     
  5. Do it yourself by tracking reputation – Rather than rejecting mail from senders on IPv6, receivers might allow new senders transmitting over IPv6 but throttle them instead. The sender could send some mail over IPv6 and then fall back to IPv4 once they have reached their daily limits. By keeping track of a sending IP address’s reputation (ratio of spam to non-spam, passing authentication, etc.) over a period of time, a receiver can upgrade the sender from the Untrusted list to the Whitelist. The amount of mail they can send over IPv6 increases as their reputation increases. 

    This changes the concept from a binary whitelist (Accept/Deny) to a sliding-scale whitelist where a sender’s reputation determines how much mail they can send. It works automatically; it does not require the sender or receiver to do anything, whereas with the web page method above, some users or administrators won’t understand how to whitelist themselves, or won’t care to. The drawback of this method is that it can take a long time to go from Bad to Good, and during that waiting time email delivery is sporadic. It also forces mailers to send mail over both IPv6 and IPv4. While most mailers can do that, it is not necessarily true for everyone, and those that cannot will not be able to send email. 

    Difficulty: Medium for the receiver, Medium for the sender
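
Here is a sketch of the SPF-mining approach from option 3, using the dnspython library. The trusted-domain list is hypothetical, and a complete implementation would also chase include: and redirect= mechanisms.

```python
# Sketch of pre-populating an IPv6 whitelist from the SPF records of
# domains already trusted over IPv4 (pip install dnspython). A complete
# version would also follow include: and redirect= mechanisms.
import dns.resolver

TRUSTED_DOMAINS = ["example.com", "example.org"]  # hypothetical

def ipv6_ranges_from_spf(domain: str) -> list:
    """Collect the ip6: mechanisms from a domain's SPF TXT record."""
    ranges = []
    for rdata in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rdata.strings).decode()
        if not txt.startswith("v=spf1"):
            continue
        for term in txt.split():
            if term.startswith("ip6:"):
                ranges.append(term[len("ip6:"):])
    return ranges

whitelist = {domain: ipv6_ranges_from_spf(domain) for domain in TRUSTED_DOMAINS}
print(whitelist)
```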

 

Any of these methods, or a combination of them, could be used for whitelist population.


Posts in this series:

- A Plan for Email over IPv6, part 1 – Introduction, and How Filters Work in IPv6
- A Plan for Email over IPv6, part 2 - Why we use IP blocklists in IPv4 and why we can't in IPv6
- A Plan for Email over IPv6, part 3 - A solution
- A Plan for Email over IPv6, part 4 - Population of the whitelists
- A Plan for Email over IPv6, part 5 – Removals, key differences and standards

A Plan for Email over IPv6, part 5 – Removals, Key differences and standards


What happens if spammers get on the whitelists?

The question arises – what happens if a spammer gets onto the whitelist? Maybe they have compromised an IP address of a good sender. Or maybe they snuck onto the list. What should be done if this is the case?

A whitelist model makes abuse tracking easier. In IPv4, all IPs are accepted and then the abusive ones are singled out and blocked, but the problem is that a new abusive IP address will arise the next day. In IPv6 whitelisting, the population of sending IPs is limited; there are only so many legitimate mailers on the web. If someone is spamming, you look at the list of permitted IPs and do one of the following:

  1. Kick the spammer off the whitelist.

  2. Lower the sender’s reputation which lowers their throttling limits in the case of a sliding-scale whitelist.

  3. Use the sending IP as part of a weight in the spam filter.

If a spammer does get kicked off the whitelist, they can’t just show up the next day on a new IP without going through one of the processes they used to get onto the whitelist to begin with. Making the steps difficult for a machine to automate but easy enough for a human to do adds cost to a spammer’s business model. A spammer must be able to do things quickly, and if getting onto a whitelist takes manual steps, it cuts into their bottom line and makes IPv6 spamming a less attractive business.

What if a spammer takes the time to manually get onto the whitelist?

Some spammers are determined and are willing to spend the time to perform laborious steps. However, additional behavior checks can be implemented in the signup process (in the event no human is involved in whitelist addition), and traditional IPv4 spam and reputation techniques can be used to the same effect they are today. Spammers attempting to game the system is an existing problem and there are people working to combat it. It will be no different under an IPv6 whitelist model.

Thus, the mitigation for spammers compromising the whitelist is to track the reputation of the senders even after they have gotten onto the whitelist.
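
A sketch of that after-the-fact monitoring; the thresholds and minimum sample size are arbitrary choices.

```python
# Sketch of post-whitelisting reputation tracking; the thresholds and
# minimum sample size are hypothetical.
class SenderStats:
    def __init__(self, messages_seen=0, messages_spam=0):
        self.messages_seen = messages_seen
        self.messages_spam = messages_spam

    @property
    def spam_ratio(self):
        if self.messages_seen == 0:
            return 0.0
        return self.messages_spam / self.messages_seen

def review_whitelisted_sender(stats):
    """Decide what to do with a whitelisted sender based on its behavior."""
    if stats.messages_seen < 100:
        return "keep"       # not enough data to judge yet
    if stats.spam_ratio > 0.50:
        return "remove"     # kick the spammer off the whitelist
    if stats.spam_ratio > 0.10:
        return "throttle"   # demote to the sliding-scale list
    return "keep"

print(review_whitelisted_sender(SenderStats(messages_seen=200, messages_spam=120)))
```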

Summary of Differences

The following table summarizes the key differences between what we do in IPv4 and what this plan proposes for IPv6:

 

[Image: table comparing the IPv4 blocklist approach with the proposed IPv6 whitelist approach]


Sharing Data

One of the most crucial components of the whitelist solution is data sharing. Large mail receivers need to share data to make this effective. If Microsoft does one thing, Comcast does another and Google does yet another – or even if they all use whitelists but manage them independently – it becomes a management nightmare with everyone doing their own thing.

[Image: “How standards proliferate” – comic taken from xkcd]

Wouldn’t it be easier if the big players got together and decided that everyone shares lists with each other? Then, if a sender got whitelisted at one big player, they would automatically become whitelisted with a lot of other players, too. It would make it so much easier for legitimate people to send mail across IPv6 without having to get whitelisted everywhere.

It would mean that the industry is finally working together to stop the problem of spam. 25 years ago we never predicted that the Internet would become abused as much as it is today. As big a problem as IPv6 spam could be, we have a chance to do something different and stop it before it begins – designing with security in mind. Having everyone agree to the same process is a major step in this direction. It stops the spammers and helps everyone else which is the whole point of the antispam industry. It makes life easier for end users and for the email receivers.

But who should be responsible for this industry collaboration? Should private companies get together and form the standards? Should already-existing standards bodies (such as the Internet Engineering Task Force) do it? Should government oversee it?

This is a body of work that is ripe for exploration.

Conclusion

As the world transitions to IPv6 and Internet connected users and devices start to use it, email servers will eventually need to send email over IPv6 as well. However, the solutions for combatting spam over IPv6 cannot be the same as they are for IPv4.

The use of IPv6 whitelists to accept mail from reputable senders, rather than IPv4 blocklists to reject mail from disreputable senders, will help address one of the problems of spam over IPv6. It keeps the problem to a manageable size while ensuring that large email receivers can still scale their services without excessive hardware costs.

Yet the use of whitelists is not without its problems: how do you get onto a whitelist to begin with? The easier it is to get onto, the more likely it is that spammers will abuse it. Yet making it more difficult undermines the most useful feature of email – its ease of use. There are multiple ways to balance usability with security, but ultimately email over IPv6 will look different than it does over IPv4. It is still too soon to know how different it will be.

Fortunately, we are at a stage where we can decide how to build email protocols with security in mind. Had we done that 25 years ago, we might not even have needed to build such complex spam filters. With any luck, the decisions we make (or more importantly, don’t make) today will not need to be revisited 25 years from now.

The end.


Posts in this series:

- A Plan for Email over IPv6, part 1 – Introduction, and How Filters Work in IPv6
- A Plan for Email over IPv6, part 2 - Why we use IP blocklists in IPv4 and why we can't in IPv6
- A Plan for Email over IPv6, part 3 - A solution
- A Plan for Email over IPv6, part 4 - Population of the whitelists
- A Plan for Email over IPv6, part 5 – Removals, key differences and standards

Do tech-savvy readers practice what they preach?


While at the Virus Bulletin conference in Dallas last week, Sabina Raluca Datcu and Ioana Jelea of BitDefender gave a presentation entitled “Practise what you preach: a study on tech-savvy readers’ immunity to social engineering techniques.”

In this talk, the presenters spoke about a study they conducted – do tech-savvy people actually have better security habits than regular ham-and-eggers? Many people believe, “Oh, I would never be a victim because I know all about scams.” But is that belief accurate?

It’s true that security awareness has increased, but scammers can still exploit human nature. Having antimalware installed does not stop sophisticated attackers, because the art of scamming is a combination of skill and creativity.

To measure this, BitDefender surveyed 643 tech-savvy users, defined as people who regularly read and comment on technical articles on the Internet. These are not security professionals but rather people who are tech aware. For example, I regularly read up on stocks and finance and therefore I am stock-market aware, but I am not a financial professional.

Anyhow, BitDefender’s study was effectively a collection of qualitative analysis – it’s less about numbers and more about interpretation of the data collected. What they found was this: personal norms help the user (victim) decide what course of action to take. To put it another way, the way you are in real life is how you behave online.

  • For example, there are people that understand the risks of sharing passwords. You might be setting up a test account somewhere and your co-worker needs access to it. They send you an email saying “Hey, what’s the username and password to that test account?”

What do you do?

Do you say “Ah, it’s probably fine” and then hit reply and send the login information? Many people do. You see the risks but disregard them.

  • Furthermore, many people disbelieve the risks. For example, one respondent in the survey claimed that they have no antivirus on their Mac because it isn’t needed – Mac users never get infected. You don’t have to go very far to see that this belief is rampant among Mac users. It’s usually stated in one form or another:
  • Mac users don’t get viruses.
  • Macs are more secure than PC’s.
  • PC users keep saying that Macs will eventually suffer the same fate as PC’s but it never happens.

And so forth. These statements have some degree of truth to them, but the people who say them take them to mean more than they should. That is, there is some truth to the claim that Macs get fewer malware infections, but the risk is not negligible. Not in 2012.

  • BitDefender also found that the more narcissistic the user, the more likely they were to share personal information. If the user was well admired, they would enthusiastically disclose information. This is also unsurprising; people like to talk about themselves. We’ve known this since Dale Carnegie. Social engineers can use this against people.

  • Finally, the lower the level of perceived risk, the more likely users are to break security rules. In our example above of sharing passwords, you might decide to share a username and password combination because you think the odds that someone will intercept that mail and use it for nefarious purposes are small. You might not bother to change your router password because the odds of it being exploited seem small.

    And so forth.

BitDefender concluded by saying that the distance between what people say they would do and what they would actually do depends on numerous elements. These elements combine to affect people’s gullibility factor.

They stated in the Q&A section that more study is needed – larger sample sizes, more in-depth analysis – but I thought that this was a good start.

And that’s what I learned at VB about whether or not tech-savvy users practice what they preach.

Measuring the cost of cybercrime


Last week at Virus Bulletin 2012, Tyler Moore of Southern Methodist University (SMU) gave a talk entitled “Measuring the cost of cyber crime.” It was a study done in collaboration with multiple individuals in multiple countries.

The study sought to answer this question – how much does cybercrime cost? Up until this point, nobody really knew, and the answers given were way out of line with any reasonable estimate. For example:

  • According to a UK study, it cost the UK £27 billion annually which is 2% of British GDP. That is huge!
  • According to testimony given by someone from AT&T (the CEO?) to Congress, it is $1 trillion, or 1.6% of the world’s GDP. Also huge!

How accurate is this?

These estimates don’t separate the profits made by the spammers (and others involved in the underground economy) from the losses incurred by legitimate businesses trying to fight them.

Furthermore, there are some types of cybercrime that are extremely difficult to measure. IP theft and corporate espionage are the biggest types by far, but they can’t be measured with any degree of confidence. All you can do is pick targets, guess, and then multiply through. For example, if Microsoft lost a secret design that “cost” them $50 million, and Microsoft represents 1% of the US’s GDP (which it doesn’t), then such theft, on average, costs 50 million x 100 = $5 billion!

Obviously, this is inaccurate because (1) Microsoft’s losses are not representative of the economy as a whole, and (2) the secret design cost of $50 million is a guess. Two-thirds of all loss figures are guesses.

Because of all of these widely varying estimates, we are now seeing pushback against the big numbers. Accuracy matters because of legislation that is currently being lobbied for at the upper echelons of government power. If people want more laws, they had better be based on accurate data.

So how do we get better?

SMU’s study tried to define a framework for measuring cybercrime’s cost. First, there is often conflation of costs among categories. SMU broke it down into multiple sub-types (these numbers are for the UK, whose government commissioned them to come up with a better model):

  1. Criminal revenue – what the spammers make.

  2. Direct losses – how much a business loses because of it.

  3. Indirect losses – loss of confidence by consumers (e.g., users no longer use online banking).

  4. Defense costs – e.g., installing antivirus.

Most data available does not decompose by type. To simplify things, SMU only considered losses over $10 million, and only used reliable data.

So what are the costs?

  1. Credit card fraud – in the UK (for the past year?), credit card fraud costs £563 million. Online fraud was £210 million, offline fraud was £353 million.

    Credit card fraud is part of transitional fraud – fraud that has always existed but is moving online. Another example would be tax fraud: cheating on your taxes online doesn’t make it cybercrime.

  2. Cost to merchants – Online merchants figure that customers forgo 10% of transactions because of distrust in the system. This leads to £1.6 billion in lost sales.

  3. Defense costs – £2.5 billion annually. The cost of software like antivirus and other protection was £1.2 billion, which is much, much greater than the revenue that criminals bring in. Thus, the defense costs fall very asymmetrically on the defending company or user compared to what the spammers actually make.

  4. Espionage – The study did not collect any data on cyber theft.

How much does this translate into a cost per citizen?

  1. For traditional fraud, it is a few hundred dollars per year.

  2. For transitional fraud, it is a few tens of dollars per year.

  3. For cyber fraud, it is a few tens of dollars per year, mostly in defense.

Thus, for the industry that we are in, what people spend to protect against cybercrime is much, much more than what they actually lose from it. It’s like spending $50,000 to insure your $10,000 car. The greatest gains per dollar spent would come from investment in law enforcement.

That should make those of us in the industry think twice about our relative value.

After the presentation, a couple of thoughts came to mind:

  • In the Q&A, someone brought up the point that the cyber security industry is not a complete cost to society. It creates jobs; people like you and me are employed because of it, and businesses spend money on software. They pay us to write it, and we spend money in the general economy. So it’s not all bad.

    I don’t agree with this point of view. It’s like saying that we should continue to have regular crime in order to keep the police in business, or that we shouldn’t cure cancer in order to keep pharmaceutical companies in business with expensive treatments.

  • Even though businesses spend a lot to prevent what looks like small losses, what would the losses look like without any prevention?

    I spend a lot of money on dental hygiene – there are toothbrushes, toothpaste, dental floss, mouthwash and dental visits. How much more would my dental visits cost if I didn’t invest in keeping my teeth clean? The $50/year I spend on home supplies means I spend only a few hundred a year at the dentist. If I didn’t spend it, the dentist would cost several thousand per year.

    Similarly, we don’t know what it would be like if we didn’t spend money on cybercrime prevention.

All in all, this was a much more reasonable study of the cost of cybercrime. It’s a problem and it is growing as traditional fraud moves online, but it is not the behemoth that headlines make it out to be.

The pros and cons of Bring Your Own D(evice|estruction)


At the Virus Bulletin conference this past September in Dallas, Righard Zwienenberg from ESET gave a presentation entitled BYOD. BYOD stands for Bring Your Own Device, but he reframed the acronym as “Bring Your Own Destruction,” alluding to the security implications of bringing your own device.

BYOD is the latest trend sweeping businesses and schools. More and more people are bringing their own personal devices from home and using them for business. Rather than issuing laptops, companies let people use their personal machines – tablets and smartphones – to access the corporate network and corporate data. But while more and more people are using their own devices in the workplace, only 25% are aware of the security risks. And it is this lack of awareness that spells potential destruction for the enterprise.

Bringing your own device from home has many advantages:

  1. Size – Your own smartphone or tablet is small and lightweight. They are easy to carry and transport. No one likes carrying around a heavy laptop, and the trend is to go smaller.

  2. Battery life – Most devices have a battery that lasts an entire workday. How long would my laptop last on battery alone? Maybe a couple of hours.

  3. Cost – Devices are cheaper than laptops. Some employers may even get away with pushing the cost onto employees (we’ll let you use your own device, and we can spend the money elsewhere).

  4. Easy adaptation – Consumers are familiar with their devices and can customize them to their liking.

However, it’s not all fun and games with BYOD. There are some serious drawbacks as well:

  1. Content management – Most devices have a proprietary OS. How are they supposed to connect to your corporate network and take advantage of everything?

  2. Updates – There is no standard mechanism for issuing updates. At work, Microsoft IT forces me to install security updates every so often. They couldn’t do that for my iPhone (if I had one), which means that they lose control of patching vulnerabilities.

  3. Difficult to protect – Because devices are non-standard, outbound traffic is hard to monitor (data leakage, outbound spam, etc.).

  4. No multi-tasking – Devices are great, but working on multiple things at once on them is very difficult.

  5. Plug-ins – Corporate-supported plugins are often not available.

  6. Interchangeability – Applications for different devices are frequently not interchangeable (e.g., text editors).

  7. Physical security – Smaller devices can be more easily stolen and are easier to conceal (under a thief’s clothing) because of their size.


Given all this, what can we do? Should we allow BYOD on the work floor, or in any professional environment for that matter? The first part of that question is whether we can actually stop it. Secondly, even if you could, would you even want to? Banning these things is unrealistic; the USB drive is ubiquitous and difficult to police even if you do warn employees of the security risks. There is simply too much foreign media and there are too many options.

No, trying to stop the tidal wave of BYOD is not a winning strategy.

It’s impossible for a corporate security team to know about all the features of new OSes, new firmware upgrades, security patches, and so forth. But software companies are coming up with ways to improve security. For example, Windows 8 includes “Windows To Go,” which allows a corporation to create a full corporate environment by booting from a USB drive. All of the corporate standards can be on that USB key. It can also have security extras, like freezing the device within 60 seconds if the USB key is removed, and it can be protected with BitLocker.

So what should you do?

  • Acceptance - We need to accept that BYOD is here to stay. But because of this, companies must institute a form of device control. Instead of BYOD, how about C(hoose)YOD? Select only devices that you can manage, that allow corporate protection to be installed, and that receive updates.

  • Use a form of device control – Device control manages which ports are available, avoids data leakage, and monitors which files are exported to portable media. With BYOD you always run the risk of losing the device, but if the data is encrypted, the worst case is that all you have lost is the hardware.


And that’s what I learned about BYOD(evice|estruction) at Virus Bulletin 2012.


Will cyberwar create new rules of engagement? And will there be a draft?


I read an interesting article on ReadWriteWeb yesterday entitled New Cyberwar Rules Of Engagement: Will The U.S. Draft Companies To Fight? by Brian Proffitt.

In it, Proffitt reports on a speech given by Defense Secretary Leon Panetta to business leaders in New York City last Thursday (Oct 11). Panetta discussed how, for the first time ever, US military forces are prepared to go on the offensive against cyber attackers who seek to cause harm to US assets, infrastructure or citizens. This contrasts with the military’s previous policy of acknowledging only a defensive stance.

According to The Washington Post, among those new rules of engagement, “for the first time, military cyber-specialists would be able to immediately block malware outside the Pentagon’s networks in an effort to defend the private sector against an imminent, significant physical attack.”

Panetta was careful to state that the government would not be monitoring private networks, nor would it be involved in the day-to-day protection of corporate and other private infrastructure. That raises questions about the effectiveness of the defense strategy, let alone the usefulness of an offensive response, since cyber attacks can happen faster than the blink of an eye.

This is weird. The military would block malware outside of the Pentagon’s networks? What does that even mean? That they would forcibly issue updates for known malware into private networks and corporations? How do they expect to issue these updates?

The article goes on to say that the Third and Fifth Amendments of the US Constitution prevent the government from placing military assets on private property without the homeowner’s (or corporation’s) consent. To get around this, there would have to be significant partnerships between private enterprise and government, wherein companies agree that if the government detects a threat, they will let the military issue malware signatures.

That’s all well and good, but it’s still a defensive posture, not an offensive one. This is where the article gets interesting:

Nevertheless, at least one academic paper has argued that companies be drafted to participate in cyberwarfare.

"Cyberwarfare… will penetrate the territorial borders of the attacked state and target high-value civilian businesses," wrote University of Dayton Professor Susan Brenner in 2011. "Nation-states will therefore need to integrate the civilian employees of these (and perhaps other) companies into their cyberwarfare response structures if a state is able to respond effectively to cyberattacks.

"While many companies may voluntarily elect to participate in such an effort, others may decline to do so, which creates a need, in effect, to conscript companies for this purpose," Brenner and her co-author, attorney Leo Clarke, added.

Speaking for myself, I have never lived under the possibility of being drafted into the military. For most of my life, the government has used an all-volunteer army, and conscription was not required (the US abolished the draft after the Vietnam War, and it has not been used in Canada since long before that).

I’m now getting older (I’m 33), but I’d make a pretty useless soldier. I’m not physically large, I have two bad hips, I’m nearsighted and I have sinus problems in certain climates. In short (pun intended), I’m not who the military is looking for to wage a conventional war.

On the other hand, I am who the military is looking for to conscript for a cyber war. I admittedly have mediocre hacking skills but I know a lot about cyber security. I have good data analytics skills and I can program. I know many of the techniques that hackers use to break in. And unlike in a conventional war, the value of what I bring to the table and could do for the military continues to increase over time.

So, while there are some interesting debates around whether the military might draft individual companies into working with them, there’s also the interesting question of whether the military might draft individual people into its service.

The rules of engagement have changed, and with them the ideas about who the military might consider persons of interest. The military draft is about finding people to do the fighting. Seeing as the military is now prepared to go on the cyber attack, if they need people to do that kind of fighting, they’d best start with people who already have a background in it.

Hmm… that’s something for me – and you – to think about.

How should large financial institutions use hosted filtering?


This post is an opinion piece that reflects what I think are best practices. Should large financial institutions use hosted email services? Services like ours (Forefront Online Protection for Exchange, FOPE)? Why am I even asking this question?

I ask this question from a security perspective. The advantages of moving to hosted services are plenty:

  • You no longer need to use your own email infrastructure to host your mail. This saves disk space and bandwidth.

  • You can also just outsource your mail filtering to the cloud but still host your own mail on your own mail servers. This saves bandwidth and having to constantly upgrade your spam filters.

    [Image: diagram of inbound mail flowing from the Internet through the filtering service to the customer’s mail server]

I’m not going to go into all the advantages because there are plenty. I work for the division that does hosted filtering and that’s how I make my living. It’s a good thing to do in many cases for inbound mail.

However, for outbound mail, the situation is different. Outbound mail is the opposite of the above: mail comes from your mail server, flows through us, and then goes out to the Internet. The advantage of this, from a spam filtering perspective, is that we do outbound spam management and can typically ensure better outbound IP reputation, and therefore improve (but not guarantee) delivery. There are other advantages, but for spam that is one of the biggest ones.

[Image: diagram of outbound mail flowing from the customer’s mail server through the filtering service to the Internet]

The reason I ask the question above is that for inbound mail, multiple customers (everyone who wants to use hosted mail) use the same set of resources (our mail servers). This doesn’t matter because our service is designed to scale for inbound mail; if we ever start experiencing high traffic load, we just add more servers. Everyone’s mail flows through us, we scan it, and then we deliver it to them.

For outbound, we similarly use a shared set of resources, but those resources include the outbound IP addresses. This means that everyone shares the same outbound IP reputation, which depends upon how well we maintain it.

We have spent a long time coming up with ways to reduce the amount of spam that comes out of our network. However, if one customer sends spam, it can end up degrading the deliverability of everyone’s mail. That’s part of the price that comes with using a shared set of IP addresses. Fortunately, we’re very good at keeping our IP reputation clean.

Customers using us put two sets of IP addresses into their SPF records:

  1. Their own IP addresses from their on-premise mail servers – When the mail server connects to us, we perform an SPF check to ensure that the customer is not spoofing. If it is, we take action upon the mail.

  2. Our service’s outbound IP addresses – When the mail relays from us to the Internet, the 3rd party receiving the mail performs an SPF lookup on our IP addresses, since that was the last hop out to the world.

    [Image: diagram of the two SPF checks – one when the customer connects to the service, one when the final receiver gets the mail]

This means that every outbound customer mail goes through two SPF checks: once by us, and once by the final, intended recipient (it could actually be more SPF checks depending upon how the recipient has things set up).
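
To illustrate, here is what such a customer SPF record might look like; the domain names and IP range below are entirely hypothetical.

```python
# Sketch of the SPF record a customer of a shared filtering service might
# publish as a TXT record on their domain. All names here are made up.
record = (
    "v=spf1 "
    "ip4:203.0.113.0/24 "                     # the customer's on-premise MTAs
    "include:spf.filtering-service.example "  # the service's shared outbound IPs
    "-all"                                    # hard-fail everything else
)

# Because include: pulls in *shared* IP addresses, mail from any customer
# of the service can pass this record's SPF check - which is exactly the
# spoofing risk discussed below.
print(record.split())
```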

This is all well and good when it comes to security, but remember, we are a shared IP service for outbound. What happens if Customer A is behaving but Customer B has become compromised and is sending out spam? And they send out spam by spoofing Customer A?

[Image: diagram of a compromised Customer B spoofing Customer A through the shared outbound IP addresses]

In the case of a zero-day spam campaign, before the filters have had time to catch up and catch the spam using some other method, this outbound spam will leak to the world. 3rd party filters on the Internet will do an SPF check and it will pass because it came from shared IP space.

So, the decision of whether to use shared IP space for outbound mail is complicated and involves various tradeoffs:

  1. What is the probability your brand will be spoofed?

    I have my own personal domain. But I’m also just about a nobody. If you say my name to the average person on the street, pretty much no one would recognize it.

    But if you are a large organization like Apple or Microsoft or UPS, then you are a target for spoofing. Spammers like to use those because people will recognize the brands and are more likely to take action to get something they want (such as a free iPod) or avoid something they don’t (such as getting locked out of their bank account).

    If you are a big, recognized company, then the odds that you will be spoofed go up. This means that there may be times when a spammer – either by maliciously signing up for the service or by compromising another customer – will spoof your brand and emit mail from the same set of IPs that you do.

    For large filtering services, remember that there are many, many other customers sharing that same IP space and many of them don’t have the same security policies that you do. While you may not get compromised, they might get hacked much more frequently and send spam from these compromised accounts.

  2. Do you care if your brand is spoofed?

    How much do you care whether your brand is spoofed? If someone spoofs my personal domain, I don’t care that much. I never send mail from it, I don’t sell anything, and I’m just not important enough for someone to be fooled if my domain is spoofed.

    That’s all well and good for small companies, but what about large companies? If US Bank is spoofed, what kind of damage can occur? Obviously, if people fall for phishing scams, that costs people real money and real damage is done to US Bank’s brand. Same with Paypal. Same with Facebook.

    The cost associated with a successful spoof should be a determining factor about whether or not you should use shared IP space. Financial institutions need to adhere to tighter security requirements because of the downside of phishing.

One thing I don’t consider a determining factor is how good your hosted spam filtering service is at catching spoofed mail. Our service is very good at it, but we’re not perfect. No one is, because spammers have the advantage of testing their spam campaigns and tweaking them to avoid filters. Leaks sometimes occur, especially when a large service has hundreds of thousands of customers. For this reason, I don’t personally recommend that financial institutions use outbound mail services with shared IP space unless they are willing to accept the risk it entails.

So what can they do?

There are a couple of options:

  1. Send mail from your own dedicated email infrastructure – High-risk institutions can send mail from their own dedicated infrastructure. This means that if anyone ever tries to spoof them, since the sending IPs will not be in the SPF record (other than a rogue internal server), 3rd parties will not give it an SPF pass.

    I realize that this trades off the convenience of hosted filtering for maintaining your own infrastructure. You must weigh the costs of security (and people potentially falling for phishes) against the costs of hardware maintenance.

  2. Send mail from a shared service if they can give you a dedicated IP address – Some shared IP services do provide dedicated IP addresses. If they do, you can put that address into your SPF record, and then even if someone else spoofs you, it won’t matter because the IP the spoofed mail comes from is not in your SPF record.

Those are the two best options that I see, and they are the ones that I recommend for organizations that are at high probability and high cost for spoofing.

There are a couple of technologies on the horizon that can assist with the problem of spoofing and shared IP services, but they are still a bit of a ways off from realization:

  1. Email over IPv6 – I’ve written about email over IPv6 in the past and the risks it entails, but it does solve this spoofing risk. Every filtering service could simply provide each customer with its own dedicated IPv6 addresses from which to send mail. That solves the problem of shared IPs because everyone has their own.

    On the other hand, there are significant challenges to email over IPv6 that I’ve written about previously on this blog.

  2. DMARC – DMARC is a new standard this year that is designed to combat certain types of phishing. If an organization says “I always sign with DKIM, and I always send mail that passes an SPF check,” then on the receiver’s side the SPF check is not the only thing to count on. If the message passes SPF but fails DKIM, the message can be rejected, and the spoofer will not succeed. This addresses the problem of shared IP space because it relies on a technology (DKIM) that is not dependent upon IP addresses.

    The drawback of DMARC is that it’s still new and is not widely deployed. It is still too early to depend upon it because many email receivers are still not using it.
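
For reference, a DMARC policy is published as a TXT record at _dmarc.<domain> in DNS; here is a sketch of parsing a hypothetical one, with made-up values.

```python
# Sketch of parsing a hypothetical DMARC record (published in DNS at
# _dmarc.example.com as a TXT record). Values are for illustration only.
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

policy = dict(
    tag_value.strip().split("=", 1)
    for tag_value in record.split(";")
    if "=" in tag_value
)
print(policy["p"])  # "reject": receivers should reject mail failing DMARC
```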


Those are my views on the problems of sending email through a shared outbound IP service, some workarounds, and future solutions.

Evaluating anti-virus products with field studies


Did you ever wonder how people get malware onto their computer? Or how effective real life A/V software is on zero-day malware? Or just malware in general?

Current A/V evaluations have some drawbacks:

  • They are based on automated tests and therefore are not representative of real life
  • They do not account for user behavior
  • They do not account for the user’s environment
  • The effectiveness of the products against yet to be discovered threats is not evaluated


The École Polytechnique in Montreal decided to perform a field study, which they discussed at the Virus Bulletin Conference this past September. They decided to do a clinical trial with actual users and collect feedback in an automated way.


To do this, they bought a bunch of laptops and sold them at discounted prices to 50 people. These laptops were running Windows 7 Home, Trend Micro Office Scan 10.5, some diagnostic tools to verify malware infection and some Perl scripts to collect data. They told the volunteers, “Here. Go use these laptops the same way you’d use any laptop. They are yours to keep; just bring them in once per week so we can collect data off of them.” The participants were not a random sample since they responded to an advert, but many were not students.


The idea was to study user behavior and see the relationship between what people do and how it affects security. The users completed surveys, and the only restriction was that they could not remove the A/V or the Perl scripts. To determine infections, the study used a pre-determined protocol for identifying infection:

  • Unexplained registry entries
  • New suspicious files
  • Checking these files on Virus Total

When an infection was identified or suspected, they asked the user for consent to investigate further. If consent was granted, additional data was collected, including a list of websites visited during the time window of infection.
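
The study predates today's tooling, but here is a sketch of how the VirusTotal step could be scripted against the current v3 REST API; the API key and file name are placeholders.

```python
# Sketch of the "check suspicious files on VirusTotal" step, using the
# requests library against VirusTotal's v3 API. Placeholders throughout.
import hashlib
import requests

API_KEY = "YOUR-VT-API-KEY"      # placeholder: a real key is required
SUSPICIOUS_FILE = "suspect.exe"  # placeholder path

def sha256_of(path):
    """Hash the file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

resp = requests.get(
    "https://www.virustotal.com/api/v3/files/" + sha256_of(SUSPICIOUS_FILE),
    headers={"x-apikey": API_KEY},
)
if resp.status_code == 200:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print("engines flagging file as malicious:", stats["malicious"])
else:
    print("hash not known to VirusTotal (HTTP status", resp.status_code, ")")
```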

What were the results?

Numerous infections were found during each of the four months the study ran. Most of them were trojans, but there were also worms, viruses, adware and the category “other”. There were 20 missed detections on 12 different laptops (the missed detections included gray areas like adware). Some were false positives, but there was definitely malware on there.

Of the people infected, 55% didn’t notice anything strange. Of the 40% who said “yes, something is amiss on my laptop”:

  • There were performance decreases
  • There were popup windows
  • There were problems with web browsers, like URL redirection and changes to their home page. 

As for their A/V software, only 50% noticed a prompt indicating a malware infection. When asked whether they were concerned about their security, 35% said yes. But another 30% were annoyed by the popups (the popups that said they were infected).

So what are the risk factors for infection?

I sum it up as “The more you do, the more at risk you are.” 

  • People with more browser history were at greater risk.
  • People who had more downloads were at greater risk.
  • People who used more streaming media sites were at greater risk (there really is something about adult sites).

None of this was surprising to me. Infection rates for males and females were about the same, and younger crowds were a bit more infected, but the difference is not statistically significant.


All in all, a good study. They want to repeat it with a larger sample size, which requires more money, but my guess is that it will merely confirm these results.

A couple of unsurprising tidbits on passwords


Digital Trends published an article yesterday entitled What’s the Worst Password of 2012?

Retaining the number one spot as the least secure password for yet another year, people that continue to use the phrase “password” as their personal password remain at the highest risk when it comes to hacking. Detailed in SplashData’s annual report, the three phrases “password,” “123456,” and “12345678” have continued to dominate the top three spots on the list.



SplashData CEO Morgan Slain said “We’re hoping that with more publicity about how risky it is to use weak passwords, more people will start taking simple steps to protect themselves by using stronger passwords and using different passwords for different websites.”

I am not convinced that people don’t understand how risky it is to use weak passwords. There are definitely some who don’t know, but users are pretty good at recognizing what’s a weak password and what’s not. From page 14 of Stephen Cobb’s presentation at Virus Bulletin, entitled “What do Consumers Really Know About Antivirus”:

[Image: slide 14 from the presentation]

So you see, they aren’t totally clueless. Most people have had it drilled into them what’s secure and what’s not. However, as I talked about in my blog post about that presentation, there is a gap between what people say they believe and what they actually do (e.g., even though people say they believe that Windows PCs are the most insecure platform, Windows is still the most used platform).

Thus, it is not necessarily a customer awareness issue of educating people about what makes a weak password; it may be a case of getting them to not use weak passwords in spite of knowing that their password is weak.

The advice given by security experts is the typical advice that we give:

In order to create a safer password, SplashData suggests using security phrases with at least eight characters while utilizing a variety of characters within the phrase. This could include using a common phrase that’s broken up by underscores between words or substituting symbols for letters within a word. For instance, the phrase “p@$$w0r6” is more secure than typing out the word using all letters.



Splashdata also recommends using multiple passwords across different types of sites. For instance, using the same security phrase on a social network as you do when accessing your online banking could become problematic if the social network is hacked.
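
For what it’s worth, generating a strong, unique password per site takes only a few lines of code; here is a sketch using Python’s standard secrets module, where the alphabet and length are arbitrary choices.

```python
# Sketch of generating a unique, strong password per site; the alphabet
# and length are arbitrary choices.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%_-"

def new_password(length=16):
    """Build a random password from a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per site, so a breach of one site doesn't expose
# the others - which is exactly what password managers automate.
for site in ("bank", "social", "email"):
    print(site, new_password())
```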

Even though I give out this advice from time to time, I cringe when I do. Why? Because nobody does it! Everyone I talk to engages in some sort of password reuse unless they have password management software (but are you going to install it across multiple devices? What if you borrow someone else’s device?).

Why does no one do it? Because humans are bad at remembering long, random strings of data that we only use occasionally. In order to get good at memorizing stuff like this, we have to train our memories. Our brains aren’t naturally built this way.

Furthermore, we take shortcuts because we don’t want to reset our passwords. It’s so undesirable that 38% of us would rather clean a toilet than think up a new password:

Increased security can come at a price — way too many usernames and passwords to keep track of. If you find yourself overwhelmed by authentication overload, you’re not alone.

Some 38% of us think attempting to solve world peace would be a more manageable task than trying to deal with yet another set of login credentials, according to a recent Harris Interactive poll.

Another 38% agreed with the statement, “I would rather do house chores (e.g., my laundry, the dishes, clean the toilet) than to have to create another new user name or password.”

Why do people believe this? Because there is a serious mental cost to changing our passwords. We have to memorize a new one. So, if it’s going to be secure, we’re going to use mental heuristics to make it easier. And if we have to manage many passwords, we’re going to use even more heuristics.

What’s a quick mental shortcut? Password reuse. And why would we reuse? So we don’t get locked out of our accounts. We’re used to the convenience of doing it once and having it work forever. After all, we use the same keys to get into our houses and cars for years and years. Imagine if you had to rekey your house locks every few months. Did you ever notice that your house key unlocks every room in your house (i.e., the front door is locked but the inside doors are not, other than the bathroom)?

I don’t know what the answer is to the problem of weak passwords, but giving advice that no one follows is probably a step in the wrong direction. On the other hand, maybe we will all, as a society, get really good at memorization. There are ways to do it.

But we should probably look at other options, too.

The modern face of mobile malware


At the Virus Bulletin Conference last month in Dallas, Grayson Milbourne and Armando Orozco presented a talk entitled XXX Malware Exposed: An in-depth look at the evolution of XXX Malware. I have renamed it in this blog post to mobile malware because the techniques that malware writers are using are not unique to any one platform. They could be applied to any mobile environment with a few changes (I have x’ed out a certain smartphone platform to underscore this, even though its name is given in clear text in the actual presentation).

Mobile malware started two years back:

  1. It began with a trojan SMS installer.

  2. It then evolved into trojans in the Chinese marketplaces, delivered by pirated applications.

  3. The next iteration was a mobile trojan called “xxx Dream” that infected the legitimate marketplace; it rooted the device and had bot capabilities, communicating with a command-and-control center. It also installed a payload.

Mobile malware is now delivered in multiple ways: through social engineering, rogue marketplaces, infected applications, SMS phishing, man-in-the-middle attacks, and drive-by infections. Furthermore, malware writers have started using the same techniques to evade detection as in the desktop world: polymorphic distribution (minor changes in every download, including hashbusters, to evade signatures), payload encryption, security app removal, and payloads in embedded files.

Gee, you might think they’ve done this before.

How has the malicious action changed over time? Early versions did not use encryption and sent premium SMS messages. Now, they root the device and add it to a bot network that installs payloads for its applications.

How can this happen?

Part of the problem is that there is no easy mechanism for updating the smartphone OS to the latest version. Many users are running OS versions from two or more releases ago. Manufacturers don’t have easy ways to push updates (there’s no Windows Update for your phone… yet).

Malware authors know this; if hundreds of millions of people are using an insecure OS, malware authors will exploit it. They do things like:

  • Data loss - Malware that sits on the smartphone and collects contact info, then sends the data to a remote server without the user’s consent. It uses the collected contact data to spam the victim’s SMS contacts.

  • Malicious apps – There is some SMS phishing stuff, too.

What are some security tips?

  1. Use smart device policy – Download your apps from a trusted source, not something like a rogue marketplace or through torrents.

  2. Device access – Use passwords, not swipe lock screens.

  3. Encrypt confidential data – This way, even if you lose your phone, the data is protected.

  4. Remote locate and/or wipe – Similar to the above: if you lose your phone, you can minimize the damage.

  5. Mobile device management – This is relevant for BYOD. Companies need an access story around allowing 3rd party devices into their network.

  6. Device backup – Keep your data backed up. You don’t want to lose your phone and your data; it’s easier to replace the phone than the data.

  7. Get help – This stuff is hard. Get help when you need it.

And that’s my summary of the evolution of mobile malware. It looks a lot like the evolution of PC malware, and the security tips for increasing your security are very similar.
