Terry Zink: Security Talk

IT Gangnam Style parody from F5 networks


My behavior when answering my phone has changed because of my suspicion of unsolicited email


Nowadays, whenever I get email from someone I don’t recognize, I am instantly suspicious of it. To be sure, there are people I’ve never heard from that I want to hear from, but I am always wary whenever I see an email address that is unfamiliar to me. I never open any attachments, I don’t click on links, I barely even want to read the message. I am instantly suspicious.

This, of course, has been driven by spammers. The odds that email from someone I’ve never heard of is from a spammer are higher than the odds that it is from a legitimate person. I’m just not that special in real life that all sorts of people want to talk to me.

But also, if I ever get email from someone I do know and the message looks suspicious – all upper case SUBJECT LINE, a short internal message, broken grammar, etc., then I make the leap that the person’s account is compromised. If you’re using poor grammar, then I think you’re a spammer. Hey, that rhymes!

I’m surprised at how this has carried over to my cell phone.

I’m even less special when it comes to phone calls than I am with email. Unknown people contact me by email fairly regularly. But they rarely do on my phone (except for that person who keeps calling up and asking for “Chris”). Therefore, when I get a phone call from a number I don’t recognize, I’m usually content to let it go to voice mail. Afterwards, I check my voice mail and if it’s from a friend of mine, or a service I want to call back (like a Massage Therapist, or the bank, or something not personal but important) I will add them as a contact to my smart phone (as an aside, I am biased but I really like my Windows Phone).

In other words, I treat unknown phone callers as suspected spammers.

It never used to be this way. Way back when I was growing up, every phone call was from an unknown caller. There was no way to predict who was on the other end of the line unless you had a good idea of who was going to call you that day or evening (some of my sister’s friends were so predictable we literally could predict them). But so what? The phone rang, you answered it. That’s how it works.

Many of you reading this can remember when call display was introduced. It wasn’t even that long ago. It would give you the phone number of who was dialing and it was a big deal. We were all like “Hey, this is cool! We now can see who is calling before we pick up the phone!”

Today, in the age of cell phones and smart phones, at least to me, not having call display is bizarre. Why wouldn’t I want to know who is calling before I answer the phone? (Services like Google Voice, Skype and even the corporate line here at Microsoft mask the caller identity by sending from a general number, but you still get a number).

Why do I treat unknown callers like potential spammers?

I know that phone calls are unlikely to be spammers, but sometimes they are. Sometimes I get a call from the Symphony asking me to buy tickets to some symphony because I purchased tickets to the video game symphony two years ago. Ugh. I don’t need to spend 5 minutes arguing with you. I also don’t want to be bothered when the gym I thought about signing up to keeps calling me every day with a special offer. Double ugh.

So I guess it’s all about not wanting to deal with the hassle of marketing.

Just like in email, where I am suspicious of unknown senders because they are trying to sell me something, this has translated into phone calls and the suspicion that they, too, are trying to sell me something. Because sometimes, they are trying to sell me something! But the cost is higher; whereas with spammers I can hit delete so long as there aren’t too many of their messages, with a phone call I have to choose to politely sit and listen to their spiel, and it wastes 2-10 minutes of my time.

It’s just easier to screen the call and add them to my address book and then if they call back, intentionally choose to ignore them.

What about any of you? Does anyone else do the same thing?

Practical Cybersecurity, Part 1 – The problem of Education


I thought I’d close out the year by presenting my 2011 Virus Bulletin presentation. It builds upon my 2010 presentation about why we fall for scams which I blogged about earlier this year in my series The Psychology of Spamming:

Part 1 - How our brains work
Part 2 - The Limbic system, cognition and affect
Part 3 - External factors that influence our decisions
Part 4 - Why we fall for scams
Part 5 - Solutions
Part 6 - The Flynn Effect

What follows is the solution to the problem.


Practical Cybersecurity – An Introduction

The cybersecurity industry has a problem.

For years we have been preaching to users that they need to practice better cyber security awareness – don’t click on links in spam, hover your mouse over a link to see where it goes, don’t click on suspicious videos in your Facebook account. But the message never gets through; people fall for hacker tricks every day.

The security industry then moans “Oh, users cannot be taught simple concepts! It’s hopeless!” But is the situation really hopeless? Is the problem the general public’s inability to grasp the message? Or is the problem the message itself? For example, take some standard password advice that the computer industry routinely gives: use a strong password, one that consists of a long string of random letters and numbers. Do this for all of the websites that you use. Yet countless studies demonstrate that humans are only capable of memorizing 7-10 random digits at a time. How are we supposed to memorize 10 random digits, and do this multiple times for the many websites that we use?

 

The advice that the computer industry gives is impractical; it is like giving people the secret formula for becoming a millionaire: first, get a million dollars…

It’s not that we in the cybersecurity industry don’t have a valuable message to get across to people. We do. However, we need to learn how to give good, practical advice that people can use in real life, and we need to learn how to teach it so people will retain it. To do that, we need to look at successful educational techniques and use them when we evangelize our own message.

Background

At the 2010 Virus Bulletin conference, I presented a paper titled The Psychology of Spamming. In it, I examined why people fall for scams in email. The reason is that the amount of change in technology has outpaced our biological capacity to absorb it. For example, our bodies evolved to seek out fats, salts and sugars. We need those in order to survive. But today, we can mass produce donuts, salad dressing and yummy French fries. We know that these are not healthy for us, but our brains tell us that they are very tasty and they masquerade as food. We can’t yet tell the difference between good-for-us and not-good-for-us.

Similarly, when it comes to technology, we fall for scams when they involve money, food, sex or revenge.

When a scam hits us and hides behind any of these masks – a phishing scam that threatens to cut off your source of income, or a fake Viagra scam that promises you more sex – the logical part of our brains, the neo-cortex, stops executing and the limbic part of our brain, the part designed to react, takes over. If the correct emotions are triggered, we behave in ways that are contrary to our own best interests. Thus, while technology has helped our lives immensely, it does not replace our basic biological needs and drives. You can’t eat an iPad.

The solution to combating scams is through education. Researchers have determined that over time, people are becoming more intelligent. Educational test scores have not improved, but IQ test scores have. People are better at abstract reasoning now than they were before. For example: what animal is that? A cow. What sounds does a cow make? Moo. How many legs does a cow have? Four. What else has four legs? A dog. How are they similar? They are both mammals. And so forth.

Because people are better at abstract reasoning, they are better at transferring concepts from one topic to another. People today understand moral concepts like theft and robbery and the need to protect your property. If we already teach people the ideas of protecting their physical property and recognizing physical danger, then through good education techniques we should be able to teach them to recognize cyber danger and protect their online property.

Transfer

The key to educating people about cyber security is through “transfer”; it is the ability to take what you have learned and transfer it to a new situation. When we are in school, we transfer basic addition to learning our multiplication tables, and transfer multiplication to calculus. We transfer our knowledge gained from walking to running to navigating while driving. We transfer cooking a single food item to preparing complex meals. The learning that we have acquired previously is reused for – transferred to – other situations, and then built upon.

When security experts complain about users’ lack of security awareness, they are really complaining about users’ inability to transfer common sense in real life to a fake Viagra scam in their junk mail folder. Users might consider themselves savvy at recognizing real life scams, but this vigilance does not transfer to computer scams. Instead, they revert to believing that a deal too good to be true really is true, and don’t think through the possibility that it is most likely a scam.

Why is there this lack of transfer?

Research into learning techniques and education has uncovered methods that support transfer. In order to make our message stick with the general public, we need to use these methods when we are distributing our message.



Part 1 - Introduction 
Part 2 – Expertise
Part 3 – Experience 
Part 4 – Metacognition 
Part 5 – What should we teach?
Part 6 – Bringing it all together  

Practical Cybersecurity, Part 2 – Expertise


Expertise

If we want to teach people to be cyber aware, they need expertise. But how much is enough? Do we want people to become security experts? Or just good enough to resist most types of scams?

In other fields, experts are able to process information differently than novices. In fact, they have a whole bunch of abilities:

  1. Experts have acquired a great deal of content knowledge that is organized in ways that reflect a deep understanding of their subject matter.

  2. Experts notice features and meaningful patterns of information that are not noticed by novices.

  3. Experts are able to flexibly retrieve important aspects of their knowledge with little attentional effort.

  4. Experts have varying levels of flexibility in their approach to new situations.

This expertise is important because it is a powerful tool against scams. In order for us humans to make decisions that act contrary to our own best interest, our emotions must be invoked. At low and intermediate levels, our emotions act in an advisory role. But at higher levels, we make decisions that we would not normally make.

The way to combat this is to increase the decision maker’s level of vigilance. If a person can recognize that a message is a scam they will not fall for it. How can they recognize that a message is a scam? They have a lot of content knowledge and have seen plenty of scams in the past. They can detect features in scams that a novice would not normally notice and can retrieve key aspects of that knowledge with little effort. Almost automatically, they can recall the telltale signs of past scams and spot them in the present one. Furthermore, when a new scam arrives, they are flexible enough to apply those experiences from before to this new experience.

An expert can recognize scams because they know what scams look like.

How do we teach people to become experts?

People are not born experts. For almost any ability, there is no such thing as innate talent where a person simply has a natural instinct for it. The way to transform a person from a novice into an expert is through an activity called Deliberate Practice. Deliberate Practice is different from regular practice in a number of important ways:

  1. It deliberately works on improving key skills.

  2. It involves consistent feedback.

  3. It can be repeated a lot.

  4. It isn’t much fun.

Researchers have found that becoming an expert in any particular field requires about 10,000 hours of deliberate practice. If we work 2000 hours per year at our jobs, that’s 5 years to become an expert. It is unrealistic to expect people to become experts at computer security because no one can put in that much time to learning how to use the Internet.

If we can’t get the public to become experts, then we can at least bring up their level of awareness to “good enough” and leverage the key principles of developing expertise.

In order for the general public to gain sufficient expertise in cyber awareness, they must have a level of competence that is more than just cursory. When experts think about a subject, they have a deep foundation of knowledge to draw from. They don’t just know a lot about one narrow subject; they know a lot about many related subjects as well.

Experts do not just know a lot about different subjects, they are able to organize that knowledge so that they can retrieve it quickly. The knowledge is not random, either. It is relevant to what they need to understand.

For example, given a chessboard of an actual game, expert chess players can look at the board for a few seconds and then place twenty or so pieces based upon memory, whereas novice players can only place five or six. However, when given chess boards of randomly placed pieces, both experts and novices could only place a few pieces. This shows that chess experts recall relevant information – a random chessboard doesn’t occur in real life, but an actual game could because both players implement strategies that could lead to that particular board.


For cyber security, people need to understand a wide variety of tactics that hackers use to steal information as well as a wide variety of defenses. It is not enough to say “Do this to protect from spam” but instead we must look at where spam comes from, how spammers try to trick the public and what countermeasures users can take. By looking at the problem from multiple angles, users gain a much deeper level of understanding.

But the security industry has a heavy responsibility. It is not up to the user to figure out what they need to know; the security industry must deliberately outline the relevant principles and organize them in a way that users can understand. A bullet list of do’s and don’ts is not enough to guard against scams because users will not be able to recall them. Experts start from abstract concepts (be cautious) and then build out techniques (hovering a mouse over a link to verify that it goes to the page it says it is going to).

The security industry must target the principles that are important and present them in a way such that people retain them.


Part 1 – Introduction
Part 2 – Expertise
Part 3 – Experience
Part 4 – Metacognition
Part 5 – What should we teach?
Part 6 – Bringing it all together

Practical Cybersecurity, part 3 – Experience


Whenever people learn new information, they do it in a way that fits in to their current experiences of how they view the world. There is a children’s book called Fish is Fish. The book is about a fish who lives in the ocean and wants to see the rest of the world, so he asks his friend Frog to venture out on land and report back to him. Frog agrees and goes to see the rest of the world.

A couple of days later, Frog comes back and tells Fish about all of the things he saw: birds in the air, dogs running around on the ground, and large buildings that people go into and out of. Fish, however, imagines these things according to his own experiences: a bird is a fish with wings, a dog is a fish with feet, and buildings are large rocks that fish dart in and out of. Fish models this new world after the world he is familiar with.

Similarly, for children, their model of the world is that the world is flat. When told that the earth is round, they picture that it is round like a disc. When told that it is round like a sphere, they picture a disc within a sphere. The children are not stupid, only ignorant. They do not have the knowledge to be able to change their model of the world, but instead fit this new knowledge according to what they do know.

[image]

People’s minds are like a ball of yarn and their existing ideas are like the strands of yarn, some unconnected, some loosely interwoven. Instruction is like helping students unravel individual strands, labeling them and then weaving them back into a fabric so that the understanding is more complete. Later understanding is built upon earlier beliefs. While new strands of belief are introduced, rarely is an earlier belief pulled out and replaced. Instead of denying students’ existing beliefs, teachers need to engage them, differentiate them from misconceptions, and integrate them into a more complete conceptual understanding.

In the classroom, if prior beliefs are not engaged, people revert back to their preconceptions after the test has been taken. An experiment was performed where the subject was tested on memorizing long strings of random numbers. At first, the student could only remember about seven numbers, but over time managed to get to 70 or more. He did this by breaking the numbers down into meaningful chunks (for example, grouping digits into familiar quantities such as running times and dates). However, when the researchers tried to get him to remember letters, he reverted back to only being able to remember about 7 characters.

This demonstrates why random password advice is next to useless. People can only remember about 7 random characters, and nobody in real life has to remember random strings of anything. The things we do remember are song lyrics, names of people, events, television shows, comedy patter, and so forth. They are things that have emotional meaning to us, and therefore we can better remember them. They are not random characters but instead are characters (words) that hold meaning and therefore can be recalled.

It also explains why people fall for fake A/V software. People are used to being told by the industry that they need it. They already have an existing level of trust built up with the software industry about security practices. We can tell them that they shouldn’t click on pop-up links and that they should be suspicious of adverts for A/V software. They might say “Yes, I will be careful in the future” but we haven’t engaged their prior beliefs that:

  1. They need A/V, and

  2. That they trust us to tell them the truth.

When a scam crosses their screen, they revert back to the beliefs that they need A/V and that the person behind the scam is telling them the truth, and they click to install it.


Instead of preaching to users that they have to be careful about scams, we should integrate it into the message that we already teach – they need A/V software and they should only ever get it from trustworthy sources. This uses beliefs they already have (I need A/V) and adds a new strand of yarn (get it from a place I trust). We then must educate users about who is trustworthy.

Students remember abstract concepts better than contextualized ones. We must teach that users must only download A/V software from trustworthy sites. We do not necessarily start off by saying “Look for the https” or “Is it from a site I recognize?” Those concepts come later. The expert is able to take the more general concept and then find the specifics. In this case, the expertise we want the user to acquire is to first stop and think “Is the site I am downloading this from trustworthy?” What comes next to the expert is the question “How do I tell that this site is trustworthy? Oh, it says https://avg.com! I know that the ‘s’ means that it is secure, and I have heard of AVG!”

This relates to my previous point about expertise. An expert can draw from large bodies of information and they are able to recall organized knowledge and apply it to new situations. Someone who has learned about fake A/V could now see a pharmacy site. They have learned to ask the question “Is this Internet site trustworthy?” A user would look for signs to see if the site is to be trusted or not. Does it use https? Do they recognize any logos or the URL? Their preconception in this case is that the Internet is a place to buy things. But they have learned that they should only buy things from trustworthy sources, otherwise don’t do it. The abstract concept was added to the ball of yarn and is applied to the new scam.

When we engage pre-existing beliefs, we improve transfer.


Part 1 – Introduction
Part 2 – Expertise
Part 3 – Experience
Part 4 – Metacognition
Part 5 – What should we teach?
Part 6 – Bringing it all together

Practical Cybersecurity, part 4 – Metacognition


Metacognition

A third technique that supports transfer is teaching methods that incorporate metacognition. Metacognition is “thinking about thinking” – understanding the reason behind a concept. For example, we all know that the North Pole is cold. Why is it cold? Because it receives less direct sunlight than the equator. Is the South Pole warm or cold? Well, since the South Pole receives less sunlight than the equator, it too must be cold.

Metacognitive approaches help students take control of their learning and organize their knowledge. For many of us, history is a boring list of names, dates and events. But one public schoolteacher was determined to change that. Rather than telling the class about the events of the American Revolution, she assigned one group of students the role of the loyalists and another group the role of the rebels.

   image

The class gathered one day not to recite dates and names, but to debate the merits and detriments of the colonies’ rule by the British. The rebels’ first speaker begins[1]:

England says she keeps troops here for our own protection. On face value, this seems reasonable enough, but there is really no substance to their claims. First of all, who do they think they are protecting us from? The French? Quoting from our friend Mr. Bailey on page 54, ‘By the settlement in Paris in 1763, French power was thrown completely off the continent of North America.’

Clearly not the French then. Maybe they need to protect us from the Spanish? Yet the same war also subdued the Spanish, so they are no real worry either. In fact, the only threat to our order is the Indians . . . but . . . we have a decent militia of our own. . . . So why are they putting troops here? The only possible reason is to keep us in line. With more and more troops coming over, soon every freedom we hold dear will be stripped away. The great irony is that Britain expects us to pay for these vicious troops, these British squelchers of colonial justice.

The loyalists respond:

We moved here, we are paying less taxes than we did for two generations in England, and you complain? Let’s look at why we are being taxed— the main reason is probably because England has a debt of £140,000,000. . . . This sounds a little greedy, I mean what right do they have to take our money simply because they have the power over us.

But did you know that over one-half of their war debt was caused by defending us in the French and Indian War. . . . Taxation without representation isn’t fair. Indeed, it’s tyranny. Yet virtual representation makes this whining of yours an untruth. Every British citizen, whether he had a right to vote or not, is represented in Parliament. Why does this representation not extend to America?

Students then argued amongst themselves regarding the role of paying taxes to the Crown and the benefits they receive. The teacher interrupted the internal debate, and they continued onward, but the point is made – understanding the rationale for both positions strengthens the understanding of the events leading up to the Declaration of Independence. History is no longer names and dates. There is meaning to it. When history comes alive, students retain the information and can transfer names, dates and the rationale behind the American Revolution. The learning sticks.

When it comes to cyber security, we need to take a similar approach. We often give users advice on how not to fall for phishing scams: your bank will never tell you to log in to their site with your username and password or else be locked out, and they will never ask you to respond with your username and password in an email. So, don’t do it. But why won’t your bank ever do this?

We must tell users why the bank won’t do this: their employees are never allowed access to their users’ accounts, only bad guys ask for passwords. They don’t lock users out of their accounts because they would lose customers due to bad customer service. And so forth. Users must be made aware of the rationale behind this.

How could we go about teaching users to do this?

We could start by writing training programs that show what it is like on the other side. Imagine a computer program where the user gets to play the part of the hacker:

[image]

As the hacker, you are given a scenario wherein your goal is to figure out a way to trick the user into giving up his username and password. The player then earns points when they succeed in doing nefarious things.

At the next level, you get to play the part of a bank trying to teach its users to be secure: what could you do to prevent users from losing their passwords while still keeping things easy (which, you know, is pretty much exactly what cyber security experts do in real life)? The player earns points when they pick strategies that are actually used in practice.

Obviously, this would just be a game, but by seeing what it is like to be on the other side of the computer, users are better prepared for when they themselves are targeted. Thinking about both sides reinforces what people learn and subsequently transfer. By learning how to extract underlying themes and principles from their learning exercises, people learn how to apply that knowledge to new situations.


Part 1 – Introduction
Part 2 – Expertise
Part 3 – Experience
Part 4 – Metacognition
Part 5 – What should we teach?
Part 6 – Bringing it all together 


[1] These excerpts are taken from How People Learn: Brain, Mind, Experience and School; National Academy of Sciences, 2004.

Practical Cybersecurity, Part 5 – What should we teach?


What concepts should we teach?

What topics are the most important ones for users to learn? There are so many possibilities that it is hard to narrow down to only a handful. If we only got to pick three, here are the three I would choose:

  • The Internet is fun but only deal with trustworthy sources.

    This is the most important piece of advice we can give users because it is an abstract concept. All other pieces of advice derive from this. You can buy antivirus software online but make sure you buy it from a website you trust. You can shop for pharmaceuticals but you must only buy them from a source you trust.

    By teaching people an underlying abstract concept, other security countermeasures emerge out of this. It is abstract concepts that support transfer, not contextualized advice. Once users get the idea that they should only deal with trustworthy sources, their behavior changes. They know to login to secure sites because those ones can be trusted. They use different passwords with different websites because they don’t know if some of them can be trusted to keep their information secure, and so forth.



  • Keep your software up-to-date

    This is the most important piece of contextualized advice we can give users. In order to make sure that people remember it, we should build upon experiences that they already know and do every day.
    One activity that everyone in the west knows about is brushing their teeth. We do it in order to prevent our teeth from decaying and falling out. Toothaches are very painful, and brushing helps prevent them.

    Furthermore, brushing our teeth is something that we have to do every single day, even twice a day. It is not something that we do once and forget about, it’s daily maintenance and we have to do it every day for the rest of our lives. If we don’t, our teeth go bad.

    Keeping our software up-to-date is like brushing our teeth:

    - It’s good for our health.
    - If we don’t do it there are bad consequences.
    - We have to do it every day (or at least regularly) for the rest of our lives.

Once we have built the necessary foundational knowledge for users, and once they understand that they need to stay up-to-date, software must make it easy for them to stay up-to-date. Microsoft Windows should have automatic updates enabled by default, and so should web browsers. There must be an easy way for users to see if their software is configured to update automatically, and they need to know how to check what the settings are (a rough sketch of what such a check might look like appears after this list).

image

  • Learn to recognize scams.

    Next to keeping your computer up-to-date, the ability to recognize a scam is the most important thing. Criminals do not need to exploit vulnerabilities in computers to cause harm, they only need to trick the user into doing something like sending them money or handing over their username and password.

    Experts are able to transfer information that they learned in one context and apply it to another. If someone is going to recognize a computer scam then it will be much easier if they borrow from pre-existing knowledge and apply it to computers. For example, many parents will know when their children are trying to manipulate them. If they have two kids and come home one day and find that the cookie jar is empty or worse yet, has been knocked over and is broken, and then both kids deny it, something is wrong. Parents often rely on cues their kids gave them in response to their answers to detect deception, such as averting their eyes, inconsistent or evasive answers or turning their bodies away from direct questioning.

    When teaching people to recognize phishing, a connection should be made by linking a broken cookie jar to a bank telling someone to log in to their account and update their information. Parents already know how to tell if something is wrong in their house and if the emotional connection can be made between that and something with their email notifications, then rather than fear being invoked, suspicion is aroused. If suspicion is aroused, then fear is only a low level intensity emotion and acts in an advisory role. If people think through what they are doing and equate cyber scams with real life ones then they are less likely to fall for them.
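Coming back to the second point above (keep your software up-to-date): here is a minimal sketch of what an “is my machine set to update automatically?” check might look like. It is written in Python against the legacy Windows Automatic Updates registry value; both the registry path and the meaning of the AUOptions values are my assumptions from Windows 7-era behavior, not something taken from this series, so treat it purely as an illustration.

```python
import winreg

# Assumed location of the legacy Automatic Updates setting (Windows 7 era).
# Both the key path and the value meanings below are assumptions.
AU_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update"

AU_OPTIONS = {
    1: "Never check for updates",
    2: "Check for updates, but let me choose whether to download and install them",
    3: "Download updates, but let me choose whether to install them",
    4: "Install updates automatically (the secure default we want)",
}

def describe_auto_update_setting() -> str:
    """Return a human-readable description of the Automatic Updates setting."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, AU_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "AUOptions")
    except OSError:
        return "Could not read the Automatic Updates setting."
    return AU_OPTIONS.get(value, f"Unknown setting ({value})")

if __name__ == "__main__":
    print("Automatic Updates:", describe_auto_update_setting())
```

The specific registry plumbing matters less than the principle: the check should be one click (or zero clicks) away from the user.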





Part 1 – Introduction
Part 2 – Expertise
Part 3 – Experience
Part 4 – Metacognition
Part 5 – What should we teach?
Part 6 – Bringing it all together

Practical Cybersecurity, Part 6 – Bringing it all together


How young to start?

Where should we teach cyber security? Should it be something that people learn on their own time? Or is it something that should be included into formal education?

Paypal recently (when I first wrote this paper) released a whitepaper on combating cybercrime. In it, the authors assert that today’s educational efforts are good but do not scale to the required level of millions of computer users, and that scaling them up requires significant investment by the government and private industry. Significantly more funding is needed.

The advantages of formally incorporating cyber awareness into the education system are clear:

  1. By starting early, students have more time to gain exposure to a wide range of topics. This helps them build the level of deep expertise needed to bring together knowledge from different sources. With a formal curriculum in place, educators could organize the relevant knowledge to make it easier to absorb and recall.

  2. Formal education about a topic at an early age creates the early experiences that people build upon. Whereas educators must address students’ pre-existing experience to get them to learn about a topic, setting the foundation early means that there will be fewer preconceptions to overcome later on.

  3. Assignments could leverage metacognition. When students have to think about why they are doing something, it helps learning and transfer. Home assignments could include teaching their parents about cyber security and what they learned. This helps reinforce what the students learn and there’s an added bonus – the government gets to use the students for free to teach their parents! That’s like getting two for the price of one!

On the other hand, creation of a cyber security curriculum in school is a major undertaking. It requires collaboration between industry and government and the knowledge is very specialized. Most adults today understand basic arithmetic, writing skills, reading skills, and social studies. Nearly all teachers are capable of teaching other subjects if they have to. However, expertise in computer security is not widespread. How many people in the world are experts on botnets? Malware? Hacking? Worse yet, how many people in the security industry have a background in education, teaching, and organizing their knowledge? The people who are good at teaching don’t know the subject, and the subject matter experts can’t teach it [1].

This is not an insurmountable problem but it would require a significant investment from both the private and public sector.

The Security Industry’s Responsibility

Software companies are not off the hook. Not only do we have a responsibility to educate the public, but we have a responsibility to write software in a way that makes it easy for users to be secure. We can achieve this by using a mechanism called “Choice Architecture.”

Choice Architecture is a principle that influences people’s decisions based upon the way that options are presented. People’s decisions can be swayed by a number of influences including ordering, peer pressure, and default choices.

For example, on a fast-food menu where people have lots of choices, most people will choose the first item. Public school systems have experimented with this. Rather than placing unhealthy selections like French fries and hamburgers at the top, they put healthier selections like vegetables and yogurt at the top of the menu. The result? Students make more healthy selections than when the unhealthy choices are presented first. The same items are on the menu but the ordering influences their decisions.

A more powerful influence is the power of the default choice. Many employers today offer their workers a savings plan for retirement, such as a 401(k) or 403(b). This is where employees contribute to a plan, and frequently the employer also contributes. It’s almost “free money” for the employee if they are part of the plan. When employees by default are not opted into the plan and need to enroll themselves, enrollment is low – less than 50%. However, when their employer opts them into the plan by default and the employee must opt out in order to not participate, compliance is very high – over 90%.

The “power of default” is one of the most powerful tools that the security industry can use. Whatever the default setting is for a piece of software, the vast majority of users will stick with that. It doesn’t matter how much we tell users to switch to another setting, the “stickiness” of the default is what will remain. To use this, security vendors should make their software secure by default. In real terms, this means that software is set to update automatically and the user must opt out of downloading and installing the updates.
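As a toy illustration of what “secure by default” looks like in code, here is a small sketch. The settings and names are entirely hypothetical; the point is that the secure behavior requires no action from the user, while the less secure behavior requires an explicit opt-out.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InstallerSettings:
    """Hypothetical installer settings illustrating secure-by-default choices."""
    auto_update: bool = True          # secure default: updates install themselves
    warn_on_downloads: bool = True    # secure default: warn before running downloads
    remember_passwords: bool = False  # riskier convenience feature stays off

def install(settings: Optional[InstallerSettings] = None) -> InstallerSettings:
    # Most users click straight through, so whatever defaults are chosen here
    # become the security posture of the vast majority of installations.
    settings = settings or InstallerSettings()
    print(f"Installing with auto_update={settings.auto_update}")
    return settings

install()                                      # the common case: secure defaults
install(InstallerSettings(auto_update=False))  # opting out takes a deliberate step
```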

Modern software does this – Microsoft Windows has Windows Update, and Adobe regularly updates Adobe Acrobat; it downloads the update automatically and then prompts the user to install it. However, other pieces of software such as Internet browsers do not all update by default. The browser is particularly vulnerable because it is the hacker’s weapon of choice for delivering malware. Browsers should be set up so that automatic updates are enabled upon installation and the user is prompted to install when the update is ready.

Although some browsers upgrade by default or prompt the user to update, not every piece of software upgrades by default. In my Firefox browser, I am running several plugins – Adobe Flash, Java, Shockwave, Quicktime, Silverlight and Media Player. Honestly, some of those plugins I use so rarely that I would never think to update them. However, a browser plugin called BrowserCheck from Qualys lets you scan your browser and tells you if any of the pieces are out of date. If so, there is a link that you can click on that will take you to the latest version:

image

I had to go and install this Qualys plugin myself; it wasn’t preconfigured on my browser. However, it should be. It’s useful because it consolidates a whole bunch of disparate plugins so I don’t need to keep track of them myself. Plugins like BrowserCheck should be standard on every browser, and there should be some sort of notification to let the user know when one of their plugins is out of date. Having a plugin checker installed by default ensures users are notified of security problems… and thereby helps reduce the risk from one of the biggest attack vectors today.

Conclusion

In this series, I have looked at the problem of how to educate the public to become more aware of cyber security. I looked at why people don’t retain the message (because our teaching methods are poor) and how we can improve upon those.

However, I only looked at a small fraction of better educational teaching techniques; the subject is too vast for me to cover in 6000 words. What is encouraging about this is that because so much research has been done into formal learning, we know what works and what doesn’t:

  1. Students need to know a lot of stuff, and organize it well, in order for that stuff to become useful in real life.

  2. Students take new knowledge and weave it into their pre-existing knowledge. Teachers need to know their students’ prior beliefs.

  3. Students retain knowledge when they have to think about why they are learning something, and why things are the way they are.

There is no shortcut to being aware of the Internet threat landscape and giving people the skills they need to traverse it. But we do have a responsibility to tell users what they have to do and we also have a responsibility to ensure that they are learning, retaining, and using what we tell them. We do that by looking at ourselves and seeing what we can do to help.

And then, maybe one day, the cyber security industry won’t have such a big problem.



Part 1 – Introduction
Part 2 – Expertise
Part 3 – Experience
Part 4 – Metacognition
Part 5 – What should we teach?
Part 6 – Bringing it all together


[1] If we knew how to teach it I wouldn’t be writing this article.


Teaching consumers security habits


I thought I’d round out the year with a summary of Randy Abrams’ talk, Teaching Consumers Security Habits, from this past year’s 2012 Virus Bulletin Conference in Dallas, TX. I wanted to write about it long ago but I wanted to post my series Practical Cybersecurity first. The two topics naturally fit together.

Abrams began his talk by saying that technology is not the only solution to the security problem, even though we in the security industry think it is, despite years of evidence that contradicts this belief.

Think about it for a second. If the way we have always done things is best, then why are some of the best universities giving away their courses? Our education system uses 300-year-old principles that developed because books were rare and the professor essentially read out the contents. However, this is 2012 (or 2013, which is when you’re probably reading this). We're wired. We can do better.

Researchers have known for a long time that breaking a video lecture into small chunks helps students retain information better. Embedded quizzes keep them focused. Drawings appeal to the visual learner. This is evident with Khan Academy.

What can we learn from behavioral researchers? How can we use what users naturally do to form good security habits?

We need to understand The Habit Loop. This was first written up in the book The Power of Habit by Charles Duhigg.

What is The Habit Loop? It is the following sequence of events:

  1. Trigger
  2. Routine
  3. Reward

Knowing something like this, a retailer (Target) might know a woman was pregnant before her family did by monitoring her shopping habits.

How do we change a habit (such as a poor security habit like using the same password everywhere)? Well, as it turns out, the brain doesn’t forget a habit. The only way to break a habit is to change the routine.

Studies have shown that when we continue doing the same thing (well, when a mouse keeps running through a maze, which acts as a proxy for “us”), brain activity goes down and the mouse isn't thinking about running the maze anymore. A habit is like a subroutine: we can do things while our brains think about other things.

However, there is a pleasure spike associated with the activity. In a habit, the reward response moves to the moment the habit kicks off instead of coming at the end of the action, the way it did the first time you performed it. That is, when we do something for the first time, we go through the activity and the reward comes at the end. Once it is a habit, the reward comes as soon as we decide to do the activity, even before we have completed it.

First time: Action. . . . . . . . . . . . .Activity. . . . . . . . . . . Reward
Habit: Action. . . . Reward. . . . . Activity. . . . . . . .. . . . Smaller reward

The reward reinforces the activity. If your friend sends you funny videos in email, when you click the first time you get a reward from it (laughing at the funny video). The next time this occurs, the habit of clicking is in your brain because your brain remembers.

As habits form, the brain stops participating in decision making. The pattern unfolds automatically unless you deliberately fight it.
   
Habit routines must be replaced. Some common habits:

  • Stress -> Cigarette –> Satisfaction
  • Stress -> Exercise –> Satisfaction
  • Email -> Click -> Funny Video
  • Email -> THINK –> Reward. This is the part that has to change; we have to teach users to THINK first and break that habit.

Are there any examples of this working in real life on a large scale? Absolutely. We have an example of changing social habits. This example involves lowering the infant mortality rate in the rural United States during the 1950s and 1960s, where it was much higher than in urban areas.
   
To change this, researchers identified the major sources and the major causes. The solution was social change. As documented by Paul O'Neill, biology became part of the core curriculum and was used to teach proper nutrition, which cut down on malnutrition, and infant mortality dropped by 62%. 62%!

This sounds like great news! The problem is that for students and education, it will take at least two generations. Ouch.

image

What sorts of real things can we do to teach consumers security habits?

  1. We can create games that teach the proper concepts. If they are fun, people will remember them better because it binds emotions to actions.

  2. Examples where people get to see which phishing attacks are most effective at working in real life.

  3. Weak passwords: Security professionals can't just explain why passwords are weak, because everyone nods their heads without really understanding. Instead, put their passwords through a password cracker to show how quickly they can be broken (a person guessing vs. a machine cracking); that underscores the reality of weak passwords. A back-of-the-envelope version of that demonstration is sketched below.
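Here is that back-of-the-envelope version. The guessing rates are assumptions picked purely for illustration (real cracking speed depends enormously on the hash algorithm and hardware), and the model is naive worst-case brute force rather than a real cracker:

```python
import string

def seconds_to_crack(password: str, guesses_per_second: float) -> float:
    """Worst-case brute-force time under a naive model: the attacker tries the
    full keyspace of whichever character classes the password uses."""
    alphabet = 0
    if any(c in string.ascii_lowercase for c in password):
        alphabet += 26
    if any(c in string.ascii_uppercase for c in password):
        alphabet += 26
    if any(c in string.digits for c in password):
        alphabet += 10
    if any(c in string.punctuation for c in password):
        alphabet += len(string.punctuation)
    return (alphabet ** len(password)) / guesses_per_second

HUMAN_GUESSER = 1.0   # roughly one guess per second (assumed)
CRACKING_RIG = 1e9    # a billion guesses per second (assumed)

for pw in ["monkey", "Monkey12", "k7#Vq2$wL9pX"]:
    print(f"{pw!r}: ~{seconds_to_crack(pw, CRACKING_RIG):,.0f} s by machine, "
          f"~{seconds_to_crack(pw, HUMAN_GUESSER):,.0f} s by a person guessing")
```

Seeing "your password falls in under a second" lands much harder than a lecture about entropy.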

So, to conclude, we have to teach consumers security habits in a smarter way. The current methods are not working, and using only technology won’t work either. We have to fight habits with habit remediation, and we have to fight ignorance with education.

And then maybe one day, we in the security industry won’t have such a big problem.

Out of the office for a while


I’m out of the office for a while so there won’t be many updates to this blog in January, 2013. See you when I return!

If you’re wondering where I am, here’s a clue:

Yes, experts all say that you shouldn’t tell others you’re gone when you’re gone. Well, I have virtually nothing of value back in my home anyhow. Meh.

Phishing infographic – how phishing works


A reader sent me the following infographic detailing how phishing works. Check it out:

  • It contains statistics on the prevalence of phishing
  • Some characteristics of phishing messages, and
  • Some advice on how to protect yourself

Good stuff.

Phishing With Bait - Spam Threats in 2013

Source: Phishing advisory infographic by Lifelock.com

Hanging around Buenos Aires


For the last bit of December 2012 and the first part of January 2013, my wife and I were traveling in Argentina and Chile in Patagonia, the southern part of the country. The final two days were spent in Buenos Aires, the capital of Argentina.

I didn’t have many expectations of the place before I got there; I just knew that it was a large city (11 million, one of the top three in South America depending on how you count it, after Rio de Janeiro and São Paulo). But the city is amazing!

Buenos Aires is like a European city but without the ridiculous expense of Europe (i.e., everything in Europe costs almost double what it costs in North America). Instead, the costs in Buenos Aires are slightly less than North America for some things (restaurants) and much less for others (hostels and the subway).

To give you an idea of the architecture, below is the Casa Rosada, which houses the offices of the President of Argentina. It’s located in Plaza de Mayo (that may be wrong but I can’t be bothered to look it up right now), which is the main political square of the country, where mass protests regularly take place. There are tours during the day on weekends but since we were there on a Friday, we couldn’t go inside.

The statues in front, like this one, are reminiscent of Spain or Italy:

image

 

Another section of the city houses the Palacio de las Aguas Corrientes (literally: Palace of Water Flows, according to Bing Translator). For some reason, at first I thought it was called Palacio de las Aguas Calientes, or Hot Water Palace. That made me think it was an engineering facility for the city’s water flow.

image

I was thinking to myself “Man, that is the nicest public works building in history! Nothing even comes close to it!” It was only later that I discovered my pronunciation was wrong and that it is now a museum. But according to Wikipedia, it originally was built to accommodate supply tanks of running water for the city in the late 19th century.

I don’t know if the story is true or not, but one of the locals told us that the building was designed in Belgium and shipped to Buenos Aires where it was reconstructed locally. If so, that’s amazing. And a lot of effort.

Whenever I’m in South America (and Europe), I like to check out the Catholic churches. I do it because the architecture and art within them is so much nicer than in Protestant churches in the United States and Canada. I may not be Catholic but their churches are way nicer everywhere in the world. Even the Church of England buildings in the UK, which are very nice (Westminster Abbey, St. Paul’s Cathedral) were originally Catholic.

This church is located near the Casa Rosada on the other side of the square. In the picture below you can see me waltzing around acting like such a tourist, snapping photos:

image

image


But my favorite part of the city’s various amusements is the Necropolis – the Recoleta Cemetery. It is a huge square encompassed by high walls and takes up many city blocks. Inside are large graves belonging to very important people within the city – presidents, generals, nobles, and high ranking officials. It takes forever to walk around the place:

image

image

image

image

If you’re not thinking “Wow, some of those graves are pretty big!” you should be. I calculated that a few of them were larger than our two-bedroom condo.

And many of them were nicer than our two-bedroom condo. How is it possible that dead people have a better place to live than me?

Along the way I found a lazy cat just kind of lying around. Unlike my cat at home, this one was pretty skinny:

image


It took us two days to walk around Buenos Aires and we probably could have easily spent a couple more. It was very hot those two days and that contributed to draining us of energy.

But I liked the city.

And that’s my story of our time sightseeing in Buenos Aires.

Still no blog posts this year

You may have noticed I haven't posted much this year. The reason is that I have been very unmotivated. I don't know why; I guess after six and a half years of writing I am running out of things to say. I'm still here, though. I'm just working on other things at the moment.

How to use Safe Senders in EOP and FOPE


In the EOP (Exchange Online Protection, our newer service) and FOPE (Forefront Online Protection for Exchange, our older service), there are some nuances that end users should be aware of when using the safe senders and blocked senders feature.

Customers who use Outlook as their mail client and sync their safe and blocked sender lists to EOP or FOPE can have their individual user lists respected by the service. However, there are some differences between FOPE and EOP:

  1. FOPE respects only safe senders. Blocked senders and domains are still blocked (deposited into Junk) by the email client. Safe domains are not respected.

  2. EOP respects safe senders and domains, and blocked senders and domains. The spam action for Blocked Senders/Domains is the same as for all other spam blocked by the content filter.


However, users who want to use safe and blocked senders need to know that if they are using EOP or FOPE, Outlook and EOP/FOPE handle it differently: EOP and FOPE respect Safe Senders and Domains by inspecting the RFC 5321.MailFrom while Outlook adds RFC 5322.From to a user’s safe sender list. EOP inspects both the 5321.MailFrom and 5322.From for Blocked Senders and Domains.

This means that what you add as a safe sender or domain in Outlook might not work the way you think! To understand why, you need to know that every email message has two sender addresses:

  1. The SMTP MAIL FROM, otherwise known as the RFC 5321.MailFrom. This is the email address that is used to do SPF checks, and if the mail cannot be delivered, the path where the bounced message is delivered to. It is this email address that goes into the Return-Path in the message headers.

  2. The From: address in the message headers, otherwise known as the RFC 5322.From. This is the email address that is displayed in the mail client.

Much of the time, the 5321.MailFrom and 5322.From are the same. This is typical for person-to-person communication and is what people usually want to add safe senders for. However, when email is sent on behalf of someone else, they are frequently different. This happens most often with Bulk Email and it is where problems can occur.
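To make the two addresses concrete, here is a small sketch using Python’s standard email module. The raw message is invented for illustration (it mirrors the Oceanic Airlines example that follows); the 5321.MailFrom is normally recorded in the Return-Path header when the message is delivered.

```python
from email import message_from_string

# A fabricated bulk-mail message where the envelope sender (5321.MailFrom,
# preserved in Return-Path) differs from the From: header the user sees.
raw_message = """\
Return-Path: <oceanic.airlines@bigcommunications.com>
From: Oceanic Airlines <oceanic@news.oceanicairlines.com>
To: you@example.com
Subject: Fly to Sydney for less!

Book now and save.
"""

msg = message_from_string(raw_message)
print("5321.MailFrom (Return-Path):", msg["Return-Path"])  # what EOP/FOPE check
print("5322.From (displayed From): ", msg["From"])          # what Outlook adds to safe senders
```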

For example, suppose that the airline Oceanic Airlines has contracted out Big Communications to send out its email advertising. You then get the following message in your inbox:
 

image

 

In your email client, you see the sender is oceanic@news.oceanicairlines.com. To prevent this message from going to junk, you add it as a safe sender in Outlook. Unfortunately, the next time it comes through, it also gets filtered. What’s going on? You added it as a safe sender!

The reason is that oceanic@news.oceanicairlines.com is the 5322.From address and it is the one you see in Outlook, but EOP and FOPE do not inspect it. The 5321.MailFrom is oceanic.airlines@bigcommunications.com and that is the one FOPE and EOP inspect. But it does not appear anywhere in the message display.

In order to have it skip filtering, you need to add the 5321.MailFrom to the safe senders manually. To do this:

  1. In the Outlook client, open up the message in a new window by double-clicking on it.

  2. On the top ribbon in Outlook, look for the Tags tab. In the bottom right corner there is a little square with an arrow pointing out of it. Click this little square. This tab is there in Outlook 2010 and 2013. I’m not sure about 2007 and 2003 but there is something similar.




  3. In the Internet Headers section there will be a Return-Path header. The value of this field is the RFC 5321.MailFrom and it is the one you want to put into your safe senders.

    It is difficult to look for this header within this popup window so you should copy-and-paste all of these headers into a text editor like Notepad. There is no way to make this window bigger within Outlook:




  4. Close this popup window and email message and open up your safe senders. To do this in Outlook 2010 and 2013, from the main Outlook window click Junk –> Junk Email Options –> Safe Senders tab.

    Click the Add… button and paste in the value from the Return-Path header. Click OK to close the dialogue window.

You have now added the correct email address to your safe senders list such that it integrates with EOP and FOPE, which will subsequently not mark messages coming from this sender as spam the next time they are delivered to you. Admittedly this is non-intuitive but in my next post I will explain why EOP and FOPE perform safe sender checks on the 5321.MailFrom email address.

Why do safe senders in EOP and FOPE operate on the 5321.MailFrom address instead of the 5322.From?


In my previous blog post How to use Safe Senders in EOP and FOPE, I explained that in the EOP and FOPE service, the spam filter inspects the 5321.MailFrom when doing a safe senders check whereas Outlook adds the 5322.From address (the one you see in your email client) to the safe senders list.

The question is: Why do EOP and FOPE use the 5321.MailFrom and not the 5322.From?

The reason is: Security. If the service respected the 5322.From address, spammers could abuse the safe sender functionality to deliver messages to the inbox without filtering.

In EOP and FOPE, the spam filter does not respect a safe sender if the sender hard fails an SPF check. This is to prevent a user from adding a sender as a safe sender, a spammer spoofing it and then the message getting a free pass to the inbox. For example, suppose that I added security@paypal.com to my safe senders list.

Example 1 - Spam filter inspects 5321.MailFrom, Spammer spoofs my safe sender in the 5321.MailFrom and 5322.From

RFC 5321.MailFrom = security@paypal.com [This will hard fail an SPF check!]
RFC 5322.From = security@paypal.com 

Because this message hard fails an SPF check, it will undergo spam filtering where it will be flagged as spam and delivered to my junk folder even though I said "I want to receive all messages from Paypal." The spam filter respects your wishes except when it has probable cause not to. 

If the spam filter inspected the 5322.From address and it matched the safe sender and skipped filtering, the user would have an unfiltered malicious message in his or her inbox. While some spam messages do get past the filter, we want to minimize this where possible.
 

Example 2 - Spam filter inspects 5321.MailFrom, Spammer spoofs my safe sender only in the 5322.From

RFC 5321.MailFrom = security@blah-blah-blah.com [This message is SPF None since blah-blah-blah.com does not exist]
RFC 5322.From = security@paypal.com 

This message will undergo spam filtering because the 5321.MailFrom is not on the user's safe sender list.

If the spam filter inspected the 5322.From address and it matched the safe sender and skipped filtering, the user would have an unfiltered malicious message in his or her inbox.

 

Example 3 - Spam filter inspects 5321.MailFrom, Spammer spoofs my safe sender only in the 5322.From but it passes an SPF check

RFC 5321.MailFrom = security@spammer.com [This message passes an SPF check since the spammer registered the domain and set up SPF records]
RFC 5322.From = security@paypal.com 

From Examples 1 and 2, you may be tempted to say "Why not respect the 5322.From address if it passes an SPF check?" The reason is that there is no relationship between the 5321.MailFrom and the 5322.From address. In this example, the message passed an SPF check and the From: address the user sees is on his safe sender list. However, it is not actually coming from the safe sender and instead is an SPF workaround the spammer has implemented. Once again, we want this message to undergo spam filtering.

If the spam filter inspected the 5322.From address and it matched the safe sender and skipped filtering, the user would have an unfiltered malicious message in his or her inbox. 

 

Example 4 - Spam filter inspects 5321.MailFrom, sender legitimately sends email from my safe sender in the 5321.MailFrom 

RFC 5321.MailFrom = security@paypal.com [This message passes an SPF check because it legitimately arrives from the sender]
RFC 5322.From = security@paypal.com 

In this example, the message actually comes from Paypal. The SPF check passes and the message is therefore delivered to the user's inbox. This doesn't mean that a message must pass an SPF check for the filter to respect a safe sender, only that it must not hard fail.
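
To tie the four examples together, here is a minimal sketch of the decision logic described above. It is an illustration of the rules in this post, not the actual EOP/FOPE implementation; the function name and the SPF result labels are assumptions.

    def skip_spam_filtering(mail_from, safe_senders, spf_result):
        """Sketch of the safe sender check described in Examples 1-4.

        mail_from    -- the RFC 5321.MailFrom (Return-Path) address
        safe_senders -- the user's safe senders, keyed on 5321.MailFrom
        spf_result   -- "pass", "none", "softfail" or "hardfail" (assumed labels)
        """
        if spf_result == "hardfail":
            return False          # Example 1: a hard SPF fail overrides the safe sender
        return mail_from.lower() in safe_senders   # Examples 2-4: only 5321.MailFrom counts

    safe = {"security@paypal.com"}
    print(skip_spam_filtering("security@paypal.com", safe, "pass"))      # Example 4: True
    print(skip_spam_filtering("security@paypal.com", safe, "hardfail"))  # Example 1: False
    print(skip_spam_filtering("security@spammer.com", safe, "pass"))     # Example 3: False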


That's the rationale for respecting the 5321.MailFrom address: respecting the 5322.From address would make it too easy to exploit. On the other hand, most of the senders you want to skip filtering for have the same 5321.MailFrom and 5322.From address, so adding a sender to your safe senders from Outlook is a de facto 5321.MailFrom addition. It is a balance between security (avoiding maliciously spoofed email going to the end user because they added the spoofed address as a safe sender; we have seen this many times) and having the email client do what the user expects (clicking "Do not block" results in a one-click addition to the safe list, because users do not know the difference between the 5321.MailFrom and the 5322.From).

Where this doesn't work is when the two addresses are different, which is frequently the case with bulk email. When this occurs, users must manually add the 5321.MailFrom address to their safe senders list.


Smartphone OS market share vs. malware targeted at that OS


I was reading yesterday on Yahoo News (and on Flipboard on my tablet) that the Department of Homeland Security issued a report detailing which smartphone platforms mobile malware targets.

I decided to do a sanity check – how well does the amount of malware targeted at a platform correspond to the number of users on that platform? After all, for years, Microsoft defenders (including myself) said that malware authors targeted Windows because it was the most prevalent OS out there; switch around the market share and you switch around the amount of malware per platform.

To figure this out, I went to the DHS's report and got the distribution of malware by smartphone platform, and then compared it to StatCounter's global distribution of smartphone usage. Since the DHS lumps everything else into a single "all others" category and StatCounter tracks a lot of different phone OS's, I excluded that category from StatCounter's numbers and adjusted the market share figures accordingly. I did the same thing for the malware distribution.
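
The renormalization itself is simple arithmetic. Here is a small sketch of it; the percentages below are placeholders for illustration, not the actual DHS or StatCounter figures:

    def renormalize(shares):
        """Drop the 'Other' bucket and rescale the remaining shares to sum to 100%."""
        kept = {name: pct for name, pct in shares.items() if name != "Other"}
        total = sum(kept.values())
        return {name: round(100.0 * pct / total, 1) for name, pct in kept.items()}

    # Placeholder numbers for illustration only.
    market_share = {"Android": 40.0, "iOS": 25.0, "Symbian": 7.0,
                    "Blackberry": 5.0, "Windows Phone": 3.0, "Other": 20.0}
    print(renormalize(market_share))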

So what do the numbers show?

[Chart: the share of mobile malware targeting each OS compared with that OS's smartphone market share]

 

Taking a look at the above, the number one platform, Android, has the most malware, and it is disproportionately high relative to its market share (that is, the share of malware targeting it is greater than the market share it controls).

Blackberry, Windows Phone and iOS all have much less malware targeting them than their market share would suggest, with iOS having substantially less malware.

Symbian used to be the number one smartphone OS, and you can see that malware still targets it disproportionately compared to the market share it has.

The DHS states the following about Android:

Android is the world's most widely used mobile operating system (OS) and continues to be a primary target for malware attacks due to its market share and open source architecture.

Industry reporting indicates 44 percent of Android users are still using versions 2.3.3 through 2.3.7, known as Gingerbread, which were released in 2011 and have a number of security vulnerabilities that were fixed in later versions.

The growing use of mobile devices by federal, state, and local authorities makes it more important than ever to keep mobile OS patched and up-to-date.

So basically:

  1. Because Android is the number 1 platform, it is a primary target for malware attacks. This is consistent with the claim that because Windows was (and still is) the number one platform on the personal computer, it was the primary target for those types of attacks.

  2. Android’s open source architecture is another reason it is a target. I don’t know whether or not this is true; I don’t have enough experience in malware to assert one way or the other but there are many other experts out there who can validate this claim better than I can (I suspect that there are advantages both ways).

    I have seen Internet commenters state that Windows is insecure by design. Once again, I don't know whether Windows's architecture makes it more prone to vulnerabilities, as I don't have the necessary expertise in that area.

    However, it does lead to the next and final point.

  3. Just as with Windows and the PC, the key point is that it is more important than ever to keep a mobile OS patched and up-to-date.

Supporting email over IPv6, part 1 – An introduction


One of the important projects I have been working on for the past few months is supporting email over IPv6. Long time readers of this blog (all four of you) will remember that last year I wrote a series of posts on email over IPv6:

Part 1 – Introduction

Part 2 – Why we can’t use IP blocklists in IPv6

Part 3 – A solution: Whitelists

Part 4 – Population of the whitelists

Part 5 – Removals, key differences, and standards


In case you can't tell by the above, the backbone of the solution was using IPv6 whitelists and maintaining a list of good senders, and then sharing that list amongst the major receivers.

I now entirely refudiate* that idea.

We now have a new solution in mind. I can’t claim credit for inventing any of it. The basic algorithm was developed in close participation with other bright minds in the industry, and the performance issues are addressed and acknowledged by people within my own company.

The new solution is better than the previous one: it is more scalable, easier to manage, and builds upon what is already there in IPv4. However, unlike the previous solution, there is more uncertainty about how it will perform when it comes to running the service.

This series of blog posts will go into the problem of email over IPv6, a technical solution to the problem, and technical considerations into implementing that solution.

Stay tuned for more.

* It’s a word, look it up.

4 Reasons to Stop Freaking Out About the NSA


With all the chatter recently about NSA revelations and how they are spying on everyone, I found this funny video on Cracked that made me laugh a few times. I thought you all might enjoy it, too.

Another humorous anecdote about the NSA story


I found this, posted by one of my friends on Facebook. I thought it was funny and thought all of you might, too.

New features in Office 365


Recently, in Office 365 we introduced two new features in our Forefront Online Protection for Exchange product (FOPE). I refer to this as FOPE 14 because the service runs on Exchange version 14. This is our older service; all of our customers either have been migrated or will be migrated to Exchange Online Protection, which I refer to as EOP 15 because that service runs on Exchange version 15.

Below is a description of both.

1. An Additional Spam Filtering (ASF) rule for marking Bulk email as spam

Our existing FOPE 14 customers now have the ability to mark Bulk email as spam using ASF rules (our EOP customers already have this setting). Before we released this feature, the FOPE service would mark messages as bulk by stamping SRV:BULK in the X-Forefront-Antispam-Report header, and customers could then mark them as spam by creating a rule on their local on-premises email server (an ETR in Exchange, or the equivalent for software like Postfix, Sendmail, etc.). The process for that is described here:

Bulk Mail Filtering in FOPE
http://social.technet.microsoft.com/wiki/contents/articles/2922.bulk-mail-filtering-in-fope.aspx

While this works, it is counterintuitive to customers to use the FOPE Admin Center for managing their spam filter options and then have to do an additional step on their local mail server.

The change for FOPE 14 is that marking bulk messages as spam is now an option in the Admin Center. By enabling this option, all email with SRV:BULK in the headers will be marked as spam:

[Screenshot: the bulk email ASF option in the FOPE Admin Center]

There is no Test mode for this rule because all bulk email is already stamped with SRV:BULK. This acts as a de facto Test mode. To see what this rule would mark as spam, simply look for this header with this value.
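
If you want to do that check in bulk over saved messages, a quick script like the following works. It is only an illustration; the "inbox" folder is a hypothetical location for messages you have exported as .eml files:

    # Sketch: scan saved .eml files and list the ones this ASF rule would
    # have marked as spam, i.e. those stamped SRV:BULK by the service.
    import glob
    from email import policy
    from email.parser import BytesParser

    for path in glob.glob("inbox/*.eml"):        # hypothetical folder
        with open(path, "rb") as f:
            msg = BytesParser(policy=policy.default).parse(f)
        report = msg.get("X-Forefront-Antispam-Report", "")
        if "SRV:BULK" in report:
            print(path, "->", msg["Subject"])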

Some notes about this rule:

a) Integration with the rest of the spam filter

This ASF rule acts like any other ASF rule. If a user has added a specific sender as a safe sender and syncs their safe senders list to FOPE, and this ASF rule is enabled, then when email arrives from that sender and is marked as Bulk, the spam filter respects the safe sender and the message is delivered to the user's inbox.

Respecting safe senders is a big improvement, as long as the lists are synced to FOPE.


b) Migration of the setting

This ASF rule is Off by default in FOPE. However, the ASF rule already exists in EOP and it is On by default for new customers. In other words, if you are an existing customer in FOPE, this feature will now appear in your ASF rules but it will be Off. If you are a brand new customer of ours and you sign up for EOP, this ASF rule will be On.

As a FOPE 14 customer, when you are migrated to EOP, this setting will be migrated as well. If it is Off in FOPE it will be Off when it is migrated to EOP. If it is On in FOPE, it will be migrated as On in EOP.

c) Detection capabilities

The mechanism of Bulk Email detection is the same in FOPE as in EOP. One is not better than the other.

Some customers find our bulk email detection too conservative; in a future post I will explain how to expand this detection capability in EOP.

That is the first change we made in FOPE spam filtering this past summer.

2. Expansion of URL filtering

One of the mechanisms that we use in our spam filters is examining the content of the message, extracting the URLs and then checking them against 3rd party URL lists. If a URL matches, we increase the spam score of the message. At the end of the filtering pipeline, if the spam score is greater than the threshold, the message is marked as spam.

The greater the number of URL lists we use, the wider the coverage of malicious URLs. While our spam rules engine looks for other spammy characteristics within messages (headers, subject line and body content), spammy URLs provide an additional layer of protection. In general, the more URL lists the better. In some circumstances, a URL check can catch a spam message with short, nonsensical body content and a link to a spammy web site better than regular expressions can.
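
To make the technique concrete, here is a minimal sketch of extracting URLs from a message body and querying a DNS-based URL blocklist. This is not our implementation; the query zone shown follows SURBL's published lookup format, but treat it as an assumption and check each list's documentation and usage policy before relying on it.

    import re
    import socket

    URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)", re.IGNORECASE)

    def listed_domains(body, zone="multi.surbl.org"):
        """Return the domains in body that appear on the given DNS blocklist zone."""
        hits = []
        for domain in set(URL_RE.findall(body)):
            query = "%s.%s" % (domain.rstrip("."), zone)
            try:
                socket.gethostbyname(query)   # any A record answer means "listed"
                hits.append(domain)
            except socket.gaierror:
                pass                          # NXDOMAIN means "not listed"
        return hits

    body = "Limited offer!! Visit http://spammy-example.test now"
    print(listed_domains(body))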

In the FOPE service, we use the following lists:

a) URIBL
Website: http://www.uribl.com
Delisting URL: https://admin.uribl.com/?section=lookup

b) SURBL (New this past summer)
Website: http://www.surbl.org
Delisting URL: http://www.surbl.org/surbl-analysis 

c) DBL (from Spamhaus) (New this past summer)
Website: http://www.spamhaus.org/dbl/
Delisting URL: http://www.spamhaus.org/lookup/

EOP already used the latter two lists. What changed, even in EOP, is that the total number of unique URLs we check against greatly increased, so we now use a much larger portion of each list. Both environments now use the same lists. However, at present, when a message is marked as spam because it contains a URL on one of these three lists, we do not indicate in the message itself which URL was found or on which list.

That’s the second change we made to the FOPE service this past summer.
