Channel: Terry Zink: Security Talk

Why we believe strange things



This post doesn’t have anything to do with cyber security. It’s one of those “It’s my blog and I can write what interests me” posts.
* * * * * * * * * * * * *

A couple of years ago I read Robert Cialdini’s book Influence: The psychology of persuasion. It’s considered one of the classics on how to persuade other people to your point of view. In it, Cialdini lists six things:

1. Reciprocity – we do things for others once they have done things for us

 
2. Commitment and consistency – once we’ve established our position, we tend to want to establish consistency in our positions because being seen as inconsistent is negative

 
3. Likability – we do things for people we like

 
4. Social proof – we look to others when trying to establish our beliefs (i.e., we make purchase decisions based upon recommendations from others)

 
5. Authority – we are more likely to take something seriously if it comes from an authority (e.g., an actor dressed in a doctor’s coat will cause us to take his message seriously if he is in a commercial advertising a medication)

 
6. Scarcity – the more scarce a resource is, the more we desire it

 

If part of your job is to persuade or influence others, then you need to read this book.

One thing that struck me about the social proof chapter is why we believe certain things, even when sometimes those beliefs don’t make any sense. For example, in modern society, we have people who believe the moon landing was faked, or who have major reservations about genetically modified foods (National Geographic tackled this in a previous issue about why we doubt science).

One of the points that Cialdini makes is that our reasons for believing something today may not be the same reasons for believing something originally. In fact, that reason for original belief may be completely irrelevant today. So, suppose you believe something that sounds odd to everyone else, for example, that the Winnipeg Jets are the best hockey team in the NHL. Cialdini makes the analogy that the belief is a sheet of plywood (the Jets are the best team) and the reason for it is that your favorite player plays for them (let’s go back in time and pick 1992-93 rookie of the year, Teemu Selanne, who holds the record for most goals by a rookie with 76).

Just because Selanne is your favorite player and plays for the Jets does not make them the best team in the NHL. That’s a shaky foundation upon which to build a belief system, so imagine the plank of wood is “The Jets are the best team” and the supporting block of wood is “Selanne plays for them.”

 


Here’s what that belief looks like. Not too stable.


The Jets make the playoffs that year, which indicates that they have a reasonably successful team, and that adds another reason – a block of wood – to support the underlying (yet clearly wrong) belief that the Jets are the best team. The whole system is still shaky, but a little less so.

Next, a couple of years later, the Jets add a Russian goaltender with a fast glove hand who makes unbelievable saves every time you watch. They win some close games and stage some comebacks. More evidence that they are the best team. Your belief gains another support structure. It’s still wobbly, but not as much as before.


Let’s now toss in the fact that the Jets are your hometown team and you can go to their home games. Hometown teams are almost always the most popular team with the local fan base. It’s yet another reason why the Jets are the best team; none of these reasons is particularly strong on its own, but look at how they work together to stabilize everything.


 

The Jets add a couple more good players and make the playoffs; sure, they don’t get past the first round, but it’s a fluke. Too many injuries, and besides, the refs are biased against them. They want the Vancouver Canucks or Edmonton Oilers to win; everyone’s out to get Winnipeg.

You add another base support to your belief system.

 

Then, halfway through the 1995-96 season, the Jets trade Teemu Selanne to Anaheim for two players in an unequal trade that makes absolutely no sense, stunning the entire city. Why would the Jets make such a boneheaded trade? To get almost no one in return? What a terrible trade!

Your original reason for believing the Jets were the best team in the NHL has been removed from the table structure, so the whole thing should collapse like a house of cards, right? No! The other pillars you’ve added along the way are more than enough to compensate for the loss of your original reason for believing something:


 

 

The belief system remains in place long after the original reason for believing it has passed on. Because you’ve built it up over time, it can withstand damage and still remain intact.

* * * * * * * * * * * * *

I’ve always wanted to write that up. I try to be careful now about my own beliefs and whether the original reasons I believed something are still relevant. I’ve concluded I’m as biased as anyone, so I try not to be dogmatic about too much.

But anyhow, make sure you read Cialdini’s book if you haven’t already.


Hooking up additional spam filters in front of or behind Office 365


Note: This blog post reflects my own recommendations.

Over here in Exchange Online Protection (EOP), people sometimes ask me why we don’t recommend hooking up multiple layers of filtering in front of our solution. That is, instead of doing one of these:

Internet -> EOP -> hosted mailbox
Internet -> EOP -> on-prem mail server

… a customer wants to do something like this:

Internet -> on-prem mail server -> EOP -> hosted mailbox
Internet -> on-prem mail server -> EOP -> on-prem mail server

… or even this:

Internet -> another cloud filtering solution -> EOP -> hosted mailbox
Internet -> another cloud filtering solution -> EOP -> on-prem mail server


If you read through our Office 365 Mailflow Best Practices, you’ll see that those configurations are listed as not supported. If you want to put another filter in front of EOP, you should ensure that the other filtering solution is doing your spam filtering, and if your on-prem mail server does not have spam filtering, you should install a spam filter on it. In other words, I do not recommend pipelining your email through two layers of spam filtering.

So why do I recommend against double filtering?

After all, adding more malware filters gives you better malware protection. Shouldn’t more spam filters likewise result in better spam protection?

No.

EOP makes heavy use of sending IP reputation. So, suppose the sending IP is 1.2.3.4. In the supported case, it looks like this:

Internet, 1.2.3.4 -> EOP


In the unsupported case, it looks like this:

Internet, 1.2.3.4 -> on-prem mail server 12.13.14.15 -> EOP


In the first case, EOP sees the original IP. In the second case, it sees the on-prem mail server’s IP. Since the on-prem mail server will never be on an IP reputation list, the email must be spam-filtered instead of being blocked at the network edge. This loss of the original IP degrades the overall experience.

But why can’t we simply crawl through the headers of a message, looking for the original IP? After all, some solutions do that.

There are numerous reasons why we don’t do this, but here’s the biggest one: our IP throttling would no longer work.

EOP’s IP throttling is a variant of graylisting. If email comes from a new IP address [1], EOP throttles that IP by issuing a 450 error, instructing the sending mail server to go away and try again later. Most legitimate mail servers retry, whereas most spammers give up and move on to the next message. This technique has been used in spam filtering for years, and when EOP introduced it, we saw a lot of spam from new IPs get blocked (that is, deferred and never retried).
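The graylisting idea above can be sketched in a few lines. This is an illustrative reconstruction of the general technique, not EOP’s actual implementation; the retry window and function names are invented:

```python
import time

# A brand-new sending IP gets a transient 450 on its first attempt;
# a retry after a sufficient delay is accepted with 250.
RETRY_WINDOW_SECONDS = 300  # illustrative "try again later" window

first_seen = {}  # sending IP -> timestamp of first connection attempt

def smtp_response(sending_ip, now=None):
    """Return an SMTP status code for a connection from sending_ip."""
    now = time.time() if now is None else now
    if sending_ip not in first_seen:
        first_seen[sending_ip] = now
        return 450  # defer: legit servers retry, spammers usually don't
    if now - first_seen[sending_ip] >= RETRY_WINDOW_SECONDS:
        return 250  # the sender came back after the window: accept
    return 450      # retried too quickly: keep deferring
```

A legitimate server that retries five minutes later gets through; a spammer that never retries is never accepted, and EOP never has to scan the message at all.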

But if you do something like this:

Internet, 1.2.3.4 -> on-prem mail server 12.13.14.15 -> EOP

Even if EOP crawled through the headers and extracted the original IP address, and then issued 450s to the connecting mail server, we’d be issuing those 450s to the on-prem mail server (12.13.14.15) and not to the original spammer. This would force queues to build up on the on-prem mail server, and then everything starts breaking: either the on-prem mail server falls over because it can’t handle the growing queues, or it retries and tries to shove the message through EOP anyhow, at which point we may not yet have enough reputation to make a spam verdict downstream. IP throttling works very well in the service.

EOP assumes that, in order to give you the best experience, we’re using all of the tricks up our sleeve to stop spam – including IP throttling. Putting another filter in front of EOP removes a key piece of that filtering, and that loss isn’t made up elsewhere downstream.


IP throttling is a key piece, but the reality is that any throttling – sending-IP based or not – that issues 450 responses and assumes that good senders retry while spammers do not won’t work properly if you stick something in between the origin and EOP.

And since not all of our filtering is applied, even if we crawled through headers we would not get the same filtering experience, because of the behavioral difference between spammers (who won’t retry) and a relaying mail server (which will). If we could get the same experience elsewhere, we wouldn’t apply throttling at all.

That‘s why double spam-filtering is not supported, and why we don’t go out of our way to make it work.


That’s not to say you can’t put another service in front of EOP. But if you do:

  • IP reputation blocks, and IP throttling, do not work properly
  • DNS checks that use the sending IP do not work properly, such as PTR lookups, SPF checks, DMARC checks, and even DKIM if you route it through a mail server that modifies the message content
  • IP Allow entries do not work properly because the original sending IP has been lost
  • Some rules in the spam filter will not work properly because they look for a particular IP range or sending PTR
  • The Bulk mail filter does not work properly
  • The antispoofing checks do not work properly


All of this will result in more spam getting delivered, and more good email being misclassified as junk.

On the other hand, here’s what does still work:

  • Malware filtering
  • Exchange Transport Rules (ETRs) that don’t rely upon the sending IP
  • Safe and blocked senders
  • Safe and blocked domains
  • Advanced Threat Protection (Safe Links and Safe Attachments)
  • Some parts of the spam filter that only look at content still work, e.g., malicious URLs

So, filtering will work but it won’t be as good.


If you are going to use a third party to do spam filtering, we recommend you do it this way: Using a third-party cloud service with Office 365. That points your organization’s MX record at EOP so that we are in front and the third party is behind us. Many add-on services recommend this arrangement because they assume you have a spam filter in front of their service. In many cases, you can probably find an equivalent service in Office 365 to replace whatever you were using that other appliance or cloud-filtering service for, so you don’t need to run multiple services or appliances.

If you have to put a third party in front of EOP such that your MX doesn’t point to EOP, then we recommend that you rely upon that third party to do your spam filtering. Have it stamp a set of x-headers for spam and non-spam, and then write ETRs that look for those headers and either mark the message as spam (SCL 5-9) and take the spam action, or mark it as non-spam (SCL -1, not 0-4) so it gets delivered to the inbox. Mail still goes through malware filtering, ETRs, safe and blocked senders, and Advanced Threat Protection, and our other services (e.g., Data Leakage Protection, Advanced Security Management) still apply, too.
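The header-to-SCL arrangement above can be sketched as a small lookup. The header name here is invented for illustration; a real deployment would use whatever verdict header your third-party filter actually stamps:

```python
# Hypothetical sketch of the ETR arrangement described above: the third-party
# filter stamps a verdict header, and transport rules map it to an SCL so
# EOP's spam verdict is effectively delegated to the upstream filter.
def assign_scl(headers):
    """Map the third party's verdict header to an SCL value."""
    verdict = headers.get("X-ThirdParty-Spam-Verdict", "").lower()
    if verdict == "spam":
        return 9      # SCL 5-9: take the configured spam action
    if verdict == "clean":
        return -1     # SCL -1: bypass spam filtering, deliver to the inbox
    return None       # no verdict header: let EOP filter normally
```

Note that "clean" maps to SCL -1, not 0-4, exactly as the text recommends: 0-4 would still leave the message subject to EOP's own spam verdict.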

If you have to put a third-party in front of EOP and want double spam-filtering, you will probably notice more misclassified email than if you used either of the above two options.

Hope this helps.


 

[1] There’s much more to IP throttling than simply coming from a new IP address with no previous history, but that’s the basic idea.

 

Phishing, magic, Stuxnet, and how they all work together


Part 1 – There’s more to me than just fighting spam

If all you know of me is through this blog, then you’ll know I’ve been involved in the fight against spam, malware, and phishing for over a decade.

On the other hand, those of you who know me in person or have checked out my LinkedIn profile know that once upon a time I was an amateur magician. Actually, I still am; I just don’t practice as much [1]. For years I specialized in close-up sleight-of-hand magic, and then a few years ago I branched into mentalism [2].

One of the shows I watch on YouTube is Penn and Teller: Fool Us. On the show, up-and-coming (and even professional) magicians perform a trick for the audience and for Penn and Teller. If they can fool Penn and Teller, they get a chance to perform their trick at the Rio hotel in Las Vegas during one of Penn and Teller’s shows. I enjoy watching the show. It’s not often that magicians fool them both, but sometimes it happens. When I watch, I can work out about 1 in 3 of the tricks. After Penn and Teller explain how a trick is done using their secret magic codewords, I can figure out about 2/3 of them; their language is really obscure unless you have a lot more knowledge than I do, or you already know how the trick works.


I’ve had a friend go on the show (he didn’t win), but I often wonder how I could go on… and win. You see, I’m not good enough to fool Penn and Teller. While there’s a 50% chance I could fool Penn, there’s only a slim chance I could fool the walking encyclopedia of magic that is Teller. His knowledge is so broad that it’s tough for any act to fool him.

But I still want to go on the show. So how could I do it?

Is my goal just to get exposure? No, not really. I’m not a professional magician and I have no intention of going full time. If I go on, I want to win it for the glory. You know, the glory of being a magician.

But how?

Part 2 – Here’s the plan

Here’s what I would do: rather than try to fool Penn and Teller by coming up with a new method (unlikely, because I don’t have enough knowledge to develop something completely new unless it’s electronic), or by doing a variant of an obscure-but-existing method (unlikely to fool Teller), I would turn my weakness into a strength. I know I’m not good enough to come up with a new method, but I am good enough to send false signals that the magic duo would hopefully notice and falsely conclude are how I did it.

That is, let’s suppose I did a card trick where the entire deck vanished and the audience’s selected card turned out to be the only blue card in a red-backed deck. There are multiple ways to accomplish this. My strategy would be to pick four different methods and pretend to use three of them but do them in an almost-sloppy way that a regular person wouldn’t notice but a professional would.

For example, I could have all the equipment to ditch a pack of cards in a fake pocket sewn to the inside of a suit jacket, and even go through the motions to toss them in there under the cover of making another motion. A professional magician would know to look for that especially if I did all the movements necessary to hide the deck that way. But I wouldn’t actually do it, I would only pretend to do it. Penn and Teller don’t always see how a move is done but they do know when it was done because it’s hard to shield a move completely.
I would similarly do this with two or three other methods. The idea would be to get them to commit fairly early to the method I was supposedly using. Then, when it came time for them to guess what I did, I would confirm I made all those moves, but that’s not how the trick works. They would be forced to go back into their memories and come up with an alternative explanation, and as long as I made the faux methods more obvious than the real one, they would run out of explanations and be forced to concede that a mediocre magician like myself fooled them.

We humans can’t keep that many things in short-term memory, particularly after we’ve committed to something; I would take away the reason for them to continue paying close attention. By getting them to commit early, I could short-circuit their intentions to figure out what I am doing; their brains would confabulate later on how I did do the trick and wouldn’t be reliable enough to come up with the real explanation except by accident.

So, my whole strategy is to send false signals and violate expectations. Penn and Teller would be expecting me to (1) conceal the method and (2) rely upon my skill (3) in hopes they don’t notice; but instead, my plan would be to assume they will notice, but flood their filters with the wrong data (and also use some behavioral psychology about the way humans make decisions). If you can’t trust the signals you are reading, then you can’t trust the process you’re using. If you can’t trust your entire process, then your ability to succeed shrinks massively.

This is effectively the technique that the authors of the Stuxnet worm used [3]. By causing the hardware to damage the material while simultaneously causing the dials to show that nothing was wrong, it prevented the operators from troubleshooting the problem. Everything looked normal.

That would be my plan to victory on Penn and Teller: Fool Us.



Part 3 – How does this relate to phishing?

From time to time, people in the industry ping me to let me know that a mailbox on outlook.com, Hotmail, or Office 365 is being used as a phishing drop box. That is, a phisher signed up for the service for the purpose of receiving user credentials; they then send email from the same account (or, more usually, another one) asking users to reply with their usernames and passwords. This can be an IT phish (your mailbox is full) or financial fraud (please reply with your username and password to unlock or verify your account). The account that the user replies to is called a phishing drop box.

Our standard operating procedure is to shut down accounts when they are brought to our attention. After all, it is against the Terms of Use to use our service for the purpose of spam, malware, or phishing.

I want to improve this process.

As soon as you shut down a phishing account, the phisher has been tipped off that they have been discovered. They simply abandon the account, sign up for a new one, and possibly morph the content of the phishing messages they were sending so they can avoid detection a little while longer. But the data that they have collected is still valid – usernames and passwords which can be used to break into user accounts.

Let’s change things up.

A different strategy is to borrow my Penn and Teller: Fool Us approach and send false signals. The phisher is harvesting credentials to sell on the black market. Rather than shutting the email account down and tipping the phisher off that they have been discovered, we should intercept the message and modify its content. Where the user has entered their username and password, we should randomly rewrite the password with wrong characters so it no longer works. The message would still be delivered to the phisher, so they’d be unaware that it had been tampered with.
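As a rough sketch of what that interception might look like: find the password in the reply and overwrite a few characters before delivery. The "Password:" line format and all names here are assumptions purely for illustration; a real system would have to handle arbitrary phish formats:

```python
import random
import re
import string

def tamper_password(message_body, n_changes=3, rng=None):
    """Rewrite a few characters of a harvested password in transit so the
    credential no longer works, while the message still looks untouched."""
    rng = rng or random.Random()
    match = re.search(r"(Password:\s*)(\S+)", message_body)
    if not match:
        return message_body  # nothing that looks like a password; deliver as-is
    prefix, password = match.groups()
    chars = list(password)
    # overwrite a few random positions with random alphanumerics
    for i in rng.sample(range(len(chars)), min(n_changes, len(chars))):
        chars[i] = rng.choice(string.ascii_letters + string.digits)
    return (message_body[:match.start()] + prefix + "".join(chars)
            + message_body[match.end():])
```

The key property is that the message length and shape are preserved, so neither the victim nor the phisher sees anything amiss; only the credential itself is poisoned.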

While spammers and phishers use their harvested data to break into accounts, they also prefer to sell large chunks of user credentials on the black market. Because the data they’ve collected will be low quality, it disrupts their business model (“I bought these passwords from you and none of them work!”). If the buyer cannot trust the seller, it undermines the market as a whole.

For the password harvester, this makes it difficult to troubleshoot where it’s all going wrong. The drop box still works; messages are still getting delivered. Since they are probably automating the parsing of the emails containing usernames and passwords, or at the very least copy/pasting into a file without validation, it is difficult to reverse engineer where the signal quality is degrading. Are the users mistyping their information accidentally? On purpose?

Disrupting the business model in this manner raises the cost of business for the phisher. This strategy (raising the cost of business) has been used in the past with some success. Nothing is perfect, but it doesn’t need to be perfect, it just has to raise the cost enough to make it not worthwhile.

Part 4 – How might a phisher react?


One way is to test some of the passwords – data validation. The phisher can randomly try a handful of passwords to make sure that they haven’t been tampered with. But there are two counter-strategies to this:

  1. Only modify some of the passwords. In the event that the phisher gets lazy and only validates some of his data, when he resells it on the black market there will be enough bad data in there that his reputation will be degraded. If you buy a box of apples and 1/3 of them are consistently rotten, you will soon stop buying from that grocery store. If all batches of apples are rotten, then either the apple market goes away, or you have to spend a long time sorting through apples.
  2. Keep track of which passwords you modify, and then when the phisher tries to log in with the fake password, let them log in the first time (or show a fake user account). Then toss the fake password away so it can’t be re-used. The phisher will be tricked into believing the accounts are valid when in reality they are not. What’s more, if the algorithm for password modification were standardized across vendors (Hotmail, Gmail, Yahoo, etc.) with a common shared key, then a drop box on outlook.com that harvests Gmail users’ passwords would modify them in transit using that common key, giving the phisher fake signals when testing them on Gmail. This requires co-ordination among vendors, but it also throws a wrench into the phisher’s plans: they cannot reliably sell the information they are stealing because they will develop a reputation for selling low-quality data.
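One way the shared-key scheme in point 2 could work is to derive the decoy password deterministically from the account name with a keyed hash, so any cooperating vendor holding the key can recognize decoy logins without coordination on every individual password. This is a speculative sketch; the key, lengths, and function names are all invented:

```python
import hmac
import hashlib

SHARED_KEY = b"example-shared-key"  # placeholder; would be provisioned securely

def decoy_password(username, length=12):
    """Derive a deterministic, password-shaped decoy for this account."""
    digest = hmac.new(SHARED_KEY, username.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

def is_decoy_login(username, candidate_password):
    """A cooperating vendor can recognize (and honeypot) a decoy login."""
    expected = decoy_password(username, len(candidate_password))
    return hmac.compare_digest(expected, candidate_password)
```

Because the decoy is a pure function of the username and the shared key, the vendor substituting passwords in transit and the vendor being attacked never need to exchange per-account state.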

This is obviously a complex game of counterintelligence. And I’m not even sure it would work. What I would suspect would happen is an acceleration of what is already occurring – a move into targeted spear phishing (i.e., business email compromise) where stealing one person’s password is more useful if that person is a big fish. That way, it is possible to manually verify credentials.

But on the other hand, if it works, it would raise the phisher’s cost of business and force them to go through more hoops.

Anyhow, those are some random thoughts on a Tuesday afternoon. Let me know what you think in the comments below.


[1] I get paid better fighting spam than I did performing magic. I think that played a big part in why my practicing declined as much as it has.

[2] Mentalism is the branch of magic involving mind reading, making predictions, and so forth. Well-known practitioners include Max Maven, Bob Cassidy, and Derren Brown.

[3] Not exactly but you get my point.

Exchange Online increases its URL filtering


One of the ways in which Exchange Online detects spam, malware, and phishing is through URL filtering. We use a variety of sources; you can find them here:

https://technet.microsoft.com/en-us/library/dn458545(v=exchg.150).aspx

We use URL reputation lists in the following way (including but not limited to):

  1. At time-of-scan, if a message contains a URL that is on one of the lists we use, a weight is added to the message. This weight is added to all the other features of a message to determine a message’s spam/non-spam status, and also sets the Spam Confidence Level (SCL). Different lists have different weights.
  2. The URL lists are also used as inputs into our machine learning algorithms to see if there are any similarities between URLs, and between messages with URLs. This is so our filters can make predictions in the future about messages with URLs that are not yet on any of our lists but may be in the future. That is, we are trying to pre-emptively determine that a message containing a malicious URL is spam, malware, or phishing prior to the URL being added to a reputation list.
  3. Our Safe Links feature, which is part of Office 365’s Advanced Threat Protection, uses mostly (but not entirely) the same set of URLs as the spam filter when deciding whether to block a link we think is malicious when a user clicks on it (if they have Safe Links enabled).
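The weighting scheme in #1 can be sketched as follows. The list names, weights, and threshold are all illustrative, not EOP’s real values, and in production the URL weight is only one feature among many:

```python
# Hypothetical sketch of #1 above: each reputation list carries its own
# weight, hits add to the message's score, and the total maps to an SCL.
LIST_WEIGHTS = {
    "list_a": 4.0,   # e.g., a high-confidence phishing list
    "list_b": 2.5,   # e.g., a broader, noisier list
}

def url_weight(urls, reputation_lists):
    """Sum the weights of every list containing one of the message's URLs."""
    score = 0.0
    for name, weight in LIST_WEIGHTS.items():
        if any(u in reputation_lists.get(name, set()) for u in urls):
            score += weight
    return score

def scl_from_score(total_score, spam_threshold=5.0):
    # In reality the URL weight is combined with all other message features;
    # here it is treated as the only feature, purely for illustration.
    return 9 if total_score >= spam_threshold else 1
```

A URL on both lists contributes both weights, which is why different lists having different weights matters: two weak signals can add up to a spam verdict.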

We publish all the URL lists that we use at the link above. However, going forward, we may or may not publish every list.

You see, we recently expanded the number of URL sources we pull from. Whereas before we were going for volume, nowadays adding more and more URL lists does not necessarily give you better coverage. Just stuffing more links into a list gives diminishing returns because spammers and phishers churn through them so rapidly; the result is a list of 10 million entries, 99% of which are never seen.

Instead, we’ve been looking to shore up our lists by quality. We are not necessarily targeting the size of the list, but rather are diversifying based upon origin.

– How frequently does it update?

– What sources does it come from?

– Does it overlap with our existing lists? (This is an important factor.)

– Does it overlap much with another list we are evaluating?

– How much additional value does it generate relative to the price the vendor wants to charge us?

– Does it specifically target phishing?

– Does it specifically target malware? (These last two are important because we can use lists that target those two types of spam as part of our Safety Tips feature.)

The way we try out a new list is to pull it down from the source, push it out to production, and put it in pass-through mode. We observe how much overlap there is between the contents of the list and our own traffic. We then start pushing up the weight of the list, but only apply it at time-of-scan, and watch for false positives. We continue to increase the aggressiveness of the list until it’s as far as it’s going to go, at which point we enable it for machine learning and also for Safe Links. If we get false positives, we either decrease the weight of the list, figure out the root cause of the false positives (e.g., syntax errors in the list, problems with the downloaders), or stop using the list altogether.
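The pass-through stage above amounts to scoring a candidate list against verdicts the existing filter has already rendered, without acting on it. A minimal sketch of that measurement, with invented names and a toy data shape:

```python
# Hypothetical sketch of pass-through evaluation: count how often the
# candidate list agrees with existing spam verdicts (overlap/hits) versus
# how often it would have flagged mail the filter considers legitimate.
def evaluate_candidate_list(candidate_urls, observed_messages):
    """observed_messages: iterable of (urls, final_verdict) pairs, where
    final_verdict is 'spam' or 'good' from the existing filter."""
    hits = fp = 0
    for urls, verdict in observed_messages:
        if any(u in candidate_urls for u in urls):
            if verdict == "spam":
                hits += 1          # overlaps with what we already catch
            else:
                fp += 1            # would have flagged legitimate mail
    return {"hits": hits, "false_positives": fp}
```

A list with high hits but near-zero false positives earns a higher weight; a list whose false-positive count climbs gets its weight reduced or is dropped.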

The goal of this is to get better protection for our customers while avoiding disruption to legitimate mail flow. That’s a balancing act and usually takes about four weeks from when we start to when we complete.

Anyway, as I was saying earlier, we’ve included several new lists over the past few weeks; some of them are being used in #1-3 above, some others only in #1, and a couple more are at stage 0. But whereas we revealed what our previous lists are, we don’t necessarily plan to identify the new ones. This is for a couple of reasons:

  1. The sources have asked not to be identified
  2. By revealing which sources we use, a phisher can try to game the system and we are trying to prevent that

We still manage false positives by doing cost/benefit analysis on the sources, and we would stop using any that do not provide benefit relative to the mailflow disruption they might cause.

So there you go; that’s what’s new in Exchange Online Protection over the past four weeks. We’ve incrementally started making your experience better, all in an effort to ensure you have the best email protection possible.

The outbound IP and HELO format for Office 365


Office 365 is regularly asked by other email receivers about the way our mail servers and IP addresses are set up, and about the need to conform to a particular standard. That standard (which is really a convention implemented by some receivers, not all of them) is that the IPs have Forward-Confirmed Reverse DNS, and that these also match the HELO/EHLO strings used when sending outbound email. For example, suppose Office 365 were sending from the IP 1.2.3.4. This would require the following:

  1. The IP 1.2.3.4 has the reverse DNS (PTR) 4.3.2.1.protection.outlook.com
  2. The domain 4.3.2.1.protection.outlook.com has the A-record 1.2.3.4. Combined with #1, this is Forward Confirmed Reverse DNS (FC-rDNS)
  3. The outbound mail servers HELO/EHLO as 4.3.2.1.protection.outlook.com
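The three requirements above can be expressed as two small checks. To keep this sketch self-contained, DNS lookups are replaced with dictionaries; a real implementation would query PTR and A records over the network:

```python
# Hypothetical sketch of the convention described above.
def has_fcrdns(sending_ip, ptr_records, a_records):
    """Forward-Confirmed Reverse DNS (requirements 1 and 2): the IP's PTR
    name must have an A record that resolves back to the same IP."""
    ptr_name = ptr_records.get(sending_ip)
    if ptr_name is None:
        return False
    return sending_ip in a_records.get(ptr_name, set())

def helo_matches_ptr(helo_name, sending_ip, ptr_records):
    """The stricter third requirement: the HELO string equals the PTR name."""
    return ptr_records.get(sending_ip) == helo_name
```

Using the document’s example, 1.2.3.4 with PTR 4.3.2.1.protection.outlook.com pointing back to 1.2.3.4 passes both checks only if the server also HELOs as that exact PTR name, which is precisely where Office 365 diverges, as described below.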

This is a view of the world that assumes that we have one-mail-server, one-HELO, one-sending-IP, like this:

 


Office 365 aligns with the first two requirements above, but not the third. Instead:

  1. Our sending IPs have PTR records (so far, the same)
  2. All of those PTR records have A-records that point back to the same IP (so far, the same)
  3. Our EHLO/HELO strings are all generic, that is, they would be something like mail.protection.outlook.com. This is the first difference (our EHLO/HELO strings also include the data center, so it would be more like northamerica.city.mail.protection.outlook.com).
  4. The EHLO/HELO strings all have A-records with some IPs in them, but not all of the sending IPs per data center. You can only fit so many IPs into a UDP DNS response (fewer than 16), whereas we have over a hundred outbound IPs that would use this same EHLO/HELO string. Thus, it is not possible to get every IP to point back to the EHLO string.
  5. The PTR domain and the EHLO domain each align with *.protection.outlook.com, and that’s as close as they will get in most cases.
  6. Each EHLO/HELO string has an SPF record which will contain the sending IP

Thus, the architecture of the mail server is that it uses a generic EHLO and can go out any IP within the data center that is in our outbound IP pool, like the below:

We do it this way because it gives us more flexibility when we have delivery problems. Since we are a cloud service, we can’t lock ourselves into certain IPs; we need to allow the mail servers to route out a different set of IP addresses when needed (e.g., for load balancing). The one-server, one-HELO, one-IP model may have made sense when the Internet ran on on-premises mail servers, but not now, with the advent of cloud services.

As an example, here’s a message I sent from my work account to my Gmail account:

HELO: NAM02-BL2-obe.outbound.protection.outlook.com (PTR: mail-bl2nam02on0131.outbound.protection.outlook.com. [sending IP: 104.47.38.131])

– PTR record of 104.47.38.131 = mail-bl2nam02on0131.outbound.protection.outlook.com

– A-record of mail-bl2nam02on0131.outbound.protection.outlook.com = 104.47.38.131 (matches sending IP, so we have FC-rDNS)

– A-record of NAM02-BL2-obe.outbound.protection.outlook.com = 207.46.163.79 (this is not the sending IP, nor is it even in the same /8)

– SPF record of NAM02-BL2-obe.outbound.protection.outlook.com = v=spf1 include:spf.protection.outlook.com -all (covers both 104.47.38.131 and 207.46.163.79)

– HELO and PTR domains each contain outbound.protection.outlook.com

The PTR and HELO each contain nam02 (North America, #2) and BL2 (referring to the data center). But the PTR has more information in it than the HELO.

That’s how our outbound IPs and EHLO strings currently work. Any third parties who require them to match stricter requirements would need to exempt Office 365’s IP ranges from those checks if they want to receive email from customers hosted on Office 365.

 

Sending mail with invalid From: addresses to Office 365


One of the changes to go into Office 365 in the past year is an antispam rule that rejects messages with an invalid From: address. When this occurs, the message is rejected with:

550 5.7.512 Access denied, message must be RFC 5322 section 3.6.2 compliant and include a valid From address

If you look up RFC 5322 section 3.6, it says that each message must have one and only one From: address:

   +----------------+--------+------------+----------------------------+
   | Field          | Min    | Max number | Notes                      |
   |                | number |            |                            |
   +----------------+--------+------------+----------------------------+
   | from           | 1      | 1          | See sender and 3.6.2       |
   +----------------+--------+------------+----------------------------+

The structure of a From address is then described in section 3.6.2.

For many years, Exchange server allowed senders and recipients to send messages with malformatted From: addresses. That is, something like this was permitted:

From: <blah>

From: “Example Sender”

Even though this violates RFC 5322 (published in 2008), and RFC 2822 (published in 2001) before it, there are still lots of mail servers that send malformatted email in this way. However, it doesn’t work if you try to send to other services. For example, sending a message that way to Hotmail/Outlook.com results in the message bouncing; sending it to Gmail similarly results in a bounce. Indeed, Gmail even forces you to put angle brackets around the email address in the SMTP MAIL FROM. For example, the first line below is rejected by Gmail, the second is accepted:

MAIL FROM: not@acceptable.com

MAIL FROM: <okay@acceptable.com>

Exchange accepts them both. So does Office 365.

Exchange enforces this more loosely because in a corporate environment, many applications run on older or buggy platforms but send wanted email; people also frequently write scripts that transmit email without configuring them to send RFC-compliant mail. Large services like Gmail and Outlook.com are pickier about protecting their users, but in a corporate environment where messages circulate privately, the rules are not as strictly enforced if it’s just you sending to yourself.

Given all that, late in 2015, we started seeing massive outbound spam attacks from malicious spammers who signed up for the service. They would send spam with an empty MAIL FROM and an empty From: address:

MAIL FROM: <>
From: <>

We measured the proportion of spam using this pattern; 98-99% of it was being marked as spam (and thus delivered out of our high risk delivery pool), and its total volume was well into the millions per day.

This had numerous drawbacks:

  1. The amount of spam being generated was taking up bandwidth from legitimate email
  2. We were still relaying junk to the Internet and the double null-sender was making it difficult to track down the spammers
  3. The misclassified spam was high enough that it was impacting the quality of our low risk outbound delivery pools. This means that customers were impacted because spammers would get our IPs listed on IP blocklists, affecting our entire customer base

Combining the facts that RFC 2822, published back in 2001, had already specified the proper format of an email address, that there was so much outbound spam, and that the workaround was simply for owners of system-generated email scripts to fix those scripts (rather than us continually chasing down spammers), Office 365 decided to crack down on these types of messages:

If you send email to Office 365 with a null SMTP MAIL FROM <>, then the From: address must contain <email@address.TopLevelDomain> including the angle brackets.
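That rule can be sketched as a small check. This is a hypothetical simplification of the real filter, using a rough regular expression for an angle-bracketed address:

```python
import re

# Sketch of the null-sender rule: if the SMTP MAIL FROM is <>, the
# From: header must contain an angle-bracketed email address. The
# regex is a rough approximation of <email@address.TopLevelDomain>.
ANGLE_ADDR = re.compile(r"<[^<>@\s]+@[^<>@\s]+\.[A-Za-z]{2,}>")

def accept_null_sender(mail_from: str, from_header: str) -> bool:
    if mail_from.strip() != "<>":
        return True  # the rule only applies to the null sender
    return ANGLE_ADDR.search(from_header) is not None

print(accept_null_sender("<>", "<sender@contoso.com>"))  # True
print(accept_null_sender("<>", "sender@contoso.com"))    # False -> 550 5.7.512
```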

From time to time, we get senders telling us that we are mistakenly blocking the following message with the previously mentioned error response:

MAIL FROM: <>
From: sender@contoso.com

It is not a mistake; we require a From: address to have angle brackets if the SMTP MAIL FROM is <>. Just as different email services have different requirements – Gmail requires angle brackets around the SMTP MAIL FROM, Hotmail always requires a valid From: address – Office 365 requires that the From: address be formatted a certain way when the MAIL FROM is <>.

Because Office 365 deals with legacy mail servers sending traffic to the service, there are certain RFC requirements that the service is not in a position to enforce without potentially causing customer disruption. At the same time, we are trying to keep the null sender rule simple; it is too confusing to have a complicated if-then-elseIf-elseIf-else logic tree for sending with a null sender [1]. And Office 365 is still much more relaxed than almost any other mail platform even with this restriction in place.

This is the balance that has been struck in our attempts to support legacy mail servers while running a cloud-based service, yet keeping spammers off the network.


[1] There are lots of other different ways that spammers try to shove email through the system using visual display tricks that are rejected by some recipients, but allowed by others. Yet a complicated AND-OR-NOT would be too difficult to explain to anyone who asked what the logic is, and it wouldn’t be long before even the engineering team couldn’t maintain it. Simplicity is our goal here, and we achieved it.

For example, when someone says their email is getting rejected, it’s a simple explanation to say “Add angle brackets around the From: address.”

How we moved microsoft.com to a p=quarantine DMARC record


In case you hadn’t noticed, Microsoft recently published a DMARC record that says p=quarantine:

_dmarc.microsoft.com. 3600 IN TXT “v=DMARC1; p=quarantine; pct=100; rua=mailto:d@rua.agari.com; ruf=mailto:d@ruf.agari.com; fo=1”

This means that if any sender transmits email as microsoft.com – whether into Microsoft’s corp mail servers or to any other domain that receives email – and the message is spoofed (it doesn’t pass SPF or DKIM, or it passes one of the two but doesn’t align with the domain in the From: address), the message will be marked as spam.
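A DMARC TXT record like the one above is a semicolon-separated list of tag=value pairs, so a minimal parser sketch looks like this (a simplified illustration, not a full RFC 7489 parser):

```python
# Minimal sketch of parsing a DMARC TXT record into its tag=value
# pairs (no validation of tag values is attempted).

def parse_dmarc(record: str) -> dict:
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = ("v=DMARC1; p=quarantine; pct=100; "
          "rua=mailto:d@rua.agari.com; ruf=mailto:d@ruf.agari.com; fo=1")
tags = parse_dmarc(record)
print(tags["p"], tags["pct"])  # quarantine 100
```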

So how did we do it?

Let me run you through the steps because it took a couple of years.

1. First, the domain MUST publish an SPF record

Microsoft’s SPF record is the following:

microsoft.com.          3600    IN      TXT     “v=spf1 include:_spf-a.microsoft.com include:_spf-b.microsoft.com include:_spf-c.microsoft.com include:_spf-ssg-a.microsoft.com include:spf-a.hotmail.com ip4:147.243.128.24 ip4:147.243.128.26 ip4:147.243.1.153 ip4:147.243.1.47 ip4:147.243.1.48 -all”

It used to be over the 10-DNS-lookup limit, and it used soft fail ~all instead of hard fail -all.

.

2. Second, the domain MUST publish a DMARC record.

I recommend you send your DMARC reports to a 3rd party to avoid having to parse XML reports yourself. Various options include Agari, ValiMail, or DMARCIAN. Microsoft uses Agari (Agari pre-dated the other two options [1] at the time we published DMARC records for Microsoft).

.

3. Start looking at DMARC reports

Various 3rd parties then started sending all of the DMARC reports back to Agari. This is important because Agari’s tools parse through the DMARC reports and make it possible to see who was and was not sending email in an SPF-compliant way.

To do this, I would log in to the Agari portal and navigate to ANALYTICS > Data Explorer and then Modify Settings.

 

2016-09-26-Agari-login-portal

.
I would change the report settings to the single domain I wanted to look at (in this case, Microsoft). If I didn’t change it, I would be looking at the entire set of Microsoft-protected domains.

2016-09-26-Agari-choose-your-domain

In the above picture, it shows “email.microsoftonline.com” but I could select any domain I wanted.

When selected, the DMARC trend would show up for that domain. Here’s a screenshot from last December for email.microsoftonline.com. You can see that on Dec 23 there were a lot of messages failing DMARC. That doesn’t necessarily mean it was a large spam run; it could have been a large bulk email campaign from an unauthorized sender sending on Microsoft’s behalf. Remember, at this point, email.microsoftonline.com only had a soft fail (or hard fail, I forget) in its SPF record, and a DMARC record of p=none, so nobody would have junked this email automatically.

2016-09-26-Agari-spoof-trend

But other than that, most messages were passing authentication which is a good sign. It is the red ones that needed investigation.

Agari inventories all sending IPs and grades them by SBRS – Senderbase Reputation Score, which is Cisco/IronPort’s IP reputation. The higher the score, the better the reputation of the sending IP. In general, anything over 0.2 is probably a good IP or a forwarder, anything less than zero is suspicious, and anything without an SBRS is probably suspicious, though it depends on the PTR record.
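That triage guideline can be sketched as a small classifier. The thresholds (0.2 and 0) are the rough rules of thumb from the text, not official Cisco cutoffs:

```python
# Sketch of the SBRS triage rules of thumb described above.
# Thresholds come from the text, not from Cisco documentation.

def triage_by_sbrs(sbrs) -> str:
    if sbrs is None:
        return "suspicious - check the PTR record"
    if sbrs > 0.2:
        return "probably a good IP or a forwarder"
    if sbrs < 0:
        return "suspicious"
    return "needs investigation"

print(triage_by_sbrs(3.5))   # probably a good IP or a forwarder
print(triage_by_sbrs(-4.2))  # suspicious
```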

For the above IPs that were failing DMARC, sorted by highest to lowest SBRS:

2016-09-26-Agari-with-SBRS

a) *.outbound.protection.outlook.com is Office 365 forwarding to another service like Hotmail, Gmail, Yahoo, etc. As of Sept 2016, Office 365 modifies message content when forwarding email so this can break the DKIM signature. SPF will similarly break, and this is what breaks DMARC. However, you can see below that the number of messages is fairly small, only a handful per day.

b) *.sharepointonline.com is also on the list with a few more messages per day, but still not very high. This may be forwarding.

c) There are a handful of other IPs with good reputation failing DMARC. These are also likely forwarders. As long as the numbers are not too high, this is fine. It is only when these numbers get into the hundreds or thousands that this will cause significant FPs.

Quickly glancing over the below, we don’t see too many good IPs failing DMARC which is good.

If I sort by lowest-to-highest SBRS:

2016-09-26-Agari-with-SBRS-lowest-to-highest

These IPs are sending reasonably high volumes of DMARC failures but all have terrible reputation. This is balanced against a small handful of good sending IPs (above). In general, email.microsoftonline.com passes DMARC and is mostly spoofed by bad sources except on Dec 23, 2015 above when it was spoofed in large amounts.

This domain is likely safe to move to a more aggressive DMARC record. We published p=quarantine for email.microsoftonline.com because it was fairly straightforward.

.

4. Go after the more complicated domains by breaking it down one by one

While email.microsoftonline.com wasn’t too bad, microsoft.com was much more complicated.

a) There were at least 25 different teams that I could find in the good senders list, e.g., visualstudio [at] microsoft [dot] com. Some of them were sending from 3rd party bulk senders, some were sending from our internal SMTP team (which I discovered while doing this project – they are the ones that send MSN and Outlook.com marketing messages), and some were sending from random mail servers from some of the buildings on Microsoft’s campus.

b) To discover these, I would click on the sending IP (sorted by SBRS) and try to look for a message with a DMARC forensic report. If the message looked legitimate, I took the localpart of the email address (e.g., visualstudio) and then looked it up in Microsoft’s Global Address List (GAL). About 2/3 of the time, it resolved to a distribution list. I then had to go to another internal tool and look up the owner of that distribution list and contact them personally.

For the other 1/3, I had to do a lot of creative searching of the Global Address List. If I found a sending email address that failed DMARC and it had the alias healthvault [at] microsoft [dot] com and I couldn’t find it in the GAL, I had to type around using auto-complete until I found something that looked similar. Sometimes I had to do it 3, 4, or 5 times. But I managed to track them all down.

When I did, I would get them to either send from a subdomain (e.g., email.microsoft.com) if they were sending from a 3rd party like SendGrid, use the internal SMTP solution, or send from a real mailbox within Microsoft IT’s infrastructure. I sometimes had to cut a series of tickets requesting DNS updates to @microsoft.com, @email.microsoft.com, and a couple of other subdomains to ensure that they had the right 3rd party bulk mailers in the SPF record. I had to do this for 25 different teams.

If that sounds like a lot of work, that’s because it was.

But, nobody pushed back on it. Whenever I contacted anyone, they would make the required changes. Sometimes within a day, sometimes within two weeks. But it got done.

When we figured we had enough senders covered, we published SPF hard fail in the SPF record for microsoft.com.

.

5. Wait for any additional false positive complaints and fix them as you find them

We knew that probably wasn’t going to be sufficient. Even though a lot of third parties send email as Microsoft to the outside world, probably just as many send it only into Microsoft – not to third parties – which meant it was being sent through Office 365. At the time, Office 365 didn’t send DMARC reports (we still don’t, as of this writing), and we didn’t have a good way to detect who was spoofing the domain. But because Microsoft published an SPF hard fail, these messages would frequently get marked as spam.

So, as we found one-off senders, we simply added them to local overrides within the Office 365 service: we added them to IP Allow lists, added them to Exchange Transport Rules that skip filtering, or jiggered around the SPF record to get them in if it made sense.

We did this for about a month but at no point did we revert the SPF hard fail. Once we reached that point, there was no going back.

Doing a proactive analysis didn’t find all the potential false positives; it was only by publishing a more aggressive policy that we were able to find more legitimate senders.

.

6. Set up DKIM for your corporate traffic

We continued in this manner for about a year. Occasionally a 3rd party sender would ask us to set up DKIM, so I would assist by creating the necessary change requests for the DNS team to make the update. Along the way, I found that there were at least five different processes for updating DNS records for domains owned by Microsoft.

I wouldn’t be surprised if it’s the same at other large organizations.

But the day came when Office 365 released outbound DKIM signing. The very first customer I got this working for was Microsoft itself. I knew right then that this was the key to getting Microsoft to p=quarantine.

For you see, you should not go to p=quarantine without setting up both SPF and DKIM. If one fails, you can usually fall back on the other to rescue a message. I knew that a lot of Microsoft’s corp traffic is forwarded, so it had to have DKIM signatures attached. I also knew that while some third parties don’t set up DKIM, a large chunk of them could. Believe me, if legitimate senders can’t get their email delivered, they find a way to contact me for help. At that point, I would either get them into the SPF record or, preferably, set up DKIM so they could sign on Microsoft’s behalf.

.

7. Publish an even more aggressive policy for messages sent to your domain

At this point, we were ready to roll.

Within Microsoft’s tenant settings in Office 365, we created an Exchange Transport Rule (ETR) – if the message failed DMARC, mark the message as spam. This was the same as publishing a DMARC record of p=quarantine internally, and p=none externally.

Before we did this, I pulled all the data for messages sending into Microsoft and failing DMARC, looking for good senders. This was much harder because I didn’t have Agari’s portal. We then added a bunch of good IPs into an ETR Allow list (sending IP + From: domain = microsoft.com) and went live with the rule.

We immediately started seeing false positives all over the place. But we didn’t roll back the rule, we just added them into the local overrides. This lasted for about a month and then it stabilized. Yes, people didn’t like that legitimate messages were going to junk; but we explained that we were clamping down on spoofing and phishing. When we added the local overrides, the problem went away.

.
8. Publish a stronger DMARC policy and roll it out slowly, fixing false positives as you find them

We waited several months to ensure that nothing else would break. The occasional good sender would ask to be allowed to send. Microsoft’s SPF record is full, so it’s not easy to add new senders; we try to add only senders from infrastructure that we control (e.g., our own data centers, or sending IPs in Azure that are locked to Microsoft).

We then decided to publish a DMARC record of p=quarantine at 1% (pct=1). I knew it wouldn’t affect inbound traffic to Microsoft corp because we’d had the equivalent in place for a few months. I wasn’t sure what would happen for external email.

We published it and… almost nothing happened.

I may be misremembering, but the only incident I can remember (or maybe it’s the only big incident) is a couple of mailing lists were being sent to Gmail, and they were being junked. I’m not sure if all messages were being junked, or only 1%, but it sure felt like it was all of them.

Fortunately, Gmail’s system learns to override DMARC failures if you rescue them enough. The problem seemed to resolve itself eventually.

We then moved to 5%, then 10%, then 25%, then 60%. The whole time we waited for false positive complaints, but almost none came. We finally published p=quarantine. Nothing happened, I haven’t seen any major complaints since we did that. I think it’s because we cleaned up so much ahead of time and were able to predict in advance what would happen. And once you reach that harder security stance, it’s rare to flip it back the other way. These days, at least with regards to email security, the direction only moves forward.
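The pct= ramp-up works because receivers apply the requested policy to only a sampled fraction of DMARC-failing messages; per the DMARC spec, unsampled messages get the next-weaker policy. A sketch of that receiver-side behavior:

```python
import random

# Sketch of receiver-side pct= sampling per the DMARC spec: the
# published policy applies to roughly pct% of failing messages;
# the rest get the next-weaker policy (reject -> quarantine,
# quarantine -> none).

FALLBACK = {"reject": "quarantine", "quarantine": "none"}

def effective_policy(p: str, pct: int, rng=random) -> str:
    """Pick the disposition for one DMARC-failing message."""
    if rng.randint(1, 100) <= pct:
        return p
    return FALLBACK.get(p, "none")

# At pct=1, only about 1% of failures are actually quarantined.
random.seed(0)
quarantined = sum(
    effective_policy("quarantine", 1) == "quarantine" for _ in range(10_000)
)
print(quarantined)  # on the order of 100
```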

.

Can I summarize this quickly?

Hmm, maybe.

  1. If you’re going through the process of tightening your email authentication records, you don’t get that much pushback as long as you take a great deal of care ahead of time to avoid problems down the road. If you do that, you will build a lot of trust. This is even more true if you have a plan, publish it, and execute on it.
    .
  2. The number of messages you can prevent from being spoofed doesn’t move the needle much when you’re trying to justify the work to your superiors. Hundreds of messages per day has about the same psychological impact as millions per day. However, blocking several hundred or thousand legitimate emails per day is really, really bad. That undermines your effort, so avoid it at all costs.
    .
  3. It takes a long time to do the work. It also requires a lot of analysis, so make sure you have the right tools.
    .
  4. You’re going to get false positives no matter what. Be prepared to fix them.
    .
  5. Once you go strict, don’t go back (as long as you’ve done #1-3). Just fix the problems when you find them.

So that’s how we published p=quarantine for Microsoft.com. It took a while, but now it’s complete. Hopefully others will find this helpful.

.


[1] Sometimes people ask me which service they should use. I respond by saying that DMARCIAN has a lot of do-it-yourself tools that are good for small and medium sized organizations. Agari is geared towards larger organizations but has since branched out into more products besides DMARC reports. ValiMail does DMARC reports but helps you semi-automate the procedure so you can get to p=quarantine/reject faster than if you do it yourself.

Messages going to Junk even though they aren’t spam? Check to see if you have Safe-Lists-Only enabled


Recently, I’ve been seeing a spike in customer escalations saying that messages that aren’t marked as spam are nevertheless getting sent to the Junk Mail folder. This is despite the message headers indicating that the message is non-spam, that is, the X-Forefront-Antispam-Report header says “SFV:NSPM” (Spam Filter Verdict: Non-spam) and “SCL:1”.

The most common reason this happens is because the user has “Safe Lists Only” enabled in their Outlook email client, or has it set that way in Outlook Web Access (OWA).

For users whose mailboxes are hosted by Office 365, then if checking your email in OWA, there will be a yellow Safety Tip at the top of the message that says this:

2016-10-12-marked-junk-because-only-safe-senders-list

If your organization has been enabled for inline Safety Tips, it will say the same thing. We are in the process of rolling this functionality out for everyone, so just wait if you don’t see it yet in your non-OWA client.

For the user to check if that option is set, in Outlook navigate to Junk > Junk E-mail Options… and then look for the Safe Lists Only radio button. If selected, all non-safe-sendered traffic will go to Junk. There is nothing in the message headers that indicates that this option is set.

2016-10-12-Outlook-junk-email-options

2016-10-12-Outlook-safe-senders-only

For a user checking it in OWA, click on Options (the ‘gear’ icon in the top right) > Block or Allow and then scroll down to the bottom for a checkbox Don’t trust email unless it comes from someone in my Safe Senders or Recipients list:

2016-10-12-OWA-mail-options

2016-10-12-OWA-safe-senders-only

The user may then select or deselect as desired. Checking the option may send a lot of email to Junk that is actually legitimate unless they have a large safe senders list (but having a lot of safe senders may cause spam or phish to get delivered to the inbox).

As an administrator, rather than having the user check their email in OWA for the yellow Safety Tip, or having them navigate Outlook or OWA, you can check it directly.

  1. Connect to Exchange Online using Powershell
    .
  2.  Run the following cmdlet:
    Get-MailboxJunkEmailConfiguration user@example.com | fl TrustedListsOnly
    
    TrustedListsOnly : True

    If TrustedListsOnly is True, then that setting is enabled.

If your users are saying that messages are going to Junk despite them being marked as non-spam, you can start here. If that doesn’t help, you may need to create a support ticket.


Related article:

  • Prevent email from being marked as spam in EOP and Office 365
    https://support.office.com/en-us/article/Prevent-email-from-being-marked-as-spam-in-EOP-and-Office-365-74aaade0-efc0-46ac-b949-f2d1d59256fa?ui=en-US&rs=en-US&ad=US&fromAR=1

Hotmail/Outlook.com evaluates DKIM a little differently than Office 365


If you’re a user in Hotmail, Outlook.com, or any other of Microsoft’s consumer email services, you may notice that it evaluates DKIM a little differently than you might expect (you would only notice this mostly as someone who is trying to troubleshoot delivery, as an average user you probably wouldn’t notice it at all unless you were forwarding email).

Suppose you get a message that is DKIM-signed by a large email provider; let’s call them bulksender.com. They are sending email on behalf of a financial institution, woodgrovebank.com. Bulk Sender sends the email campaign, sets themselves as the SMTP MAIL FROM, and signs it with their own DKIM key. Woodgrove Bank has neither SPF nor DKIM nor DMARC records set up. So we have the following:

Return-Path: <notifications+random_guid@bulksender.com>
DKIM-Signature: v=1; s=s1024; d=bulksender.com; q=dns/txt; c=relaxed/relaxed;
From: Woodgrove Bank <notifications@woodgrovebank.com>

Many different email services would evaluate the SPF, DKIM, and DMARC on this message and combine the results afterwards, and put them into the Authentication-Results header. For example, Office 365 would do this:

Authentication-Results: spf=pass (sending IP is 1.2.3.4)
  smtp.mailfrom=bulksender.com; dkim=pass (signature was
  verified) header.d=bulksender.com; dmarc=none action=none
  header.from=woodgrovebank.com

However, Hotmail/Outlook.com does one key thing differently [1] – it will not say that DKIM passed if the signing domain is not the same as the From: domain. For example:

Authentication-Results: hotmail.com; spf=pass (sender IP is 1.2.3.4)
  smtp.mailfrom=notifications+random_guid@bulksender.com;
  dkim=none header.d=woodgrovebank.com; x-hmca=pass
  header.id=notifications@woodgrovebank.com

Even though the DKIM signature should have passed, Hotmail reports dkim=none, despite there being a header.d value.

The reason for this is that Hotmail expects the signing domain in the DKIM signature to be the same as (or align with) the domain in the From: address. Because in this case the signing domain is different than the From: domain, Hotmail does not consider that to be valid.

This is basically an antispoofing mechanism that requires a tighter relationship between what the user sees and who is authenticating the email. Even though DKIM does not require they be the same, and DMARC uses one or the other of either SPF or DKIM, Hotmail takes it a step further by not letting the sender send on behalf of another without the From: domain giving explicit consent (by setting up a public DKIM record). The fix for this is for the domain in the From: address to set up a public DKIM key at the same selector that the email infrastructure is sending with.
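That alignment requirement can be sketched as follows. This uses relaxed alignment (matching organizational domains) with a crude last-two-labels heuristic; real implementations consult the Public Suffix List:

```python
# Sketch of the signing-domain/From-domain alignment check described
# above. Uses relaxed alignment with a crude last-two-labels heuristic
# for the organizational domain; real implementations use the Public
# Suffix List.

def org_domain(domain: str) -> str:
    """Crude organizational domain: the last two labels."""
    return ".".join(domain.lower().split(".")[-2:])

def dkim_aligned(signing_domain: str, from_domain: str) -> bool:
    """Relaxed alignment: d= and From: share an organizational domain."""
    return org_domain(signing_domain) == org_domain(from_domain)

# bulksender.com signs, but From: is woodgrovebank.com -> not aligned
print(dkim_aligned("bulksender.com", "woodgrovebank.com"))       # False
print(dkim_aligned("notifications.example.com", "example.com"))  # True
```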

So if you do see this behavior in Hotmail, that’s what is going on.


[1] I just found out about this recently, so I thought I’d document this behavior in case anyone in the future ever asks.

Troubleshooting the red (Suspicious) Safety Tip for fraud detection checks


Introduction

It has now been about 8 months since we released our antispoofing protection in Office 365, a feature that defends against Business Email Compromise, where the From and To domains are the same. You can read more about that feature at http://aka.ms/AntispoofingInOffice365. To summarize, it defends against others spoofing your domain in the From: address – the one that appears in your email client – by figuring out if the sender is legitimate or malicious. It’s similar to how DMARC works; the key difference is that while DMARC looks for a DNS record published at _dmarc.<domain>, the antispoofing protection interpolates one if it does not exist. That is, if the domain in the From: address does not publish a DMARC record, what would it say if it did?

It has also been a few months since we first started rolling out Safety Tips to Office 365 customers (everyone will get them by mid-November). Since that time, the question that has arisen the most is this: Why am I getting a red Safety Tip in my email? This is a legitimate sender!

Suspicious_Fraud

.

Figuring out the base conditions of why it was stamped

There are 2 common reasons why a “legitimate” message gets a red Safety Tip about a message failing fraud detection checks when the From: domain and To: domain are the same, and 1 less common reason:
.

  1. The sender is sending emails from an unauthorized source that is sending as your domain in the SMTP MAIL FROM, but not in your domain’s SPF record, and is also not signing the emails with DKIM
    This occurs when a department within your organization spins up a mail server, or outsources to a Software-as-a-Service provider, who sends email as your domain but they are not in your SPF record. To determine this, open up the message headers and look at the Authentication-Results header. Suppose the service that is sending emails as you (or in the example below, contoso.com) is HRnotifications.com and they are sending from the IP 1.2.3.4:
    Authentication-Results: spf=fail (sender IP is 1.2.3.4)
      smtp.mailfrom=contoso.com; dkim=none (message not signed)
      header.d=none; dmarc=none action=none
      header.from=contoso.com

    In this case, the SMTP MAIL FROM is contoso.com, and the From: domain is also contoso.com. Contoso.com also does not publish a DMARC record. A case like this will almost always get a red Safety Tip and mark as spam because even though a DMARC record is not published, it would have failed had it been.

    I say “almost always” because Office 365 does suppress the Safety Tip and the mark-as-spam action if we have enough reputation data to determine that the message is valid. This works most of the time, but not for small senders or for senders with insufficient or poor reputation.

    Even if your domain does not have an SPF record, the antispoofing check will still apply.

    Authentication-Results: spf=none (sender IP is 1.2.3.4)
      smtp.mailfrom=contoso.com; dkim=none (message not signed)
      header.d=none; dmarc=none action=none
      header.from=contoso.com

    .

  2. The sender is sending emails that pass SPF or DKIM, but neither the domain that passes SPF nor the domain that passes DKIM aligns with the From: domain
    Just passing SPF or DKIM is insufficient to suppress the Safety Tip or mark as spam. For example:
    Authentication-Results: spf=pass (sender IP is 1.2.3.4)
      smtp.mailfrom=HRnotifications.com; dkim=pass (signature was valid)
      header.d=HRnotifications.com; dmarc=none action=none
      header.from=example.com

    Even though this passed SPF and DKIM, HRnotifications.com != example.com. One of the SPF-passing or DKIM-signing domains must align with example.com. If it doesn’t, a Safety Tip will be inserted unless Office 365 determines that the sending IP or domain has enough reputation to suppress the Safety Tip.
    .

  3. [In process of being fixed] The sender publishes a DMARC record, but Office 365 had a DNS lookup failure and DMARC TempError’ed
    A corner case we are working on fixing is when a message ought to have passed DMARC, but didn’t because of a DNS lookup failure:
    Authentication-Results: spf=pass (sender IP is 1.2.3.4)
      smtp.mailfrom=HRnotifications.com; dkim=pass (signature was valid)
      header.d=notifications.example.com; dmarc=temperror action=none
      header.from=example.com

    In this case, DMARC should have passed by aligning on the DKIM domain with the From: domain, but it couldn’t because of the DNS lookup failure. A case like this may fail the antispoofing check and insert the red Safety Tip.

    We are working on this to treat it as a DMARC BestGuessPass and supersede the TempError.
    .
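The first two conditions above boil down to one question: does some authentication method pass with a domain that aligns with the From: domain? A simplified sketch of that verdict (exact-match alignment only; the reputation-based overrides described above are omitted):

```python
# Simplified model of the exact-domain antispoofing verdict: a message
# is treated as suspicious unless SPF or DKIM passes AND the passing
# domain matches the From: domain. Uses exact-match alignment for
# brevity; reputation-based suppression is omitted.

def looks_spoofed(spf_result, spf_domain, dkim_result, dkim_domain,
                  from_domain) -> bool:
    if spf_result == "pass" and spf_domain == from_domain:
        return False
    if dkim_result == "pass" and dkim_domain == from_domain:
        return False
    return True

# Condition 1: unauthorized source sending as contoso.com
print(looks_spoofed("fail", "contoso.com", "none", None, "contoso.com"))  # True
# Condition 2: authenticated as HRnotifications.com, but From: is example.com
print(looks_spoofed("pass", "HRnotifications.com", "pass",
                    "HRnotifications.com", "example.com"))  # True
# Aligned SPF pass -> not flagged
print(looks_spoofed("pass", "contoso.com", "none", None, "contoso.com"))  # False
```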

Common scenarios when this occurs

When do the first two occur most often?

The IETF has published RFC 7960 – Interoperability Issues between DMARC and Indirection Email Flows. This document describes all the times when a valid message fails DMARC.

Since Office 365’s Exact-Domain antispoofing check is similar to DMARC, it is subject to some of the same problems. The difference between regular DMARC and Office 365’s antispoofing checks is that our checks will try to figure out automatically that the sender, though failing authentication, is legitimate so you don’t have to do anything; whereas DMARC makes you explicitly configure things to make it work. Office 365 doesn’t always figure it out automatically so the more you configure it, the lower the chances that legitimate email will get marked as spam.

RFC 7960 was written by some smart people who have done a great job of inventorying as many different scenarios as possible. The ones that I see most often are the following, along with their solutions (I discuss this more in http://aka.ms/AntispoofingInOffice365):
.

  1. A third party is sending email, on your domain’s behalf, but is not in your SPF record or DKIM-signing as your domain
    This constitutes the majority of cases. Even we here at Microsoft see this all the time. Not a week goes by where someone doesn’t ping me asking why they are getting the red Safety Tip. It’s because Microsoft has clamped down on spoofing because the problem of exact-domain spear phishing is so prevalent.The fix for this, either:a) Add the sender’s IPs to your SPF record, or add the include:<their domain> to your SPF record. See Set up SPF in Office 365.

    OR


    b)
    Have the sender DKIM-sign as your domain. This requires them to give you a public key, which you publish at a selector you specify, e.g., myselector, so that the record lives at myselector._domainkey.<yourdomain>.

    OR
    c) Add the sender’s IPs to your IP Allow list, see Configure the Connection Filter Policy in Office 365.
    OR


    d)
    Create an Exchange Transport Rule (ETR) allow rule:

    – Add the sender’s IPs to an ETR Allow rule
    – Or, if you want to be more secure, add the sender’s IPs + the sender domain to an ETR Allow rule
    – Or, if you want to impress your friends, create an ETR Allow rule that looks for “spf=pass” in the Authentication-Results header and checks that the sending domain in the envelope is the domain sending on your behalf (e.g., HRnotifications.com). Alternatively, look for “dkim=pass (signature was verified) header.d=<signing domain>” in the Authentication-Results header. You would use this ETR in the case that the sending domain passes SPF or DKIM, but is not the same as your domain (i.e., they are sending on your behalf and authenticating, but not aligning)

    Both option (c) and option (d) will skip filtering and suppress the Safety Tip. End user Safe Senders will skip filtering but will not suppress the Safety Tip. See Transport rules in Office 365.
    OR


    e)
    Use the Set-PhishFilterPolicy to specify senders who are allowed to send to you on your behalf. See Set-PhishFilterPolicy. This is only available to Advanced Threat Protection customers.
    .
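To illustrate the spirit of option (d), here is a hypothetical Python sketch that checks an Authentication-Results header for spf=pass or dkim=pass by an allowed third-party domain. The header format varies by service, so the regexes here are simplified assumptions rather than the exact strings Office 365 stamps.

```python
import re

def third_party_allowed(auth_results: str, allowed: set) -> bool:
    """True if the header shows spf=pass or dkim=pass for a domain
    on the allow list (a sender authenticating, but not aligning)."""
    spf = re.search(r"spf=pass[^;]*smtp\.mailfrom=([^\s;]+)", auth_results)
    if spf and spf.group(1).lower() in allowed:
        return True
    dkim = re.search(r"dkim=pass[^;]*header\.d=([^\s;]+)", auth_results)
    return bool(dkim and dkim.group(1).lower() in allowed)

hdr = "spf=pass (sender IP is 192.0.2.1) smtp.mailfrom=hrnotifications.com; dkim=none"
print(third_party_allowed(hdr, {"hrnotifications.com"}))  # True
```

The real ETR condition does the same thing declaratively: match on the header text and the envelope domain, then skip filtering.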

  2. You are sending to a mailing list which is modifying and replaying the message back into your environment 

    This occurs with the following routing path:

    Office 365 -> Mailing list (pass SPF, DKIM, DMARC... then modify the message) -> Office 365

    This is one of the hardest ones to solve. Section 4.1.3 of RFC 7960 talks about this. If you are in control of the mailing list, there are a couple of good changes you can make to your mail server:

    a) Set it up to not modify the message (not the most popular option)

    b) Rewrite the From: address so that it doesn’t fail alignment

    Suppose the message you send looks like this when sending out:

    From: Example User <example@contoso.com>
    To: Mailing List <list@mailingList.org>

    When it goes to the list and gets replayed, it could be rewritten like this:

    From: Example User via Mailing list <list@mailingList.org>
    To: Example User <example@contoso.com>
    Reply-To: Example User <example@contoso.com>

    Not everyone in the email industry likes this solution, but I think it’s pretty good if the list server is under your control.

    c) Submit it as a false positive to Office 365 and we’ll see if we can add it to our “Do not enforce antispoofing checks from these senders” list. This is a manually maintained list, we only add mailing lists to it, and it takes a long time to update. But if the list is not under your control, this is an option.
    .
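Option (b), the From: rewrite, can be sketched in a few lines with the standard library’s email.message. This is illustrative, not production list-server code, and the display-name parsing is deliberately crude.

```python
from email.message import EmailMessage

def rewrite_for_list(msg: EmailMessage, list_addr: str) -> EmailMessage:
    """Rewrite From: to the list's own address so DMARC alignment holds,
    preserving the original author in Reply-To."""
    original = msg["From"]
    # crude display-name extraction from "Name <addr>"
    name = original.split("<")[0].strip() or original
    del msg["Reply-To"]
    msg["Reply-To"] = original
    del msg["From"]
    msg["From"] = f"{name} via Mailing list <{list_addr}>"
    return msg

m = EmailMessage()
m["From"] = "Example User <example@contoso.com>"
m["To"] = "Mailing List <list@mailingList.org>"
rewrite_for_list(m, "list@mailingList.org")
print(m["From"])      # Example User via Mailing list <list@mailingList.org>
print(m["Reply-To"])  # Example User <example@contoso.com>
```

Because the rewritten From: now uses the list’s own domain, the list can DKIM-sign as that domain and the message aligns again.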

  3. You are forwarding email into and out of Office 365 and modifying the message. Sometimes on purpose, but sometimes not.
    If your email path goes like this:
    Internet -> Office 365 -> on-prem mail server -> Office 365

    If the on-prem mail server modifies the message, it could fail antispoofing. This occurs if you are running Exchange on-prem which modifies messages (see here for details), or you have another appliance that inserts footers into a message or does some other modification.

    If you are running Exchange, the solution for this is to set up connectors so that the headers we stamp on the first pass through Office 365 are respected when you relay it back into the service. If you are not running Exchange, you will have to set up a few ETRs to respect the original spam or non-spam verdicts and route the message accordingly.

Those are the ones I see the most for customers of Office 365.

What about disabling Safety Tips?

As a last-ditch attempt to solve the problem, you may be tempted to disable Safety Tips altogether. My advice is this – Don’t do it!

Safety Tips are like a seat belt in your car. If it’s too tight, or sits too high, or scratches you, the solution is to adjust it or buy one of those thingies that adds padding to it. The solution is not to drive without it.

Safety Tips help protect you against spoofing, but they also help defend against phishing (another red Safety Tip), tell you when a sender is trusted, show when you’ve skipped filtering due to IP Allows or Safe Senders, and will warn about other types of impersonation (coming soon in 2017). So by turning off Safety Tips, you’ve disabled all of that protection.

You are much better off working through the issues above by inventorying IPs, getting proper DKIM signing, fixing mailing lists, and making sure connectors are set up properly. That ensures you have the best protection possible. It is a little inconvenient (it’s inconvenient here at Microsoft, too) but it is worth the effort.

But if you must turn off Safety Tips, you can do that here: Enable or disable Safety Tips in Office 365.

Conclusion

Hopefully this article helps you figure out why a message has that red Safety Tip inserted. We do our best to make sure we only insert it in cases where we think the email is fraudulent, but sometimes legitimate email gets the tip. Fortunately, you can fix it yourself by following the steps above.

If you have questions, just let me know in the comments.

Where email authentication is not so great at stopping phishing – random IT phishing scams


On this blog, I’ve written a lot about email authentication and preached its virtues. If you are a domain owner, you should definitely set up SPF, DKIM, and DMARC records both so that emails to you can be identified between authentic and not, and so that other email receivers (e.g., Gmail, Hotmail/Outlook.com, Comcast, etc.) can identify which ones are legitimate and which are not.

If you’re an Office 365 customer, you can find out what you need to do here:

If you don’t have them set up, we still protect your domain from Exact-Domain spoofing, as I talk about here: http://aka.ms/AntispoofingInOffice365 (this is sometimes referred to as Business Email Compromise, or BEC, but I find that too broad a term and instead call it Exact-Domain spoofing because that’s the technique that is used).

But that’s not the only type of phishing that we see.


IT Phish Example 1

Where email authentication struggles is with a phishing scam that looks like the following:

2016-11-22-it-phishing-scam

We refer to this as an IT phish, that is, a phishing message that looks like it came from your IT department. In the case above, the phisher is impersonating Microsoft and asking the user to click on a malicious link to avoid service interruptions. It’s worded a little weird, but you’re probably not reading it too closely before you click on the link.

Let’s break down what makes it so difficult to detect:

  1. The Display name is “Microsoft Team”. Even though the email address has absolutely nothing to do with Microsoft, it’s still the first thing users see in their email client. And, if they’re using a smart phone, many email clients don’t even show the full email address because they are trying to save on screen real estate.
    .
  2. The email address is a random domain. Either it doesn’t authenticate, in which case neither SPF, DKIM, DMARC, nor our Exact-Domain antispoofing protects against impersonation; or it does authenticate (the spammer/phisher set up the authentication records) but the domain is not yet on anyone’s reputation list.
    .
  3. Speaking of reputation lists, many of these phishing messages come from infrastructure that has good reputation, or more often neutral reputation. So, blocking them at the network edge based upon IP reputation doesn’t work.
    .
  4. The body of the message is similarly impersonating Microsoft even though it contains none of Microsoft’s logos, or even the language that Microsoft normally uses in a notification email. The language is awkward, although we’ve all gotten notification emails that were oddly phrased.
    .
  5. The URL itself was not on any reputation lists at time of delivery (don’t retype it from the image; the full link I modified slightly but the root domain is malicious).
    .

2016-11-22-it-phishing-scam-explained

All those features combine to make this message difficult to detect using standard antiphishing techniques, but still troublesome to the end user. These usually aren’t spear phishing attacks, but instead the spammer or phisher is trying to either install malware on the end user’s system when they click on the link (by executing a drive-by download, or downloading another piece of malware locally and hoping the user executes it), or by tossing up a phishing page and tricking the user into entering their user credentials.
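The display-name trick in point #1 is something a filter can check for heuristically. Below is a toy Python sketch of that idea — my own simplification for illustration, not how Office 365 actually implements it: flag a message whose display name claims a brand that its sending domain doesn’t belong to.

```python
def display_name_impersonation(display_name: str, from_domain: str,
                               brand: str, brand_domains: set) -> bool:
    """Flag mail whose display name claims a brand (e.g. 'Microsoft Team')
    while the sending domain is not one of the brand's real domains."""
    claims_brand = brand.lower() in display_name.lower()
    domain = from_domain.lower()
    domain_ok = any(domain == d or domain.endswith("." + d)
                    for d in brand_domains)
    return claims_brand and not domain_ok

print(display_name_impersonation("Microsoft Team", "randomdomain.example",
                                 "Microsoft", {"microsoft.com"}))  # True
```

The hard part in practice is the brand list and the false positives (plenty of legitimate mail mentions a brand in the display name), which is why this is a signal, not a verdict.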


IT Phish Example 2

The next example is just as insidious, and no less difficult to catch. It, too, looks like an IT phishing message coming from Microsoft.

2016-11-22-it-phishing-scam-example-2

It is using similar techniques to the first message:

  1. It has a familiar brand in the Display Name, Microsoft Outlook. Outlook is a known brand associated with Microsoft.
    .
  2. Unlike the first example, this one does have a visually similar email domain that is off by one letter – service.outlook.com vs. service.out1ook.com, a ‘1’ instead of an ‘L’. This is designed to trick the end user in the event they look at the domain name, but don’t look too closely. It doesn’t matter whether the domain authenticates or not because this one may be under the full control of the phisher.
    .
  3. The instructions contain suspicious keywords, but don’t otherwise look too out of place.
    .
  4. The email link, upon first glance, looks like a legitimate domain belonging to Microsoft, but hovering shows that it actually goes to the lookalike domain. This is not something you would notice if you didn’t look too closely. Once again, it contains a URL that is not yet on a reputation list (Again, don’t type in that domain because it is malicious, although I did make some modifications to the message for discussion purposes)

2016-11-22-it-phishing-scam-example-2-explained

These are two real-life examples of phishing messages impersonating Microsoft brands. However, phishers will also impersonate your IT department. For example, if you are a university or a corporation, a phisher will impersonate your brand in the Display Name and in the body of the message, in an attempt to get your students or employees to reset their passwords (but instead they get compromised). Even though the domain name in the From: address doesn’t match your organization at all, users will still click. When the domain is off by a little bit, users will be fooled a little more often… and will click.
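The off-by-one-character domain in example 2 can be caught with a simple homoglyph fold. Here is a Python sketch (an illustration, not a product feature) that maps a few common digit-for-letter substitutions back to letters and compares the result to the trusted domain:

```python
# fold common digit-for-letter homoglyphs: 1->l, 0->o, 5->s
FOLD = str.maketrans("105", "los")

def lookalike(candidate: str, trusted: str) -> bool:
    """True if candidate only matches trusted after homoglyph folding,
    i.e., it is a visual imitation rather than the real domain."""
    c, t = candidate.lower(), trusted.lower()
    return c != t and c.translate(FOLD) == t

print(lookalike("service.out1ook.com", "service.outlook.com"))  # True
print(lookalike("service.outlook.com", "service.outlook.com"))  # False
```

Real lookalike detection goes further (Unicode confusables, edit distance, swapped letters), but the principle is the same: normalize, then compare against the brands you protect.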


Stopping this type of phishing

With all the news this year about Business Email Compromise, the above examples are actually a return to traditional phishing. While spear phishing is all about targeting specific individuals, “traditional” IT phishing is about getting inside the organization for the purpose of spreading more phish or malware. Sometimes this is done to build up a botnet, but other times it’s designed to be used as a springboard attack – send malicious content from the inside because it’s less likely to be filtered when it originates inside the house. A phisher can also send true Business Email Compromise from a real account, rather than by impersonating one.

In terms of stopping this type of phishing, two years ago I wrote a blog post Why does spam and phishing get through Office 365, and what can be done about it?

The four things I said that you, as a customer, can do are still accurate (see that blog post for more details):

  1. Submit spam and phishing samples back to us
  2. Submit malware to Microsoft
  3. Enable bulk mail filtering
  4. Invest in user education

I then listed a series of features that we were working on, all of which have been completed (and then some).

However, as we’ve seen these types of attacks increase, over the past 6 months we’ve gone back and have been working on several new features to combat phishing. We still do everything I’ve talked about in previous blog posts, but we will be doing more.

I will not be discussing what this “more” is in great detail other than to make vague, hand-wavy motions and say it involves machine-learning, sender reputation, and big data. The reason for being so fuzzy is because phishers are constantly probing our system, trying to reverse engineer how things work. I am not going to make their job easier, and my job is hard enough.

Plus, by using industry buzz words, it makes it look more impressive.

But, as I said in my blog post about Safety Tips, you’ll know when we detect phishing because we’ll insert a red (Suspicious) Safety Tip:

Suspicious_Phish

That way, if any of your users ever start browsing through their junk email folders or spam quarantines, they’ll be given an extra visual cue that the message really is malicious.

Conclusion

So, that’s one of the types of phishing scams we’re seeing (and you are, too). We’re aware of it and are hard at work coming up with a solution.

Let me know in the comments what else you’re seeing; I do plan on writing a couple more blog posts on what other sorts of phishing are common beyond Exact-Domain spoofing.


Next up: Where email authentication is totally great at stopping phishing – springboard attacks (and filling in the gaps)

Where email authentication is totally great at stopping phishing – springboard attacks (and filling in the gaps)


As I was saying in my other blog post about email authentication, and how it struggles to stop random IT phishing attacks, there is a type of attack that it is great at stopping – springboard attacks.

What do I mean by a springboard attack?

2016-11-28-springboard_attack_4

I use the term in the context of “Business Email Compromise” (BEC). A traditional BEC is where a phisher spoofs your domain, usually a high-ranking executive, and sends email to another high-ranking executive or perhaps someone in the Finance or HR departments. The phisher tries to trick the receiver into surrendering sensitive information like tax forms, or wiring money to an account which is controlled by the phisher. I renamed this to “Exact-Domain” spear phishing because the attacker is spoofing your exact domain.

A springboard attack is a variant of the Exact-Domain attack; rather than impersonating your own domain, the phisher spoofs a domain that doesn’t belong to you but with which you do business.

So, suppose that you are the president of Woodgrove Bank and you are talking to one of your vendors, Fabrikam. Fabrikam provides outsourced payroll services, which is common in the enterprise environment. If a phisher can reasonably guess that there is a relationship between the two companies, and even figures out which two people would be talking to each other, then they can impersonate one of the two companies. The phisher may impersonate Woodgrove Bank, send a message to Fabrikam, and trick Fabrikam into taking some action.

For example:

2016-11-28-springboard_attack_1

This looks like a BEC, but because the domains are different, I call this a springboard attack:

  1. Guy Incognito is the president of Woodgrove Bank, and you can see his exact-domain woodgrovebank.com is being spoofed. His email address may or may not be what you see in the picture above, but many people (most?) wouldn’t notice, and if you were viewing this on a mobile device, you wouldn’t see the email address at all. It looks like a regular contact. The phisher got the name of the CEO by browsing the web, getting information off of LinkedIn, etc.
    .
  2. The domain woodgrovebank.com has weak authentication records, that is, it doesn’t have a DMARC record, and its SPF record (if it has one) is only a soft fail, ~all. Some domains don’t publish SPF records at all. That’s uncommon in the consumer email space (i.e., trying to send to outlook.com or gmail.com) but reasonably common in the enterprise space.
    .
  3. Both of these recipient domains are Office 365 customers. That’s not important to this attack, but it is information that can be used later on in the filtering pipeline (which I’ll discuss below).
    .

2016-11-28-springboard_attack_2-explanation

BEC is hard to detect with content filtering because the content is so normal-looking, and frequently is grammatically correct as the above example shows. It also comes from IP addresses with neutral or even good reputation. Even though the sending domain is spoofed, the SMTP MAIL FROM may be a spammer’s own random domain that passes SPF (though it may not), and because woodgrovebank.com does not have a DMARC record, it cannot be detected as a spoof using regular email authentication techniques.
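As point #2 noted, woodgrovebank.com’s SPF record ends in a soft fail (~all), which gives a receiver nothing to enforce. Auditing that is straightforward; here is a small Python sketch (simplified relative to a full RFC 7208 evaluator) that classifies the all mechanism of an SPF record:

```python
def spf_enforcement(spf_record: str) -> str:
    """Classify the record's 'all' mechanism: '-all' hard fail,
    '~all' soft fail, '?all' neutral, '+all' (or bare 'all') pass."""
    for term in spf_record.split():
        if term.endswith("all") and term[:-3] in ("", "+", "-", "~", "?"):
            qualifier = term[:-3] or "+"
            return {"-": "hardfail", "~": "softfail",
                    "?": "neutral", "+": "pass"}[qualifier]
    return "none"  # no 'all' mechanism at all

print(spf_enforcement("v=spf1 ip4:192.0.2.0/24 ~all"))         # softfail
print(spf_enforcement("v=spf1 include:spf.contoso.com -all"))  # hardfail
```

Anything other than hardfail (ideally backed by a DMARC policy) leaves the domain usable as a springboard.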

Thus, woodgrovebank.com has been used as a “springboard” by the phisher to get inside Fabrikam. Just like in regular BEC, a finance person may go ahead and execute a payment if it came from the spoofed CEO; in the case above, the recipient may forward the message along to the finance person to send payment to the above account. And what’s worse, the email from Fabrikam’s CEO (in this case, Joey Joe-Joe Jr. Shabadoo) to the finance or HR department would not be spoofed, it would be an internal message. Joey Joe-Joe Jr. was fooled by the original spear phish, but there would be no way to determine that after he started forwarding it internally within Fabrikam, unless someone got suspicious and asked for the original email and inspected it.

Fixing springboard attacks with email authentication

The straightforward fix for springboard attacks is with email authentication. The attack above is an actual attack we saw in the past month, with the names changed to fictional brands. However, this is what DMARC was designed to protect against right from the beginning – preventing your own brand from being spoofed. Large domains like Paypal or Twitter used to get spoofed all the time by phishers, and their users, who sign up to those services using free email accounts, would keep falling for attacks and losing sums of money, or get hacked and send out more spam.

To stop that, the email industry came up with DMARC. DMARC has its shortcomings, but it does stop spoofing. It works for traditional phishing (spoofing large brands like Paypal), exact-domain spear phishing (CEO-to-CFO where the sending and receiving domain is the same), and exact-domain springboard attacks (CEO-to-CFO where the sender and receiver are different organizations).

So, when I recommend DMARC as a fix to stop your domain from getting spoofed sending “from” you to you, I recommend it not only to protect your own organization, but also to protect others who do business with you. While you may be able to inventory all of your senders and create rules to allow or block various spoofers, others have a much harder time doing this. By publishing no or weak authentication records, you are making it easier for phishers to impersonate you and get inside others’ organizations.
How Office 365 sees the problem

Note: As of this writing, Nov 28, 2016, this protection is under development. I write this blog post to reveal how we’re thinking about the problem; the final feature won’t necessarily use these techniques because there may be better ways to do it. This is more my own view of how things can be done (even though there’s a chance they will be done this way); I write this blog post before the feature is fully deployed because it’s important to discuss spear phishing and its variants.

The reason why we came out with our own antispoofing solution (http://aka.ms/AntispoofingInOffice365) is because while DMARC is great, it still suffers from the problem that it is hard to set up. That’s why we protect your domain with Exact-Domain spear phishing protection.

The next step in this is to address the problem above – I said in #3 that both domains are Office 365 customers. Given that we already inventory who sends as your domain to yourself, it makes sense to extend that logic for customer-to-customer mail, too. Thus, Office 365 can apply similar logic for stopping spoofing if the sending and receiving domains are Office 365 customers.
2015_02_23_Antispoofing_infographic

 

Thus, even if you don’t publish SPF, DKIM, or DMARC (which you should), Office 365 can still make a best guess as to whether or not the sender is authorized to send on your behalf. You could call it “implicit DMARC” – what would the result have been if the domain did publish a DMARC record, and did the message come from a source we trust?
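Here is a rough Python sketch of that “implicit DMARC” idea. This is my own illustration of the logic described above — the real feature’s inputs and weights are not public — and the inventory structure is hypothetical:

```python
def implicit_dmarc(from_domain, spf_pass_domain, dkim_pass_domains,
                   inventory, source_ip):
    """Best-guess verdict for a From: domain with no DMARC record:
    pass if SPF or DKIM authenticated an aligned domain, or if the
    source is on an inventory of senders known to send for the domain."""
    def aligned(d):
        return d == from_domain or d.endswith("." + from_domain)
    if spf_pass_domain and aligned(spf_pass_domain):
        return "bestguesspass"
    if any(aligned(d) for d in dkim_pass_domains):
        return "bestguesspass"
    if source_ip in inventory.get(from_domain, set()):
        return "bestguesspass"  # inventoried legitimate source
    return "suspicious"

# spoofed springboard message: SPF passed only for the spammer's own domain
print(implicit_dmarc("woodgrovebank.com", "spammer.example", [],
                     {}, "198.51.100.7"))  # suspicious
```

The point of the inventory line is exactly the customer-to-customer case above: if both sides are in the service, the receiver already knows who legitimately sends as woodgrovebank.com.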

When detecting an exact-domain springboard attack, we would mark up the message similar to how we mark it up for exact-domain spear phish:

2016-11-28-springboard_attack_3_with_fix

Looking at the message headers would reveal why we think a message is spoofing.

As with Exact-Domain spoofing, inventorying your own SPF, DKIM, and DMARC makes this work better. But if you don’t publish them, we’ll try to figure out if the source is legitimately or illegitimately spoofing you, if the destination is another customer of ours.

This only works when the destination is an Office 365 customer and the MX record points at Exchange Online Protection. If the recipient is not protected by Office 365, your domain can still be used in a springboard attack.
Closing thoughts

So, as you can see, we’re working on protecting your domain from different variants of spear phishing. The industry preaches SPF, DKIM, and DMARC (which I do, too) but we also understand that not every domain has that protection. But we still need to stop spoofing.

We’re aware of the problem, and we’re working hard on solving it.

I still have a couple of blog posts to write on other variants of Business Email Compromise spear phishing.


Next: Where email authentication is potentially great – protecting against quasi-impersonation attacks (spoofed messages from domains with weak authentication)

Previous: Where email authentication is not so great at stopping phishing – random IT phishing scams

A security story that is kind of disturbing


I’ve got a story for you. As a security person, it’s a little disturbing.

I was driving in the car yesterday with my wife, who works in the health care industry (she’s not a doctor). She was telling me that earlier that day, she was trying to email a file to some other organization and it wasn’t getting there. She explained that they were looking at something and the file had a weird extension, .paz (or something like that). I don’t know what extension that is; it has to be something unique to the health care field that some software or hardware makers recognize, but is not common in the rest of the world.

For example, I don’t know how x-ray files are stored (as images?) but suppose they aren’t images, but instead are in a format called .xry and can only be opened with certain software. That’s what I assume is happening here.

Anyway, she was saying that as an organization, they have to transmit this particular file to another organization, and it wasn’t getting there. She would email it, and a while later her contact on the other side would email back and say “Uh, I’m not getting anything.”

My wife would send it, and resend it, and nothing would get through. She would send it to her Gmail account, and again nothing was getting through. “What’s going on?” she asked. My wife is an ordinary Internet user, my being in the security industry has had almost no influence on her in the years we’ve been married.

I surmised that her spam/malware filter was silently deleting it and not notifying her. When I told her my theory she said “That’s stupid! How am I supposed to know it’s not getting through?!?”

Let’s go to the thought bubble.

Thought Bubble

Before continuing with this story, let me interrupt with another story.

A year or two ago, my wife’s place of work switched from using Office 365 to Proofpoint for its spam filtering. Now, I am well aware that Proofpoint is one of our competitors. But as a low-to-mid level engineer, I don’t see it as that fierce a competition. While some senior executives at corporations famously don’t use the products of other companies, people like me aren’t that ideological because we can’t be.

For you see, I have several friends at numerous organizations who work on anti-abuse, Proofpoint among them. We talk to each other at conferences, we sit on panels and discuss strategies for fighting spam and phishing. And really, it’s a fluke I ended up where I am; I could have easily ended up at another company, including Proofpoint. I may end up there yet if (when?) I get fired for writing something I shouldn’t on this blog.

So, I am not ideological about anything.

But even though she had no say in it whatsoever, and there are many reasons why companies switch, when I found out my wife’s place of work switched from Office 365 to Proofpoint… I took that personally.

It was like a punch in the stomach.

Thanks thought bubble.

IT folks will often implement security policies that are designed to reduce risk for organizations. Yet IT and security people (that’s you and me, btw) seem to always forget that people are trying to get work done. If IT and security policies are preventing people from doing their work, they will find even less secure ways to get it accomplished.

And that means all of our good intentions have been circumvented.

I asked my wife how she was able to proceed.

Well, first they take the USB drive with the file on it and insert it into the computer. Those of you currently having heart attacks can calm down (a little), it’s not like they found it on the street; they need it to transfer data from the device to the computer.

Next, they open up a personal Gmail account and upload the file that way, and then send it onto the final destination. Presto! Problem solved!

Yes, from a functionality standpoint, this worked. But from a security standpoint it failed miserably – because from a usability standpoint, it failed just as hard.

I don’t know why the message didn’t go through originally, and I know that security teams try to abstract the policies away from their users. But users will go around security policies if they interfere with the work they are trying to do and they see no alternative.

You may say “Oh! But there are plenty of services that let you upload a file to it, and then send a link to that file!”

Yes, that’s a workaround, but not every Internet user knows about it. It’s way easier just to send a file as an attachment… that’s what “Send a file as an attachment” is for!

I don’t have a lot of good advice for spam filters other than to make sure that your security measures work. I’m also not picking on Proofpoint because anyone can nitpick about many things even in our own service (Office 365); for all I know it was a corporate policy, not a service-wide policy, that deleted the message (update on 2016-12-02: it was an organization policy block, not a Proofpoint service block). But in this case, deleting a .paz file because it was a security risk (assuming that is what it was) ended up potentially creating an even bigger security risk, because it trains users to look for other ways to get around our security policies.

Let’s not make our jobs harder than they already are.

Where email authentication is potentially great – protecting against spoofing from domains with weak authentication


So, in the past couple of posts, I’ve talked about how email authentication is not that great against phishing attacks that use random parameters in the sender, but is well-designed to work against springboard spear-phishing attacks.

There’s another scenario where it is simultaneously well-positioned to protect against spear-phishing, yet not in a good position to actually defend against it.

And that’s a spear-phishing attack where the sender uses a free email account from a sender that looks like it is legitimate, but doesn’t really have anything to do with the recipient. I call this a quasi-impersonation spear phishing attack.

A quasi-impersonation spear phish is when a phisher uses a technique that tricks you into thinking someone important is contacting you, and so you take action on it. The sender doesn’t actually have any relation to you, but you assume that they do and therefore open the attachment or click the link.

Here’s an example message:

2016-12-03-big-domains-with-weak-authentication

This is an actual spear phish with the recipient removed, but the sender’s email address still intact.

  1. There is no relationship between the sender and receiver, but doesn’t it look legitimate? If you’re in the finance department you may very well think that this person is invoicing you for something that your company purchased. After all, you’re busy and you see this stuff all the time, so of course you want to help out. You may not know who Russell Robinson is, but it sounds plausible.
    .
    BTW, I’m sure that there is a real-life Russell Robinson, and I apologize to that person, but his name is being spoofed in the message. The account is malicious so if you’re reading this blog post you can safely block this sender email account.
    .
  2. The email address, from @outlook.com, is spoofed. The message did not actually come from Outlook.com’s email infrastructure, it came from somewhere else. Yet because @outlook.com publishes a soft fail ~all in its SPF record, and a DMARC policy of p=none, a spam filter cannot use email authentication to automatically treat the message suspiciously when it fails auth. The fallback policy says not to enforce anything.

    Even though the domain is spoofed, if you hit Reply, the reply goes back to the sender in the From: address. Thus, the phisher is in control of that email address, yet sending from infrastructure not associated with @outlook.com.
    .
  3. The grammar in the message body is correct; you could easily see that in a real-life message.

Combining all of these together, you might be tempted to click on the link to at least find out what invoice this is for. And if the link leads to malware, your machine could get infected.

So this is what I mean by a quasi-impersonation attack. The sender is not someone who you actually know (like a CEO, CFO, or other high-ranked person in either your own organization or another organization you do business with), but instead someone you don’t know yet who could plausibly have a business relationship with you. They could also claim a personal relationship, contacting you out of the blue and asking for a favor (such as “Hey, I read your article on your blog, I wrote something similar. What do you think?”).

2016-12-03-big-domains-with-weak-authentication-explanation


Can email authentication fix this?

The key part is that the email address is spoofed. As I said in my previous post, not publishing strong authentication records means that your domain can be used in springboard spear-phishing attacks. If outlook.com were to publish a DMARC record of p=quarantine or p=reject, then any service could block this message since it failed DMARC.

Yet publishing a DMARC record for a large domain like outlook.com has its own set of challenges. While large organizations may not know who all their legitimate senders are, outlook.com does know most of its senders (I set up its DMARC records and I review the data). But the problem with going to a stronger authentication policy is that there are a lot of users on outlook.com that would lose legitimate email if a stronger authentication policy were published.
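The difference between outlook.com’s current posture and a stronger one comes down to the p= tag in the DMARC TXT record. A minimal Python parse of that tag (simplified relative to the full DMARC record grammar) looks like this:

```python
def dmarc_policy(record: str) -> str:
    """Extract the p= tag from a DMARC TXT record.
    p=none means monitor/report only; receivers enforce nothing."""
    tags = dict(t.strip().split("=", 1)
                for t in record.split(";") if "=" in t)
    return tags.get("p", "none")

print(dmarc_policy("v=DMARC1; p=none; rua=mailto:d@contoso.com"))  # none
print(dmarc_policy("v=DMARC1; p=quarantine"))                      # quarantine
```

Moving that one tag from none to quarantine or reject is what would break the scenarios listed below, which is why it’s approached so cautiously.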

For example:

  • Mailing lists that replay messages and modify the body content, yet retain the same From: address, would break – outlook.com users could no longer post to those mailing lists. This is one of the biggest complaints against DMARC.
    .
  • Mail servers that forward messages to another destination have delivery problems if they modify message content (this blog post is the one with the most comments over the past few months, and generates more dissatisfaction than any other issue)
  • Some bulk email providers let you put free email accounts into the From: address; those messages will have delivery problems if outlook.com publishes stronger DMARC records
  • And on and on. There’s a long tail of these.

I would say we could live with any one of these problems, but taken together, the disruption becomes much harder to accept. An enterprise like Microsoft can publish strong authentication records for domains where the company owns the brand and all the user email associated with it; and while Microsoft technically owns outlook.com, we don’t want to disrupt all of its users. People want their email to work, so we approach publishing a stronger DMARC record with caution.

Thought Bubble

I use the example of outlook.com, but it applies to any free email service like gmail.com, hotmail.com, and so forth, that doesn’t publish strong DMARC records. While Gmail can figure out whether gmail-to-gmail mail is legitimate, receivers of gmail-to-anyone-else mail (springboards) don’t have that luxury.

So as you can see, email authentication is in a position to stop attacks like these, yet the cost of doing it (disrupting the flow of legitimate email) is a high price to pay. It has potential, yet will be difficult to realize.

At least with strong authentication policies, that is. So how can we fix it?

Stopping spoof attacks with implicit authentication

Earlier this year, Gmail introduced a change to their UX where if an email does not authenticate, it puts a color-coded question mark in the sender photo. It means that the message did not authenticate with either SPF or DKIM:

gmail_unauthenticated_sender

If you hover your mouse over the picture, it tells you what that means:

gmail_unauthenticated_sender_with_explanation

Thus, the experience is degraded for unauthenticated messages. The above example would have qualified for a red question mark. That is one way of dealing with this type of message: if you can’t be sure whether the message is legitimate (neither authentication nor content filtering is authoritative), then visual cues can warn the user not to interact with it. Machines are good at filtering, but in some cases humans are better at giving a message a second look.

But there’s another way to do it.

Note: Once again, this is all theoretical. This should not be taken to mean that it will be done this way, but rather, an idea of how things can be done.

In my previous post on springboard attacks, I went over a technique that builds off of Office 365’s Exact-Domain antispoofing. It uses a similar algorithm to exact-domain spoof detection and extends it to customer-to-customer spoofing. Yet what difference does it make whether the email originates from an Office 365 customer vs. some random place on the Internet, be it GoDaddy, Rackspace, Google Apps… whatever?

2015_02_23_Antispoofing_infographic

If a sending domain is big enough, it’s going to authenticate. And the instances where it doesn’t authenticate but comes from a legitimate source – such as a mailing list, forwarder, bulk email provider, and so forth – these can all be figured out with a reasonable degree of accuracy.

This means that given a sender domain with weak authentication, you check your own authentication history. If it fails and comes from a source you don’t more-or-less trust, then you treat it as if it had strong authentication policies published. It’s like creating a DMARC record for domains that send you email; while the domain owner may not know all the ways its domain is being used (e.g., on mailing lists), receivers can make a reasonable guess.
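A minimal sketch of that receiver-side logic might look like the following. To be clear, this is purely illustrative: the function name, thresholds, and verdicts are my own hypothetical choices, not how any real filter implements it.

```python
# Hypothetical "implicit DMARC" check: if a domain almost always
# authenticates in our own observed history, treat a failing message
# as if the domain had published a strong DMARC policy.

def implicit_auth_verdict(domain, auth_passed, from_trusted_source,
                          history, min_samples=1000, pass_ratio=0.98):
    """history maps domain -> (messages_seen, messages_that_authenticated)."""
    if auth_passed or from_trusted_source:
        return "deliver"
    seen, passed = history.get(domain, (0, 0))
    if seen >= min_samples and passed / seen >= pass_ratio:
        # The domain reliably authenticates; this failure is suspicious.
        return "quarantine"
    return "content-filter"  # fall back to regular spam filtering
```

The trusted-source escape hatch is what keeps mailing lists, forwarders, and bulk providers working even though their traffic fails authentication.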

Perhaps the message looks something like this:

2016-12-03-big-domains-with-weak-authentication-safety-tipped

Thus, even if your domain doesn’t publish strong authentication policies, my prediction is that email receivers will start treating it as if it does, and will build their own reputation tables.

This means that mistakes will be made, false positives will occur, spammers will try to exploit workarounds, and there will be an initial (continuous?) adjustment period, especially as new sending sources come online. Yet all that stuff exists anyway; IP addresses that appear out of nowhere get throttled. It’s no different than what we’ve already been dealing with in the spam fighting industry for years.

The solution is to authenticate email, send messages that users want, and maintain good sending history. In other words, it’s the same advice that email receivers have been giving for years. The difference is that authenticating your email becomes even more important, because unauthenticated email will be treated very suspiciously if it comes from a source that we’ve never seen before.

Closing thoughts

The challenge here is that taking inventories of all senders is hard to scale; I’m aware of this, and I am not sure how small-to-mid-sized email receivers would handle it.

Yet it’s clear that the problem of spear phishing, and the difficulty of getting domains to publish stronger email authentication, will force the hand of larger receivers. I think that this is where the industry is going to go.

 


Previous: Where email authentication is totally great at stopping phishing – springboard attacks (and filling in the gaps)

Where email authentication falls flat at stopping phishing – impersonation attacks using display tricks


In this series so far, we’ve seen where email authentication is great at stopping phishing under some circumstances, and where it isn’t that useful in others. One circumstance where it isn’t that useful is a variant of Business Email Compromise (BEC) that we call an Impersonation Attack. An Impersonation Attack is when the phisher uses a visual display trick to make you think that the person you are communicating with is someone you know, when in reality it is not that person.

You may think “That’s what Business Email Compromise is” and you’re right, but those can be further classified into Exact-Domain attacks (where the sender and recipient domain is the same), or Springboard attacks (where the sender and recipient have a relationship, but the sender domain is spoofed).

An Impersonation Attack is more general and makes use of many different techniques to fool the user:

  • The message has a lookalike sending domain:
    • One letter is substituted for another, microsoft.com vs. micros0ft.com
    • Two letters are substituted for a single letter: nn -> m, cl -> d, rn -> m, and so forth
    • The letters are rearranged where your eyes gloss over, mircosoft.com
    • The letters in the sender domain use different charsets
  • The message is sent from a free email account using the name of a high ranking executive in the Display Name, but the email address is completely random
  • And so forth
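The lookalike tricks in that list can be caught mechanically. Here is an illustrative sketch; real detectors use much larger confusable tables (e.g., Unicode confusables data) plus edit distance, so treat the table below as a toy:

```python
# Toy lookalike-domain check: normalize a few common visual
# confusables, then compare against the domains you protect.

CONFUSABLES = [("rn", "m"), ("cl", "d"), ("nn", "m"),
               ("0", "o"), ("1", "l"), ("vv", "w")]

def normalize(domain):
    d = domain.lower()
    for src, dst in CONFUSABLES:
        d = d.replace(src, dst)
    return d

def is_lookalike(sender_domain, protected_domains):
    if sender_domain in protected_domains:
        return False  # the real domain itself is not a lookalike
    norm = normalize(sender_domain)
    return any(norm == normalize(p) for p in protected_domains)
```

For example, `is_lookalike("micros0ft.com", {"microsoft.com"})` and `is_lookalike("rnicrosoft.com", {"microsoft.com"})` both flag, while the genuine domain does not.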

But, what sets them apart is that the sending domain is not being spoofed.

For example:

2016-12-05-impersonation-attack

In this above example:

  1. The sender domain, tovota-europe.com, is a lookalike of toyota-europe.com, a real domain and brand associated with Toyota. If you weren’t looking closely you wouldn’t have noticed. This example is a real message, and the phisher actually registered the domain with a registrar
  2. The sender’s display name is a real contact within the actual organization
  3. The sending domain is hosted on legitimate infrastructure, such as Office 365 or Google Apps (or some other place that regularly hosts email). This happens because these services offer free sign ups with low messaging limits, but as a phisher you don’t need a lot of email to send out, you just need legitimate infrastructure to avoid IP reputation lists
  4. The contents of the message contain no malware nor URL, it’s text-based. That makes it harder for filters to find malicious content

The key points are #1 and #2. The phisher has registered this domain with the intent of sending phishing messages, and has even set up SPF records for the domain. Using email authentication to detect this as a spoof will fail, because it is not spoofing the domain; instead, it is spoofing the brand, using a visual trick that is easy for humans to interpret (and be fooled by) but hard for machines to interpret meaningfully. The filters do not have a lot of content to pick up on, and the sending infrastructure is legitimate.

The Impersonation Attack is one of the more difficult phishing attacks to defend against.

2016-12-05-impersonation-attack-with-explanation


Stopping Impersonation Attacks with techniques other than email authentication

Disclaimer: As I say in other blog posts where we haven’t released the protection yet to the general public, this is me theorizing about how something could be done, not that it will be done this way.

One method that we are thinking about within Office 365 is leveraging Safety Tips to give users visual warnings when something in their email stream is out of place. One of the things we are working on is First Contact.

First Contact builds a sender/recipient profile of users; when you get a message from someone you haven’t seen before, we’ll add a gray Safety Tip that says it’s one of the first messages you’ve received from that person. 

The idea is you regularly communicate with the same people over and over. So, when spammers and phishers try to trick you by impersonating someone you trust, this new warning makes it clear that we – your spam filter protecting you – don’t recognize the sender as a regular communicator, so you may want to be more cautious when you interact with it. 

Of course, not every First Contact is bad; that’s why we’re refining the algorithms to make them smarter. We exclude: 

  • Marketing email from known bulk mailers 
  • Email discussion lists 
  • Senders with good reputation 

The First Contact tip goes away after we’ve built up enough history of that sender sending to you. 
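The mechanics can be sketched in a few lines. This is a hypothetical toy, not the actual Office 365 implementation; the class name, threshold, and exclusion flags are my own invention:

```python
from collections import defaultdict

# Toy First Contact tracker: warn on senders the recipient has little
# history with, excluding the categories listed above.

class FirstContact:
    def __init__(self, history_threshold=3):
        self.threshold = history_threshold
        self.seen = defaultdict(int)  # (recipient, sender) -> message count

    def should_warn(self, recipient, sender, is_bulk=False,
                    is_mailing_list=False, good_reputation=False):
        if is_bulk or is_mailing_list or good_reputation:
            return False  # the exclusions listed above
        key = (recipient, sender)
        count = self.seen[key]
        self.seen[key] += 1
        return count < self.threshold  # warn until history is built
```

Once a sender crosses the threshold for a given recipient, the warning stops appearing for that pair.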

This is currently being trialed internally. Initial feedback is that it’s too noisy, so we’re going back and making some more refinements. The risk of notifying too much is that when you actually have something to say, people will ignore it. The jury is out on this one.


First Contact is only available to customers with Advanced Threat Protection.

2016-12-05-impersonation-attack-with-first-contact

But the power of First Contact is not necessarily when it is used by itself. No, the power of First Contact comes when it is combined with Impersonation Detection. For you see, if we keep track of who normally sends you email, and you then get a message that deviates from how that person normally sends to you, that is suspicious. And probably even fraudulent.

In that case, just adding a safety tip that says the message failed fraud detection checks wouldn’t be enough. Why? Because you might say “Huh? What’s wrong with the message? I can’t see anything!” So, we’d add a red safety tip that says the message appears to be from someone you normally communicate with, but isn’t that person:

2016-12-05-impersonation-attack-with-red-safety-tip

I think this is a powerful feature. By detecting anomalies in sending patterns from people you communicate with, it tightens up against tricks that phishers use to get around email authentication.

This means that if someone you normally communicate with usually, or always, passes authentication, and a message arrives from a new source that fails authentication, it will be treated as suspicious, because it deviates from the sending pattern that has already been established as a baseline.
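Combining the two signals might look something like this sketch. Again, everything here is hypothetical; the contact-book structure and verdict strings are mine, for illustration:

```python
# Toy impersonation check: flag mail whose display name matches a
# known contact but whose address is not one that name has used before.

def impersonation_check(display_name, address, contact_book):
    """contact_book maps a known display name -> set of known addresses."""
    known = contact_book.get(display_name.strip().lower())
    if known is None:
        return "unknown-sender"          # First Contact handles this case
    if address.lower() in known:
        return "ok"
    return "possible-impersonation"      # known name, unfamiliar address
```

A “possible-impersonation” verdict is what would drive the red safety tip; an “unknown-sender” verdict would drive the gray First Contact tip instead.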

Impersonation Detection initially will only be available to Advanced Threat Protection customers.

Conclusion

At this point, we’ve seen where email authentication is useful in the fight against phishing. However, we should not be lulled into a false sense of security that it solves all spoofing or phishing problems.

But what it does do is tighten up the perimeter so we can be more strict about enforcing security, forcing the cost of spamming to go up. That means the title of this post is incorrect; while email authentication falls flat when fighting Impersonation attacks, what it can do is drive a wedge into unauthenticated email and force phishers and spammers into places where they can be identified instead of hiding behind a mask.

There is no such thing as perfect security, only defense-in-depth. We use email authentication to protect ourselves, protect others from us being spoofed, and then add in other pieces of technology to stop the rest.


Previous: Where email authentication is potentially great against stopping phishing – protecting against spoofed domains with weak authentication


Would a DMARC reject record have prevented Donald Trump from getting elected?


One of the reasons I just wrote that four part series on where email authentication is helpful against phishing, and where it is not-so-helpful, is because I wanted to examine the John Podesta email hacks.

In case you’re not aware, John Podesta was the Chair of the Democratic Campaign to elect Hillary Clinton for President of the United States. Earlier this year, his email was hacked by an unknown party, and his emails were leaked to Wikileaks. This caused a tailspin in the election campaign of Hillary Clinton.

Opponents of Clinton seized upon some of the more sensitive (?) emails that showed the party colluding against Bernie Sanders in the primary, and purportedly showed some of the negatives of the Clinton campaign overall. Proponents of Clinton sought to downplay this as the content not being that bad as it’s how politics work, or that the criticisms were overblown, or that the Trump campaign was benefiting from their campaign not being hacked by a foreign power and thus not having a chance to have their own inner workings exposed.

Some (perhaps many) believe that this affected the outcome of the election by demotivating enough voters to not show up and vote, thus giving the election to Donald Trump. While there are other factors that contributed to the result, it’s probably true that removing some of them could have caused a different result. And it may be true that removing this one may have caused a different result.

Thought Bubble

I understand that after the results of the 2016 US Presidential election, some of you reading this blog reacted like this:

2016-12-23-homer-simpson-celebrate

But others of you reacted like this:

2016-12-23-homer-simpson-depressed

In this blog post, I’m not going to debate the merits or drawbacks of the results of the election.

Similarly, depending on what side of the fence you are on:

  • If you were a Clinton supporter, you probably believe that hacking of various high-level Democrat operatives and leaking it to Wikileaks (while simultaneously not exposing any Republican dirty laundry) played a pivotal role swinging a handful of swing states to Trump instead of Clinton.
  • By contrast, if you are a Trump supporter, you may not even believe that Democrat leaders were hacked by an Advanced Persistent Threat. And if you do believe it, you may think that it played little to no role in flipping the election results (that is, it didn’t make enough of an impact); or, you may indeed believe they were hacked by a foreign adversary but think the hackers did a public service in exposing the inner workings of another party, thus tipping the election in your favor.

I’m not going to debate the pros or cons of that, either.

So, in the comments there’s no need to post ideological rants, there’s a whole rest of the Internet for that.

Thanks, Thought Bubble.

Let’s assume for a moment that had Podesta not been hacked, Hillary Clinton would have won [1]. How could Podesta have avoided being hacked?

When I first started reading in my Facebook feed [2] that Podesta had probably clicked on a phishing scam, entered in his username and password, and that’s how the hackers got into his account, I saw someone post “If the spoofed domain had published a #DMARC record, he would have never been hacked.”

Is that true?

I went and started doing some investigation.

First, I assumed that the message Podesta presumably clicked on was a direct phishing message. That may not be the case. Instead, here’s what happened:

  1. Podesta got a phishing message from “Google <no-reply@accounts.googlemail.com>” indicating that someone had his password, that Google had blocked a sign-in from an IP address geo-located to Ukraine, and that he should change his password immediately. There is then a link to a bit.ly URL that redirects to a phishing page. It is not clear that Podesta acted on this email, although it sure looks like a real Google notification.
  2. An email thread then ensues between an IT representative of the Clinton campaign with the above phishing message forwarded inline. His advice is that it is “a legitimate email [3] and that Podesta should change his password immediately.” He then advises to change the password at https://myaccount.google.com/security. In other words, he provided the correct advice.
  3. The reply got forwarded around, eventually going to Podesta as well as another Clinton staffer, who replies that they will get Podesta to change his email address and also use two step verification to sign in.
  4. At some point, someone (Podesta, in all likelihood) clicked on a link to reset his password, but it appears he clicked on the bit.ly link, and not the actual Google link.

Let’s look to see how technology could have helped.

First, DMARC wouldn’t have helped

I couldn’t find the original email message (the direct phishing) that was sent to Podesta; I could only find the email chain that contained the forwarded phishing message. Thus, I don’t know what IP address it was sent from.

However, we can see that it was spoofing accounts.googlemail.com.

As of today, accounts.googlemail.com does not publish its own DMARC record. However, the parent domain googlemail.com publishes a DMARC record with a policy of quarantine and a subdomain policy of quarantine:

googlemail.com | “v=DMARC1; p=quarantine; sp=quarantine; rua=mailto:mailauth-reports@google.com”
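A DMARC record like the one above is just a semicolon-separated list of tag=value pairs, so it is easy to pull apart. A quick sketch (illustrative; a production parser would also validate tags per the DMARC specification):

```python
# Parse a DMARC TXT record into its tag/value pairs.

def parse_dmarc(record):
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags
```

Parsing the record above yields `{"v": "DMARC1", "p": "quarantine", "sp": "quarantine", "rua": "mailto:mailauth-reports@google.com"}`.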

I did a quick search of our own email logs, and on March 19, 2016, googlemail.com had a DMARC record published. So, Google didn’t just add it after this hack was announced; it was in place at the time of the original phish.

Since this was a spoofed message, it would have failed DMARC and gotten marked as spam. So, unless the recipient of the message went digging through their spam folder and thought it was a real message, Podesta should never have seen it in the first place.

Now we move into speculation territory. I don’t know why I can’t find the original email, I can only find the forwarded version between the campaign staffers. How did this even come across someone’s eyes to begin with?

I know that sometimes with senior executives in corporations, both an administrator and the executive have access to the exec’s inbox. They do this so they can sort through their messages and separate out the less important ones, so that the exec is only focused on the important messages. I haven’t bothered to do the research in this case (I’m just a blogger on the Internet), but if this is the case here, then did a staffer dig into the spam folder, find this message and mistake it for a real message, and advise Podesta to change his password?

People digging through spam folders, rescuing malicious messages, and getting compromised is extremely common. That’s why we add messaging to our Safety Tips in Office 365 about why we marked it as spam or phish.

The only way DMARC would have helped more is if, instead of a subdomain policy of sp=quarantine, the domain had published sp=reject (or a policy of p=reject with no subdomain policy, so that any *.googlemail.com subdomain would inherit the reject). But then again, Google doesn’t necessarily reject all messages that fail DMARC under a reject record (neither does Office 365); they sometimes go to the Junk folder. So even that is not a guarantee.

Second, I do think that the IT department made a big mistake

The one big mistake I do think the IT department made (assuming that the message was not originally in the spam folder and subsequently rescued and forwarded [or even if it was]) was not “defanging” the malicious URL.

“Defanging” is my term for making a dangerous URL not dangerous. For example, suppose this was a malicious URL:

http://malicious.example.com

A defanged URL might be this:

http://malicious [dot] example [dot] com

The above link is no longer clickable. You can see that the IT person did provide the correct URL to Google’s password reset page, but Podesta clicked on the wrong one. The IT person no doubt thought he was providing the right advice about changing the password, but he left the dangerous content still in the message. There was still room for error, and in this case it mattered.

Before forwarding the message, he should have either deleted the link entirely, or defanged it. That would have totally prevented Podesta from doing the wrong thing.
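Defanging is trivial to automate in forwarding tools. A one-liner in the style of the example above (some responders also rewrite the scheme, http -> hxxp, for extra safety):

```python
# Defang a URL so it is no longer clickable when forwarded.
def defang(url):
    return url.replace(".", " [dot] ")
```

Running it on the example URL gives back exactly the defanged form shown earlier: `http://malicious [dot] example [dot] com`.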

It’s unclear whether two-factor authentication was ever set up. Many (most?) people don’t use it, but right from Day 1 there ought to have been a policy in place to require it, especially for executives.

Third, I don’t blame Podesta for clicking on the URL

I was reading on Slashdot and some of the commenters were calling Podesta an idiot for ignoring the actual URL and instead clicking on the bit.ly link.

Yet if he were an average Internet user on a mobile device, advised by his own team to change his password, it’s natural to assume he would scroll down the page, see the Google sign-in page, and gloss over the details in the middle. We all rely upon mental shortcuts, and we all know that high-ranking executives don’t read email in detail (I spend a long time editing my emails when I want an executive to weigh in on something).

Besides which, on a mobile device, it’s not like he can hover over the link to uncover where it goes.

2016-12-23-gmail-phish

So for someone who has been told to change his password, and who sees that picture while scrolling down quickly, clicking it is not a stretch for most of us.

Fourth, even if nobody fell for this hack, there’s still plenty of other ways to get hacked

My guess is that this original message was marked as spam due to email authentication, but somehow it was rescued and still managed to trick the user. But even if the phisher wasn’t spoofing googlemail.com, they could have spoofed Google in any number of other ways, such as random IT phishing attacks, weakly protected domain attacks, and impersonation attacks.

Would Podesta himself have fallen for this? Would his staff? It’s unclear.

But one thing we know for sure: the attackers would have kept hacking until they finally got access. If not Podesta himself, then someone else.

Fifth, this is not the first time I have seen a hack like this, and a combination of technologies is required, along with a security policy

Earlier this year, I saw an attack where a phisher sent a message with a malicious link to an executive and it got through to him. He forwarded it to his assistant where she clicked on the link and got infected with malware. The original target wasn’t compromised, but someone else within the organization was.

This Podesta phishing attack doesn’t seem to have fooled the recipient, but still succeeded by accident.

Thus, an attack has multiple paths to success.

One thing we do at Microsoft is apply policy. I can’t check my corporate email on my phone without two-factor authentication; I have an iPhone SE [4], and I had to install an app from Microsoft and enter a PIN, which was verified with a phone call. I have to renew that authenticator app every so often. I can’t access my work email on my laptop unless I am using Windows 10, and it forces me to log in using my fingerprint. So there’s multifactor authentication that way.

You can see my IT department has taken the decision out of my hands, and that it is a corporate policy. It’s still possible to hack me, but it’s way harder.

People in high-ranking positions need to be aware that they are under attack, and their security departments need to implement policies that make it easy for them to get their work done securely. This is my personal recommendation to all government departments: I preach the virtues of email authentication, and that’s important. But securing the endpoint is also important, because attacks can still succeed, even by accident.

Just ask John Podesta.


[1] Yes, yes, I know that’s not necessarily true. See the Thought Bubble.

[2] As we all know, our Facebook feeds are not the most reliable source of accurate news.

[3] There’s a story floating about that the staffer who wrote “This is a legitimate email” meant to write “This is an illegitimate email”, and that’s the reason why Podesta clicked on the link. Had he written it the intended way, Podesta never would have clicked. I doubt that; the crux of the message was that he had to change his password, not whether or not the message was legitimate. I think the URL should have been defanged.

[4] Yes, Microsoft employees can have iPhones.

Sending mail with invalid From: addresses to Office 365


One of the changes to go into Office 365 in the past year is an antispam rule that rejects messages with an invalid From: address. When this occurs, the message is rejected with:

550 5.7.512 Access denied, message must be RFC 5322 section 3.6.2 compliant and include a valid From address

If you look up RFC 5322 section 3.6, it says that each message must have one and only one From: address:

   +----------------+--------+------------+----------------------------+
   | Field          | Min    | Max number | Notes                      |
   |                | number |            |                            |
   +----------------+--------+------------+----------------------------+
   | from           | 1      | 1          | See sender and 3.6.2       |
   +----------------+--------+------------+----------------------------+

The structure of a From address is then described in section 3.6.2.

For many years, Exchange server allowed senders and recipients to send messages with malformed From: addresses. That is, something like this was permitted:

From: <blah>

From: “Example Sender”

Even though this is against RFC 5322 (published in 2008), and RFC 2822 (published in 2001) before it, there are still lots of mail servers that send malformed email in this way. However, if you try to send to other services, it doesn’t work. For example, sending a message that way to Hotmail/Outlook.com results in the message bouncing; sending it to Gmail similarly results in the message bouncing. Indeed, Gmail even forces you to put angle brackets around the email address in the SMTP MAIL FROM. For example, the first line below is rejected by Gmail, the second is accepted:

MAIL FROM: not@acceptable.com

MAIL FROM: <okay@acceptable.com>

Exchange accepts them both. So does Office 365.

Exchange has more relaxed enforcement because in a corporate environment, many applications run on older or buggy platforms but send wanted email; or, people frequently write scripts to transmit email without configuring them to send RFC-compliant mail. Large services like Gmail and Outlook.com are pickier about protecting their users, but in a corporate environment where messages are sent privately, the rules are not as strictly enforced when it’s just you sending to yourself.

Given all that, late in 2015, we started seeing massive outbound spam attacks from malicious spammers who signed up for the service. They would send spam with an empty MAIL FROM and an empty From: address:

MAIL FROM: <>
From: <>

We measured the proportion of spam using this pattern; 98-99% of it was being marked as spam (and thus delivered out of our high risk delivery pool), and its total volume was well into the millions per day.

This had numerous drawbacks:

  1. The amount of spam being generated was taking up bandwidth from legitimate email
  2. We were still relaying junk to the Internet and the double null-sender was making it difficult to track down the spammers
  3. The misclassified spam was high enough that it was impacting the quality of our low risk outbound delivery pools. This means that customers were impacted because spammers would get our IPs listed on IP blocklists, affecting our entire customer base

Combining the facts that RFC 2822, published in 2001, specified the proper format of an email address; that there was so much outbound spam; and that the workaround for owners of system-generated email was simply to fix their scripts (rather than us having to continually chase down spammers), Office 365 decided to crack down on these types of messages:

If you send email to Office 365 with a null SMTP MAIL FROM <>, then the From: address must contain <email@address.TopLevelDomain> including the angle brackets.
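A simplified sketch of that rule follows. This is my own toy approximation for illustration; the actual Office 365 check is not public and certainly handles more cases:

```python
import re

# Toy version of the rule above: if the SMTP MAIL FROM is the null
# sender <>, require an angle-bracketed email address in the From: header.
FROM_RE = re.compile(r"<[^<>@\s]+@[^<>@\s]+\.[^<>@\s]+>")

def accept_null_sender(mail_from, from_header):
    if mail_from.strip() != "<>":
        return True  # this particular rule only applies to the null sender
    return FROM_RE.search(from_header) is not None
```

Under this sketch, `MAIL FROM: <>` with `From: <sender@contoso.com>` is accepted, while `MAIL FROM: <>` with `From: sender@contoso.com` (no angle brackets) is rejected, matching the behavior described below.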

From time to time, we get senders telling us that we are mistakenly blocking the following message with the previously mentioned error response:

MAIL FROM: <>
From: sender@contoso.com

It is not a mistake, we require a From: address to have angle brackets if the SMTP MAIL FROM is <>. Just as different email services have different requirements – Gmail requires angle brackets around the SMTP MAIL FROM, Hotmail requires a valid From: address always – Office 365 requires that email addresses get formatted in a certain way if the MAIL FROM is <>.

Because Office 365 deals with legacy mail servers sending traffic to the service, there are certain RFC requirements that the service is not in a position to enforce without potentially causing customer disruption. At the same time, we are trying to keep the null sender rule simple; it is too confusing to have a complicated if-then-elseIf-elseIf-else logic tree for sending with a null sender [1]. And Office 365 is still much more relaxed than almost any other mail platform even with this restriction in place.

This is the balance that has been struck in our attempts to support legacy mail servers while running a cloud-based service, yet keeping spammers off the network.


[1] There are lots of other different ways that spammers try to shove email through the system using visual display tricks that are rejected by some recipients, but allowed by others. Yet a complicated AND-OR-NOT would be too difficult to explain to anyone who asked what the logic is, and it wouldn’t be long before even the engineering team couldn’t maintain it. Simplicity is our goal here, and we achieved it.

For example, when someone says their email is getting rejected, it’s a simple explanation to say “Add angle brackets around the From: address.”

Would a DMARC reject record have prevented Donald Trump from getting elected?

$
0
0

One of the reasons I just wrote that four part series on where email authentication is helpful against phishing, and where it is not-so-helpful, is because I wanted to examine the John Podesta email hacks.

In case you’re not aware, John Podesta was the Chair of the Democratic Campaign to elect Hillary Clinton for President of the United States. Earlier this year, his email was hacked by an unknown party, and his emails were leaked to Wikileaks. This caused a tailspin in the election campaign of Hillary Clinton.

Opponents of Clinton seized upon some of the more sensitive (?) emails that showed the party colluding against Bernie Sanders in the primary, and purportedly showed some of the negatives of the Clinton campaign overall. Proponents of Clinton sought to downplay this as the content not being that bad as it’s how politics work, or that the criticisms were overblown, or that the Trump campaign was benefiting from their campaign not being hacked by a foreign power and thus not having a chance to have their own inner workings exposed.

Some (perhaps many) believe that this affected the outcome of the election by demotivating enough voters to stay home, thus handing the election to Donald Trump. While other factors contributed to the result, it’s probably true that removing some of them could have changed it. And it may be true that removing this one alone would have changed it.

Thought Bubble

I understand that after the results of the 2016 US Presidential election, some of you reading this blog reacted like this:

2016-12-23-homer-simpson-celebrate

But others of you reacted like this:

2016-12-23-homer-simpson-depressed

In this blog post, I’m not going to debate the merits or drawbacks of the results of the election.

Similarly, depending on what side of the fence you are on:

  • If you were a Clinton supporter, you probably believe that the hacking of various high-level Democratic operatives, and the leaking of their email to Wikileaks (while simultaneously not exposing any Republican dirty laundry), played a pivotal role in swinging a handful of swing states to Trump instead of Clinton.
  • By contrast, if you are a Trump supporter, you may not even believe that Democratic leaders were hacked by an Advanced Persistent Threat. And if you do believe it, you may think it played little to no role in flipping the election results (that is, it didn’t make enough of an impact); or you may indeed believe they were hacked by a foreign adversary, but think the hackers did a public service by exposing the inner workings of another party, thus tipping the election in your favor.

I’m not going to debate the pros or cons of that, either.

So there’s no need to post ideological rants in the comments; there’s a whole rest of the Internet for that.

Thanks, Thought Bubble.

Let’s assume for a moment that had Podesta not been hacked, Hillary Clinton would have won [1]. How could Podesta have avoided being hacked?

When I first started reading in my Facebook feed [2] that Podesta had probably clicked on a phishing scam, entered in his username and password, and that’s how the hackers got into his account, I saw someone post “If the spoofed domain had published a #DMARC record, he would have never been hacked.”

Is that true?

I went and started doing some investigation.

At first, I assumed that the message Podesta clicked on was a direct phishing message. That may not be the case. Instead, here’s what happened:

  1. Podesta got a phishing message from “Google <no-reply@accounts.googlemail.com>” claiming that someone had his password, that Google had blocked a sign-in from an IP address geo-located to Ukraine, and that he should change his password immediately. The message contained a bit.ly link that redirected to a phishing page. It is not clear that Podesta acted on this email, although it sure looks like a real Google notification.
  2. An email thread then ensued among IT representatives of the Clinton campaign, with the above phishing message forwarded inline. One staffer’s advice was that it was “a legitimate email” [3] and that Podesta should change his password immediately. He then advised changing the password at https://myaccount.google.com/security. In other words, he provided the correct advice.
  3. The reply got forwarded around, eventually reaching Podesta as well as another Clinton staffer, who replied that they would get Podesta to change his email address and also use two-step verification to sign in.
  4. At some point, someone (in all likelihood Podesta) clicked a link to reset the password, but it appears he clicked the bit.ly link, and not the actual Google link.
Let’s look at how technology could have helped.

First, DMARC wouldn’t have helped

I couldn’t find the original email message (the direct phish) that was sent to Podesta; I could only find the email chain that contained the forwarded phishing message. Thus, I don’t know what IP address it was sent from.

However, we can see that it was spoofing accounts.googlemail.com.

As of today, accounts.googlemail.com does not publish a DMARC record of its own. However, the parent domain googlemail.com publishes a DMARC record with a quarantine policy, including a subdomain policy of quarantine:

googlemail.com | "v=DMARC1; p=quarantine; sp=quarantine; rua=mailto:mailauth-reports@google.com"

I did a quick search of our own email logs, and googlemail.com already had a DMARC record published on March 19, 2016. So Google didn’t just add it after this hack was announced; it was in place at the time of the original phish.

Since this was a spoofed message, it would have failed DMARC and gotten marked as spam. So, unless the recipient of the message went digging through their spam folder and thought it was a real message, Podesta should never have seen it in the first place.

Now we move into speculation territory. I don’t know why I can’t find the original email; I can only find the forwarded version between the campaign staffers. How did it even end up in front of anyone’s eyes to begin with?

I know that with senior executives in corporations, sometimes both an administrator and the executive have access to the exec’s inbox. They do this so the administrator can sort through the messages and separate out the less important ones, leaving the exec focused only on the important ones. I haven’t bothered to do the research in this case (I’m just a blogger on the Internet), but if that’s what happened here, did a staffer dig into the spam folder, find this message, mistake it for a real one, and advise Podesta to change his password?

People digging through spam folders, rescuing malicious messages, and getting compromised is extremely common. That’s why we add messaging to our Safety Tips in Office 365 about why we marked it as spam or phish.

The only way DMARC would have helped is if, instead of a subdomain policy of sp=quarantine, the domain had published sp=reject (or p=reject with no subdomain policy at all, so any *.googlemail.com subdomain would inherit the parent domain’s policy). But then again, Google doesn’t necessarily reject all messages that fail DMARC against a reject record (neither does Office 365); they sometimes go to the Junk folder instead. So even that is not a guarantee.
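To make the subdomain inheritance concrete, here is a rough sketch (my own simplification, not Google’s or Office 365’s actual evaluation logic) of how a receiver derives the effective DMARC policy for a From: domain from the organizational domain’s record:

```python
def parse_dmarc(record):
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        key, sep, value = part.strip().partition("=")
        if sep:
            tags[key.strip()] = value.strip()
    return tags

def effective_policy(from_domain, org_domain, record):
    """Return the policy that applies to from_domain.

    The org domain uses p=. A subdomain uses sp= when present,
    and falls back to p= when there is no subdomain policy.
    """
    tags = parse_dmarc(record)
    if from_domain == org_domain:
        return tags.get("p", "none")
    return tags.get("sp", tags.get("p", "none"))

google_record = "v=DMARC1; p=quarantine; sp=quarantine; rua=mailto:mailauth-reports@google.com"
# With sp=quarantine, the spoofed subdomain is only quarantined:
print(effective_policy("accounts.googlemail.com", "googlemail.com", google_record))  # quarantine
```

Had the record carried sp=reject (or p=reject with no sp= tag), the same call would return reject, which is the stricter posture discussed above.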

Second, I do think that the IT department made a big mistake

The one big mistake I do think the IT department made (whether or not the message was originally in the spam folder and subsequently rescued and forwarded) was not “defanging” the malicious URL.

“Defanging” is my term for making a dangerous URL not dangerous. For example, suppose this was a malicious URL:

http://malicious.example.com

A defanged URL might be this:

http://malicious [dot] example [dot] com

The above link is no longer clickable. You can see that the IT person did provide the correct URL to Google’s password reset page, but Podesta clicked on the wrong one. The IT person no doubt thought he was providing the right advice about changing the password, but he left the dangerous content still in the message. There was still room for error, and in this case it mattered.

Before forwarding the message, he should have either deleted the link entirely, or defanged it. That would have totally prevented Podesta from doing the wrong thing.
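As a trivial sketch of the idea, using the bracket-dot convention from the example above (the function name is mine, and real tooling often also breaks the http:// scheme):

```python
def defang(url):
    """Make a suspicious URL non-clickable before forwarding it.

    Replaces each dot so mail clients won't auto-link the text;
    a reader can still see where the link pointed.
    """
    return url.replace(".", " [dot] ")

print(defang("http://malicious.example.com"))
# http://malicious [dot] example [dot] com
```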

It’s unclear whether two-factor authentication was ever set up. Many (most?) people don’t use it, but right from Day 1 there ought to have been a policy in place to require it, especially for executives.

Third, I don’t blame Podesta for clicking on the URL

I was reading Slashdot, and some of the commenters were calling Podesta an idiot for ignoring the actual URL and clicking on the bit.ly link instead.

Yet if he was an average Internet user on a mobile device, and was advised by his own team to change his password, it’s natural to assume he would scroll down the page, see the Google sign-in page, and gloss over the details in the middle. We all rely upon mental shortcuts, and we all know that high-ranking executives don’t read email in detail (I spend a long time editing my emails when I want an executive to weigh in on something).

Besides, on a mobile device he can’t hover over the link to reveal where it goes.

2016-12-23-gmail-phish

So for someone who has been told to change his password, scrolling down quickly and seeing that picture, it’s not a stretch for most of us to click it.

Fourth, even if nobody had fallen for this hack, there are still plenty of other ways to get hacked

My guess is that the original message was marked as spam due to email authentication, but somehow it was rescued and still managed to trick the user. But even if the phisher hadn’t spoofed googlemail.com, they could have spoofed Google in any number of other ways, such as random IT phishing attacks, weakly protected domain attacks, and impersonation attacks.

Would Podesta himself have fallen for this? Would his staff? It’s unclear.

But one thing we know for sure: the attackers would have kept hacking until they finally got access. If not through Podesta himself, then through someone else.

Fifth, this is not the first time I have seen a hack like this, and a combination of technologies is required, along with a security policy

Earlier this year, I saw an attack where a phisher sent a message with a malicious link to an executive, and it got through to him. He forwarded it to his assistant, who clicked on the link and was infected with malware. The original target wasn’t compromised, but someone else within the organization was.

This Podesta phishing attack doesn’t seem to have fooled the recipient, but still succeeded by accident.

Thus, an attack has multiple paths to success.

One thing we do at Microsoft is apply policy. I can’t check my corporate email on my phone without two-factor authentication; I have an iPhone SE [4], and I had to install an app from Microsoft and enter a PIN, which was verified with a phone call. I have to renew that authenticator app every so often. I can’t access my work email on my laptop unless I am using Windows 10, and it forces me to log in with my fingerprint. So there’s multifactor authentication that way, too.

You can see my IT department has taken the decision out of my hands, and that it is a corporate policy. It’s still possible to hack me, but it’s way harder.

People in high-ranking positions need to be aware they are under attack, and their security departments need to implement policies that make it easy for them to get their work done. This is my personal recommendation to all government departments – I preach the virtues of email authentication, and that’s important. But securing the endpoint is also important, because attacks can still succeed, even by accident.

Just ask John Podesta.


[1] Yes, yes, I know that’s not necessarily true. See the Thought Bubble.

[2] As we all know, our Facebook feeds are not the most reliable source of accurate news.

[3] There’s a story floating around that the staffer who wrote “This is a legitimate email” meant to write “This is an illegitimate email,” and that’s the reason Podesta clicked on the link. Had he written it the intended way, Podesta never would have clicked. I doubt that; the crux of the message was that Podesta had to change his password, not whether or not the message was legitimate. I think the URL should have been defanged.

[4] Yes, Microsoft employees can have iPhones.

Sending mail with invalid From: addresses to Office 365


One of the changes to go into Office 365 in the past year is an antispam rule that rejects messages with an invalid From: address. When this occurs, the message is rejected with:

550 5.7.512 Access denied, message must be RFC 5322 section 3.6.2 compliant and include a valid From address

If you look up RFC 5322 section 3.6, it says that each message must have one and only one From: address:

   +----------------+--------+------------+----------------------------+
   | Field          | Min    | Max number | Notes                      |
   |                | number |            |                            |
   +----------------+--------+------------+----------------------------+
   | from           | 1      | 1          | See sender and 3.6.2       |
   +----------------+--------+------------+----------------------------+

The structure of a From address is then described in section 3.6.2.

For many years, Exchange Server allowed senders to send messages with malformed From: addresses. That is, something like this was permitted:

From: <blah>

From: “Example Sender”

Even though this violates RFC 5322 (published in 2008), and RFC 2822 (published in 2001) before it, there are still lots of mail servers that send malformed email this way. However, it doesn’t work when sending to other services. For example, sending a message that way to Hotmail/Outlook.com results in the message bouncing; sending it to Gmail similarly results in a bounce. Indeed, Gmail even forces you to put angle brackets around the email address in the SMTP MAIL FROM. For example, the first line below is rejected by Gmail, while the second is accepted:

MAIL FROM: not@acceptable.com

MAIL FROM: <okay@acceptable.com>

Exchange accepts them both. So does Office 365.

Exchange enforces this more loosely because in the corporate environment, many applications running on older or buggy platforms send wanted email, and people frequently write scripts to transmit email without configuring them to send RFC-compliant mail. Large services like Gmail and Outlook.com are pickier about protecting their users, but in a corporate environment where messages are sent privately, the rules are not enforced as strictly when it’s just you sending to yourself.

Against that backdrop, late in 2015 we started seeing massive outbound spam attacks from malicious spammers who signed up for the service. They would send spam with an empty MAIL FROM and an empty From: address:

MAIL FROM: <>
From: <>

We measured the proportion of spam using this pattern; 98–99% of it was being marked as spam (and thus delivered out of our high-risk delivery pool), and its total volume was well into the millions of messages per day.

This had numerous drawbacks:

  1. The amount of spam being generated was taking up bandwidth from legitimate email
  2. We were still relaying junk to the Internet and the double null-sender was making it difficult to track down the spammers
  3. The misclassified spam was high enough that it was impacting the quality of our low risk outbound delivery pools. This means that customers were impacted because spammers would get our IPs listed on IP blocklists, affecting our entire customer base

Combining the facts that RFC 2822, published back in 2001, specified the proper format of an email address; that there was so much outbound spam; and that the workaround was for owners of system-generated email scripts to fix their scripts (rather than us having to continually chase down spammers), Office 365 decided to crack down on these types of messages:

If you send email to Office 365 with a null SMTP MAIL FROM <>, then the From: address must contain <email@address.TopLevelDomain> including the angle brackets.
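A rough approximation of that rule follows. This is my own simplified sketch, not Office 365’s actual implementation, and the regex is far looser than real RFC 5322 address parsing:

```python
import re

# Hypothetical simplification: an angle-bracketed addr-spec with a
# top-level domain, e.g. <email@address.TopLevelDomain>.
BRACKETED_ADDRESS = re.compile(r"<[^<>@\s]+@[^<>@\s]+\.[^<>@\s.]+>")

def accept_null_sender(smtp_mail_from, from_header):
    """Null-sender rule: if MAIL FROM is <>, the From: header
    must contain an angle-bracketed email address."""
    if smtp_mail_from.strip() != "<>":
        return True  # the rule only applies to the null reverse-path
    return bool(BRACKETED_ADDRESS.search(from_header))

print(accept_null_sender("<>", "From: <sender@contoso.com>"))  # True
print(accept_null_sender("<>", "From: sender@contoso.com"))    # False
```

The second call shows the exact case discussed below: a null MAIL FROM with an unbracketed From: address gets rejected.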

From time to time, we get senders telling us that we are mistakenly blocking the following message with the previously mentioned error response:

MAIL FROM: <>
From: sender@contoso.com

It is not a mistake; we require the From: address to have angle brackets if the SMTP MAIL FROM is <>. Just as different email services have different requirements – Gmail requires angle brackets around the SMTP MAIL FROM, and Hotmail always requires a valid From: address – Office 365 requires that the From: address be formatted a certain way when the MAIL FROM is <>.

Because Office 365 deals with legacy mail servers sending traffic to the service, there are certain RFC requirements that the service is not in a position to enforce without potentially causing customer disruption. At the same time, we are trying to keep the null sender rule simple; it is too confusing to have a complicated if-then-elseIf-elseIf-else logic tree for sending with a null sender [1]. And Office 365 is still much more relaxed than almost any other mail platform even with this restriction in place.

This is the balance that has been struck in our attempts to support legacy mail servers while running a cloud-based service, yet keeping spammers off the network.


[1] There are lots of other ways that spammers try to shove email through the system using visual display tricks that are rejected by some recipients but allowed by others. Yet a complicated AND-OR-NOT rule would be too difficult to explain to anyone who asked what the logic is, and it wouldn’t be long before even the engineering team couldn’t maintain it. Simplicity is our goal here, and we achieved it.

For example, when someone says their email is getting rejected, it’s a simple explanation to say “Add angle brackets around the From: address.”

