Part 1 – There’s more to me than just fighting spam
If all you know of me is through this blog, then you’ll know I’ve been involved in the fight against spam, malware, and phishing for over a decade.
On the other hand, those of you who know me in person or have checked out my LinkedIn profile know that once upon a time I used to be an amateur magician. Actually, I still am; I just don’t practice as much [1]. For years I specialized in close-up sleight-of-hand magic, and then a few years ago I branched into mentalism [2].
One of the shows I watch on YouTube is Penn and Teller: Fool Us. On the show, up-and-coming (and even professional) magicians perform a trick for the audience and for Penn and Teller. If they can fool Penn and Teller, they get a chance to perform their trick at the Rio hotel in Las Vegas during one of Penn and Teller’s shows. I enjoy watching it. It’s not often that magicians can fool both Penn and Teller, but sometimes it happens. When I watch the show, I can work out about 1 in 3 of the tricks. After Penn and Teller explain how it’s done using their secret magic code words, I can figure out about 2/3 of them; their language is really obscure unless you have a lot more knowledge than I have, or you already know how the trick is done.
I’ve had a friend go on the show (he didn’t win), but I often wonder how I could go on… and win. You see, I’m not good enough to fool Penn and Teller. While there’s a 50% chance I could fool Penn, there’s only a slim chance I could fool the walking encyclopedia of magic that is Teller. That guy’s knowledge is so broad that it’s tough for any of the other acts to fool him.
But I still want to go on the show. So how could I do it?
Is my goal just to get exposure? No, not really. I’m not a professional magician and I have no intention of going full time. If I go on, I want to win it for the glory. You know, the glory of being a magician.
But how?
Part 2 – Here’s the plan
Here’s what I would do: rather than try to fool Penn and Teller by coming up with a new method (unlikely, because I don’t have enough knowledge to develop something completely new unless it’s electronic), or by doing a variation of an obscure-but-existing method (which is unlikely to fool Teller), I would turn my weakness into a strength. I know I’m not good enough to come up with a new method, but I am good enough to send false signals that the magic duo would hopefully notice and falsely conclude that’s how I did it.
That is, let’s suppose I did a card trick where the entire deck vanished and the audience’s selected card turned out to be the only blue-backed card in a red-backed deck. There are multiple ways to accomplish this. My strategy would be to pick four different methods, actually use one, and pretend to use the other three – performing the fake ones in an almost-sloppy way that a regular person wouldn’t notice but a professional would.
For example, I could have all the equipment to ditch a pack of cards into a fake pocket sewn inside a suit jacket, and even go through the motions to toss the deck in there under the cover of another movement. A professional magician would know to look for that, especially if I made all the movements necessary to hide the deck that way. But I wouldn’t actually do it; I would only pretend to. Penn and Teller don’t always see how a move is done, but they do know when it is done, because it’s hard to shield a move completely.
I would do the same thing with two or three other methods. The idea would be to get them to commit fairly early to the method I was supposedly using. Then, when it came time for them to guess what I did, I would confirm that I made all those moves – but that’s not how the trick works. They would be forced to go back into their memories and come up with an alternative explanation, and as long as I made the faux methods more obvious than the real one, they would run out of explanations and be forced to concede that a mediocre magician like me fooled them.
We humans can’t keep that many things in short-term memory, particularly after we’ve committed to something; I would take away their reason to keep paying close attention. By getting them to commit early, I could short-circuit their attempts to figure out what I was doing; their brains would later confabulate an explanation of how I did the trick, and that wouldn’t be reliable enough to arrive at the real one except by accident.
So my whole strategy is to send false signals and violate expectations. Penn and Teller would be expecting me to (1) conceal the method and (2) rely upon my skill (3) in hopes they don’t notice. Instead, my plan would be to assume they will notice, and flood their filters with the wrong data (while also leaning on some behavioral psychology about the way humans make decisions). If you can’t trust the signals you are reading, then you can’t trust the process you’re using. And if you can’t trust your process, your ability to succeed shrinks massively.
This is effectively the technique that the authors of the Stuxnet worm used [3]. By causing the centrifuges to damage themselves while simultaneously making the dials show that nothing was wrong, it prevented the operators from troubleshooting the problem. Everything looked normal.
That would be my plan for victory on Penn and Teller: Fool Us.
Part 3 – How does this relate to phishing?
From time to time, people in the industry ping me to let me know that a mailbox on outlook.com, Hotmail, or Office 365 is being used as a phishing drop box. That is, a phisher signed up for the service for the purpose of receiving user credentials; they then send email from the same account (or, more often, a different one) asking users to reply with their usernames and passwords. This can be an IT phish (“Your mailbox is full”) or financial fraud (“Please reply with your username and password to unlock or verify your account”). The account that the user replies to is called a phishing drop box.
Our standard operating procedure is to shut down accounts when they are brought to our attention. After all, it is against the Terms of Use to use our service for the purpose of spam, malware, or phishing.
I want to improve this process.
As soon as you shut down a phishing account, the phisher has been tipped off that they have been discovered. They simply abandon the account, sign up for a new one, and possibly morph the content of the phishing messages they were sending so they can avoid detection a little while longer. But the data that they have collected is still valid – usernames and passwords which can be used to break into user accounts.
Let’s change things up.
A different strategy is to borrow my Penn and Teller: Fool Us approach and send false signals. The phisher is harvesting credentials to sell on the black market. Rather than shutting the email account down, we should not tip the phisher off that they have been discovered. Instead, we should intercept the message and modify its content. Where the user has entered their username and password, we should randomly rewrite the password with wrong characters so it no longer works. The message would still be delivered to the phisher, so they’d be unaware that it has been tampered with.
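To make the idea concrete, here’s a minimal sketch (in Python) of what such an interception filter might do to a reply headed to a known drop box. Everything here is my own illustration – the “Password:” field format, the mangling ratio, and the function names are assumptions; detecting credentials in arbitrary phishing replies would be considerably harder, and in practice this would live inside the mail pipeline rather than a standalone script.

```python
import re
import secrets
import string

# Hypothetical sketch: rewrite passwords in a reply bound for a known
# phishing drop box. The "Password: xxx" line format is an assumption;
# real messages would need far more robust credential detection.
PASSWORD_FIELD = re.compile(r"(?im)^(password\s*[:=]\s*)(\S+)")

def mangle(password: str, keep_ratio: float = 0.6) -> str:
    """Randomly replace some characters so the password no longer works,
    while keeping roughly the same length and look."""
    alphabet = string.ascii_letters + string.digits
    chars = list(password)
    for i in range(len(chars)):
        if secrets.randbelow(100) >= int(keep_ratio * 100):
            chars[i] = secrets.choice(alphabet)
    mangled = "".join(chars)
    if mangled == password and chars:
        # Guarantee at least one character actually changed.
        i = secrets.randbelow(len(chars))
        chars[i] = secrets.choice([c for c in alphabet if c != chars[i]])
        mangled = "".join(chars)
    return mangled

def rewrite_reply(body: str) -> str:
    """Rewrite any password fields, leaving the message otherwise
    unchanged so the phisher isn't tipped off."""
    return PASSWORD_FIELD.sub(lambda m: m.group(1) + mangle(m.group(2)), body)

if __name__ == "__main__":
    reply = "Username: alice@example.com\nPassword: Hunter2!"
    print(rewrite_reply(reply))
```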
While spammers and phishers use their harvested data to break into accounts, they also prefer to sell large chunks of user credentials on the black market. Because the data they’ve collected will be low quality (“I bought these passwords from you and none of them work!”) it disrupts their business model. Why aren’t these working? If the buyer cannot trust the seller, it undermines the market as a whole.
For the password harvester, it makes it difficult to troubleshoot where it’s all going wrong. The drop box still works; messages are still getting delivered. Since they are probably automating the parsing of the emails containing usernames and passwords – or, at the very least, copy/pasting into a file without doing any validation – it is difficult to reverse engineer where the signal quality is degrading. Are the users mistyping their information accidentally? On purpose?
Disrupting the business model in this manner raises the cost of business for the phisher. This strategy (raising the cost of business) has been used in the past with some success. Nothing is perfect, but it doesn’t need to be perfect, it just has to raise the cost enough to make it not worthwhile.
Part 4 – How might a phisher react?
One way is to test some of the passwords – data validation. The phisher can randomly try a handful of passwords to make sure that they haven’t been tampered with. But there are two counter-strategies to this:
- Only modify some of the passwords. In the event that the phisher gets lazy and only sanitizes some of his data, when he resells it on the black market there will be enough bad data in there that his reputation will be degraded. If you buy a box of apples and 1/3 of them are consistently rotten, you will soon stop buying from that grocery store. If all batches of apples are rotten, then either the apple market goes away, or you have to spend a long time sorting through apples.
- Keep track of which passwords you modify, and then when the phisher tries to log in with the fake password, let them log in the first time (or show them a fake account). Then toss the fake password away so it can’t be re-used. The phisher will be tricked into believing that the accounts are valid when in reality they are not. What’s more, if the algorithm for password modification is standardized across vendors (Hotmail, Gmail, Yahoo, etc.) with a common shared key, then a drop box on outlook.com that harvests Gmail users’ passwords could modify them in transit using that common key, and the phisher would get fake signals when testing them against Gmail (a sketch of one possible keyed scheme follows this list). This requires coordination among vendors, but it also throws a wrench into the phisher’s plans: they cannot reliably sell the information they are stealing because they will develop a reputation for selling low-quality data.
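Here’s a rough sketch of how such a standardized, keyed modification could work. Everything in it – the shared key, the tag length, the way the tag is embedded – is purely my own assumption, not an existing standard: the decoy password carries a short HMAC tag derived from the victim’s username and the shared key, so any cooperating provider can recognize a decoy at login time without consulting a shared database.

```python
import hmac
import hashlib

# Hypothetical sketch of a keyed decoy-password scheme shared across vendors.
# The key, tag length, and embedding are assumptions, not an existing standard.
SHARED_KEY = b"example-shared-key"   # distributed among cooperating providers
TAG_LEN = 6                          # hex characters of the tag embedded in the decoy

def _tag(username: str) -> str:
    digest = hmac.new(SHARED_KEY, username.lower().encode(), hashlib.sha256)
    return digest.hexdigest()[:TAG_LEN]

def make_decoy(username: str, harvested_password: str) -> str:
    """Replace the tail of the harvested password with a keyed tag, so the
    result still looks plausible but no longer works for the real account."""
    stem = harvested_password[: max(len(harvested_password) - TAG_LEN, 1)]
    return stem + _tag(username)

def is_decoy(username: str, attempted_password: str) -> bool:
    """Any provider holding the shared key can recognize a decoy at login
    time and route the phisher to a fake account instead."""
    return hmac.compare_digest(attempted_password[-TAG_LEN:].encode(),
                               _tag(username).encode())

if __name__ == "__main__":
    decoy = make_decoy("victim@gmail.com", "S3cretPass!")
    print(decoy, is_decoy("victim@gmail.com", decoy))   # the decoy is recognizable
```

A real deployment would have to work out much more than this sketch does – accidental collisions with real passwords, key distribution and rotation, and the privacy implications of providers marking credentials in transit, for a start.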
This is obviously a complex game of counterintelligence, and I’m not even sure it would work. What I suspect would happen is an acceleration of what is already occurring – a move into targeted spear phishing (i.e., business email compromise), where stealing one person’s password is more valuable if that person is a big fish, and where it is possible to manually verify credentials.
But on the other hand, if it works, it would raise the phisher’s cost of doing business and force them to jump through more hoops.
Anyhow, those are some random thoughts on a Tuesday afternoon. Let me know what you think in the comments below.
[1] I get paid better fighting spam than I did performing magic. I think that’s a big part of why my practicing has declined as much as it has.
[2] Mentalism is the branch of magic involving mind reading, making predictions, and so forth. Well-known practitioners include Max Maven, Bob Cassidy, and Derren Brown.
[3] Not exactly but you get my point.