This sort of attack will become more common as banks require two-factor authentication:
Tatanga checks the user account details including the number of accounts, supported currency, balance/limit details. It then chooses the account from which it could steal the highest amount.
Next, it initiates a transfer.
At this point Tatanga uses a Web Inject to trick the user into believing that the bank is performing a chipTAN test. The fake instructions request that the user generate a TAN for the purpose of this "test" and enter the TAN.
Note that the attack relies on tricking the user, which isn't very hard.
Part of the reason it is so easy to trick users is that so many applications, from the user's point of view, are really inconsistent in the first place.
If anything out of the usual happens, I stop and make a call. To make this possible though, you have to plan withdrawals and so forth in advance and give yourself enough time for potential problems. (It is still faster than the old days of going to the bank, though, so it's often worth it.)
The big problem is that too often they know little or nothing about what may or may not be going on, even if you call support to check things out.
@billswift: worse -- even if I caught it, who am I going to tell? Anytime I've called my bank to request even the most basic of tech help, I'm served by a level-1 helpdesk operator who's qualified only to read scripts.
Is it another account at the bank? If so, then it could be tracked and returned.
Is it an account at another bank? For my bank, I have to preregister these external accounts. I suppose Tatanga could steal money in two steps: set up the external account on one day, and steal the money on another.
I don't know about this particular case, but I have done some on-line bill paying through a major US bank. Those require the user to provide a payee name, address, and account number.
If that payee doesn't have an account set up at $BigUSBank, or have an electronic-payment agreement with them, then $BigUSBank prints a check and sends it in the mail. (If the payee does have such an account, 1-day electronic transactions are typically done.)
Theoretically, if a MITM attack were made, the attacker could simply change the address/account/routing method for the payee. They might even return the address data to its original state after the payment is allowed.
Of course, $BigUSBank doesn't use TAN at all...which does worry me somewhat.
(I use NoScript in FireFox to increase protection against script-based attacks. But maybe I should go back to scribbling my signature on a bank-supplied piece of paper for making such payments...)
Not that it matters much for the people doing this, but I have been thinking and would like to make extremely clear: MITM attacks are a fraud and a lie, and by carrying one out you are propagating falsehoods.
To me, there is no lower feeling of worth than walking around with money you received through a lie and falsehood.
MITM attacks also strike a chord w/ me because I have been stolen from; and I don't know who did it. It's probably better that I don't. Plus, thieves be wary. My experimentation is taking a turn, I am now setting traps for would-be thieves, and I operate under my own protocols.
"Implementing endpoint protection against advanced malware like Tatanga, Zeus, and others, is the only way to make sure that the integrity of second factor security measures like chipTAN are not compromised." How about user training, and notices about when and how the user will be asked to provide what data?
Most likely it is transferred to a money mule. Then it gets wired out of the country and is much more difficult to trace at that point. Brian Krebs's site has a lot of good info on money mule operations.
This could be solved if the bank spent some extra money on actual out-of-band security measures. The bank could hold the transfer until live phone verification is done with the customer. In this case, the bank representative would call the customer, verify some personal information and explain the dollar amount and recipient address on the phone. An automated system would be a less costly stopgap, with slightly less security.
As soon as the user received the call about a money transfer, it would be quite clear that the chipTAN 'test' was not a test at all and instead was a MITM attack and the user would have no reason to approve the transfer.
Of course this type of confirmation costs money to the bank and slows down 'instant' transfers. But banks are actually quite good (or should be quite good) at determining the risk of such transactions and would only need to do this when the transaction did not match against the user's history.
Also as a user I would be even willing to pay a verification fee (small fee, like 50 cents) per transfer above some customer defined threshold (say $500). I don't make money transfers above $500 very often, and when I do I'd be more than willing to pay 50 cents to have a human verify it with me. In fact the cost could/should be shared between the bank and the transfer initiator.
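The opt-in fee idea above can be sketched in a few lines. The threshold and fee values are the commenter's examples, and the function name is made up for illustration:

```python
# Toy sketch of the opt-in verification idea: transfers above a
# customer-defined threshold are held for live phone verification,
# for a small fee. Values are the examples from the comment above.
VERIFY_THRESHOLD = 500.00  # customer-defined, in dollars
VERIFY_FEE = 0.50          # cost of a human verification call

def process_transfer(amount: float):
    """Return (action, fee) for a requested transfer."""
    if amount > VERIFY_THRESHOLD:
        # Out-of-band check: a representative calls the customer
        # and reads back the amount and recipient before release.
        return ("hold_for_phone_verification", VERIFY_FEE)
    return ("execute", 0.0)

print(process_transfer(50.00))    # small transfer goes straight through
print(process_transfer(2000.00))  # large transfer is held for verification
```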
The real problem is that although it might be "two factor" authentication, what you are not actually doing is,
Two-way authentication on each transaction, via a reliable out-of-band side channel which includes the user as part of the authentication process.
Therefore, as the bank set up the system, it's the bank's failure, not the user's.
I've been saying this for longer than this blog has been around, back into the last century; in fact it's getting on for the better part of twenty years... That is, back before UK banks used the "Internet" for "Electronic Banking" (yup, you used to have to 'dial in' via modems to their computer systems...).
If the authentication is not fully secure and two-way through the user, then the opportunity for fraud will exist, and where there is an opportunity it will eventually be exploited.
And as we have seen the bank fraudsters are getting better at what they are doing.
The important thing to keep in your head is that it is the bank the fraudsters are attacking, not the user. The fraudsters are in all cases using weaknesses in the bank's systems to carry out the attack. The banks, however, want you to think it's the user's fault, because they can then pretend it is not their systems, procedures and management that are at fault. Bruce calls it "externalising the risk"; I'd prefer to call it what it is, "deliberate and criminal fraud by the banks" via their "willful negligence" in failing to implement the procedures properly.
The fact that the legislators are happy to turn a blind eye to the problem because they are in effect "on the take" one way or another either directly or indirectly via the Banks "paid shills" does not help...
But it gets worse: because of the banks' "willful negligence" they have quite deliberately created a series of "faux markets", the cost of which you have to pay, not the banks...
In the Netherlands you can transfer money to ANY account, anywhere in the world, via your bank's internet banking website. I regularly transfer money to accounts in Europe, and I have transferred money to Indonesia and Australia as well. You don't have to pre-register any account you want to transfer to.
Transfers to accounts within the same country are all completed the next day, if it is within the same bank it can even be instant. International transfers take a bit longer, but not much. Especially with instant transfers you have the problem that by the time the user notices the fraud, the money has been withdrawn from an ATM (possibly in another country) already.
The sheer number of transactions makes phone verification pretty much impossible; pretty much all transactions in the Netherlands are done via internet banking, including payments for goods bought at some webshop or other.
My bank uses SMS authentication, where it sends the total amount of the transactions you are signing for, together with the authorization code to your mobile phone. If the amount is high enough, you have to authorize that single transaction separately and you get the full transaction details.
To get around this, they use a MITM attack where they display a message from the bank, stating that they accidentally transferred a large amount of money to your account and that you are requested to transfer it back. This way, the user EXPECTS to see the transfer SMS, because he just entered that. The MITM trojan also modifies the payment overview screen, so it looks like you are not missing any money.
Once the attacker has the browser, the bank need never see the legitimate transaction that the user intends to make. So all the authentication in the world will make no difference (unless the user has a known-honest device that can authenticate the ostensible return messages from the bank).
And it's a pretty basic rule that an out-of-band verification call or SMS *from* the bank isn't going to help -- it makes the theft more expensive to fake that call and the response to the bank's call, but probably not by enough to make the scam unprofitable.
"So all the authentication in the world will make no difference (unless the user has a known-honest device that can authenticate the ostensible return messages from the bank)"
It's known as an "end-run" attack, and it is important to realise that the "honest device" is not sufficient on its own. What happens is the attacker places themselves between the user and the actual end of the authentication chain. That is, on a PC-only system the authentication chain does not actually include the PC display, therefore a "driver shim" can manipulate what the user sees after the end of the authentication chain.
Thus, if you think about it, as long as the attacker has control over what the human sees, every security measure upstream of that display is negated. Two further things are required to prevent this negation,
1, The "honest device" is the last step in the chain.
2, The chain includes the user's brain prior to the "honest device".
Therefore the "honest device" has to be the last device in the authentication chain, which means it needs its own display and some kind of human-usable input method. Because the human should be in the authentication chain, logically they need to be the next step up the chain; therefore the "honest device" input needs to be a keypad, so that what is displayed on the PC display gets translated by the human brain into key presses.
The "honest device" or token also contains a Real Time Clock and a "secret" unique to the user and known only to the token and the bank.
So the steps are,
1, The user opens a session to the bank on the PC.
2, The user then requests an authenticator code from their token.
3, The token uses the RTC, a nonce and the secret to create the authenticator code and displays it on the token's screen.
4, The user types their user ID and the authenticator code into the PC and sends them to the bank.
5, The bank checks the authenticator code and also ensures that the time from the token's RTC is approximately current and later than the last time sent by the token to the bank.
6, The bank then uses the secret to recover the nonce, performs some transformation on the nonce, and uses the secret to send the transformed nonce back to the PC display.
7, The user types the returned bank authenticator code into the token.
8, The token produces a "go / no go" code on its display.
At this point the users token and the Bank have authenticated to each other. And as can be seen by the workload on the user this is an involved process.
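The session handshake above can be sketched with an HMAC construction. Everything here is illustrative: real tokens use vendor-specific schemes, and in the real protocol the bank would recover the nonce from the wrapped authenticator rather than being handed it directly, as this sketch does for brevity:

```python
# Hypothetical sketch of the token <-> bank mutual authentication steps.
# SECRET, the function names, and the 8-hex-digit codes are all made up.
import hashlib
import hmac
import os
import struct
import time

SECRET = b"shared-secret-known-only-to-token-and-bank"  # provisioned per user

def token_authenticator(secret: bytes, rtc_now: int, nonce: bytes) -> str:
    # Step 3: the token binds its RTC time and a fresh nonce to the secret.
    msg = struct.pack(">Q", rtc_now) + nonce
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:8]

def bank_response(secret: bytes, nonce: bytes) -> str:
    # Step 6: the bank transforms the nonce (here, a bit-flip) and MACs it back.
    transformed = bytes(b ^ 0xFF for b in nonce)
    return hmac.new(secret, transformed, hashlib.sha256).hexdigest()[:8]

def token_go_no_go(secret: bytes, nonce: bytes, bank_code: str) -> bool:
    # Step 8: the token recomputes what an honest bank should have sent.
    return hmac.compare_digest(bank_response(secret, nonce), bank_code)

nonce = os.urandom(8)
user_code = token_authenticator(SECRET, int(time.time()), nonce)  # typed into the PC
bank_code = bank_response(SECRET, nonce)                          # shown on the PC screen
assert token_go_no_go(SECRET, nonce, bank_code)                   # token shows "go"
```

A forged bank reply fails the final check, which is exactly the "go / no go" property the token's own display provides.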
The user then carries out other activities with the bank without having to use the token, until they actually want to perform a financial transaction,
A, The user selects the request for a transaction from the PC page, which pulls up the appropriate display on the PC screen.
B, The user then types the transaction information into the token.
C, The token then produces the base transaction code, along with a new nonce and RTC value, all wrapped by the secret.
D, The user types the base transaction code produced by the token into the form displayed on the PC and sends it off to the bank.
E, The bank unwraps the transaction details and new nonce, performs the check steps and nonce transformation, wraps it up with the secret, and sends it back to the PC the user is using.
F, The user types the returned code into the token, and the token checks all the details; if all is OK, it displays a new Acknowledge code based on the transaction details, nonce, etc.
G, The user types the Ack code into the PC and sends it to the bank.
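The transaction steps can be sketched the same way. The crucial property is that the amount and destination are typed into the token itself, so a trojan that alters the PC form cannot obtain a valid code for its own transaction. Again, this is a hypothetical HMAC-based construction with made-up names, not any real product's scheme:

```python
# Hypothetical sketch of the transaction authentication steps above.
import hashlib
import hmac
import os

SECRET = b"per-user-secret-shared-with-the-bank"

def transaction_msg(dest: str, amount_cents: int, nonce: bytes) -> bytes:
    return f"{dest}:{amount_cents}".encode() + nonce

def token_sign(secret: bytes, dest: str, amount_cents: int, nonce: bytes) -> str:
    # Step C: the token MACs the details the *user* typed in, not the PC form.
    return hmac.new(secret, transaction_msg(dest, amount_cents, nonce),
                    hashlib.sha256).hexdigest()[:8]

def bank_countersign(secret: bytes, dest: str, amount_cents: int, nonce: bytes) -> str:
    # Step E: the bank transforms the nonce (here, reversal) and countersigns
    # the transaction details it actually received.
    return hmac.new(secret, transaction_msg(dest, amount_cents, nonce[::-1]),
                    hashlib.sha256).hexdigest()[:8]

def token_ack(secret: bytes, dest: str, amount_cents: int, nonce: bytes, bank_code: str):
    # Step F: the token only produces an Ack code if the bank saw the SAME
    # transaction the user typed in; otherwise something was altered in transit.
    if not hmac.compare_digest(bank_countersign(secret, dest, amount_cents, nonce),
                               bank_code):
        return None
    return hmac.new(secret, b"ACK" + nonce, hashlib.sha256).hexdigest()[:8]

nonce = os.urandom(8)
user_code = token_sign(SECRET, "DE89370400440532013000", 50000, nonce)
bank_code = bank_countersign(SECRET, "DE89370400440532013000", 50000, nonce)
assert token_ack(SECRET, "DE89370400440532013000", 50000, nonce, bank_code) is not None
assert token_ack(SECRET, "attacker-account", 50000, nonce, bank_code) is None
```

If a man-in-the-browser substitutes its own destination account, the countersign no longer matches what the user typed into the token, and no Ack code is produced.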
As can be seen, the level of typing is becoming excessive, and is easily prone to human error, token wear, etc. However, anything less would not be securely authenticated...
And it is this level of "user hassle" that makes full two-way transaction authentication impractical in practice. This is unfortunate, as it is the necessary minimum to close the attack holes.
One easy way to stop this kind of attack is for your bank to tell you that every transaction you perform on your account is real, and that they will never ask you for your credit card PIN or ask you to perform any operation with your money for testing, error correction or any other purpose.
THEY are the ones who have full access to all accounts. They don't need you to perform operations for them. They are paid to do it for you.
Why not detect whether a banking trojan is resident and active on the endpoint device in the first place? If the bank knows this, the risk level of that login turns high, and money transfers can be disabled for that session until a session is logged from a 'clean' device. There are already solutions to achieve this. A back-end risk-scoring fraud detection system will also work in detecting/preventing out-of-norm transfers.
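A back-end risk score of the kind described could, in its simplest form, compare each transfer against the user's own history. The weights, threshold, and function name below are invented purely for illustration:

```python
# Toy out-of-norm transfer scoring: a real system would use many more
# signals (device fingerprint, geolocation, payee reputation, etc.).
from statistics import mean, pstdev

def risk_score(history_amounts, amount, new_payee: bool) -> float:
    """Return a score in [0, 1]; higher means the transfer looks out-of-norm."""
    score = 0.0
    if new_payee:
        score += 0.4  # first-ever transfer to this payee
    if history_amounts:
        mu = mean(history_amounts)
        sigma = pstdev(history_amounts) or 1.0
        if amount > mu + 3 * sigma:
            score += 0.6  # far above this user's usual amounts
    else:
        score += 0.3  # no history at all is itself unusual
    return min(score, 1.0)

history = [40.0, 25.5, 60.0, 33.0, 45.0]
print(risk_score(history, 35.0, new_payee=False))   # typical transfer: 0.0
print(risk_score(history, 5000.0, new_payee=True))  # out-of-norm: 1.0
```

A transfer scoring above some cutoff would then trigger the out-of-band verification discussed earlier, rather than being executed instantly.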
MITM attacks have become very strong, and because of their precise interception and injection of false information, it's difficult to suspect any fraud. The most we can do is try to verify any unusual request from the bank by calling up. It's also important because the 'honest device' is the last link in the authentication chain, as mentioned by Clive.
I wonder what makes us characterize a device as being honest.
The simple answer (but technically unhelpful) is,
Because it is not cheating us, nor can it be used to cheat us.
Technically, it has to be "functionally correct at all levels", "not capable of being modified" and "only usable within the design scope".
In practice we cannot even reasonably meet any of these requirements, due to practical limitations. For instance, we can encapsulate electronics in a way that makes attempts to modify them highly evident via tamper-evident seals etc, but even these can be bypassed by a sufficiently skilled adversary given sufficient time and access to the device.
The best way to cheat a user is to show the customer a new transaction in the browser that doesn't actually exist, while the "bank" displays a popup saying that someone made a wrong transfer to your account and has complained, and that you must return the money to the sender to unlock your account. (Since this does happen to people, they believe it.) This method works perfectly on naive users (more than 50%), and there is no way to prevent it without improving the knowledge of all customers (which is impossible).
Interesting to read people's views on all of this. At the end of the day, the only thing that can protect you from what I would class as MitB attacks (man-in-the-browser, rather than between your machine and the bank) is user knowledge. Out-of-band, biometrics or back-end risk profiling are all techniques of authentication or validation that work, but they are heavily reliant on the user knowing what, when and how they use their security credentials and devices. As I see it, the biggest issue with that, other than the elephant in the room that users for the most part simply don't care, is the fact that across the industry there is no consistency. Security methods and usage differ and are always in flux, so how can we expect end users to know better??
We shouldn't give in, but users need to know they are responsible for their own security, and banks need to give customers all the tools and education needed to stay safe.
One of the best systems I have seen that could be used for banking security is this: [www.passwindow.com] (no connection to them, I just like their product). It works on any device, from a smartphone to an internet kiosk in a shopping center. It's cheap (cheaper for the banks than things like the random-number-generator keyrings I have seen), and secure against any kind of malware stealing your user details, as well as most social engineering attacks. Depending on how the server-side part (the "challenge" part of the interface) is implemented, it can be made difficult to display the "challenge" whilst obscuring the fact that the user is being asked to authenticate a big transaction.