Reliable and secure mobile communications are a must for any modern organization, be it a company, a government agency, or an NGO. As things stand, the choice essentially comes down to Google’s Android platform or Apple’s iOS-based iPhones. At first glance, the iPhone appears much safer: restrictions on third-party programs; a single, tightly controlled marketplace; a fraction of the malware found elsewhere… But let’s dive deeper to see if this is really the case.
Is iOS really that secure?
News about malware infecting Apple devices has become commonplace in recent years, thanks largely to the “legal surveillance software” Pegasus. But because Pegasus’s victims were mainly activists, politicians and journalists, the threat was treated more as an urban legend: nasty, yes, but so rare and targeted that the chances of actually encountering it were tiny (unless you went looking for it). Then it came knocking on our door: in June of this year, we reported an attack on Kaspersky management using the Triangulation malware. (Incidentally, at the upcoming Security Analyst Summit we plan to present a detailed analysis of this attack; if you’re interested, join us.)
Our company, a privately owned corporation that used iPhones as its standard means of mobile communication, came under attack. After carrying out a thorough investigation, we released the triangle_check utility to automatically search for traces of infection, and set up a mailbox where victims of similar attacks could write to us. And the emails poured in: other users of Apple smartphones reported that they, too, had found signs of infection on their devices. Trust us: we no longer perceive targeted attacks on iPhones as rare, isolated cases.
The illusion of security
Paradoxically, the oft-repeated assertion that iOS is hands-down more secure than Android only makes the situation worse. Public denial of the threat causes people to take their eye off the ball. They say to themselves, “Sure, someone got infected, but chances are I won’t.”
Even some of our colleagues (hardly strangers to information security) refused to believe they had been “Triangulated”. After the threat was publicized, some still had to be persuaded to check their iPhones for traces of the malware, and were genuinely surprised to learn that they had been targeted.
The thought “Why would anyone hack me?” is comforting but dangerous. There could be many reasons. You don’t have to be an interesting target yourself to have your phone hacked: it’s enough to be related to a top executive or government official, to attend the right meetings, or simply to be physically near the real target of the attack. Then one day you suddenly find yourself in the firing line because important business information has leaked from your device.
The real problem
A closer look at the vulnerabilities market (be it darknet forums or gray platforms like Zerodium) reveals that iOS and Android exploits now fetch roughly equal prices, which indicates how the attacker market rates these systems’ security. Some Android exploits are even more expensive than their iOS counterparts. Either way, both systems are viable targets.
The real difference lies in the availability of tools for countering attacks. If attackers exploit the latest zero-day vulnerability to bypass Apple’s vaunted security mechanisms, there’s nothing you can do about it. Most likely, you won’t even know it happened. Due to system restrictions, even top professionals will have a hard time getting to the bottom of exactly what the attackers were after. Meanwhile, an Android-based smartphone can be equipped with a full-fledged security solution: not only an antivirus, but also an MDM (mobile device management) solution that allows remote administration of corporate devices.
Getting even more granular, we see that iOS’s reputed advantages actually turn out to be disadvantages in the event of an attack. The closed nature of its ecosystem, off limits to outside security experts, only plays into attackers’ hands. Sure, Apple’s engineers have built pretty good foolproof protection: the user can’t accidentally visit a malicious site and download a trojanized app, say. But in the case of an iPhone hack (which, as practice shows, is well within the capabilities of sophisticated attackers), victims can only hope that Apple itself comes to the rescue. Assuming, of course, that it detects the hack in good time.
The scale of the threat
The argument that all real-life attacks on iOS so far have been part of targeted campaigns is no more reassuring. It’s generally accepted that the EternalBlue exploit was developed by a government agency and intended for very narrow use. But after being leaked by the Shadow Brokers group, it fell into cybercriminal hands and was used to carry out the global WannaCry ransomware attack.
Even Apple’s marketplace can no longer be considered impregnable. Our colleagues recently found a number of scam apps in the App Store that, under certain conditions, phished users’ personal data. It’s not yet a mass threat, but it sets a precedent: apps carrying a malicious payload managed to bypass Apple’s stringent controls and get published in the official marketplace.
What to do?
Having learned the Triangulation lesson, we, like many other private companies and government agencies, are phasing out iPhones for work purposes. As an alternative, for now we’re using Android devices equipped with our own security solution, which we know to be effective. This doesn’t mean we consider Android harder to attack; just that it’s simpler to protect, and far easier to detect signs of an attack on.
This is not a permanent solution — an add-on to an OS is not ideal. A security solution operates on the principle of acquired immunity: it protects against threats similar to ones already encountered. In a perfect world, everyone would have a mobile phone with innate immunity, which makes unintended actions impossible by design. Alas, there’s no such phone… yet.