CAT | Computer Security
Ars Technica reports on the discovery of signed malware designed for surveillance, found on the Mac laptop of an Angolan activist.
The malware was a trojan that the activist received through a spear-phishing email attack. The news here is that the malware was signed with a valid Apple Developer ID.
The idea is that requiring all code to be signed should substantially reduce the amount of malware on the platform. This works because creating a valid Apple Developer ID requires significant effort, and may expose the identity of the attacker unless they take steps to hide it. That is not trivial, since a Developer ID requires contact information and payment of fees.
The second advantage of signed code is that the developer's certificate can be quickly revoked, so the software will be detected as invalid and automatically blocked on every Mac worldwide. This limits the amount of damage a given piece of malware can do, and forces the attacker to create a new Apple Developer ID every time they are detected.
This has been seen to work fairly well in practice, but it is not perfect. If a target is valuable enough, a Developer ID can be set up just to go after that one person or small group. Because the malware is targeted at just them, the likelihood of detection is low. In that case, it would continue to be recognized as a legitimately signed, valid application for a very long time.
In the case of the Angolan activist, the malware was discovered at a human rights conference where the attendees were learning how to secure their devices against government monitoring.
The latest Java exploit has given another view into the workings of the cybercrime economy. Although I should not be, I am always startled at just how open and robustly capitalistic the whole enterprise has become. The business is conducted more or less in the open.
Krebs on Security has a nice piece on an auction selling source code to the Java exploit. You can see that there is a high level of service provided, along with some warnings about how to ensure that the exploit you paid for stays valuable.
The Washington Post has a good, thorough treatment of social engineering attacks.
Short answer, humans are the weak link, and can be defeated with extremely high probability.
The takeaway from this whole thing is that we need to build security systems that don't rely on humans never being tricked into compromising their own security. A lot of security architects take a "blame the victim" stance, but users have other things to worry about than security. We need to make sure security happens even when they are not paying attention to it.
Forbes is reporting that Anonymous and AntiSec have dropped a file with a million Unique Device Identifier (UDID) numbers for Apple iOS devices. They claim to have acquired an additional 11 million records which they may release later.
In addition to the identifiers, the file is said to also contain usernames, device names, cell numbers, and addresses. It is this additional personal information that seems to be the real threat here.
The Next Web has set up a tool for checking to see if your information is in the leaked data. You don’t need to enter your full UDID into the field, just the first 5 characters. That way you don’t need to trust them with your information either.
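The same prefix idea is easy to sketch locally if you have downloaded the dump yourself: matching only the first five characters of your UDID against the file means you never have to type the full identifier into anyone's web form. A minimal sketch, where the sample records and any file name are hypothetical stand-ins, not actual leaked data:

```python
# Sketch: check whether a UDID prefix appears in a leaked dump.
# The sample records below are made-up illustrations; in practice the
# lines would come from the downloaded file itself.

def prefix_in_dump(prefix, lines):
    """Return True if any record starts with the given UDID prefix."""
    prefix = prefix.lower()
    return any(line.lower().startswith(prefix) for line in lines)

# Stand-in for lines read from the downloaded file.
sample = [
    "a1b2c3d4e5f60718293a4b5c6d7e8f9012345678,<token>,Lance's iPad,iPad",
    "ffeeddccbbaa99887766554433221100aabbccdd,<token>,Work iPhone,iPhone",
]

print(prefix_in_dump("a1b2c", sample))  # True
print(prefix_in_dump("00000", sample))  # False
```

Five hex characters are enough to check for a likely match without revealing anything useful about your device to a third party.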
None of my iOS devices showed up on the list, so I downloaded the entire file to look it over. You can see the release and download instructions here.
Looking through the document, I don't see any examples of particularly sensitive information. The first field is the claimed UDID. The second field is a 64-digit hex string. After that comes the name of the device, frequently something like "Lance's iPad". Finally there is a description of the device itself: iPad, iPhone, iPod touch.
SHA-256 hashes are 64 hex digits long, and are widely used in forensics to verify that captured evidence has not been changed. My intuition is that something like that is what we are seeing in that second column.
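That intuition is easy to sanity-check: a SHA-256 digest rendered in hex is always exactly 64 characters, so any 64-digit hex field is at least consistent with being such a hash (or, as it turned out, a token of the same length). A quick check against a made-up sample record in the format described above:

```python
import hashlib
import re

# A hypothetical record: UDID, 64-hex-digit field, device name, device type.
record = (
    "a1b2c3d4e5f60718293a4b5c6d7e8f9012345678,"
    "5f4dcc3b5aa765d61d8327deb882cf995f4dcc3b5aa765d61d8327deb882cf99,"
    "Lance's iPad,iPad"
)

second_field = record.split(",")[1]

# A SHA-256 hex digest is always 64 hex characters.
digest = hashlib.sha256(b"example").hexdigest()
print(len(digest))                                         # 64
print(bool(re.fullmatch(r"[0-9a-f]{64}", second_field)))   # True
```

Of course the length match only shows the field *could* be a SHA-256 hash; it is equally consistent with any other 256-bit value encoded as hex.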
I have no idea where the claims about addresses and account names came from; I am not seeing anything like that.
It is interesting that Anonymous / Antisec claim that this data came from the hacked laptop of an FBI agent. This certainly raises big questions about why he would have this information on his laptop, and why the FBI has it at all.
While 12 million is a big number, it is a tiny fraction of the over 400 million iOS devices sold to date. Still, that would represent a shockingly wide dragnet if these are all being monitored in some way by law enforcement.
Of course, for all we know this list was captured evidence from some other group of hackers.
So, short answer (too late!), you probably don’t have anything to worry about here, but you might want to check to see if your device is in the database anyway.
UPDATE: It appears that the UDID may tie to more information than was immediately apparent. While Apple's guidelines forbid tying UDIDs to specific accounts, of course that happens all the time. My friend Steve shared a link with me to an open API from OpenFeint which can tie a UDID to personal information. Certainly there are others which would reveal other information. The existence of these, together with the leaked list of UDIDs, would allow an app developer to tie a user's real identity to their activity and use of the app on their iOS device.
UPDATE 2: I find it impossible to actually read documents from Anonymous and AntiSec, they are just so poorly written. It seems I missed their statement in lines 353-354 of the Pastebin where they say that they stripped out the personal information. The 64-digit block is actually the "Apple Push Notification Service DevToken". SC Magazine is reporting that the FBI is denying that the laptop was hacked or that they ever had the UDIDs.
There has been a lot of attention recently to the arrest of an alleged LulzSec hacker after his anonymity was compromised by the anonymity service he was using, HideMyAss.com. Some articles on the event are here, here and the provider’s explanation here.
The reason this company was able to compromise the privacy of their user was that they had logs of user activity. They know what IP address is assigned to each user and can use that to attribute any activity back to the real identity of the person behind the account.
The real problem with logs is that either they exist or they don't. You can't keep logs only for "bad users" but not for responsible "good users", because even if it were possible to identify them as such in advance, you would not find anything like agreement about who should fall into which category.
Many operators of privacy services, including myself, feel very strongly that such tools should be usable in countries like China to circumvent the censorship and surveillance there. Such actions are certainly illegal for the user, and probably for the provider. HideMyAss is a UK company that says it responds only to UK court orders, yet it was "forced" to expose the identity of a person in the US, who was then arrested by the FBI.
I don’t know enough about this case to debate whether or not this person is guilty or deserved to be arrested. My concern is that this case has demonstrated that anyone who can cause a UK court order to be served against this company can expose its users. It also makes the company a target for hacking, social engineering, infiltration, and other attacks which could gain access to these logs without a UK court order.
As a general rule, if information exists and people want it, there is a very good chance it will escape, if only by accident.
I founded this company, Anonymizer.com, and I personally stand behind our services. We have clear privacy policies, we keep no logs of the surfing activities of our users, we have no way of identifying what user may have visited what website. We have an unblemished record of providing robust privacy since 1995.
As I have said in many previous posts, it all comes down to trust. If you don’t know who is providing the service, and don’t have the ability to research their history and gauge their integrity, you should not use that service.