This is episode 14 of the Privacy Blog Podcast for November, 2013.
In this episode I talk about:
How your phone might be tracked, even if it is off
The hidden second operating system in your phone
Advertising privacy settings in Android KitKat
How Google is using your profile in caller ID
and the lengths to which Obama has to go to avoid surveillance when traveling.
This article got me thinking: People’s ignorance of online privacy puts employers at risk – Network World
There is an interesting paradox for security folks. On the one hand, almost two-thirds of people feel that security is a matter of personal responsibility. On the other hand, few are actually doing very much to protect themselves.
In the workplace we see this manifest in the BYOD (bring your own device) trend. Workers want to use their own phones, tablets, and often laptops. Because it is their personal device, they don’t think the company has any business telling them how to secure it, or what they can or can’t do with it. Yet they want to be able to work with the company’s documents and intellectual property, and access company sensitive networks from that device.
When that trend intersects with the poor real-world security practiced by most people, the security perimeter of businesses becomes both larger and weaker.
Realistically, it is too much to expect that users will be able to fully secure their devices, or that security professionals will be able to do it for them. The productivity impact of locking users out of the devices they use (whether BYOD or company provided) is often too high, especially in the case of technical workers. Spear-phishing attacks eventually penetrate a very high fraction of targets, even against very sophisticated users. How, then, can we expect average, or below-average, users to catch them, and catch them all?
Increasing use of sandboxing and virtualization is allowing a change in the security model. Rather than assuming the user will detect attacks, the attack is encapsulated in a very small environment where it can do little or no damage, and from which it is quickly eliminated and prevented from spreading. The trick will be to get people to actually use these tools on their own devices.
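To make the "encapsulate, then discard" idea concrete, here is a minimal sketch of one piece of it: running untrusted work in a short-lived child process so that a hang or crash is contained and the environment is simply thrown away afterwards. This is an illustration only; real sandboxes (virtual machines, seccomp, app containers) enforce far stronger isolation than a bare subprocess. The `run_untrusted` helper name is my own, not from any of the tools discussed.

```python
# Sketch: contain untrusted work in a disposable child process.
# A hang or crash affects only the child, which is killed and discarded.
import subprocess
import sys

def run_untrusted(code, timeout=2):
    """Execute a code snippet in a disposable child process.

    Returns (returncode, output); returncode is None if the
    child exceeded the time limit and was killed.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.returncode, result.stdout.strip()
    except subprocess.TimeoutExpired:
        return None, "killed: exceeded time limit"

print(run_untrusted("print(2 + 2)"))       # (0, '4')
print(run_untrusted("while True: pass"))   # (None, 'killed: exceeded time limit')
```

The key design point is the same as in real sandboxing: the parent never trusts the child to behave, so misbehavior is bounded by the container rather than by the user's vigilance.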
OS News has an interesting article: The second operating system hiding in every mobile phone
It discusses the security implications of the fact that all cell phones run two operating systems. One is the OS that you see and interact with: Android, iOS, Windows Phone, BlackBerry, etc. The other is the OS running on the baseband processor. It is responsible for everything to do with the radios in the phone, and is designed to handle all the real-time processing requirements.
The baseband processor OS is generally proprietary, provided by the maker of the baseband chip, and generally not exposed to any scrutiny or review. It also contains a huge amount of historical cruft. For example, it responds to the old Hayes AT command set. That was used with old modems to control dialing, answering the phone, and setting the speed and other parameters required to get the devices to handshake.
It turns out that if you can feed these commands to many baseband processors, you can tell them to automatically and silently answer the phone, allowing an attacker to listen in on you.
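To show how little it takes, here is a toy model of the Hayes-era behavior in question: the `ATA` command answers the line immediately, and the classic `S0` register, when set nonzero, makes the "modem" answer by itself after that many rings. This is purely illustrative Python (the `ToyModem` class is mine); real baseband firmware is proprietary and not accessible this way, but the commands themselves are the historical Hayes ones.

```python
# Toy illustration of two legacy Hayes AT behaviors:
#   ATA     -> answer the line immediately
#   ATS0=n  -> register S0: auto-answer after n rings (0 = disabled)
# A baseband that honors such commands from an untrusted source can be
# told to pick up silently, turning the phone into a listening device.

class ToyModem:
    def __init__(self):
        self.registers = {"S0": 0}  # S0: rings before auto-answer (0 = off)
        self.off_hook = False       # True once the line has been answered

    def command(self, line):
        line = line.strip().upper()
        if not line.startswith("AT"):
            return "ERROR"
        body = line[2:]
        if body == "A":                    # ATA: answer immediately
            self.off_hook = True
            return "OK"
        if body.startswith("S") and "=" in body:  # ATSn=v: set register
            reg, _, val = body.partition("=")
            self.registers[reg] = int(val)
            return "OK"
        return "OK" if body == "" else "ERROR"

    def ring(self, count=1):
        # Auto-answer once the ring count reaches S0 (if S0 is nonzero).
        threshold = self.registers.get("S0", 0)
        if threshold and count >= threshold:
            self.off_hook = True

m = ToyModem()
m.command("ATS0=1")   # attacker enables silent auto-answer after one ring
m.ring()
print(m.off_hook)     # True -- the line was answered with no user action
```

The point of the sketch is that "answer the phone silently" was a designed-in modem feature, not an exploit; the vulnerability is that modern basebands still honor it and trust whoever is sending the commands.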
Unfortunately the security model of these things is ancient and badly broken. Cell towers are assumed to be secure, and any commands from them are trusted and executed. As we saw at Def Con in 2010, it is possible for attackers to spoof those towers.
The baseband processor, and its OS, generally sits above the visible OS in the phone's hierarchy, with higher privilege. That means the visible OS can't do much to secure the phone against these vulnerabilities.
There is not much you can do about this as an end user, but I thought you should know.
Based on a single line in a Washington Post article, Privacy International has been investigating whether it is possible to track cell phones when they have been turned off. Three of the eight companies they contacted have responded.
In general, they said that when the phone is powered down there is no radio activity, BUT that might not be the case if the phone has been infected with malware.
It is important to remember that the power button is not really a power switch at all. It is a logical button that tells the phone software that you want to turn the phone off. The phone can then clean up a few loose ends and power down… or not. It could also just behave as though it were shutting down.
They don’t cite any examples of this either in the lab or in the wild, but it certainly seems plausible.
If you really need privacy, you have two options (after turning the phone “off”):
1) If you can remove the phone’s battery, then doing so should ensure that the phone is not communicating.
2) If you can’t remove the battery (hello iPhone), then you need to put the phone in a Faraday cage. You can use a few tightly wrapped layers of aluminum foil, or buy a pouch like this one.