By B.K. DeLong
Following the Apple
iPhone location-tracking conversation, I’ve thought of another interesting
point that hasn’t quite been raised or examined, similar to the issue of turning
high-profile executives at Fortune 500 firms into potential high-value targets
simply through the email addresses and other information contained in the Epsilon breach.
Based on Apple’s CoreLocation framework, anyone can create
a third-party application that obtains geolocation data using
whatever services are available or set up on one’s iPhone – cellular, Wi-Fi and
GPS. There is no real limit on how often an application can pull this
information from those services, other than the fact that these operations can
be very power-intensive, draining the battery, and can take a couple of seconds to
establish a location through cell-tower triangulation, GPS satellites or Wi-Fi
hotspots. The “significant-change location service” is a low-power option for
devices with cellular radios and can “wake up” apps that are suspended or not
running, but it is only available on iOS 4 devices.
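As a sketch of how little friction is involved, here is roughly what subscribing to location updates looks like with CoreLocation. It is shown in modern Swift for readability rather than the iOS 4-era Objective-C of the time; the class name is illustrative, but the CLLocationManager calls are the framework’s own:

```swift
import CoreLocation

// Illustrative class name; CLLocationManager and its delegate
// methods are the real CoreLocation API.
class LocationTracker: NSObject, CLLocationManagerDelegate {
    let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        // Continuous high-accuracy updates: precise, but battery-intensive.
        manager.desiredAccuracy = kCLLocationAccuracyBest
        manager.startUpdatingLocation()

        // Alternatively, the low-power significant-change service,
        // which can relaunch a suspended or terminated app:
        // manager.startMonitoringSignificantLocationChanges()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        if let loc = locations.last {
            // Nothing in the framework limits how often an app reads
            // this data, or where it sends it afterward.
            print("lat \(loc.coordinate.latitude), lon \(loc.coordinate.longitude)")
        }
    }
}
```

Once the user has granted location permission, every update flows through the app’s own delegate code, and what happens to it from there is entirely up to the developer.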
So of all these third-party applications using
location services – how much data are they storing, and for how long? Are they profiling usage patterns
before, perhaps, passing that information on to their own advertisers? Earlier this month, application
security firm Veracode discovered libraries from at least five advertising
firms embedded in the Android application for the Pandora music service,
seeking location data from the phone. It
turned out the Pandora app wasn’t collecting the location data, but had it
been, those requests would have been fulfilled. Not only do you have to worry about the
companies running the apps collecting and holding onto your location data, but
about all their advertisers as well. The same libraries were in the iPhone version
of the Pandora app, the company asserted, when
it later announced it had removed them and re-released both versions of the app.
I spoke with Christien Rioux, chief scientist of Veracode,
regarding some of these theories and he pointed out further challenges.
“Once you accept the terms of an iPhone app, users rarely go
through the permission restrictions and choose the manifest items
they want to allow, such as location services or camera access,” explained Rioux. “Even
worse, you end up giving carte blanche to all updates for that app. So while
the first version may have nothing worth blinking at, future versions may have
concerning features some might want the option to block.”
Apple iOS devices have been steadily charging their way into
the enterprise for the last three years. For the most part, security and IT risk
management teams have been able to hold to corporate policy and keep them from
being used, given the challenge of securing them. But in some situations, they’ve
lost. Who was the culprit? In most cases, senior executives who bought an iPhone
or iPad and used it until it became part of their daily lives,
then felt they needed to be able to use it on the company network, in a
work environment. C-level officers often don’t want to deal with
multiple phones if they can help it.
I’ve spoken with many security practitioners over the past
few years about their experiences as the iPhone permeated the enterprise. Many
of them asked for insights on how to put security in place to mitigate data
breaches, and about the difficulty of having any sort of controls to prevent
significant impact on the organization’s risk posture. I’ve noticed the iPhone has
become a device that, in many cases, can no longer be excluded from the
corporate environment despite being one of the hardest smartphones to secure.
Since top executives are often early adopters of high-tech
gadgetry, many can be found using location-aware apps like Foursquare, Google
Maps, TripIt or Twitter. Using Google and social network sites, criminals can
discover their target’s role at the company and what they manage or
control there – which critical business assets are under their purview.
Once a correlation has been drawn to a high-profile target
having such a device, doing Web-based due diligence to craft a
spear-phishing or simple social-engineering attack to get them to install a
compromised app from the store isn’t that difficult. After all, if it’s just
collecting location information, how harmful could it be?
How harmful could it be if you had the ability to track an
executive down to 10 meters with GPS over several weeks to a month, and to use
those very third-party apps that are posting what they’re doing in
those locations to build a very accurate profile of their daily life?
The more savvy and organized criminal organization could simply
invest in or create its own advertising groups and pay to be embedded in some
of these apps. And when it really comes down to it, all someone would need is a
few minutes with the phone to install their own known-compromised app and hide it.
The worst persistent threat is the one you never know you
have, the one that remains within the organization’s network until the attackers have
everything they need.
In this case, once the criminal’s profile is complete, they
could tell when a target gets to his office, the various places he visits in the
building (depending on whether the app gathers GPS altitude data), when he’s
home (confirming both work and home addresses with third-party sites such
as LinkedIn and Intelius), and what other locations he visits on a regular
basis via the aforementioned social location-based apps.
With that information, any criminal has what they need for
nearly any high-value asset robbery, be it digital (intellectual property,
customer data, corporate secrets), physical (prototypes, expensive luxury items
from the home including electronics to high-end cars), or personal.
Far-fetched and movie-worthy? Some may say as much. But is it really all that
hard to execute if someone wants to in this day and age? The amount of
information the digital-native generation – especially those who are now
corporate executives – is putting onto the Net about themselves is huge and
easily findable outside a Web of Trust. Something to ponder as we become a more
digitally converged society and our privacy continues to dwindle.
In the meantime, there are a few things you can do. Be
more aware of what you’re allowing an app to have access to – check the
permissions after you’ve downloaded it and be willing to turn off things
you don’t want it to access. Remember
that it’s not just the company that made the app that gets access to the
information you’re sending it – very often, many of the advertisers who are
helping pay for you to use it for free get it as well. And try
to be more conscious of what you’re putting out about yourself on the Web.
Take advantage of many social networks’ ability to lock down your privacy to a
group of people you trust. The less that people you don’t know can find out
about you, the less can be used against you.
B.K. DeLong is an independent security analyst based in the Boston area.