Editor’s Note: The following article is reprinted from Network World.
AT&T’s failure to protect iPad e-mail addresses spotlights the kind of security issues facing enterprise smartphone deployments, according to a company that specializes in software security.
Enterprise security staff should take away four lessons from the AT&T affair, says Dan Cornell, CTO and co-founder of Denim Group, which works with companies to secure software, including a growing number of smartphone applications. He offered his comments in a blog post on the company’s Web site.
The AT&T breach was initially exposed by Gawker.com, drawing on information from a hacking group calling itself Goatse Security. The hackers learned that they could send an HTTP request to AT&T’s public Web site containing an iPad User-Agent header and a valid Integrated Circuit Card Identifier (ICC-ID), which uniquely identifies a SIM card. In response, the Web site returned information about an Apple iPad 3G user, specifically, the e-mail address submitted by that user when activating the iPad according to Apple’s requirements.
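The mechanics of the probe can be sketched in a few lines. The endpoint URL and ICC-ID below are invented placeholders, not the real AT&T address or a valid identifier; the point is only that the request is an ordinary HTTP GET whose headers are chosen entirely by the client.

```python
from urllib.request import Request

# Hypothetical endpoint, used purely for illustration.
ENDPOINT = "https://example.com/ipad/lookup"

def build_probe(icc_id: str) -> Request:
    """Build (but do not send) the kind of request the hackers described:
    a plain HTTP GET carrying an iPad User-Agent and a candidate ICC-ID."""
    return Request(
        f"{ENDPOINT}?ICCID={icc_id}",
        headers={
            # The User-Agent is set freely by the client; it proves
            # nothing about what device actually made the request.
            "User-Agent": "Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X)"
        },
    )

req = build_probe("89014100000000000000")  # dummy, well-formed-looking value
```

Because ICC-IDs were assigned sequentially, an attacker who had one valid value could loop over neighboring values and harvest a response for each.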
This breach was limited to iPad 3G users (though these included a high-profile group drawn from entertainment, high tech, government and the military) and apparently only the e-mail address was returned. The danger seems to have been limited to spam or, possibly, phishing attacks against the exposed addresses.
According to the original Gawker story, the Goatse Security hackers “notified AT&T.” The carrier, in a brief written statement on which a spokesman declined to expand, flatly denied this. “The person or group who discovered this gap did not contact AT&T,” the statement read. Instead, “AT&T was informed by a business customer on Monday [June 7] of the potential exposure of their iPad ICC IDs. The only information that can be derived from the ICC IDs is the e-mail address attached to that device.” The carrier “essentially turned off the feature that provided the e-mail addresses” by the following Tuesday.
According to Cornell, there are four lessons to be learned from this incident about creating secure smartphone applications.
First, effective authentication and authorization are crucial if you’re exposing to users any server resource that deals with sensitive data. Users have to be authenticated as being who they claim to be, and then authorized to access the data being requested.
“We have seen most folks we work with get pretty good about this for Web pages and OK about it for AJAX/RIA [Rich Internet Applications] endpoints, but they are still missing the mark with server endpoints devoted to smartphone applications,” he writes. “Protect your endpoints! If bad guys need credentials before they can attack you then you’ve certainly raised the bar. And if they don’t need to authenticate they are going to run all over you.”
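Cornell’s first lesson can be illustrated with a minimal sketch. The function, data, and names below are invented for illustration; the point is the two-step pattern he describes: authenticate the caller first, then authorize the caller for the specific record requested.

```python
# Minimal sketch of endpoint protection, with invented names and data.
SESSIONS = {"token-abc123": "alice"}          # session token -> authenticated user
OWNERS   = {"89014100000000000001": "alice"}  # ICC-ID -> account owner

def lookup_email(token: str, icc_id: str) -> str:
    # Authentication: the caller must present a valid credential.
    user = SESSIONS.get(token)
    if user is None:
        raise PermissionError("401: request is not authenticated")
    # Authorization: knowing an ICC-ID is not enough; the record
    # must belong to the authenticated account.
    if OWNERS.get(icc_id) != user:
        raise PermissionError("403: not authorized for this record")
    return "alice@example.com"  # placeholder for the protected lookup
```

Had the AT&T endpoint required both checks, possession of a guessable ICC-ID alone would not have returned anything.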
Second, make sure you authenticate requests with values that are truly random. AT&T’s lapse was due in part, according to the hackers, to the fact that the ICC-IDs were easily guessable. Beware of relying on values that “look random but aren’t,” Cornell says. “We used to see this a lot with Social Security Numbers (SSNs) and we still see a lot of authentication schemes that rely on semi-public information or reasonably guessable values,” he writes.
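The contrast is easy to show in code. A sequential identifier like an ICC-ID looks opaque but is trivially enumerable, whereas a token drawn from a cryptographically secure random source (Python’s standard `secrets` module, in this sketch) is not.

```python
import secrets

def next_sequential_id(last_id: int) -> int:
    # Guessable: an attacker who sees one ID can enumerate its
    # neighbors, which is essentially what happened with the ICC-IDs.
    return last_id + 1

def new_session_token() -> str:
    # 32 bytes (~256 bits) from the OS's secure random source,
    # URL-safe base64 encoded; infeasible to guess or enumerate.
    return secrets.token_urlsafe(32)
```

Long sequential account numbers, SSNs, and phone numbers all fail the same way: they are identifiers, not secrets, and should never stand in for authentication.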
IT groups can make use of tools like WebScarab, from the Open Web Application Security Project (OWASP). WebScarab is a Java framework for analyzing applications that communicate via HTTP and HTTPS protocols.
“Design your authentication schemes correctly from the beginning because they can be some of the most expensive parts of systems to remediate once they have been deployed,” Cornell warns.
Third, “you can’t trust anything in an HTTP header.” More developers need to realize that they “have to assume a malicious user has full control of an HTTP request,” Cornell writes. In the AT&T iPad incident, “it appears as though a User-Agent header was checked to ‘verify’ that the requesting party was an iPad….” He concludes: “Making any security-critical decision based on a guessable value in an HTTP request header is a bad idea. Period.”
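The anti-pattern Cornell describes looks like the following sketch (the check and the User-Agent string are illustrative, not AT&T’s actual code). Since every header value arrives from the client, the “check” is defeated by copying one known-good string.

```python
IPAD_UA = "Mozilla/5.0 (iPad; U; CPU OS 3_2 like Mac OS X)"

def is_trusted_device(headers: dict) -> bool:
    # BAD: the client chooses this value, so it is not evidence
    # of anything about the device making the request.
    return "iPad" in headers.get("User-Agent", "")

# Any script can pass the check simply by sending the same header.
spoofed = {"User-Agent": IPAD_UA}
```

The same reasoning applies to Referer, Cookie, X-Forwarded-For, and every other header: they are inputs to validate, never credentials.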
Finally, an obvious admonition that has some far-reaching implications: “Don’t Trust Your Service Providers; Test Them.”
“In the brave new world of Software as a Service (SaaS) it is easy to forget: even though you didn’t write the software, your customers are going to hold you responsible for it,” Cornell writes. “Organizations selecting service providers need to make sure they are properly addressing risk associated with these providers.”
One big question is “how?” And the second, even bigger one is, “what happens if the service provider says ‘no?’”
Cornell recommends that this security validation be negotiated up-front. It can range from “table-top assessments of service provider policies and procedures to technical testing of the software before and after it is brought online.”
In several engagements, Denim Group’s clients “were stonewalled by their service providers who refused to allow testing of their services,” Cornell writes. “Crappy customer service: yes. Legally defensible for the service provider: unfortunately also yes.”
He recommends that information security staffs work closely with business units in dealing with service providers on these issues: it’s vital ahead of time to identify potential vulnerabilities, assess the risks, and identify courses of action in response to exploits.