Meitu isn’t new. The app, from a company of the same name, has been anime-izing people’s faces in China for several years. It just went viral in the U.S. for no apparent reason, explaining all the super-cute (“kawaii”) sparkly and smooth images of people in your social media feeds.
This, in turn, quickly led to scrutiny of how it handles data. Security researchers are always poking at popular apps, especially ones from China, where developers have both created apps for jailbroken iPhones and fallen victim to a malware-infested version of Xcode.
This general interest was heightened on the Android side, as the app requests a swath of permissions to access personal data. Apple limits some of this access by design—apps can’t even ask for, say, a list of all Wi-Fi connections or a phone’s unique IMEI number—and requires an app to ask for specific access to Contacts, the camera, and other data and capabilities.
The headline in the Guardian last week was certainly eye-catching: “WhatsApp vulnerability allows snooping on encrypted messages.” The allegation was that a newly discovered flaw could allow messages you’d sent to a known and confirmed party through a highly secure method to be replayed, or sent again to other parties that could insert themselves as trusted recipients.
It turns out, almost none of this is accurate or represented in a way that will help WhatsApp users improve their security. This doesn’t mean that WhatsApp is created perfectly (nor do I allege the Guardian has an agenda). A few months ago, I explained how to configure WhatsApp to be as secure as possible, because defaults and prompts made it more likely that you could have your messages intercepted by criminals or by mass or targeted surveillance from security agencies. For example, the app encourages you to back up your messages on a server, which removes the end-to-end encryption protection of the messaging system; you have to know to say no.
You can be tracked and have your data intercepted from many angles, by legitimate and illegitimate actors alike: governments, criminals, personal enemies, corporate spies, children without moral compasses, you name it. Many techniques let you encrypt and shield your data at rest, on your devices and on remote servers, and in transit.
But there’s one problem with all the shields you can put up: when you need to use a website, you’re giving yourself away, whether it’s from your current location or via a virtual private network (VPN) service that encrypts your request out to a data server location from which it issues. Tracking which sites you visit or observing VPN end points can reveal a lot, even if the contents of sessions can’t be determined. And websites and VPNs can be blocked, as activists and average citizens in many countries have discovered.
There’s a way around this. Anonymous browsing promises some of the benefits of evading tracking from marketers, criminals, and spies, while also giving you access to information you need. It doesn’t work for every website and comes with a long list of provisos. However, it’s extremely easy to set up and use, and even the workarounds in countries that attempt to block anonymous browsing aren’t yet onerous.
Advertisers want to track us. We typically, though not universally, prefer not to be tracked. In the best case, we’re aware of the fact, and use opt-out policies and third-party add-ons to profess disinterest in, technically block, or otherwise delete unique codes or patterns designed to sniff our footprints across the Internet and assemble a dossier on us for marketers to more effectively target our interests.
But I give advertisers too hard a time, when it’s really the behavior of advertising networks and other parties that create platforms on which ads are delivered or marketing data collected and then sold. The ecosystem of online advertising involves a lot of different specialized entities, and it would be hard to say that, say, Nordstrom knows that some of its ads might appear on sites for reasons that involve violating our intent and possibly our privacy. Many advertisers don’t even really know where their ads wind up. This should change: advertisers should have outside privacy and technology audits on the networks they use.
The trouble for average folks is that there’s no simple way to defeat determined tracking systems. What you may know is that browser cookies, little persistent bits of text, can be sent by a website to your browser when you visit, and your browser stores them in a local cache. The next time you visit, your browser sends that cookie as part of the page request for every page on the site. This is how you stay logged in at sites where you have an account, and how site preferences can be stored on a per-browser basis without requiring an account at all.
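The round trip described above can be sketched with Python’s standard http.cookies module. This is a minimal illustration, not browser code; the header values (like “session_id=abc123”) are made up for demonstration.

```python
# A hypothetical sketch of the cookie round trip: a server sets a
# cookie in its response, the browser caches it, and sends it back
# with later requests to the same site.
from http.cookies import SimpleCookie

# 1. The server's response includes a Set-Cookie header.
set_cookie_header = "session_id=abc123; Path=/; HttpOnly"

# The browser parses the header and stores the cookie locally.
jar = SimpleCookie()
jar.load(set_cookie_header)

# 2. On the next visit, the browser sends the stored name/value
#    pair back in a Cookie request header for every page request.
cookie_header = "; ".join(
    f"{name}={morsel.value}" for name, morsel in jar.items()
)
print(cookie_header)  # session_id=abc123
```

Attributes like Path and HttpOnly constrain where and how the browser will replay the cookie, but it’s the stored name/value pair that lets a site recognize you on return visits.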
Say you find yourself using the Web on a strange computer—maybe one owned by a relative or friend, or in a library or school. When using a computer or other device that you don’t control, you can enable a private-browsing mode to prevent leaving traces of your activities. That’s good for accidental security, as it keeps the next user of the device from visiting a site with your credentials, set in a cookie or via a login, when they didn’t intend to. This is an even better idea for shared computers where the subsequent user is someone you don’t know.
Likewise, you can set someone else up with private browsing on your Mac, so that they don’t have access to your settings (guest mode in Chrome) or can’t change your settings (all modes), although browsers don’t let you “lock” a browser into that mode. You get better protection by setting up a different user account on a desktop Mac or using the guest account option described next.
Each browser’s privacy mode is a little different, so I’ll go through what kinds of data each destroys after a session and how to drop into the mode in each browser. As a general rule, while browsers in a private mode don’t keep a list of downloaded files, those files do persist on your drive after a browsing session is over.
Any malware powerful enough to overcome the defenses that Apple built to resist incursions may also be powerful enough to hide its traces. That’s not quite an axiom of security, but it’s generally true. If an attacker of any sort creates software designed to attack your system quietly, it typically tries to prevent security software and any other kind of inspection from noticing.
That’s very, very hard, and any exploit that’s sufficiently good at being entirely invisible is likely also good enough for a hacker to sell for a million dollars, with the advantage that the sale is probably legal in most places, and thus better than distributing malware that steals financial credentials or holds files for ransom. (I am not a lawyer, and that’s not legal advice.)
Such exploits, once discovered, are fixed at high priority by operating system makers, giving them sometimes short windows of utility. The more widely used the exploit is, the less likely it will remain available to use.
If you’re a regular reader of this column, you know that it’s not paranoia when I say that the Internet is a nest of snakes. Malicious parties are constantly probing and monitoring, mostly automatically, ready to exploit the slightest weakness.
The best way to avoid the venom of vipers is to wear a fang-proof suit: end-to-end encryption that lets your data pass across the Internet with very little to practically no chance of interception, depending on the choices you make.
As part of an ongoing series on how to secure your systems and data as if you woke up and found yourself a dissident in the country in which you live, this column looks into how you can evaluate the tools that are available and make the best choices of which to use.