Software designers have to strike a careful balance between letting users do what they want and alerting them to mistakes while avoiding pestering them so often that the warnings go ignored. Worst of all, an app could prevent someone from carrying out an action they want to take, leading them to abandon the app in frustration.
Nowhere is this balance more fraught than in web browsers, especially since every major operating system ships with its own browser even as alternatives remain a free download away. People can switch easily, and browser makers know this. If your browser cautions, blocks, or nags too much, you can simply find one that's less harsh, and I think this philosophy has led companies to let more security issues slip past.
Starting around 2015, Google's Chrome and the Mozilla Foundation's Firefox changed their approach substantially in response to security issues with outdated cryptographic standards used in web servers' digital certificates. That shift has since spread to many less technical areas. Apple's Safari, however, has mostly erred on the side of quietly deterring and deflecting without alerting users, and there are good reasons for it to change.
The gold standard of secure private messaging requires end-to-end encryption between the parties involved, with no way for the messaging service to obtain the keys used to scramble the messages. Open Whisper Systems' Signal Protocol and Signal apps seemingly implement the best version of that.
But there’s a small problem: the Signal apps are tied to your phone number, and Open Whisper uses a registration system to match you and find others. Companies like WhatsApp (part of Facebook) have taken the Signal Protocol and built it into their apps, but they have a similar problem of centralized management, even though it’s done well.
However, neither WhatsApp nor Open Whisper can obtain the keys involved in chats because of how the system was designed, and that's what matters. Apple's iMessage is also secured end to end, but a weaker design makes it feasible (if unlikely) that Apple could quietly revise the system, without users being aware, in a way that undermines absolute message privacy.
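The core promise here, that the service relays messages yet cannot obtain the keys, can be sketched with classic Diffie-Hellman key agreement. This is a toy stand-in, not the actual Signal Protocol (which uses elliptic-curve variants and a far more elaborate design), and the parameters below are illustrative assumptions:

```python
# Toy Diffie-Hellman key agreement: each party keeps a private value,
# and only public values ever cross the server. A relay that records
# all traffic still cannot derive the shared key.
import secrets

# Illustrative toy parameters (real systems use elliptic curves or
# standardized 2048-bit+ groups, not a 64-bit modulus).
P = 2**64 - 59  # a well-known prime just below 2**64
G = 2

# Each side generates a private key and derives a public key.
alice_priv = secrets.randbelow(P - 2) + 1
bob_priv = secrets.randbelow(P - 2) + 1
alice_pub = pow(G, alice_priv, P)  # public: the server may see this
bob_pub = pow(G, bob_priv, P)      # public: the server may see this

# Each side combines its own private key with the other's public key;
# both arrive at the same secret, which is never transmitted.
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)

assert alice_shared == bob_shared
```

The design point the column makes follows directly: as long as the service only ever handles the public values, "revising the system" to read messages would require changing what clients send, which is why a protocol's key-handling design, not just its use of encryption, determines how private it really is.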
In a time of political turmoil and polarization, you may want to keep friends, family, and colleagues with whom you have fundamental disagreements from seeing some of your routine social media postings, wherever they occur.
Some argue that compartmentalizing what you see and what you let others see increases polarization and an echo-chamber effect. They worry, or insist, that it will further distance people's points of view and make some kind of grand reconciliation less likely.
Others try to use the language of healing and security against people, accusing them of maintaining “safe spaces,” being too easily “triggered,” and acting like a “snowflake”—easily melted when exposed to challenging ideas.
The Internet of Things (IoT) has a bad rap these days. As billions of devices, like DVRs and webcams, ship to consumers worldwide, no government or industry organization has any real power to ensure that the devices have proper security and an upgrade path to patch flaws. That’s not new in the computer and consumer electronics worlds. But the IoT makes hackable devices in fixed locations attached to high-speed broadband an incredibly desirable target for security agencies and criminal gangs to use as armies of bots.
While some IoT devices are expensive, like a $200 Nest camera, those are outliers. The vast majority of gear being shipped is already deep in a race to the bottom. The makers of cheap hardware don't typically put security at the top of their priority list, nor do they feel much compulsion to offer security updates indefinitely, or even at all.
So what can shift that balance and make consumer privacy and Internet security valuable to makers? Putting teeth into the consequences of failing to meet a basic bar. In some segments of the electronics industry, trade groups have this power through certification and trademark licensing. If you want to sell a device that's labeled Wi-Fi, you have to pay for testing and for the right to use the Wi-Fi trademarks. Sell it with that label and mark without approval, and you can wind up sued, have your imports blocked, and find retailers refusing to carry your product.
Meitu isn’t new. The app, from a company of the same name, has been anime-izing people’s faces in China for several years. It just went viral in the U.S. for no apparent reason, explaining all the super-cute (“kawaii”) sparkly and smooth images of people in your social media feeds.
This, in turn, quickly led to scrutiny of how the app handles data. Security researchers are always poking at popular apps, especially ones from China, where developers often create apps for jailbroken iPhones and have been victims of a malware-infested version of Xcode.
This general interest was heightened on the Android side, as the app requests a swath of permissions to access personal data. Apple limits some of this access by design—apps can’t even ask for, say, a list of all Wi-Fi connections or a phone’s unique IMEI number—and requires an app to ask for specific access to Contacts, the camera, and other data and capabilities.
The headline in the Guardian last week was certainly eye-catching: “WhatsApp vulnerability allows snooping on encrypted messages.” The allegation was that a newly discovered flaw could allow messages you’d sent to a known and confirmed party through a highly secure method to be replayed, or sent again, to other parties who could insert themselves as trusted recipients.
It turns out that almost none of this is accurate or represented in a way that will help WhatsApp users improve their security. That doesn’t mean WhatsApp is perfectly designed (nor do I allege the Guardian has an agenda). A few months ago, I explained how to configure WhatsApp to be as secure as possible, because its defaults and prompts made it more likely that your messages could be intercepted by criminals or by mass or targeted surveillance from security agencies. For example, the app encourages you to back up your messages to a server, which removes the end-to-end encryption protection of the messaging system; you have to know to say no.
You can be tracked and have your data intercepted from many angles, by legitimate and illegitimate actors alike: governments, criminals, personal enemies, corporate spies, children without moral compasses, you name it. Many techniques let you encrypt and shield your data at rest, on your devices and on remote servers, and in transit.
But there’s one problem with all the shields you can put up: when you need to use a website, you’re giving yourself away, whether from your current location or via a virtual private network (VPN) service that encrypts your request out to a data-server location from which it issues. Tracking which sites you visit or observing VPN endpoints can reveal a lot, even if the contents of sessions can’t be determined. And websites and VPNs can be blocked, as activists and average citizens in many countries have discovered.
There’s a way around this. Anonymous browsing promises some of the benefits of evading tracking from marketers, criminals, and spies, while also giving you access to information you need. It doesn’t work for every website and comes with a long list of provisos. However, it’s extremely easy to set up and use, and even the workarounds in countries that attempt to block anonymous browsing aren’t yet onerous.