In the fall, a technology company’s fancy turns to new products. As we pass the midpoint of October, the holiday buying season has started to solidify, and we’ve gotten our requisite annual announcements from not only Apple, but from Google and Microsoft as well.
Now that these three big tech companies have laid their cards on the table, we have a chance to look over what each of them discussed and compare approaches: where are they working in the same vein, and where do their paths diverge? And, of course, there’s an opportunity to look at what Cupertino’s competitors are doing and see whether they point to areas that might also be of interest to Apple.
Feel the motion
The Pixel 4 launch felt in some ways underwhelming, as many of its highlights were leaked ahead of time, but the biggest bet from Google in its latest flagship phone is probably the addition of Motion Sense, the gesture feature developed under the codename Project Soli. This miniaturized radar system’s applications are fairly limited so far, with features like controlling music playback, silencing alarms and phone calls, and, uh, waving to Pokémon.

Google says Motion Sense lets you “skip songs, snooze alarms, and silence phone calls, just by waving your hand.”
But Motion Sense does point to a whole possible interaction model that could be useful in cases where touch is challenging. For example, a touch interface often struggles when your phone or your hands are wet. Or if you’re working out and want to skip to the next track without trying to hit a small tap target on a screen. And, of course, for users with vision or fine-motor challenges, the potential for such a technology is high.
Broadly speaking, I think the most promising glimpse of this tech’s future is its ability to detect when you walk away from your phone and turn off the screen accordingly. This idea of presence is one that’s going to become more central to our use of technology, especially mobile devices. Apple has introduced similar concepts using the Apple Watch, which can, for example, unlock your Mac if you’re nearby. But that feature is limited: because it’s based on Bluetooth, it can’t currently be used to, say, lock your Mac when you walk away.
The U1 chip built into new iPhones, which can detect directionality as well as proximity, might allow for similar presence-based applications, but we just haven’t seen enough about Apple’s plans for it yet.
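To make the idea concrete, here’s a minimal sketch of what Bluetooth-based presence detection looks like in Swift using Core Bluetooth. Everything here is my own illustration rather than Apple’s actual implementation: the class name, the signal threshold, and the lock action are all stand-ins.

```swift
import CoreBluetooth

/// Illustrative only: a rough presence monitor that watches the Bluetooth
/// signal strength (RSSI) of nearby devices and fires a callback when a
/// known device seems to have left the room. Threshold and behavior are
/// guesses, not anything Apple has shipped.
final class PresenceMonitor: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    private let departureThreshold = -80  // dBm; weaker signal = farther away
    var onDeparture: (() -> Void)?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // For simplicity this scans broadly; a real version would filter on
        // a known service UUID advertised by the paired device.
        central.scanForPeripherals(withServices: nil, options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // RSSI is noisy, so production code would smooth readings over a
        // window before acting. Note that Bluetooth yields rough distance
        // but no direction, which is exactly the limitation described above.
        if RSSI.intValue < departureThreshold {
            onDeparture?()  // e.g. lock the screen
        }
    }
}
```

The U1, by contrast, measures direction as well as distance, which is what makes it so interesting for this kind of feature.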
The pen is mightier
During Microsoft’s press event, the company demoed a small but impressive feature: pen input for Excel. Numbers handwritten into cells in the spreadsheet app are automatically converted into typed text.
Now, on its own, it might not sound like much, but it does help point out that Apple’s adoption of pen-based input has been sorely lacking. Steve Jobs’s prejudice against styluses has apparently remained alive and well, even in the age of the Apple Pencil.

That’s because for so much of what Apple does, the Pencil is all about drawing. While it’s a great tool for that pursuit, it only scratches the surface (sorry—Microsoft joke!) of other abilities the Pencil could develop. It could, for example, be a godsend to those of us who see the appeal of a stylus for text input, but have atrocious handwriting—Apple has a long history with handwriting recognition, but it’s let the technology lie fallow for many years now, and it’s possible that it’s still haunted by the spectre of egg freckles.
It’s particularly strange since the iPad now offers a feature where it can recognize handwriting as text in some cases. For example, if you use the Apple Pencil to write something in the Notes app, the system will recognize it as text and even let you search for it; if it’s near the top of the note, it’s also used as the title and excerpt for that note as well. That seems a mere step away from converting pen input into real, malleable text, and perhaps a way-station on the path to allowing for Pencil input anywhere you can use a keyboard.
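The raw materials for that step are already in Apple’s public toolbox. Here’s a minimal sketch using the Vision framework to pull machine-readable strings out of an image of handwriting (say, a PencilKit drawing rendered to a UIImage). To be clear, this is my own illustration, not how Notes actually does it, and Vision’s recognizer circa iOS 13 is tuned more for print than for cursive.

```swift
import Vision
import UIKit

/// Illustrative only: run Vision's text recognizer over an image of
/// handwriting and hand back the recognized strings, one per detected line.
func recognizeHandwriting(in image: UIImage,
                          completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Take the top candidate for each detected line of writing.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate  // slower, but better for handwriting

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion([])
        }
    }
}
```

From there, dropping the recognized string into any standard text field is trivial, which is why system-wide Pencil text entry feels like such a small step.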
Re-routing smart speakers
Time for me to climb back on this old hobby horse: I’m still mourning the death of Apple’s line of AirPort routers. The company abandoned the category a couple years back, but I’d make a strong case that it deserves to return now more than ever—and Google’s latest announcement in the Wi-Fi router arena points to one way that could happen: smart speaker integration.
Yep, I’m dragging the HomePod into this. The smart speaker has hardly been the success that I imagine Apple wanted it to be. As someone who owns a pair, I can say that the sound is solid, but the speakers are too big, lack rich support from the rest of Apple’s ecosystem, and are overly expensive for what they do. (Unsurprisingly, they have repeatedly been available for much less than their initial price tag.) Apple, for its part, doesn’t seem to be in any hurry to improve them or roll out new features.

What if Apple released a new HomePod with Wi-Fi router functionality?
Google, meanwhile, has decided to merge a smart speaker into its Wi-Fi router product, and to my mind, the HomePod would be a great candidate for the same treatment. As Google points out, Wi-Fi routers work better when they’re not closeted away, and making them less unsightly and more useful is a good way to coax them out into the open.
Plus, with companies like Google and Amazon in the router business, users may be on the lookout for alternatives from companies with strong track records on privacy. (Recent Siri gaffes notwithstanding.) Apple announced HomeKit support for third-party routers at WWDC this year, but we haven’t seen any of those products on the market yet, despite the company touting support from the likes of Linksys and Eero. Maybe it’ll be up to Apple itself to pave the way.