Analysis: Intel Developer Forum introduces your next Mac

Your next Mac’s wireless connection

In today’s wacky wireless world there’s a welter of acronyms battling for a spot on your next laptop’s logic board or ExpressCard slot. The hottest and heaviest action is in long-range wireless broadband: technologies that’ll give you Wi-Fi speeds or better even when you’re many miles from a transmitter. When this technology becomes ubiquitous in the next few years, you can bet your ‘Book that Apple will follow Intel’s lead, seeing as how Intel plans to release its first long-range wireless broadband chipsets in the next few months.

Of the competing long-range acronyms, Intel is putting its money on WiMAX. However, during the IDF’s WiMAX session the presenter—an Intel engineer with the highly impressive name of Tolis Papathanassiou—admitted that its main competitor, LTE, has a lot of things going for it as well.

WiMAX (Worldwide Interoperability for Microwave Access) is the marketing-friendly name for the IEEE-802.16 standard. WiMAX has been around since 2004, but it was in late 2005 that it began to be taken seriously as a mobile competitor when the 802.16e version was introduced—it’s the version now in wide use in some areas of the world (notably, of all places, in Pakistan). This version is properly known as Mobile WiMAX, but you may also see it called WiMAX Mobile or (and I’m not making this up) WiBro.

WiMAX 1.0, as the current version is called, maxes out at 60-plus Mbps in ideal conditions—“ideal conditions” in this case meaning when you and your Mac are substantially closer to a WiMAX transmission tower than the standard’s maximum 50 km (31 mile) range. That’s a comfortable broadband speed, but WiMAX speeds are slated to take off in the next couple of years, with version 1.5 (802.16e Rev2) reaching 125-plus Mbps in late 2009 and version 2.0 (802.16m) topping 300 Mbps in late 2010 or early 2011.

WiMAX—and, for that matter, its prime competitor, LTE (Long-Term Evolution)—will reach these speeds using a technology with the mind-numbing acronym of OFDMA + MIMO (Orthogonal Frequency-Division Multiple Access + Multiple-Input and Multiple-Output). Fear not: I won’t delve into a gearheaded explanation of OFDMA + MIMO (today, at least…); just know that this wireless scheme breaks a signal up into multiple parts and then sends them into the ether over multiple transmitters and antennas. If you’re studying specs for a wireless-router purchase in a year or two, look for that string of magic letters.
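
If you’d like to see the basic trick in action, here’s a toy Python sketch of the OFDM half of that acronym (my own illustration, not anything from the session): data symbols are spread across parallel subcarriers with an inverse FFT, and a forward FFT on the receiving end pulls them apart again.

```python
import numpy as np

# Toy OFDM link (illustrative only): spread data across parallel subcarriers.
NUM_SUBCARRIERS = 64

# Random QPSK symbols, one per subcarrier -- the "multiple parts" of the signal.
bits = np.random.randint(0, 2, (NUM_SUBCARRIERS, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Transmitter: an inverse FFT combines every subcarrier into one time-domain
# waveform; a real system then fans this out over multiple antennas (the MIMO
# half of the acronym).
time_signal = np.fft.ifft(symbols)

# Receiver: a forward FFT separates the subcarriers again.
recovered = np.fft.fft(time_signal)
assert np.allclose(recovered, symbols)
```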

What Papathanassiou repeatedly emphasized about WiMAX and LTE is that because they’re based on OFDMA + MIMO (and are, for that matter, IP-based at their cores), they’re “revolutionary, not evolutionary.” Meaning that although current competing wireless systems such as EVDO (Evolution-Data Optimized) and HSPA (High-Speed Packet Access) and all their various and sundry flavors may match WiMAX and LTE for speed today, those weaklings are mere evolutionary steps up from mobile-phone technology. WiMAX and LTE, on the other hand, are revolutionary, built from the ground up as wireless data-broadband technologies. Both scale better than their ex-phone competitors, according to Papathanassiou, and so will deliver faster, more robust performance in the future.

And it’s the near future in which Papathanassiou claims WiMAX has LTE beaten. As noted above, WiMAX is scheduled to reach 300-plus Mbps in late 2010; LTE should reach that pinnacle a year or two later. Will this head start give WiMAX an uncatchable lead? Intel’s market forecasters don’t think so; they claim that by 2015, WiMAX and LTE should have comparable worldwide subscriber bases of approximately 100 million each.

So why is Intel banking so heavily on WiMAX rather than hedging its bets by following both paths? After all, later this year Intel is scheduled to release logic-board chips and add-in cards that support both Wi-Fi and WiMAX, even though Papathanassiou admitted that “[LTE] is better in some aspects than WiMAX and worse in other aspects.”

I was about to ask Papathanassiou exactly which aspects he was referring to when our session ended and I was shunted outside to join the throngs of hungry Intelophiles swarming around the free Mediterranean chicken wraps and cans of ice-cold Mountain Dew.

But wait, there’s more!

Pat Gelsinger, Intel’s senior vice president and general manager of the Digital Enterprise Group, shows off wafers during his presentation at the Intel Developer Forum.

If you’ve read this far, you may understandably fear that I’m going to recount every moment of the 170-plus hours of technical instruction provided at IDF. Fear not—it’s time, instead, to wrap up a few details and hint at future in-depth articles.

Rather than dig deep into each and every one of the following technologies, I’ll just give you a quick peek. If you want to learn more, either search Intel.com or drop a note into the comments below.

• Larrabee: The session on Intel’s upcoming multi-core cross between a traditional multipurpose microprocessor and a hard-wired GPU (Graphics Processing Unit) was the only session I saw that turned attendees away from its packed auditorium 15 minutes before it opened; it had to be repeated later in the Forum.

First off, know that Larrabee, like Nehalem, is an architecture, not a chip; when they’re released in 2009 or 2010, Larrabee chips will each have their own individual names. They will also each be multi-core, although those cores will be simpler than those in a traditional microprocessor. How many cores? Intel’s not saying—but the test-result slides that ex-ATI-and-now-Intel engineer Larry Seiler projected in the crowded session room included results of tests that used up to 64 cores.

Like GPU architectures, Larrabee is designed for throughput-oriented workloads such as graphics and media, not for general-purpose computing. Unlike GPUs, Larrabee chips will be highly programmable using the familiar IA (Intel Architecture) command set that has been the basis of PC software since the introduction of the aforementioned 8086 processor. By programming Larrabee’s multiple cores with this tried-and-true command set, developers will be able to tailor their graphics code to the exact needs of the moment, rather than watch it get trapped in hardwired on-chip routines that don’t apply to the image-rendering task at hand. As Seiler put it, “The more complex the [graphics task], the better Larrabee does.”

So the answer to the question one attendee asked (“Is Larrabee a CPU or a GPU?”) is Yes. Think floor wax and dessert topping. Yum.
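
To see why that programmability matters, consider this deliberately simplified Python sketch (mine, not Intel’s): the per-pixel shading step is just an ordinary function you can swap at will, rather than a routine baked into silicon.

```python
# Deliberately simplified sketch (mine, not Intel's) of the
# programmable-pipeline idea: the per-pixel shading step is an
# ordinary, swappable function, not a fixed on-chip routine.
def shade_flat(x, y):
    return (128, 128, 128)          # every pixel the same gray

def shade_gradient(x, y):
    return (x % 256, y % 256, 0)    # color varies with position

def render(width, height, shader):
    # On Larrabee-style hardware this loop would be spread across many
    # simple IA cores in parallel; here it's a plain serial loop.
    return [[shader(x, y) for x in range(width)] for y in range(height)]

frame = render(4, 4, shade_gradient)  # swap in whatever shader the task needs
```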

One final note on Larrabee: Seiler specifically pointed out that its highly parallel architecture will greatly benefit the OpenCL language that Apple plans to release next year in Mac OS X 10.6, aka Snow Leopard. Hang on to your hats, gamers.
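
OpenCL hasn’t shipped yet, but data-parallel code in its style looks roughly like the sketch below. I’ve used the Python pyopencl bindings purely as illustrative scaffolding; neither the bindings nor the kernel came from Apple, Intel, or the session. The kernel is exactly the sort of one-work-item-per-element routine a many-core part like Larrabee is built to chew through.

```python
import numpy as np
import pyopencl as cl  # illustrative binding choice, not from the session

# A trivially data-parallel kernel: one work-item squares one element.
KERNEL_SRC = """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
a = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, KERNEL_SRC).build()
program.square(queue, a.shape, None, a_buf, out_buf)  # one work-item per element

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a * a)
```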

• Mobile Computing Enhancements: In the next fiscal quarter or two, sales of full-powered notebook computers will surpass those of desktop computers for the first time in history. I find it necessary to include the qualifier “full-powered” because Intel also discussed another, lower-powered class of laptops cutely named netbooks. These smaller, lighter, and less-capable units will be powered by Intel’s Atom processor and its successors, and will—according to Intel—sell in the millions as entry-level units (think developing countries), Junior’s first device, or a traveling exec’s lightweight companion. Apple, of course, is a prime contender for the top of the elegant-netbook heap.

With all these portable units becoming most users’ primary or secondary-but-still-important computers, security is becoming a matter of greater concern. During Dadi Perlmutter’s “Where Will ‘On-the-Go’ Go?” keynote, he demonstrated a security system that, when notified that your laptop has been stolen, will remotely encrypt the files on that laptop’s drive, take a photo of the miscreant with the laptop’s built-in webcam (or iSight, of course), track the location of the stolen laptop by means of its built-in GPS, and then allow you to decrypt the files after your laptop has been recovered and returned to you. All Apple needs to do is add a GPS chip to its ‘Books and this peace of mind can be yours.
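
The encrypt-now, decrypt-later part of that demo boils down to key escrow: the encryption key lives with the recovery service, not on the laptop. Here’s a hypothetical Python sketch of the idea; Intel showed no code, and the library and function names below are my own inventions.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # illustrative library choice

# Hypothetical sketch of the demoed concept: on a theft report, encrypt
# the files with a key held only by the recovery service, so the
# rightful owner can decrypt after the laptop comes home.

def lock_down(folder: Path, key: bytes) -> None:
    """Encrypt every file in place once the laptop is reported stolen."""
    cipher = Fernet(key)
    for path in folder.rglob("*"):
        if path.is_file():
            path.write_bytes(cipher.encrypt(path.read_bytes()))

def recover(folder: Path, key: bytes) -> None:
    """Decrypt everything after the laptop is recovered and returned."""
    cipher = Fernet(key)
    for path in folder.rglob("*"):
        if path.is_file():
            path.write_bytes(cipher.decrypt(path.read_bytes()))

escrow_key = Fernet.generate_key()  # held server-side, never on the laptop
```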

And in conclusion…

From the proverbial 30,000-foot view, last week’s IDF presented a future in which consumer electronics, computing, and the Internet meld into a human/machine interface with which you will, to quote Pat Gelsinger of Intel’s Digital Enterprise Group, interact “24/7 in every modality of your life.” Down in the tech trenches, however, the engineers who will make this possible are wrestling with the complexities of Nehalem, SSDs, WiMAX, Larrabee, and more—much, much more.

Whether or not you actually want 24/7 Internet involvement is, of course, an entirely separate question. But make no mistake: the opportunity to link up big-time, all the time, is coming; the Internet will be in your car, your keychain, your home security system, and … oh yes … in your Mac.

One side effect of this Internetization of everything, as Gelsinger pointed out, is that we’re rapidly reaching the limit of the number of Internet addresses possible under the current IPv4 addressing system; it maxes out at a paltry 4,294,967,296 addresses. He suggested that the world get off its collective cyber-duff and rapidly embrace the more powerful IPv6 system, which can manage a cool 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses.
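
Those eye-glazing figures are simply 2 to the 32nd power and 2 to the 128th (IPv4 addresses are 32 bits wide; IPv6 addresses, 128 bits), as a couple of lines of Python will confirm:

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
print(f"{2 ** 32:,}")   # 4,294,967,296
print(f"{2 ** 128:,}")  # 340,282,366,920,938,463,463,374,607,431,768,211,456
```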

That should be enough to hold us for a while.

Mac OS X, by the way, is already IPv6-capable. Ah, Apple … always one step ahead of the pack.

[Rik Myslewski has been writing about the Mac since 1989. He has been editor in chief of MacAddict (now Mac|Life), executive editor of MacUser and director of MacUser Labs, and executive producer of Macworld Live. His blog can be found on Myslewski.com.]
