Yup, just like the title says, this post discusses what you should know about the IoT. We provide the must-know material for sussing out the details when developing an application or considering wireless technologies for connecting a device. That's a tall order, so we will update as new developments are made, as we receive requests and feedback, and as more is revealed about the technologies. In short, this is a living source of IoT information.
We break it down into three basic sections:
A brief history, and how the IoT landed on low-power, wide-area technology as the wide-area connectivity of choice.
Technical review and first principles for assessing wireless protocols including topics such as battery life vs transmit power, range vs coverage vs link budget, capacity vs data rate, IoT security, and others.
The business perspective and how ultimate profitability is tightly integrated with a wireless technology’s capabilities.
We try to quickly discuss issues and provide links to other materials that discuss them in more detail. As you’ll see, some things just need to be talked through. We hope you benefit, and if you do, please click the heart at the bottom to help others find it too! Enjoy!
The Path to Here
Enter Low Power Wide Area Connectivity
2015: The Year LPWA Grew Up

Comparing IoT Wireless Protocols
Will my app have coverage? Range ≠ Coverage
One metric to rule them all: Link budget
You don't know a protocol's battery life until it's fully developed
IoT security, don't connect without it
Data rate ≠ capacity = link capabilities

Cellular LPWA: NB-IOT and LTE-M
Part 1: Introduction
Part 2: Cellular LPWA Availability
Part 3: 3GPP/GSMA is NOT Providing a Graceful Evolution Path for Machines
Part 4: Cellular LPWA Complexity
Part 5: Uplink Capacity
Part 6: Downlink Capacity
Part 7: Firmware Download
Part 8: Robustness
Part 9: Power Consumption

A Deeper Technical Dive: Categories of Low-Power, Wide-Area Modulation Schemes
Back to Fundamentals
Time to Get Down and Nerdy

The Business Side of the IoT
Without Device Longevity the IoT Will Never Be
The L in LPWA
Simple IoT Reality Check

The Economics of Receiver Sensitivity and Spectral Efficiency, or How to Run an IoT Business
Setting the Stage: The Public Network Business

The LPWA business begins with the carrier
Coverage
Capacity
Comparing Specific Technologies
Carrier Growth and Future Profitability Relies on Capacity Scaling, aka Cell Splitting

How Carriers Can Have Their Cake and Serve the IoT Too, or, Why Cellular LPWA Will Never Serve the IoT as it Should
It just makes good business sense
Economics and misaligned incentives
Always second tier
The Path to Here: Legacy IoT Wireless
Wireless sensor networks have historically been served by some combination of traditional cellular or local-area solutions like WiFi, mesh, and local RF (Bluetooth, NFC, etc.). These solutions have failed to provide the catalyst needed to push the IoT over the edge and into mainstream adoption for a few basic reasons. First, the traditional approaches (WiFi and mesh) require a wired power source or changing/charging batteries every 1–2 days. This limits IoT applications to scenarios where a power line already exists or can justifiably be installed, so only the most obvious and strongest cost-savings applications are served. Second, they have limited area and depth of coverage per access point. Applications must stay within a very limited area around the wireless source, which rules many of them out. Third, they are costly to use. Even after several years and over a billion modules shipped, LTE modules still cost over $40 apiece. Mesh requires an entire network to be built out before it can be used. Local RF solutions require each business to build, manage, and maintain its own wireless infrastructure, preventing economies of scale.
Enter Low Power Wide Area Connectivity
Publicly available Low Power Wide Area (LPWA) connectivity uniquely solves each of the aforementioned problems. Low-power wide-area connectivity is pretty much what it says: wireless connectivity that covers a wide area using low power. In addition, LPWA can do so with low-cost endpoints. LPWA stands in contrast to data- and battery-intensive 2G, 3G, or LTE cellular wireless technology. It also contrasts with traditional cellular because it is low bandwidth. The vast majority of devices on the IoT will not need the kind of data throughput that traditional cellular is designed to provide. In fact, according to James Brehm & Associates, 86% of IoT devices consume less than 3 MB a month.
Of course there will be IoT devices that will need more bandwidth, and those will be served well by higher bandwidth solutions; but the sensors that give us the efficiencies discussed need only to periodically send a few hundred bytes to justify their value.
It should be clear that in order to achieve the grand vision of the IoT we will need publicly available out of the box connectivity for machines and devices.
In other words, the IoT described in the press and in this article must be connected by a ubiquitous wireless service dedicated to machines (much like cellular networks are used today for human-driven voice/data connections). Both traditional cellular and LPWA are proposed public network solutions to IoT connectivity. 2G has been used for years to provide the publicly available connectivity that IoT devices need. But with AT&T finishing up their 2G shutdown at the end of this year (2016) and others following close behind, it's clear cellular 2G isn't the path forward.
2015: The Year LPWA Grew Up
2015 was in many ways the year of LPWA. Three major players have emerged as potential low power wide area connectivity providers: Ingenu, Sigfox, and LoRa. Each provides a different technology, with far-reaching implications for its viability to serve the vision of the IoT. We'll discuss these in upcoming posts.
Cellular providers are also beginning to join the LPWA movement through 3GPP’s latest work toward creating a standard that matches the LPWA criteria. (Despite the press releases, cellular LPWA isn’t quite there yet. Power usage is notoriously tricky to gauge because it involves so many interactions; so whether cellular’s latest attempts will be low power remains to be seen.) By beginning to develop cellular-LPWA, cellular providers have essentially admitted that traditional cellular is not the appropriate technology to connect the IoT.
What is clear is that LPWA uniquely serves the vision of the IoT. Analysts and wireless carriers agree that LPWA will take the lion’s share of the IoT’s connectivity.
The exact numbers of LPWA connections aren’t important; we know they’ll number in the many billions. What’s important is the nature of the applications these connections enable: truly useful, efficiency enabling applications that are simple, scalable, and improve our lives directly and indirectly.
Comparing IoT Wireless Protocols
Will My App Have Coverage? Range ≠ Coverage
Often people ask, "What's the range of your protocol?" That's actually the wrong question. Range is not directly relevant to choosing or building a wireless technology that must have deep, reliable coverage. Here's why. It is very easy to cherry-pick a range using ideal conditions, but coverage must account for actual real-world conditions over an entire area. Coverage tells you the probability of getting a message through anywhere in an area; range tells you the maximum possible distance at which getting a message through is possible. Ingenu's RPMA, for example, has closed links over 88 miles. But cherry-picking this range is not directly relevant to choosing a wireless protocol that must have deep, reliable coverage. We cannot draw an 88-mile-radius circle around one of our tower-based Access Points and make the credible claim that, by πr², we can cover 24,000 square miles with a single tower. There is an indirect relevance, however: the same attribute that allows RPMA to build deep, reliable coverage (i.e., link budget) is the attribute that allows for some truly amazing cherry-picked results. Other technologies advertise their cherry-picks, and the 88-mile link is one of ours, but it is irrelevant to choosing a protocol. If you want to truly compare two wireless technologies' coverage, look at link budget. To read more on the pretty awesome 88-mile link closure at one of our customer's sites, check it out here. There's another cool story from our customer in Chile, closing a 30-mile link after a local construction accident (caught on video, YouTube link in the article) knocked out the closest access point.
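The πr² arithmetic above is easy to check. A quick Python sketch of the claim the text says not to make:

```python
import math

# Naively treating the cherry-picked 88-mile range as a coverage radius
# gives the pi * r^2 figure the text warns against claiming.
range_miles = 88
area_sq_mi = math.pi * range_miles ** 2
print(round(area_sq_mi))  # ~24,328 square miles, close to the 24,000 cited
```

The circle is real geometry; the coverage claim is not, because no real deployment closes links in every direction at maximum range.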
One Metric to Rule Them All: Link Budget
Link budget is a single metric, a number in decibel units, that is the simplest way to compare any two wireless technologies. The bigger the link budget, the better coverage a wireless technology will have, period. It accounts for all the other stuff: path loss, propagation loss due to frequency choices (like 900 MHz vs 2.4 GHz), cable loss, modulation choices, receiver sensitivity, and all the rest. Science FTW! Just as subtracting all expenses from your income leaves your disposable income, link budget is what is left after all propagation losses are accounted for. Link budget can be "spent" on various tradeoffs between wider coverage, deeper coverage, and more reliable coverage.
tl;dr To instantly know which wireless technology has better coverage, compare their link budgets. That single metric accounts for everything. To read more check out this blog post.
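To make the budget analogy concrete, here is a minimal sketch in Python. All figures are hypothetical round numbers for illustration, not any vendor's specs:

```python
# Link budget (dB) = TX power + antenna gains - losses - receiver sensitivity.
# All figures below are hypothetical, for illustration only.

def link_budget_db(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                   cable_loss_db, rx_sensitivity_dbm):
    """The margin available to 'spend' on path loss before the link breaks."""
    return (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
            - cable_loss_db - rx_sensitivity_dbm)

# Two hypothetical technologies, identical except for receiver sensitivity:
tech_a = link_budget_db(20, 3, 6, 2, -120)  # 147 dB
tech_b = link_budget_db(20, 3, 6, 2, -140)  # 167 dB: better coverage, period
print(tech_a, tech_b)
```

Note that a better (more negative) receiver sensitivity grows the budget just as effectively as more transmit power does, which is why sensitivity matters so much in LPWA design.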
You Don't Know a Protocol's Battery Life Until It's Fully Developed
Long battery life is a key driver behind the savings and efficiencies of the IoT.
Steps to Knowing an IoT Protocol’s Battery Life
1. Finalize the design or standard on paper. (For standards bodies this means a 100% finalized and assimilated written standard.)
2. Build a chip to the completed design or standard. (Chipmakers often try to get a marketing head start by building a chip to an early version of a standard, but these are never the same as the finalized version.)
3. Build an actual commercial product with the chip integrated.
4. Assess chip performance in the device under lab conditions.
5. Deploy in real-world conditions and confirm battery life performance.
It isn't until after Step 5 that you can actually claim to have met (or not met) the specification and know the true battery life of a wireless protocol. It is important not to draw conclusions at any earlier step, as performance in real-world conditions is often very different from the specs. For example, in the book The Qualcomm Equation, Dave Mock discusses how CDMA was supposed to have 10x the capacity of GSM but ended up with only 3x (still a great advantage) once the technology hit the real world. In terms of battery life, that's the difference between, say, 10 years of battery life and 3 years, which is not a trivial difference.
Don't fall for the trap of a single-line-item comparison between two technologies. Just as you wouldn't do that for a mobile phone or laptop, don't do it for your wireless. Here's an example, using battery life, of why. Many try to use the transmit power of a signal as a single metric for comparing battery life. But battery life is one of those complicated beasts (unlike coverage, which can be summed up using link budget… and that's just science, baby!).

When it comes to battery life, it is better to transmit quickly at a higher power than to transmit slowly at a lower power. Why? Well, that's calculus, my dear fellow! Battery usage is the area under the power-versus-time curve, and you want to minimize that area. So sending one acknowledged message very quickly at high transmit power (e.g., RPMA) uses far less battery than sending a single message three times at lower transmit power because it is never acknowledged (e.g., the Sigfox and LoRa technologies). Here's a picture to demonstrate:
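The same point can be sketched numerically. A minimal Python sketch, using entirely hypothetical power and airtime numbers (not measured figures for any protocol):

```python
# Energy = power x time: minimize the area under the transmit curve.
# All numbers below are hypothetical, for illustration only.

def tx_energy_mj(tx_power_mw, airtime_s, repeats=1):
    """Transmit energy in millijoules: power (mW) x airtime (s) x repeats."""
    return tx_power_mw * airtime_s * repeats

# One acknowledged message sent quickly at high power...
fast_high = tx_energy_mj(tx_power_mw=500, airtime_s=0.1)
# ...versus the same payload sent slowly at low power, three times,
# because delivery is never acknowledged.
slow_low = tx_energy_mj(tx_power_mw=50, airtime_s=2.0, repeats=3)
print(fast_high, slow_low)  # the fast high-power burst uses far less energy
```

With these illustrative numbers, the "low-power" approach burns several times more battery per delivered message, because long airtime and blind repeats dominate the energy bill.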
IoT Security: Don’t Connect Without It
Industry-grade security on the IoT is essential, yet most protocols used for LPWA connectivity are very light on security: 16- and 32-bit authentication rather than standard 128-bit AES. That's a serious problem, because such short tags can simply be brute-forced. Most don't support compliance with national standards like NERC CIP 002–009, NIST SP 800–5, FIPS 140–2 Level 2, and NISTIR-7628. Some in the industry would like to give everything an IP address. In other words, they would like everything to be exposed to two decades of IP-based hacks that any script kiddie can use. A wireless protocol should be secure by design, not rely on bolted-on approaches. And security is more than encryption. It needs these security guarantees:
Message integrity and replay protection
Authenticated firmware upgrades
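To see why 16-bit authentication is a problem, here is a back-of-the-envelope brute-force comparison. The attacker guess rate is a hypothetical round number, chosen only to show the scale difference:

```python
# Keyspace a brute-force attacker must search for each authentication size.
attempts_16 = 2 ** 16    # 65,536 guesses
attempts_128 = 2 ** 128  # ~3.4e38 guesses

guesses_per_second = 10 ** 9  # hypothetical attacker rate
seconds_per_year = 3600 * 24 * 365

print(attempts_16 / guesses_per_second)                      # a fraction of a second
print(attempts_128 / guesses_per_second / seconds_per_year)  # ~1e22 years
```

The exact attacker speed doesn't matter; no plausible rate makes a 16-bit search space safe, and no plausible rate makes a 128-bit one feasible.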
Live Panel discussing IoT Security:
Data Rate ≠ Capacity = Link Capabilities
Coverage is important because it assures that you can actually connect your application. But once you are connected, what can you do with that link? That is determined by a wireless technology's capacity. Capacity is the usable throughput a link has after all the reductions in data rate from putting a MAC on top of the PHY layer, and after overhead, security, interference, and other real-world factors are accounted for. It is the amount of data you, as an app developer, actually get to play with and use for your users.
A data rate is a PHY-layer metric which, as anyone connecting to their 300 Mbps WiFi router knows, is not the actual throughput you experience. Why? Because there's a lot more going on than just the physical layer. Capacity is the usable throughput, and that's how you compare two wireless protocols. Typically this is best done by picking a single data model, like the number of 32-byte messages per hour a protocol can send, and seeing how they all stack up.
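As a sketch of the gap between headline data rate and usable capacity: the overhead fractions below are made up purely for illustration, since real protocol overheads vary widely.

```python
# Strip MAC, security, and retransmission overhead off the headline PHY rate.
# Overhead fractions are hypothetical, purely for illustration.

def usable_throughput_bps(phy_rate_bps, mac_overhead, security_overhead,
                          retransmit_fraction):
    """Usable throughput after successive overhead reductions."""
    rate = phy_rate_bps * (1 - mac_overhead) * (1 - security_overhead)
    return rate * (1 - retransmit_fraction)

phy = 300e6  # the "300 Mbps" printed on the router box
capacity = usable_throughput_bps(phy, mac_overhead=0.4,
                                 security_overhead=0.05,
                                 retransmit_fraction=0.1)

# A single data model makes protocols comparable:
# how many 32-byte messages per hour does that capacity support?
msgs_per_hour = capacity * 3600 / (32 * 8)
print(capacity, msgs_per_hour)
```

The point is not the specific numbers but the structure: every layer above the PHY takes a cut, and only what is left counts when comparing protocols.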
Capacity is also different for uplink versus downlink. Some LPWA protocols have almost no downlink. For example, Sigfox offers only four 8-byte downlink messages on its most expensive platinum package. That is pretty meager. LoRa, due to duty-cycle limitations, can only support about 10% downlink and so acknowledges messages very selectively.
Play the simple game here to understand capacity's role in a network technology's profitability. Remember, if a network technology can't sustain itself financially it will go bankrupt, and any business built on it will suffer too: at minimum the cost of redesigning its app, and at worst outright failure.
Here’s a webinar discussing how to compare wireless IoT protocols:
Cellular LPWA: NB-IOT and LTE-M
The following series of posts addresses the cellular standards roadmap's (3GPP/GSMA) answer to Low Power Wide Area (LPWA) connectivity.
A Deeper Technical Dive: Categories of Low-Power, Wide-Area Modulation Schemes
At first glance, the number of communication technologies being discussed for Low-Power, Wide-Area (LPWA) networks may be a bit overwhelming. What may be helpful is to look at the underlying technology from a fundamental perspective and tune out the marketing component.
Back to Fundamentals
Many of you know that Communication Theory is a very mature field going back many decades with a vast wealth of generated knowledge. Tens of thousands of books, articles, and papers have been published over this time. There are giants in the field — Claude Shannon, Harry Nyquist, Ralph Hartley, Alan Turing, and Andrew Viterbi (who has been an Ingenu strategic advisor from the beginning) whose work we can turn to for clarity. This great body of work gives us frameworks and vocabularies for comparison. It’s often a drier and less interesting world once the marketing innovation is subtracted out — but please, bear with me.
The table below shows four categorizations of the various modulation schemes that are being discussed for LPWA. Bold denotes those technologies being branded as applicable to Low-Power, Wide-Area networking. Since this is a technology treatment, I am defining Local Area Network (LAN) and Wide-Area Network (WAN) by the underlying technology as opposed to how these approaches are being marketed. Note that the most well-known approaches in each category are the Sigfox® technology, LoRa™, also known as Chirp Spread Spectrum (CSS), Narrow-Band IOT (NB-IOT), and Random Phase Multiple Access (RPMA®). These technologies tend to be those with the best marketing (yes, marketing is very important).
Four categorizations of the various modulation schemes that are being discussed for LPWA. Bold denotes those technologies being branded as applicable to Low-Power, Wide-Area networking.
From a technology perspective, the definition of being appropriate as a WAN is whether the multiple-access considerations of coverage and capacity are taken into account:
Coverage. If you want to build a WAN, you would like a single piece of network infrastructure (often on a tower or rooftop) to cover as much area as possible.
Capacity. There's not a lot of good in covering a massive area if you cannot support the data needs of all the devices in that footprint (which, again, is why we discuss coverage, not range: we're concerned with serving all of the devices in an area, not just one cherry-picked one).
Giving a bit more color on the categories:
Ultra-Narrow Band (UNB). The reason many companies have elected this approach is the advantage of low barrier to entry. Companies in this category can leverage commodity radios and skip any technology development. These companies tend to argue that no new technology is required. We disagree for many reasons including the inability to make the economics of LPWA work as discussed in Blog 5: The Economics of Receiver Sensitivity and Spectral Efficiency.
Non-Coherent M-ary Modulation (NC-MM). This is a commonly used modulation in both LAN and WAN applications. Cellular 2G technology was based on GSM/GPRS, which uses a modulation approach called Minimum Shift Keying (MSK) and is also being repurposed as Extended Coverage GSM (EC-GSM), i.e., cellular LPWA in 2G spectrum. The LoRa modulation (CSS) is a member of this category (as is justified in Blog 3: Chirp Spread Spectrum: The Jell-O of Non-Coherent M-ary Modulation). The "spreading" of CSS has no discernible advantage and indeed, as discussed in Blog 4: "Spreading" — A Shannon-Hartley Loophole?, has some significant drawbacks in terms of spectral efficiency.
Direct Sequence Spread Spectrum (DSSS). We described LoRa as “spreading” for no discernible reason. Well it turns out NC-MM does not have the monopoly on this. DSSS has a couple of technologies that also spread for no discernible reason — IEEE 802.11 (the original 1 and 2 Mbps data rates) and Zigbee (based on IEEE 802.15.4). This is just one example to show you that standards bodies are less about technology and more about politics. I will discuss this in more depth in a future blog.
Orthogonal Frequency-Division Multiplexing (OFDM). This is the way you get extreme spectral efficiency. It's great for voice and high-speed data, and it has enabled LTE (4G) to become the dominant cellular standard. When you try to point this approach at LPWA (as NB-IOT does), significant problems emerge. I will discuss this in more depth in a future blog.
Time to get Down and Nerdy
In these resources, we visit the building blocks of communication and translate the perfectly good and intuitively understandable terms of coverage and capacity to nerd in Blog 2: Back to Basics — The Shannon-Hartley Theorem.
Coverage translates to receive sensitivity, which is a function of something called Eb/No (energy per bit relative to thermal noise spectral density).
Capacity translates into something called spectral efficiency and we need to go even one level deeper into nerd and assign it a Greek letter (of course) and that Greek letter is…. η.
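For reference, the standard textbook relationships behind those two translations look like this (Shannon-Hartley, with η as spectral efficiency and Eb/N0 as energy per bit over noise spectral density):

```latex
% Shannon-Hartley: capacity C (bits/s) of a channel of bandwidth B (Hz)
C = B \log_2\!\left(1 + \frac{S}{N}\right)

% Spectral efficiency \eta is capacity per unit bandwidth:
\eta = \frac{C}{B} = \log_2\!\left(1 + \frac{S}{N}\right)

% When transmitting at capacity, SNR relates to E_b/N_0 by
\frac{S}{N} = \eta \, \frac{E_b}{N_0}
```

The last line is why the two metrics trade off: pushing spectral efficiency η up demands more energy per bit, while accepting low η lets a link close at very low signal levels.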
Using this fundamental framework, in Blog 3: Chirp Spread Spectrum: The Jell-o of Non-Coherent M-ary Modulation, I’ll talk about a category of approaches that is very similar from a technology point of view and one very successfully marketed approach in this category called Chirp Spread Spectrum (also known as LoRa).
And then in Blog 4: "Spreading" — A Shannon-Hartley Loophole?, we discuss in more detail technologies that use links with very low spectral efficiency (η).

Two costs lie heaviest on the balance sheets of traditional wireless carriers: infrastructure and spectrum.
Cost pressure comes from licensed spectrum as well. Licensed spectrum is an extremely expensive resource: in 2015, traditional wireless providers spent $45 billion on spectrum in the United States, a sum larger than the GDP of more than 100 individual countries. Spectrum is a valuable resource for a reason: it is the lifeblood of wireless voice and data connectivity. Consumers and businesses are willing to pay good money for high data throughput, and carriers need that licensed spectrum to provide it.
To remain profitable, licensed spectrum must be used by carriers for voice and data connections rather than other uses like machine connectivity. Voice/data connectivity brings carriers the most revenue per Hz (a unit of measure used for amount of spectrum). In the industry, this logic is broken down using average revenue per user, or ARPU. It just makes good business sense for carriers to maximize ARPU, especially with the enormous weight of spectrum costs. It is for this very reason that carriers are shutting down their 2G networks.
Carriers must use precious spectrum for the highest average revenue per user (ARPU).
Carriers must use that precious spectrum for the highest ARPU. Two factors will put additional pressure on carriers: the overall market of voice and data users will grow, as will the amount of data each user requires. This only exacerbates the importance of using spectrum for the highest-ARPU purposes. It's basic economics. Any deviation from that strategy will result in lost profit and punishment in terms of market share and on Wall Street. So maximizing ARPU ripples through all of their business decisions regarding spectrum usage. And that's as it should be: businesses that do well serve their best customers well.
Economics and Misaligned Incentives
But what makes good business sense for traditional wireless carriers doesn't make sense for IoT devices, at least not with the cellular industry's current dynamics. Anybody or anything that isn't high ARPU will naturally and rightfully be relegated to lower priority, and the lowest-ARPU customers are exactly the devices that LPWA is fit to serve. According to James Brehm & Associates, 86% of current IoT devices use less than 3 MB of data per month; those are hardly power users. And the devices that have yet to be developed, the "greenfield applications" as industry insiders would say, are projected by the 3rd Generation Partnership Project (3GPP, the standards development body) to average 32 KB a month of data. What's more, the same 3GPP has built IoT traffic de-prioritization into its LPWA candidate standards, including LTE-M and others.
Carriers can turn down or turn off machine traffic whenever their expensive spectrum gets clogged with higher ARPU traffic, like during sports events.
In other words, carriers can turn down or turn off machine traffic whenever their expensive spectrum gets clogged with higher ARPU traffic. And it doesn’t take much for that to happen. If you’ve ever been at a sporting event, you’ve probably experienced delays in receiving even a text message because so many people have cell phones connected to the cellular towers in the area.
How would this type of prioritization impact businesses that have their device messages blocked out by the carriers? Naturally, some proportion of the delayed messages will have a minimal impact on business. But some proportion of them will be majorly impacted by these unpredictable interruptions. The more important point is that your business would be subject to the whims of the carriers. And these whims are based on the carriers’ sound business reasoning.
Always Second Tier
The conclusion to be drawn from this is that connected machines will always be second tier to voice/data connections using the same spectrum. Carriers’ current business models depend on this. Their cost structure dictates it. These economic forces will not just go away, and will continue to relegate machine connectivity to the bottom tier.
Voice/data needs are what have pushed the cellular generations from 1G to 2G to 3G to 4G, soon 5G and inevitably to 6G and beyond.
The misaligned incentives between IoT connected businesses and traditional cellular carriers go beyond lower priority machine connectivity. Because human consumption of voice/data is the highest ARPU customer, their needs will continue to be the primary driver of cellular technology’s development in years to come. Voice/data needs are what have pushed the cellular generations from 1G to 2G to 3G to 4G, and in the coming few years, 5G. These cellular generations begin about every nine years.
Sunsets are fine for voice/data customers using smartphones, since those devices are upgraded every couple of years. But the incessant cellular sunsets are completely anathema to the longevity needs of IoT devices. The current cellular ecosystem will be unable to provide adequate IoT device longevity because of a sunsetting cycle that is driven by sound business decisions.
tl;dr Two things keep cellular carriers from serving IoT device needs. 1) They make the most money from voice/data customers, so IoT devices will always be second tier. 2) The IoT needs long technology life cycles, but 3GPP, the cellular standards body, is incentivized to change protocols periodically so that participating companies can grab more of the IP pie and the resulting licensing fees. This IP grab is what pushes standards to change so quickly, leading to network sunsets and short IoT lifecycles, and to ROI lower than what's needed to reach tens of billions of IoT devices.
Want to know more?
Download our free eBook (fair warning, this is behind a form) that does an extensive technical review and comparison of the leading LPWA protocols.
Want to integrate RPMA into your IoT application? Contact us at firstname.lastname@example.org