by T. Sridhar, Flextronics
Wireless networks can be classified broadly as Wireless Personal-Area Networks (WPAN), Wireless LANs (WLANs), and Wireless Wide-Area Networks (WWANs). WPANs operate in the range of a few feet, whereas WLANs operate in the range of a few hundred feet and WWANs beyond that. In fact, wireless WANs can operate in a wide range—a metropolitan area, cellular hierarchy, or even on intercity links through microwave relays.
This article examines wireless technologies for the WLAN, WPAN, and WWAN areas, with specific focus on the IEEE 802.11 WLAN (often known as Wi-Fi®), Bluetooth (BT) in the WPAN, and WiMAX for WWAN as representative technologies. It discusses key aspects of the technology—medium access and connectivity to the wired network—and concludes by listing some common (mis)perceptions about wireless technology.
The Institute of Electrical and Electronics Engineers (IEEE) defined three major WLAN types in 802.11: 802.11b and 802.11g, which operate in the 2.4-GHz frequency band, and 802.11a, which operates in the 5-GHz band. The 2.4- and 5-GHz bands used here are in the license-free part of the electromagnetic spectrum, and portions are designated for use in Industrial, Scientific, and Medical (ISM) applications, so these portions are often called ISM bands. More recently, a high-speed 802.11 WLAN has been proposed: the 802.11n WLAN, which operates in both the 2.4- and 5-GHz bands.
The 2.4-GHz frequency band used for 802.11 spans 2.4 to 2.4835 GHz, for a total bandwidth of 83.5 MHz, with three nonoverlapping 20-MHz channels. In the 5-GHz band, there are a total of 12 channels in three separate subbands: 5.15 to 5.25 GHz (100 MHz), 5.25 to 5.35 GHz (100 MHz), and 5.725 to 5.825 GHz (100 MHz).
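As a concrete illustration, the commonly cited formula for 2.4-GHz channel centers is 2407 + 5n MHz for channel n; the short Python sketch below (function names are ours, not from any standard library) shows why only channels spaced five apart, such as 1, 6, and 11, avoid mutual overlap.

```python
# Illustrative: center frequencies of the 2.4-GHz 802.11 channels.
# Channel n (1-13) is centered at 2407 + 5*n MHz; with roughly 20-MHz-wide
# channels, only channels 1, 6, and 11 avoid overlapping one another.

def channel_center_mhz(n: int) -> int:
    """Center frequency (MHz) of 2.4-GHz channel n."""
    if not 1 <= n <= 13:
        raise ValueError("channel must be 1..13")
    return 2407 + 5 * n

def channels_overlap(a: int, b: int, width_mhz: int = 20) -> bool:
    """Two channels overlap if their centers are closer than the channel width."""
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < width_mhz

print(channel_center_mhz(1))    # 2412
print(channel_center_mhz(6))    # 2437
print(channels_overlap(1, 6))   # False: 25 MHz apart
print(channels_overlap(1, 3))   # True: only 10 MHz apart
```

Spacing adjacent channels 5 MHz apart while each occupies roughly 20 MHz is exactly why a three-channel deployment plan (1, 6, 11) is the usual practice.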
The more common mode of operation in 802.11 is the infrastructure mode, where the stations communicate with other wireless stations and wired networks (Ethernet typically) through an access point. The other mode is the ad-hoc mode, where the stations can communicate directly with each other without the need for an access point; we will not discuss this mode in this article. The access point bridges traffic between wireless stations through a lookup of the destination address in the 802.11 frame (see Figure 1a).
The Media Access Control (MAC) header of 802.11 has four addresses. Depending upon the values of the FromDS (from access point) and ToDS (to access point) bits in the header (see Figure 1b), the addresses have different connotations. The first two addresses are for the receiver and transmitter, respectively.
Address 4 is used only when both FromDS and ToDS are set to 1, a special mode of communication for access point-to-access point traffic. In that case, addresses 1 and 2 refer to the receiving and transmitting access points on the inter-access point channel, and addresses 3 and 4 refer to the destination- and source-station MAC addresses, respectively. When only FromDS is set to 1, address 1 is the destination-station MAC address, address 2 is the access point address, and address 3 is the source-station MAC address. When only ToDS is set to 1, address 1 is the access point MAC address, address 2 is the transmitting-station MAC address, and address 3 is the destination-station MAC address.
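The address rules can be summarized in code. The Python sketch below (the field labels are illustrative, and the ordering follows the IEEE 802.11 convention of receiver/transmitter first, then destination and source) returns the interpretation of each address field for a given ToDS/FromDS combination; the case with neither bit set corresponds to the ad-hoc mode mentioned earlier.

```python
# Interpretation of the four 802.11 MAC addresses from the ToDS/FromDS bits.
# Follows the IEEE 802.11 ordering: addr1 = receiver, addr2 = transmitter,
# then destination/source as the bits dictate. Labels are illustrative.

def interpret_addresses(to_ds: bool, from_ds: bool) -> dict:
    if to_ds and from_ds:            # access point to access point (4-address mode)
        return {"addr1": "receiving AP", "addr2": "transmitting AP",
                "addr3": "destination station", "addr4": "source station"}
    if from_ds:                      # access point to station
        return {"addr1": "destination station", "addr2": "AP",
                "addr3": "source station", "addr4": "unused"}
    if to_ds:                        # station to access point
        return {"addr1": "AP", "addr2": "transmitting station",
                "addr3": "destination station", "addr4": "unused"}
    return {"addr1": "destination station", "addr2": "source station",
            "addr3": "BSSID", "addr4": "unused"}   # ad-hoc (IBSS) case

print(interpret_addresses(to_ds=True, from_ds=False)["addr1"])  # AP
```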
Although earlier versions of 802.11 LANs used Frequency Hopping Spread Spectrum (FHSS), 802.11b typically uses Direct Sequence Spread Spectrum (DSSS) for 1-, 2-, 5.5-, and 11-Mbps speeds. Both schemes involve transmission of a narrowband signal over a wider frequency range to mitigate the possibility of interference at any one frequency. The nodes and access points typically transmit at the highest data rate possible based on the current signal-to-noise ratio.
At the MAC level, 802.11 LANs use Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA). Stations back off if they detect that another station is transmitting on that channel. A station then waits for a random period after the end of the transmission before it attempts to transmit on that channel. In addition, control frames such as Request to Send (RTS) and Clear to Send (CTS) are used to facilitate the actual data transfer. The CTS control frame carries the duration for which the transmitting node is allowed to transmit. Other stations receive this frame and back off for at least the specified duration before sensing the radio link again.
When the access points are connected through a LAN, the entire system is known as a Distribution System. The access points perform an integration function—that is, bridging between wired and wireless LANs. In this scenario, (see Figure 1a) the wireless control and data frames are terminated at the access point or tunneled from the access point to a centralized controller over Ethernet. When terminated at the access point, the payload is transmitted from the access point to the network over Ethernet. This transmission is done in the following manner:
The source and destination addresses are set to the station and access point addresses, respectively. At the access point, the payload is stripped from the 802.11 data frame and sent as part of an Ethernet packet either as a broadcast packet or to a specific destination. If the packet sizes (when reassembled) are larger than the Ethernet frame size, they are discarded. In the reverse direction, the Ethernet frame can be directly encapsulated into an 802.11 frame for transmission from the access point to the end node. At the WLAN end node, the complete Ethernet frame shows up at the driver level as though it were a frame received on a pseudo Ethernet interface.
The most common 802.11b WLAN speed is 11 Mbps. However, after accounting for interframe spacing, preambles, header encapsulation, and the acknowledgement required for each frame, the achievable user-data throughput is only about 50 percent of the nominal link speed. This 50-percent figure is a common theme for 802.11g and 802.11a as well.
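The 50-percent figure can be checked with a back-of-the-envelope calculation. The sketch below uses commonly cited 802.11b timing values (long preamble, 2-Mbps ACK, average CSMA/CA backoff); treat the exact numbers as illustrative rather than normative.

```python
# Rough 802.11b efficiency estimate: per-frame airtime overhead vs. payload.
# Timing constants are the commonly cited 802.11b values (illustrative).

DIFS_US     = 50                 # distributed interframe space
SIFS_US     = 10                 # short interframe space
PREAMBLE_US = 192                # long PLCP preamble + header
SLOT_US     = 20
AVG_BACKOFF = 15.5 * SLOT_US     # mean backoff: half of CWmin (31) slots

def user_throughput_mbps(payload_bytes: int = 1500, rate_mbps: float = 11.0) -> float:
    data_us = payload_bytes * 8 / rate_mbps      # payload airtime
    ack_us  = PREAMBLE_US + 14 * 8 / 2.0         # 14-byte ACK sent at 2 Mbps
    total   = DIFS_US + AVG_BACKOFF + PREAMBLE_US + data_us + SIFS_US + ack_us
    return payload_bytes * 8 / total             # bits per microsecond == Mbps

print(round(user_throughput_mbps(), 1))          # roughly 6.3 Mbps on an 11-Mbps link
```

The result, roughly 6 Mbps of user data on an 11-Mbps link, matches the 50-to-60-percent efficiency cited in the text.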
Stations connect to the access point through a scanning process. Scanning can be passive or active. In the passive mode, the station searches for access points to find the best access point signal (which contains the Service Set Identifier [SSID], data rates, and so on).
The access point frame that the stations look for is a management frame known as the beacon frame. In the active mode, the station initiates the process by broadcasting a probe frame. All access points that receive the probe send back a probe response, helping the station build up the list of available access points. The sequence of a station "connecting" to an access point involves two steps. The first is authentication, where the station sends an authentication request frame to the access point. Depending upon the authentication through 802.1X or internal configuration, the access point can accept or reject the request with an authentication response. The second step is association, which is required to determine the data rates supported between the access point and the station. At the end of the association phase, the station is allowed to transmit and receive data frames.
Power Concerns in 802.11
Although it is not a part of the standard, the access points might adjust their transmitting power based on the environment they are in (they do have maximum limits based on regional restrictions). If they do not perform this adjustment, all the stations might connect to the access point with the highest transmitting power, even if the access point is far away. The other concern is, of course, the interference between access points. The power adjustment is usually done through configuration and, in some cases, through a monitoring function on the network. In the latter case, the monitoring function reports the information to a central controller.
A new initiative within the IEEE (802.11k) has been started to improve traffic distribution within the network. Specifically, it addresses the problem of access point overloading so that stations can connect to underused access points for a more efficient use of network resources.
With respect to power management on the client side, a station can indicate that it is going into a "sleep" or low-power state to the access point through a status bit in a frame header (refer to Figure 1b). The access point then buffers packets for the station instead of forwarding them to the station as soon as they are received. The sleeping station periodically wakes up to receive beacons from the access point. The beacons include information about whether frames are being buffered for the station. The station then sends a request to the access point to send the buffered frames. After receiving the frames, the station can go back to sleep.
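The buffer-and-wake exchange described above can be modeled in a few lines. The class and method names below are our own illustrative choices, not API names from any driver; the sketch only captures the sequence of events (sleep, buffer, beacon indication, wake, delivery).

```python
# Toy model of 802.11 power-save buffering at the access point.
# Names and structure are illustrative, not from any real implementation.
from collections import defaultdict, deque

class AccessPoint:
    def __init__(self):
        self.asleep = set()
        self.buffers = defaultdict(deque)

    def station_sleeps(self, mac):
        self.asleep.add(mac)

    def deliver(self, mac, frame):
        if mac in self.asleep:
            self.buffers[mac].append(frame)   # hold until the station wakes
            return None
        return frame                          # station awake: forward at once

    def beacon_has_traffic_for(self, mac):
        # In real 802.11 this indication is carried in the beacon's TIM element.
        return bool(self.buffers[mac])

    def station_wakes(self, mac):
        self.asleep.discard(mac)
        frames, self.buffers[mac] = list(self.buffers[mac]), deque()
        return frames

ap = AccessPoint()
ap.station_sleeps("aa:bb")
ap.deliver("aa:bb", "frame1")
print(ap.beacon_has_traffic_for("aa:bb"))   # True
print(ap.station_wakes("aa:bb"))            # ['frame1']
```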
802.11a/g Technology—Orthogonal Frequency-Division Multiplexing
Sometimes called discrete multitone (DMT) in the Digital Subscriber Line (DSL) world, Orthogonal Frequency-Division Multiplexing (OFDM) is used as the underlying technology in 802.11g and 802.11a. OFDM is a form of Frequency-Division Multiplexing (FDM); normally, FDM uses multiple frequency channels to carry the information of different users. OFDM uses multicarrier communications, but only between one pair of users—that is, a single transmitter and a single receiver.
Multicarrier communications splits a signal into multiple signals and modulates each of the signals over its own frequency carrier, and then combines multiple frequency carriers through FDM. OFDM uses an approach whereby the carriers are totally independent of (orthogonal to) each other. Note that the total bandwidth consumed with OFDM is the same as with single carrier systems even though multiple carriers are used—because the original signal is split into multiple signals. OFDM is more effective at handling narrowband interference and problems related to multipath fading, simplifying the building of receiver systems.
We can illustrate this process with a simple example, one often used in discussions about OFDM. For a "normal" transmission at 1 Mbps, each bit takes 1 microsecond to send. Consider bit 1 and bit 2 sent with a gap of 1 microsecond. If two copies of bit 1 are received at the destination, one of them is a reflected or delayed copy. If the delay is around 1 microsecond, this delayed copy of bit 1 can interfere with bit 2 as it is received at the destination, because they arrive at approximately the same time. Now consider an OFDM transmission rate of 100 kbps per subcarrier; that is, the bits are sent "slower" but over multiple frequencies. A multipath delay of around 1 microsecond will not affect bit 2, because each bit now takes around 10 microseconds to transmit. The delay in bit arrival (1 microsecond in our example) is not a function of the transmission rate; rather, it is due to the various paths taken by the signal.
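The arithmetic of this example can be made explicit. In the sketch below, the threshold of half a bit time for "interference" is our own illustrative choice, not a figure from the standard; the point is only that the same 1-microsecond echo is large relative to a 1-microsecond bit and small relative to a 10-microsecond bit.

```python
# Multipath vs. bit duration, using the numbers from the example in the text.

def bit_duration_us(rate_bps: float) -> float:
    """Time on the air per bit, in microseconds."""
    return 1e6 / rate_bps

def multipath_interferes(rate_bps: float, delay_us: float) -> bool:
    """Crude test: the echo smears into the next bit when the path delay is a
    significant fraction of the bit time (half a bit here, an arbitrary choice)."""
    return delay_us >= 0.5 * bit_duration_us(rate_bps)

print(bit_duration_us(1_000_000))            # 1.0 us per bit at 1 Mbps
print(multipath_interferes(1_000_000, 1.0))  # True: echo lands on the next bit
print(bit_duration_us(100_000))              # 10.0 us per bit at 100 kbps
print(multipath_interferes(100_000, 1.0))    # False: delay is small vs. bit time
```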
Orthogonal Frequency-Division Multiple Access (OFDMA) superimposes the multiple-access mechanism on OFDM channels, so that multiple users can be supported through subsets of the subcarriers assigned to different users. Note that 802.16-2004 ("Fixed" WiMAX) uses OFDM, whereas 802.16e-2005 ("Mobile" WiMAX) uses OFDMA.
MIMO and 802.11n
Multiple Input Multiple Output (MIMO) antennas are the basis for the 802.11n wireless LAN standard, currently in draft form but on the way to final standardization. Signals often reflect off objects and are received at different times and strengths at the receiver, resulting in a phenomenon called multipath distortion. (Note: 802.11n in this article implies the draft 802.11n standard at the time of writing.) MIMO actually takes advantage of this distortion by sending a single data stream split into multiple parts to be transmitted from multiple antennas (typically 3 in 802.11n) and letting the reflected signals be processed at the receiver (through multiple antennas). The transmission of multiple data streams over different spatial channels, sometimes known as Space Division Multiplexing (SDM), also allows a larger amount of data to be sent over the air. Through advances in the Digital Signal Processing (DSP)-based processing, the receiver can process the signals, cross-correlate them, and reconstitute them accurately despite interference. Also, because of the multiple signals received over multiple paths, link reliability is increased.
The 802.11n standard uses three antennas and also supports two radios (for the 2.4- and 5-GHz bands where 802.11n can operate). It can also use 40-MHz channels through channel bonding, in which two adjacent 20-MHz channels are combined into a single 40-MHz channel, possibly resulting in a data rate of up to 150 Mbps of effective throughput.
One concern with 802.11n that is starting to gain attention is the power requirement of 802.11n access points. With radios in both bands and the use of MIMO, 802.11n access points tend to consume more power than the 802.11 a/b/g access points, leading to problems when the access point is powered by Power over Ethernet (PoE) power-sourcing equipment. The 802.3af standard permits a maximum of 12.95W per Ethernet port, which is often less than the power that most 802.11n APs need. The IEEE 802.3at working group is working toward a higher-power PoE standard. This initiative, commonly called PoE Plus, will peak at 25W per Ethernet port (on Category 5 Ethernet cable).
The access point has two primary functions: connecting wireless clients to each other, and connecting wireless clients to wired clients. In the latter role, the access point can act as an Ethernet bridge, passing Layer 2 frames between the wired and wireless networks, or as a router, terminating WLAN and Ethernet Layer 2 frames and performing IP-level forwarding. The Layer 3 routing model is less popular, and we will not consider it here.
The access point typically terminates WLAN management and control frames. However, there is another model of a thin access point wherein these frames can be backhauled to a WLAN switch for processing. The access point connection to the wired network is typically an Ethernet link to a dedicated Ethernet switch port at 100-Mbps or Gigabit Ethernet speeds. With the advent of 802.11g and 802.11a WLANs, 10-Mbps links are not sufficient because these WLANs can operate at close to 27-Mbps throughput over the wireless network.
When considering 802.11n, we find that 100-Mbps backhaul links to the switch are insufficient for the 802.11n throughput of 150, or even 300 Mbps with channel bonding. Gigabit Ethernet links are often considered for connectivity between the 802.11n access point and the Ethernet switch. The next speed for Ethernet connectivity is 10 Gbps, which is well-established in the enterprise for data center and core Ethernet network applications. Work is ongoing in the IEEE for 40- and 100-Gbps Ethernet, so that should cover advances in wireless speeds for efficient backhaul to the wired network.
Bluetooth started as a "wire-replacement" protocol for operation at short distances. A typical example is the connection of a phone to a PC, which, in turn, uses the phone as a modem (see Figure 2). The technology operates in the unlicensed 2.4-GHz ISM band. The standard uses FHSS technology. There are 79 hops in BT displaced by 1 MHz, starting at 2.402 GHz and ending at 2.480 GHz.
Bluetooth belongs to a category of Short-Range Wireless (SRW) technologies originally intended to replace the cables connecting portable and fixed electronic devices. It is typically used in mobile phones, cordless handsets, and hands-free headsets (though it is not limited to these applications). The specifications detail operation in three different power classes—for distances of 100 meters (long range), 10 meters (ordinary range), and 10 cm (short range).
Bluetooth is most efficient at short distances and in noisy frequency environments; its frequency hopping lets a device avoid interference from other signals by hopping to a new frequency after transmitting and receiving a packet.
Bluetooth can operate in both point-to-point and logical point-to-multipoint modes. Devices using the same BT channel are part of a piconet that includes one master and one or more slaves. The master BT address determines the frequency hopping sequence of the slaves. The channel is also divided into time slots, each 625 microseconds in duration. The master starts its transmission in even-numbered time slots, whereas the slave starts its transmission in odd-numbered slots.
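The slot discipline described above can be sketched in a few lines of Python. The constants come from the text; the helper names are ours.

```python
# Bluetooth piconet slot timing: 625-microsecond slots, master in even
# slots, slaves in odd slots. Helper names are illustrative.

SLOT_US = 625

def slot_owner(slot_index: int) -> str:
    """Who may start transmitting in this slot."""
    return "master" if slot_index % 2 == 0 else "slave"

def slot_start_us(slot_index: int) -> int:
    """Start time of the slot, in microseconds from the start of slot 0."""
    return slot_index * SLOT_US

for i in range(4):
    print(i, slot_start_us(i), slot_owner(i))
# 0 0 master
# 1 625 slave
# 2 1250 master
# 3 1875 slave
```

Because each slot also corresponds to one dwell on a hop frequency, the 625-microsecond slot time is what yields the 1600 hops/second rate mentioned later in this article.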
BT specifies two types of links, a Synchronous Connection-Oriented (SCO) link and an Asynchronous Connectionless Link (ACL). The SCO link is a symmetric point-to-point link between a master and a single slave in the piconet, whereas the ACL link is a point-to-multipoint link between the master and all the slaves participating in the piconet. Only a single ACL link can exist in the piconet, as compared to several individual SCO links.
Other than the radio and baseband components (the physical layer of Bluetooth that manages physical channels and links), the Bluetooth stack (see Figure 3) includes a Link Manager Protocol (LMP) used for link management between the endpoints, a Logical Link Control and Adaptation Protocol (L2CAP) for the data link, a Radio Frequency Communication (RFCOMM) protocol to provide emulation of serial ports over L2CAP, and a Service Discovery Protocol (SDP) for the dynamic discovery of services—because the set of services changes dynamically based on the RF proximity of the devices. In addition, the Host Controller Interface (HCI) provides a uniform command interface to the baseband controller and the link manager to have access to the hardware registers.
LMP is required for authentication, encryption, switching of roles between master and slave, power control, and so on. L2CAP provides both connection-oriented and connectionless data services functions, including protocol multiplexing, segmentation and reassembly, and piconet-based group abstraction. As part of the multiplexing function, L2CAP uses the concept of channels, with a channel ID representing a logical channel endpoint on a BT device. L2CAP offers services to the higher layers for connection setup, disconnect, data reading and writing, pinging the endpoint, and so on.
RFCOMM, which provides emulation of serial ports on the BT link, can support up to 60 simultaneous connections between two BT devices. The most common emulation is of the RS-232 interface, which includes emulation of the various signals of this interface such as Request To Send (RTS), Clear To Send (CTS), Data Terminal Ready (DTR), and so on. RFCOMM is used with two types of BT devices: endpoints such as printers and computers, and intermediate devices such as modems. In Figure 3, the IP stack over Point-to-Point Protocol (PPP) over RFCOMM emulates the mode of operation over a dialup or dedicated serial link. Because the various BT devices in a piconet may offer or require a different set of services, the Service Discovery Protocol (SDP) is used to determine the nature of the services available on the other nodes. SDP uses a request-response packet scheme for its operation.
BT includes multiple profiles that correlate to the type of services that are available from BT nodes. For example, the BT headset profile is used between an audio source and a headset, both connecting wirelessly through BT; it involves a subset of the well-known AT commands used with modems. The audio source (typically a cell phone or cordless phone) implements the BT audio gateway profile for communicating with the device implementing the headset profile. Other profiles include a basic printing profile (often used for printing between a PC and a BT-enabled printer), dialup networking profile, fax profile, cordless telephony profile, Human Interface Device (HID) profile, and so on. The last profile is used for BT-enabled keyboards and mice; it is based on the HID protocol defined for USB.
The Bluetooth dialup networking profile is interesting from an IP perspective; as shown in Figures 2 and 3, it involves the IP stack running over RFCOMM to provide the appearance of a serial port running PPP, which is very similar to dialup networking over a basic telephone service line.
Bluetooth Frame Format and Speeds
The frame format in BT consists of a 72-bit field for the access code (including a 4-bit preamble, 64-bit synchronization field, and 4 bits of trailer), followed by a 54-bit header field that includes information about the frame type, flow control, acknowledgement indication, sequence number, and header error check. Following the header field is the actual payload, which can be up to 2745 bits. In all, the frame length can be a maximum of 2871 bits. Whereas synchronous BT traffic has periodic reserved slots, asynchronous traffic can be carried on the other slots.
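The field widths above can be tabulated and cross-checked; this small sketch simply verifies that the pieces add up to the stated maximums.

```python
# Bluetooth baseband frame layout, as (field, bits) pairs from the text.

BT_FRAME_FIELDS = [
    ("preamble",        4),     # part of the 72-bit access code
    ("sync word",      64),     # part of the 72-bit access code
    ("trailer",         4),     # part of the 72-bit access code
    ("header",         54),     # type, flow, ACK/SEQ, header error check
    ("payload (max)", 2745),
]

total = sum(bits for _, bits in BT_FRAME_FIELDS)
print(total)          # 2871: the maximum frame length in bits
print(4 + 64 + 4)     # 72: the access code
```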
BT ranges vary from 1 meter (1 mW) for Class 3 devices, through 10 meters (2.5 mW) for Class 2 devices, to 100 meters (100 mW) for Class 1 devices. BT Version 1.2 offers a data rate of 1 Mbps, and BT Version 2.0 with Enhanced Data Rate (EDR) supports a data rate of 3 Mbps. BT Version 1.1 was ratified as IEEE Standard 802.15.1 in 2002.
Bluetooth versus Wi-Fi
A few years ago, some marketing literature tried to emphasize BT and Wi-Fi as competing technologies. Though both operate in the ISM spectrum, they were invented for different reasons. Whereas Wi-Fi was often seen as a "wireless Ethernet," BT was initially seen purely as a cable- or wire-replacement technology. Uses such as dialup networking and wireless headsets fit right into this usage model. Recently, the discussion has focused more on coexistence instead of competition because they serve primarily different purposes. There are still some concerns related to their coexistence because they operate over the same 2.4-GHz ISM band.
To recapitulate, the Bluetooth physical layer uses FHSS with a 1-MHz-wide channel at 1600 hops/second (that is, 625 microseconds in every frequency channel). Bluetooth uses 79 different channels. Standard 802.11b uses DSSS with 22-MHz-wide channels; a station can use any of the 11 channels defined across the allocated 83.5 MHz of the 2.4-GHz frequency band. Interference can occur either when the Wi-Fi receiver senses a BT signal at the same time that a Wi-Fi signal is being sent to it (this happens when the BT signal is within the 22-MHz-wide Wi-Fi channel) or when the BT receiver senses a Wi-Fi signal.
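The overlap condition can be made concrete. Using the commonly cited channel formulas (Bluetooth hop k at 2402 + k MHz; 2.4-GHz Wi-Fi channel n centered at 2407 + 5n MHz), this sketch counts how many of the 79 hop channels fall inside a single 22-MHz-wide Wi-Fi channel; the function names are ours.

```python
# How many Bluetooth hop channels collide with one Wi-Fi channel?

def bt_hop_mhz(k: int) -> int:
    """Bluetooth hop channel k (0-78) center frequency in MHz."""
    return 2402 + k

def wifi_center_mhz(n: int) -> int:
    """2.4-GHz Wi-Fi channel n (1-13) center frequency in MHz."""
    return 2407 + 5 * n

def hop_hits_wifi(k: int, wifi_channel: int, width_mhz: float = 22.0) -> bool:
    """True when the 1-MHz BT hop lands inside the Wi-Fi channel."""
    return abs(bt_hop_mhz(k) - wifi_center_mhz(wifi_channel)) <= width_mhz / 2

hits = sum(hop_hits_wifi(k, 6) for k in range(79))
print(hits)   # 23 of the 79 hops land inside Wi-Fi channel 6
```

So roughly a quarter of the hop set collides with any one Wi-Fi channel, which is exactly the population of "noisy channels" a coexistence scheme would want to avoid.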
BT 1.2 has made some enhancements to enable coexistence, including Adaptive Frequency Hopping (AFH) and optimizations such as Extended SCO channels for voice transmission within BT. With AFH, a BT device can indicate to the other devices in its piconet which noisy channels to avoid. Wi-Fi optimization includes techniques such as dynamic channel selection to skip those channels that BT transmitters are using. Access points skip these channels by determining which channels to operate over based on the signal strength of the interferers in the band. Adaptive fragmentation is another technique that is often used to aid optimization. Here, the level of fragmentation of the data packets is increased or reduced in the presence of interference. For example, in a noisy environment, the size of the fragment can be reduced to reduce the probability of interference.
Another way to implement coexistence is through intelligent transmit power control. If the two communicating (802.11 or Wi-Fi) devices are close to each other, they can reduce the transmit power, thus lowering the probability of interference with other transmitters.
WiBree to Low-Energy Bluetooth
WiBree is a technology first proposed by Nokia to enable low-power communication over the 2.4-GHz band for devices powered by button cells (or the equivalent). A consequence of the low-power requirement is that the wireless function must perform a very small set of operations when active and return to sleep or standby mode when inactive.
The WiBree technology has been adapted by the Bluetooth Special Interest Group (SIG) as part of the lower-power BT initiative—also known as Low Energy (LE) BT technology. The LE standard is expected to be finalized sometime in 2009. When this standardization is completed, three types of BT devices will be available: traditional BT, LE BT, and a mixed or dual-mode BT. A mixed-mode device can operate in low power mode when communicating with other LE devices (for example, sensors) and traditional BT mode when communicating with BT devices, implying the presence of both a BT stack and an LE stack on the same device.
WiMAX stands for Worldwide Interoperability for Microwave Access and is defined under the IEEE 802.16 working group. Two standards exist for WiMAX: 802.16-2004 for fixed access, and 802.16e-2005 for mobile stations. The WiMAX Forum certifies systems for compatibility under these two standards and also defines a network architecture for implementing WiMAX-based networks.
WiMAX can be classified as a last-mile access technology similar to DSL, with a typical range of 3 to 10 kilometers and speeds of up to 5 Mbps per user with non-line-of-sight coverage. WiMAX access networks can operate over licensed or unlicensed spectrum in various regions or countries, though licensed-spectrum implementations are more common. WiMAX operation is defined over frequencies between 2 and 66 GHz, parts of which may be deployed as unlicensed spectrum in some countries. The lower frequencies can operate over longer ranges and penetrate obstacles, so initial network rollouts are in this part of the spectrum, with the 2.3-, 2.5-, and 3.5-GHz frequency bands being common. Channel sizes are 3.5, 5, 7, or 10 MHz for 802.16-2004 and 5, 8.75, or 10 MHz for 802.16e-2005. WiMAX networks are often used to backhaul data from Wi-Fi access points. In fact, they are often envisaged as replacements for the current implementation of metro Wi-Fi networks that use 802.11b/g for client access and 802.11a for backhaul to connect to the other parts of the network.
The 802.16-2004 standard uses OFDM similar to 802.11a and 802.11g, whereas 802.16e-2005 uses a technology called Scalable Orthogonal Frequency-Division Multiple Access (S-OFDMA). This technology is better suited to mobile systems because its subcarrier structure enables the mobile nodes to concentrate power on the subcarriers with the best propagation characteristics (a mobile environment has more dynamic variables). Accordingly, the 802.16e radio and signal processing are more complex.
Unlike 802.11, which supports only Time-Division Duplexing (TDD), where transmit and receive occur on the same channel but at different times, 802.16 offers both TDD and Frequency-Division Duplexing (FDD), where transmit and receive occur on different frequencies. Another innovation in WiMAX is similar to a scheme in Code Division Multiple Access (CDMA): subscriber stations can adjust their transmit power based on their distance from the base station, unlike client stations in an 802.11 network.
WiMAX base stations use a scheduling algorithm to control medium access by the subscriber stations. Each subscriber station is assigned an access slot, which the base station can enlarge or contract (to more or fewer slots). Quality-of-Service (QoS) parameters can be controlled by balancing the time-slot assignments among the subscriber stations. The base-station scheduling types are unsolicited grant service, real-time polling service, non-real-time polling service, and best effort. Depending upon the type of traffic and service requested, one of these scheduling types is used.
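As an illustration only, the four scheduling services might map to a strict priority order in a simple scheduler. The ordering and data structures below are our assumption for the sketch, not behavior mandated by 802.16.

```python
# Toy priority scheduler over the four 802.16 scheduling service classes.
# The strict-priority policy here is an illustrative assumption.

SCHEDULING_SERVICES = {
    "UGS":   {"desc": "unsolicited grant service (fixed periodic slots)", "priority": 0},
    "rtPS":  {"desc": "real-time polling service",                        "priority": 1},
    "nrtPS": {"desc": "non-real-time polling service",                    "priority": 2},
    "BE":    {"desc": "best effort",                                      "priority": 3},
}

def pick_next(requests):
    """Serve the pending request with the highest-priority service class."""
    return min(requests, key=lambda r: SCHEDULING_SERVICES[r["service"]]["priority"])

pending = [{"station": 1, "service": "BE"}, {"station": 2, "service": "rtPS"}]
print(pick_next(pending)["station"])   # 2: real-time polling beats best effort
```

A production scheduler would of course also honor the granted slot sizes and per-flow QoS parameters rather than a bare priority order.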
WiMAX Network Architecture
The WiMAX network architecture is specified through functional entities (see Figure 4), so more than one functional entity can reside on a single network element. The Mobile Station (MS) connects to the Access Service Network (ASN) through the R1 interface, which is based on 802.16d/e. The ASN is composed of one or more base stations (BSs) with one or more ASN gateways to connect to other ASNs and to the Connectivity Service Network (CSN). The CSN provides IP connectivity for WiMAX subscribers and performs functions such as Authentication, Authorization, and Accounting (AAA) [10,11], ASN-CSN tunneling, inter-CSN tunneling for roaming stations, and so on. A critical tenet of the WiMAX Forum network architecture is that the CSN must be independent of the 802.16 radio protocols.
The R3 interface (reference point) is used for the control-plane protocols and bearer traffic between the ASN and CSN for authentication, policy enforcement, and mobility management. The base station connects to an ASN gateway to provide the MS with external network access. The R6 interface between the BS and ASN-GW can be open or closed based on the profile; in fact, a base station and ASN gateway (ASN-GW) can be co-located, depending upon the network implementation. The ASN gateway uses the R3 interface to communicate with the AAA services in the visited CSN (that is, the CSN "corresponding" to the ASN). The servers in the visited CSN can communicate with the home CSN (that is, the CSN corresponding to the "home" network of the MS). In the simplest case, multiple ASNs (WiMAX networks) connect through ASN gateways to the public Internet; that is, there is only one Network Service Provider (NSP), and the visited and home CSNs are the same. Note that you could implement a WiMAX network with just one ASN and one CSN; in that case, the R3 interface would be completely internal and not exposed.
Three profiles are identified to map ASN functions into ASN-GW and BS functions. These profiles are considered an implementation guideline for how you would build the various devices implementing these functions. Profile A is a strict separation of the BS and ASN-GW functions, where the ASN-GW controls and manages radio resources that are located on the BS and also provides the handover and data-path functions. The R6 interface is exposed in this profile.
Profile B is a more integrated function, where the BS has more functions than in profile A; in fact, the BS might even integrate most of the ASN functions. The R6 interface is a closed interface in this profile. The third profile is profile C, which is similar to profile A except that the base stations incorporate more functions, including radio resource management and control as well as handoffs.
IP Connectivity and Data Transfer
The MS can be a fixed IP gateway (think of an 802.11 access point that provides connectivity to users in a coffee shop and connects to the IP network of the service provider through WiMAX) or a mobile end node (for example, a laptop with WiMAX connectivity). The IP address used by the gateway on the connection to the WiMAX network is known as the Point of Attachment (PoA) address. A third type of access is nomadic access, where the IP gateway can be moved from one location to another but connects to the network only after it has been relocated.
When the station is mobile, the WiMAX Forum specifies that the Mobile IP (MIP) architecture and protocols should be used. There are two types of Mobile IP possible: Client Mobile IP (CMIP) and Proxy Mobile IP (PMIP). The former involves changes to the MS protocol stack, but the latter does not.
The architecture can support both models. In the PMIP scenario (see Figure 5), the ASN implements the Foreign Agent (see William Stallings' article on Mobile IP in IPJ) and terminates Mobile IP tunnels for the various mobile stations in the same ASN.
In the figure, the MS has an address at the point of attachment that is used to forward packets from the MIP Foreign Agent inside the ASN. Because the ASN acts as a proxy for the attached MS, this implementation is known as a Proxy MIP implementation; the MS need not be aware of the MIP function being performed by the network.
Perspective on WiMAX versus Cellular Services
The WiMAX Forum has specified that the Network Working Group (NWG) architecture should be capable of supporting voice, multimedia services, and priority services such as emergency voice calls. It also supports interfacing with interworking and media gateways. The service permits more than one voice session per subscriber, as well as simultaneous voice and data sessions. Support of IP broadcast and multicast services over WiMAX networks is also included. The architecture is also expected to support differentiated QoS levels at a per-MS or per-user level (coarse-grained) and at a per-service-flow level (fine-grained), along with admission control and bandwidth management.
Initially, WiMAX was touted by some as a replacement for cellular services. An important consideration was using Voice over IP (VoIP) for voice calls—that is, where voice was another service over the data network. This model was in contrast to the existing cellular service where data was an adjunct to the basic service of TDM-based voice. More recently, WiMAX is being positioned as a data-connectivity option for remote locations, especially where it would be difficult to lay new copper or optical cable. Not surprisingly, these options are being pursued aggressively in developing countries.
Common Misperceptions About Wi-Fi, BT, and WiMAX Technologies
We have considered the key aspects of the three technologies—Wi-Fi, BT, and WiMAX—and their position in IP networks. In this section, we will outline and clarify some common perceptions and misperceptions about these technologies.
In this article, we have provided a flavor for IEEE 802.11 WLAN, Bluetooth, and WiMAX technologies and their implementation—specifically, how the nodes on these networks connect to an IP network. These technologies often serve complementary functions for end-to-end connectivity.