Which devices would be described as end devices?

Business Continuity and Disaster Recovery in Healthcare

Susan Snedaker, Chris Rima, in Business Continuity and Disaster Recovery Planning for IT Professionals (Second Edition), 2014

End user devices

End user devices run the gamut from desktop and laptop PCs to printers, document scanners, bar code scanners, smartphones, and consumer-oriented tablets. These devices are typically inexpensive and easy to replace individually (though replacing 500 of them gets expensive). How do they come into play in your BC/DR planning?

First, look at how your IT department uses these devices. If the data center were destroyed or went dark, what tools would your team need to recover, and are those available outside your building? Are these tools backed up? Are they living in the wild on individuals’ desktops and laptops? If the IT building were destroyed one night by a bomb or a plane crash, how would you manage to transition to your DR site or implement your BC/DR plan? Chances are good that a lot of key information resides on these end user devices, and your IT department may be more at risk than you realize. For healthcare IT, that could mean the difference between recovering within your agreed-upon time frames or missing them altogether. Do an inventory of IT systems and assets and correlate them to how you manage your infrastructure. Then incorporate a plan for ensuring the availability of needed hardware and software in the event of a disaster. For example, you may load up a few laptops with key applications and data (encrypted, of course) and have staff or managers keep them at home or rotate them. The challenge is keeping them up-to-date and out of harm’s way. If you have multiple locations, you can certainly use those locations for redundancy. If you operate out of a single data center facility or don’t have multiple locations, you’ll have to get creative and determine what will work best for you.

From a patient care perspective, end user devices are how data get into the hands of clinicians. If the EMR is down, how would clinical staff access those data? Is something stored locally on a PC? On a laptop? Can they connect to the wireless network to access data via a remote hot site? The end devices are less important, of course, than how data are provided. However, you should still think through details such as emergency power in clinical areas: which end user devices are plugged into emergency power, which should have uninterruptible power supplies (UPS), which should auto-login, which should auto-reboot, and so on. Planned power outages, whether for periodic testing of emergency generators or for facility maintenance activities, give your team the opportunity to ensure end user devices are assessed and understood in the scheme of BC/DR planning.

Finally, developing hardware standards and working through a trusted value-added reseller (VAR) can be extraordinarily helpful when facing a disruptive or disaster event. With a quick phone call to your VAR, you can have a swarm of hot spares, new hardware, and even preconfigured systems, depending on what’s been arranged in advance. Once you have your BC/DR strategy in place, involve your VAR in discussions about what capabilities they can provide. For healthcare IT, that can save time and money in the aftermath of a disaster.

One last note on this topic: be sure your end devices do not store PHI or PCI data unless local disk encryption is used. A single laptop can be the source of a breach of hundreds of thousands of names, which is a serious event for any organization, so be sure you’re looking at how and where data are stored on end user devices and ensuring the security of those data in the event of theft or loss. That way, in the event of a disaster, you won’t have to wonder how many laptops are missing, whether they are destroyed or intact, and whether you have data exposure as a result.


URL: https://www.sciencedirect.com/science/article/pii/B9780124105263099761

Spoken Dialogue Systems for Intelligent Environments

W. Minker, ... D. Zaykovskiy, in Human-Centric Interfaces for Ambient Intelligence, 2010

18.2.2 The Role of Spoken Dialogue

As the most natural means of communication between humans, speech is also an increasingly important means of interaction between humans and computer systems integrated into intelligent environments. The speech modality for information input and output successfully augments and often replaces standard user interfaces, contributing to the overall usability and friendliness of technical and information systems.

Spoken dialogue is particularly important in the context of mobile applications, where consistent interaction is complicated by the limitations of end-user devices. Factors such as small screens, simplified and shortened keyboards, and tiny buttons make the speech modality highly desired in human–computer interfaces for mobile scenarios.

The substantial effort made in automatic speech recognition (ASR) throughout the last decades has resulted in effective and reliable speech recognition systems. Modern general-purpose ASR systems use statistical algorithms to convert the audio signal into machine-readable input. Such systems can be structurally decomposed into front-end and back-end (Figure 18.2). In the front-end the process of feature extraction takes place; the back-end searches for the most probable word sequence based on acoustic and language models.


Figure 18.2. Terminal-based ASR—embedded speech recognition.

Since we are mostly concerned with mobile scenarios, where end-user devices provide a certain connectivity, we classify mobile-oriented ASR systems according to the location of the front-end and back-end. This allows us to distinguish three principal system structures:

client-based, or embedded ASR, where both front-end and back-end are implemented on the terminal

server-based, or network speech recognition (NSR), where speech is transmitted over the communication channel and the recognition is performed on a powerful remote server

client–server, or distributed speech recognition (DSR), where the features are calculated on the terminal and classification is performed on the server side

Each approach has its individual strengths and weaknesses, which may influence the overall performance of the system. Therefore, the appropriate implementation depends on the application and the terminal's properties. Some small recognition tasks can be performed directly on client devices [5, 6]. However, for complex large-vocabulary speech recognition, the computing resources available on current mobile devices are not sufficient. In this case remote speech recognition, which uses powerful servers for classification, is recommended [7].
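To make this classification concrete, the sketch below (a hypothetical illustration; the class and names are our own and belong to no ASR toolkit) simply records where the front-end and the back-end run in each of the three architectures.

// Illustrative sketch of the three mobile ASR architectures described above.
// Names are hypothetical; they do not refer to any real ASR framework.
public class AsrArchitectures {

    enum Location { TERMINAL, SERVER }

    // Where the two ASR stages run for a given architecture.
    record Architecture(String name, Location frontEnd, Location backEnd) {}

    static final Architecture EMBEDDED =
            new Architecture("client-based (embedded ASR)", Location.TERMINAL, Location.TERMINAL);
    static final Architecture NSR =
            new Architecture("server-based (network speech recognition)", Location.SERVER, Location.SERVER);
    static final Architecture DSR =
            new Architecture("client-server (distributed speech recognition)", Location.TERMINAL, Location.SERVER);

    public static void main(String[] args) {
        for (Architecture a : new Architecture[] {EMBEDDED, NSR, DSR}) {
            System.out.printf("%-45s front-end: %-10s back-end: %s%n",
                    a.name(), a.frontEnd(), a.backEnd());
        }
    }
}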

In the following we consider two possible architectures implementing remote speech recognition: NSR and DSR. We analyze the problems associated with each architecture in detail and provide recommendations for practical realization of the corresponding approach.

Network Speech Recognition

We have adopted the NSR architecture for implementing the pedestrian navigation system to be presented in Section 18.3.1. The main idea behind this is shown in Figure 18.3. We use Voice-over-IP software to call a speech server. Referring to the overall architecture presented in Figure 18.1, the user level consists of a PDA, a mobile phone providing Internet access, and a GPS receiver. Skype is running on the PDA to access the speech server, but it is also possible to use any other Voice-over-IP client.


Figure 18.3. Server-based ASR—network speech recognition.

Figure 18.4 shows the design of the system at the user and the application levels in detail. We use a speech server to process the dialogue flow hosted by TellMe [8] and an application server to translate geo-coordinates into a grid of sectors zoning the test route. This is necessary to bridge the gap between the output of the GPS receiver and the spoken directions.


Figure 18.4. User and application level in detail.

One benefit of this system design is that it is easy to implement and fast to put in place, especially for evaluation purposes. It is also very flexible, because different speech servers and Voice-over-IP clients may be used. It is not necessary to have any kind of speech recognizer or synthesizer running on the client device. A disadvantage is the bandwidth required to run the system: a UMTS flat rate was required for system evaluation.

Distributed Speech Recognition

As mentioned previously, speech recognition in a DSR architecture is distributed between the client and the server. Here one part of an ASR system, feature extraction, resides on the client while the ASR search is conducted on the remote server (Figure 18.5).


Figure 18.5. Client–server ASR—distributed speech recognition.

Even though both DSR and NSR make use of a server-based back-end, there are substantial differences between the two schemes.

In the NSR case the features are extracted from the resynthesized speech signal. Since lossy speech codecs are optimized for the best perceptual quality and not for the highest recognition accuracy, the coding and decoding of speech reduces recognition quality [9, 10]. This effect becomes much stronger in the case of transmission errors, where data loss needs to be compensated. Since in DSR we are not constrained to the error mitigation algorithm of the speech codec, better error-handling methods in terms of word error rate (WER) can be developed.

Another factor favoring DSR is the lower bit rate required. The ASR search does not require high-quality speech, only a set of characteristic parameters. Therefore, the generated traffic is lower than with NSR.

Finally, since the feature extraction is performed at the client side, the sampling rates may be increased to cover the full bandwidth of the speech signal.

ETSI DSR Front-End Standards

The successful deployment of the DSR technology is only possible in practice if both the front-end and the DSR back-end assume the same standardized procedure for feature extraction and compression. Four standards have been developed under the auspices of the European Telecommunications Standards Institute (ETSI) (see Table 18.1).

Table 18.1. Overview of ETSI Standards for DSR Front-Ends

Speech Reconstruction    Basic Noise Robustness    Advanced Noise Robustness
No                       FE (ES 201 108)           AFE (ES 202 050)
Yes                      xFE (ES 202 211)          xAFE (ES 202 212)

The first standard, ES 201 108 [11], was published by ETSI in April 2000. It specifies the widely used Mel-cepstrum–based feature extraction algorithm together with compression and transmission error mitigation algorithms. ES 201 108 is our target version of the DSR front-end to be ported into Java ME. This front-end operates at a 4.8-kbit/s bit rate and will be considered in more detail later.
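As a rough illustration of the kind of processing the front-end performs, the fragment below frames a speech signal and computes the per-frame log energy, one of the parameters a Mel-cepstrum front-end produces alongside the cepstral coefficients. This is a simplified sketch under common textbook assumptions (8 kHz sampling, 25 ms frames, 10 ms shift, pre-emphasis 0.97); it is not the ES 201 108 algorithm itself, which additionally computes the Mel-cepstral coefficients and compresses them.

// Simplified sketch of one piece of a DSR front-end: pre-emphasis, framing,
// and per-frame log energy. The real ES 201 108 front-end also computes
// Mel-cepstral coefficients and compresses the feature vector with a split VQ;
// the constants below are common textbook values, not quoted from the standard.
public class FrontEndSketch {

    static final int SAMPLE_RATE = 8000;                        // Hz, assumed narrow-band input
    static final int FRAME_LENGTH = SAMPLE_RATE * 25 / 1000;    // 25 ms window
    static final int FRAME_SHIFT  = SAMPLE_RATE * 10 / 1000;    // 10 ms shift
    static final double PRE_EMPHASIS = 0.97;

    /** Returns the log energy of each frame of the input signal. */
    public static double[] logEnergies(double[] signal) {
        // Pre-emphasis: s'[n] = s[n] - a * s[n-1].
        double[] s = new double[signal.length];
        s[0] = signal[0];
        for (int n = 1; n < signal.length; n++) {
            s[n] = signal[n] - PRE_EMPHASIS * signal[n - 1];
        }
        // Framing and per-frame log energy.
        int frames = Math.max(0, 1 + (s.length - FRAME_LENGTH) / FRAME_SHIFT);
        double[] logE = new double[frames];
        for (int f = 0; f < frames; f++) {
            double energy = 0.0;
            int start = f * FRAME_SHIFT;
            for (int n = 0; n < FRAME_LENGTH; n++) {
                energy += s[start + n] * s[start + n];
            }
            logE[f] = Math.log(Math.max(energy, 1e-10));        // floor to avoid log(0)
        }
        return logE;
    }

    public static void main(String[] args) {
        double[] tone = new double[SAMPLE_RATE];                // 1 s of a synthetic 440 Hz tone
        for (int n = 0; n < tone.length; n++) {
            tone[n] = Math.sin(2 * Math.PI * 440 * n / SAMPLE_RATE);
        }
        System.out.println("frames: " + logEnergies(tone).length);
    }
}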

To improve the performance of the DSR system in noisy environments a noise-robust version of the front-end has been developed. This advanced front-end (AFE) [12] version was published as ETSI standard document ES 202 050 in February 2002.

In 2003, both standards were extended to ES 202 211 [13] and ES 202 212 [14], which allow, at the cost of an additional 0.8 kbit/s, reconstruction of an intelligible speech signal from the feature stream.

Publicly available C implementations exist for all four standards. Moreover, for the extended advanced front-end there is a standardized C realization, TS 126 243 [15], using only fixed-point arithmetic.

A Java ME Implementation of the DSR Front-End

Considering the mobile phone as a target device, we had to assess the possibilities for application development on it. The most widespread technology in this field is Java Micro Edition (Java ME, formerly known as J2ME). The second most widespread, Symbian, is not common on consumer mobile phones. Thus, with nearly every new device being shipped with Java, it seemed to be the most attractive choice. To cope with the conditions on mobile devices (low memory and processing power), we chose to implement the ETSI basic front-end standard, ES 201 108, forgoing noise reduction [16]. The front-end performs feature extraction and feature compression using vector quantization (VQ).

Most mobile phones are shipped with low-cost processors lacking a floating-point unit (FPU). Floating-point operations on such devices are feasible in Java; however, they perform poorly because floating-point arithmetic is software-emulated. Accordingly, we implemented two front-end versions: one based on floating-point arithmetic, to exploit the capabilities of FPU-equipped devices, and another based on fixed-point arithmetic. The latter emulates real numbers using integer variables, which speeds up processing by a factor of up to 4 (Sony Ericsson W810i).
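The fixed-point idea can be shown with a minimal Q15 sketch (our own illustration, not code from the ported front-end): real values in the range [-1, 1) are stored in 16-bit integers with 15 fractional bits, and multiplication is done entirely with integer operations and a shift, so no FPU is needed.

// Minimal illustration of emulating real numbers with integers (Q15 format),
// as used to avoid floating-point operations on FPU-less phones.
// This is our own example, not code from the ported ES 201 108 front-end.
public class Q15Demo {

    static final int Q = 15;                 // 15 fractional bits
    static final int ONE = 1 << Q;           // 1.0 in Q15

    static short toQ15(double x) {
        return (short) Math.round(x * ONE);
    }

    static double fromQ15(short x) {
        return (double) x / ONE;
    }

    // Q15 * Q15 -> Q15: multiply in 32-bit integer arithmetic, then shift back down.
    static short mulQ15(short a, short b) {
        return (short) ((a * b) >> Q);
    }

    public static void main(String[] args) {
        short a = toQ15(0.25);
        short b = toQ15(0.5);
        System.out.println(fromQ15(mulQ15(a, b)));  // ~0.125, using integer math only
    }
}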

Moreover, our front-end can be run in single-threading as well as multi-threading mode: the feature extraction and vector quantization modules can be launched either sequentially or in parallel. The first alternative requires more memory (5.6 kByte/s), since the extracted features have to be buffered before the VQ is launched, which can become critical for longer utterances. The multi-threading version, however, is able to compress the extracted features on the fly, so only a small, constant buffer is needed (< 1 kByte). Multi-threaded processing results in slightly slower processing times compared to single-threading mode (on average, a 12% increase).
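The two modes can be pictured as a standard producer-consumer pipeline. The sketch below is a hypothetical desktop-Java illustration using java.util.concurrent (which the original Java ME implementation could not use): the extraction thread hands each feature frame to the quantization thread through a small bounded buffer instead of accumulating the whole utterance first.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Producer-consumer sketch of the multi-threaded front-end: feature frames are
// compressed on the fly through a small bounded buffer. This is a hypothetical
// desktop-Java illustration, not the original Java ME code.
public class PipelinedFrontEnd {

    private static final float[] POISON = new float[0];   // end-of-utterance marker

    public static void main(String[] args) throws InterruptedException {
        // Small, constant-size buffer instead of buffering all frames of the utterance.
        BlockingQueue<float[]> buffer = new ArrayBlockingQueue<>(8);

        Thread extractor = new Thread(() -> {
            try {
                for (int f = 0; f < 100; f++) {            // pretend: 100 frames extracted
                    buffer.put(new float[14]);             // one (dummy) feature vector
                }
                buffer.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread quantizer = new Thread(() -> {
            try {
                int compressed = 0;
                for (float[] frame = buffer.take(); frame != POISON; frame = buffer.take()) {
                    compressed++;                          // stand-in for vector quantization
                }
                System.out.println("compressed frames: " + compressed);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        extractor.start();
        quantizer.start();
        extractor.join();
        quantizer.join();
    }
}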

As can be seen from the results of our performance assessments in Table 18.2, several devices on the market, such as the Nokia 6630 and the Nokia N70, are already capable of performing front-end processing with Java in real time. For performance comparison, we ported the ES 201 108 front-end to Symbian C to compare Java and C on the same device. In Symbian C, our Nokia E70 test device performed feature extraction with a real-time factor of 0.6, compared to 1.3 in the Java implementation (FE only, floating-point). This means that Symbian C approaches need to be taken into consideration. Further developments in this direction can be found in [17].

Table 18.2. Time Required for Feature Extraction (FE only) and Compression (FE+VQ) related to Utterance Duration

Cellular Phone              FE Only           FE+VQ Single-Threaded    FE+VQ Multi-Threaded
                            Float / Fixed     Float / Fixed            Float / Fixed
Nokia 6630, N70             1.3 / 0.7         1.8 / 0.9                2.0 / 1.4
Nokia E70                   1.3 / 0.9         1.8 / 1.2                1.9 / 1.3
Nokia 7370                  1.2 / 2.7         1.6 / 3.7                1.7 / 3.8
Nokia 7390                  0.9 / 1.6         1.3 / 2.2                1.4 / 2.3
Nokia 6136, 6280, 6234      1.1 / 2.2         1.5 / 3.0                1.5 / 3.1
Siemens CX65, CX75          3.1 / 2.1         4.4 / 2.7                5.0 / 3.8
Sony Ericsson W810i         7.9 / 2.0         12.5 / 2.9               13.4 / 3.1

Today the main stumbling block to broad use of our Java front-end architecture is the careless implementation of the recording functionality by device manufacturers. Although it is defined in the Java specification JSR 135 (MMAPI), so far only a few manufacturers follow the standards defined by the Java Community Process. For instance, virtually all Sony Ericsson devices capture data compressed by the adaptive multirate (AMR) codec, which is worthless for speech recognition. According to our investigations, only devices shipped with MMAPI implementations from Sun and Symbian currently follow the standard and enable capture of uncompressed voice data. We expect other device manufacturers to further enhance their implementations in the near future.


URL: https://www.sciencedirect.com/science/article/pii/B9780123747082000188

Free Public Wi-Fi Security in a Smart City Context—An End User Perspective

C. Louw, B. Von Solms, in Smart Cities Cybersecurity and Privacy, 2019

2.2.1 End User Wi-Fi Security Advice

By making sure that the most recent software has been installed on their device, end users ensure that they receive the most recent security updates and patches, protecting both the device they wish to connect to public Wi-Fi and its operating system (OS).

Additionally, end users may install antivirus software on their device; many such products not only have a free version available but also include Wi-Fi network security shields. Additional software that may offer further protection includes a firewall and a Virtual Private Network (VPN).

By confirming a free Wi-Fi network's name (SSID) with the expected provider, end users can avoid connecting to the wrong network (an evil twin or rogue network). When accessing a website while making use of free Wi-Fi, end users should ensure that a secure browsing session is set up, indicated by "https" in the uniform resource locator (URL) of the website.
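The "https" check can also be made programmatically before any sensitive data are submitted. The helper below is an illustrative sketch with hypothetical names, not code from this chapter; it simply rejects any URL whose scheme is not https.

import java.net.URI;
import java.net.URISyntaxException;

// Illustrative helper: refuse to submit sensitive data unless the target URL
// uses HTTPS. Names are our own; this is not code from the chapter.
public class SecureBrowsingCheck {

    /** Returns true only if the URL parses and its scheme is "https". */
    public static boolean isSecureUrl(String url) {
        try {
            String scheme = new URI(url).getScheme();
            return "https".equalsIgnoreCase(scheme);
        } catch (URISyntaxException e) {
            return false;   // unparsable URLs are treated as insecure
        }
    }

    public static void main(String[] args) {
        System.out.println(isSecureUrl("https://example.com/login"));  // true
        System.out.println(isSecureUrl("http://example.com/login"));   // false
    }
}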

In general, end users should also minimize the number of sensitive transactions that they perform while connected to public Wi-Fi. Lastly, when Wi-Fi is not actively being used on a device, end users should take care to deactivate Wi-Fi scanning on their device. This prevents the device from automatically connecting to an evil twin or rogue network without the end user being aware of it.


URL: https://www.sciencedirect.com/science/article/pii/B9780128150320000093

Security

Stefan Rommer, ... Catherine Mulligan, in 5G Core Networks, 2020

8.3.8 EAP-AKA’ based primary authentication

The Extensible Authentication Protocol (EAP), defined by IETF in RFC 3748, is a protocol framework for performing authentication, typically between an end-user device and a network. It was first introduced for the Point-to-Point Protocol (PPP) to allow additional authentication methods to be used over PPP. Since then it has also been introduced in many other scenarios. EAP is not an authentication method per se, but rather a common authentication framework that can be used to implement specific authentication methods. EAP is therefore extensible in the sense that it enables different authentication methods to be supported and allows for new authentication methods to be defined within the EAP framework. These authentication methods are typically referred to as EAP methods. For more details on EAP in general, please see Chapter 14.

EAP-AKA’ is an EAP method defined by IETF in RFC 5448 (RFC 5448) for performing authentication based on USIM cards. As mentioned above, it is already used in EPC/4G for access over non-3GPP accesses. In 5GS, EAP-AKA’ has a more prominent role, as it is now possible to use it for primary authentication over any access.

EAP-AKA’ runs between the UE and the AUSF, as shown in Fig. 8.8.


Fig. 8.8. High level architecture for EAP-AKA’.

When the AMF/SEAF initiates the authentication as described in the section above, and the UDM has chosen to use EAP-AKA’, the UDM/ARPF will generate a transformed Authentication Vector (AV’) and provide it to the AUSF. This Authentication Vector from the UDM/ARPF is the starting point for the authentication procedure. The AV’ consists of five parameters: an expected result (XRES), a network authentication token (AUTN), two keys (CK’ and IK’), and the RAND. The AV’ is quite similar to the AV generated in 4G/EPS, with the difference that CK and IK are replaced by CK’ and IK’, the 5G variants derived from CK and IK and the serving network name. For that reason, the AV is called a “transformed Authentication Vector” and denoted with a prime (AV’).
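The content of the AV’ can be summarized in a small data holder, sketched below. The field names follow the description above, but the derivation shown is a simplified HMAC-based stand-in meant only to illustrate that CK’ and IK’ are bound to the serving network name; the real key derivation is the one specified by 3GPP (TS 33.501) and IETF RFC 5448.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;

// Sketch of the transformed Authentication Vector (AV') handed from the
// UDM/ARPF to the AUSF. Field names follow the text; the derivation below is
// a simplified HMAC-based stand-in, NOT the exact 3GPP/IETF key derivation.
public class TransformedAv {
    final byte[] rand;      // random challenge
    final byte[] autn;      // network authentication token
    final byte[] xres;      // expected result
    final byte[] ckPrime;   // CK' - bound to the serving network name
    final byte[] ikPrime;   // IK' - bound to the serving network name

    TransformedAv(byte[] rand, byte[] autn, byte[] xres, byte[] ckPrime, byte[] ikPrime) {
        this.rand = rand; this.autn = autn; this.xres = xres;
        this.ckPrime = ckPrime; this.ikPrime = ikPrime;
    }

    // Illustration of the idea that CK'/IK' depend on CK, IK and the serving
    // network name (a string identifying the serving network); the real
    // derivation is specified in 3GPP TS 33.501.
    static byte[] bindToServingNetwork(byte[] key, String servingNetworkName)
            throws GeneralSecurityException {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(servingNetworkName.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws GeneralSecurityException {
        byte[] ck = new byte[16];   // dummy CK, for illustration only
        byte[] ckPrime = bindToServingNetwork(ck, "serving-network-name");
        System.out.println("illustrative CK' length: " + ckPrime.length + " bytes");
    }
}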

The authentication then proceeds in a similar way as for 5G AKA, with the difference that the AMF/SEAF does not actively participate except for forwarding messages. It is only the AUSF that compares the RES received from the UE with the XRES. The AUSF then notifies the AMF/SEAF about the outcome and provides the SEAF key to the SEAF. This procedure is illustrated in Fig. 8.9.


Fig. 8.9. High level procedure for EAP-AKA’.

EAP-AKA′, specified in IETF RFC 5448, is a small revision of EAP-AKA, defined in IETF RFC 4187. The revision made in EAP-AKA′ is the introduction of a new key derivation function that binds the keys derived within EAP-AKA′ to the identity of the access network. In practice, this means that the access network identity is considered in the key derivation schemes. The procedure is thus more aligned with 5G AKA and strengthens key separation.

Now all that remains is to calculate the keys used for protecting traffic, which is described in the next section.


URL: https://www.sciencedirect.com/science/article/pii/B9780081030097000089

Mobile Computing

Ric Messier, in Collaboration with Cloud Computing, 2014

Boundaries

Depending on who you are, you may read this differently. From a corporate or an enterprise perspective, you are extending the perimeter of your network out to the end user device that could be anywhere in the world. From an end user perspective, the unacknowledged boundary is allowing work life to intrude into private life. You get used to having your phone with you and checking it just to kill time, whether it’s e-mail, Facebook, Twitter, or just a game of Candy Crush or Angry Birds. At some point, you get your corporate e-mail on your phone and you’re checking e-mail at all hours of the day and night just because you have access to it. This is a boundary that used to be clear but has been slowly eroding, as mobile devices get cheaper and more accessible, whether it’s a laptop or a mobile phone or a tablet.

When you connect your personal device to your network, any EAS policy in place will be pushed down to the device. As discussed previously, EAS has the capability to exert extensive control over the device, including disabling a lot of functionality that would be desirable for personal use even if it may not be desirable for business use. The question is how much control you, as the owner of a personal device, would allow the business to have over your smartphone. The business needs to protect its interests, but where those interests and your own personal interests diverge, the business interests will win in the case of an ActiveSync server, because the policy gets pushed to the phone regardless of whether it is a personal phone or not.

This is a decision that each user has to make: whether to allow the business to have control over their device, including the ability to wipe it, just in order to have some access to business information such as e-mail. Keep in mind that in cases where an employee leaves the company, the company may wipe the device in order to ensure that all company data stored on the device is removed. A remote wipe may remove business-specific information, or it may remove all information on the phone, including personal pictures and contacts. It may also simply remove everything from the phone, leaving it totally nonfunctional. The degree to which the remote wipe operates depends on the service ordering the remote wipe and the operating system of the mobile device being wiped.

We’ve covered one aspect of boundaries by talking about protecting your corporate network from direct access from a mobile device. This is primarily because of the potential risks of using wireless networks. Wireless networks, because you have little to no control over where the signal goes, carry a certain amount of risk. As an example, Figure 7.8 shows a list of all of the networks that are available from where I sit, without any additional effort on my part. Using a VPN gateway as a boundary between the network where your mobile devices, including laptops, connect and the inside of your network is a good idea. While you may consider this a complication, it’s likely that you have a VPN gateway in your network already, so this is just another use of it. The VPN could also provide a way for your mobile devices to gain access from outside of your network if the gateway is already in place. Your VPN gateway then becomes the boundary between your internal network and all mobile devices, regardless of type and location.


Figure 7.8. Using WiFi Explorer


URL: https://www.sciencedirect.com/science/article/pii/B9780124170407000071

Office – Macros and ActiveX

Rob Kraus, ... Naomi J. Alpern, in Seven Deadliest Microsoft Attacks, 2010

Using Antivirus and Antimalware

You should install antivirus and antimalware software at all layers of your environment to ensure that viruses and malware are detected and neutralized. This includes integration with border devices, with e-mail servers, and on end-user devices. The reason you need this at all layers is to eliminate the threat from your network as soon as possible, but not all traffic can be scanned at each layer.

For example, let's say your friend knows you enjoy collecting Star Wars action figures and he wants to send you a picture he found in an ad for the last one you need for your collection. Since he knows that your company monitors your e-mail, he decides to encrypt the file and name it something generic to circumvent your e-mail filters. Unfortunately, this means that the content of the encrypted file won't be scanned until someone opens it, rather than being detected at the network edge. Therefore, it is vital that scanning occurs at whatever point the mail is opened.

In addition to layering protection throughout the network, controls should also be configured to ensure that viruses are detected before they can actually run. To accomplish this, antivirus and antimalware software should be set to use heuristics as well as the specific virus/malware signatures in the files. The software should always have real-time scanning enabled, and a full scan of the hard drive should be performed at least once a week. Using all of these options is a trade-off because it takes more processor cycles to run your antivirus and antimalware software in this manner, but in almost all cases it is worth it.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495516000054

How SDN Works

Paul Goransson, Chuck Black, in Software Defined Networks, 2014

4.4.1 SDN Controller Core Modules

The controller abstracts the details of the SDN controller-to-device protocol so that the applications above are able to communicate with those SDN devices without knowing their nuances. Figure 4.5 shows the API below the controller, which is OpenFlow in Open SDN, and the interface provided for applications. Every controller provides core functionality between these raw interfaces. Core features in the controller include:

End-user device discovery. Discovery of end-user devices such as laptops, desktops, printers, mobile devices, and so on.

Network device discovery. Discovery of network devices that comprise the infrastructure of the network, such as switches, routers, and wireless access points.

Network device topology management. Maintain information about the interconnection details of the network devices to each other and to the end-user devices to which they are directly attached.

Flow management. Maintain a database of the flows being managed by the controller and perform all necessary coordination with the devices to ensure synchronization of the device flow entries with that database.

The core functions of the controller are device and topology discovery and tracking, flow management, device management, and statistics tracking. These are all implemented by a set of modules internal to the controller. As shown in Figure 4.5, these modules need to maintain local databases containing the current topology and statistics. The controller tracks the topology by learning of the existence of switches (SDN devices) and end-user devices and tracking the connectivity between them. It maintains a flow cache that mirrors the flow tables on the various switches it controls. The controller locally maintains per-flow statistics that it has gathered from its switches. The controller may be designed such that functions are implemented via pluggable modules such that the feature set of the controller may be tailored to an individual network’s requirements.
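A skeletal rendering of this core state is sketched below in hypothetical Java (the class and method names are our own and do not correspond to Floodlight, OpenDaylight, or any other controller's actual API): discovered switches, discovered end-user devices, the links between them, and a flow cache mirroring each switch's flow table.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the controller core state described above; the names
// are our own and do not correspond to any real controller's API.
public class ControllerCore {

    record Link(String srcSwitch, int srcPort, String dstSwitch, int dstPort) {}
    record FlowEntry(String match, String action, long packetCount) {}

    // Network device discovery: switches known to the controller, keyed by datapath ID.
    private final Map<String, String> switches = new HashMap<>();        // dpid -> description
    // End-user device discovery: hosts, keyed by MAC, with their attachment point.
    private final Map<String, String> hosts = new HashMap<>();           // mac  -> "dpid:port"
    // Topology: links between network devices.
    private final List<Link> links = new ArrayList<>();
    // Flow management: local cache mirroring each switch's flow table.
    private final Map<String, List<FlowEntry>> flowCache = new HashMap<>();

    void switchDiscovered(String dpid, String description) {
        switches.put(dpid, description);
        flowCache.putIfAbsent(dpid, new ArrayList<>());
    }

    void hostDiscovered(String mac, String dpid, int port) {
        hosts.put(mac, dpid + ":" + port);
    }

    void linkDiscovered(Link link) {
        links.add(link);
    }

    // Keep the cache in sync with what is pushed to the device.
    void flowInstalled(String dpid, FlowEntry entry) {
        flowCache.computeIfAbsent(dpid, k -> new ArrayList<>()).add(entry);
    }

    public static void main(String[] args) {
        ControllerCore core = new ControllerCore();
        core.switchDiscovered("00:00:00:00:00:00:00:01", "access switch");
        core.hostDiscovered("aa:bb:cc:dd:ee:ff", "00:00:00:00:00:00:00:01", 3);
        core.flowInstalled("00:00:00:00:00:00:00:01",
                new FlowEntry("dl_dst=aa:bb:cc:dd:ee:ff", "output:3", 0));
        System.out.println("switches=" + core.switches.size()
                + " hosts=" + core.hosts.size()
                + " cached flows=" + core.flowCache.get("00:00:00:00:00:00:00:01").size());
    }
}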


URL: https://www.sciencedirect.com/science/article/pii/B9780124166752000048

How SDN Works

Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017

4.4.1 SDN Controller Core Modules

The controller abstracts the details of the SDN controller-to-device protocol so that the applications above are able to communicate with those SDN devices without knowing the nuances of those devices. Fig. 4.5 shows the API below the controller, which is OpenFlow in Open SDN, and the interface provided for applications. Every controller provides core functionality between these raw interfaces. Core features in the controller will include:

End-user Device Discovery: Discovery of end-user devices, such as laptops, desktops, printers, mobile devices, etc.

Network Device Discovery: Discovery of network devices which comprise the infrastructure of the network, such as switches, routers, and wireless access points.

Network Device Topology Management: Maintain information about the interconnection details of the network devices to each other, and to the end-user devices to which they are directly attached.

Flow Management: Maintain a database of the flows being managed by the controller and perform all necessary coordination with the devices to ensure synchronization of the device flow entries with that database.

The core functions of the controller are device and topology discovery and tracking, flow management, device management, and statistics tracking. These are all implemented by a set of modules internal to the controller. As shown in Fig. 4.5, these modules need to maintain local databases containing the current topology and statistics. The controller tracks the topology by learning of the existence of switches (SDN devices) and end-user devices and tracking the connectivity between them. It maintains a flow cache which mirrors the flow tables on the different switches it controls. The controller locally maintains per-flow statistics that it has gathered from its switches. The controller may be designed such that functions are implemented via pluggable modules such that the feature set of the controller may be tailored to an individual network’s requirements.

Many companies implementing SDN look to these core modules to help them model their network and create abstraction layers. Level 3’s CTO Jack Waters described his company’s efforts with SDN: “The combined company (Level 3 and Time Warner Telecom) has done a number of network trials. With respect to SDN, we’re in the development phase around a few NFV betas so it’s early but we have use cases up and running already. The combined company is laser focused on how to leverage the technology really to help build a network abstraction layer for things like provisioning, configuration management, and provisioning automation is something we think will be great for us and our customers.” [14]


URL: https://www.sciencedirect.com/science/article/pii/B9780128045558000041

SDN Applications

Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017

12.1 Terminology

Consistent terminology is elusive in a dynamic technology such as SDN. Consequently, certain terms are used in different ways, depending on the context. To avoid confusion, in this chapter we will adhere to the following definitions:

Network device: When referring generically to a device such as a router, switch, or wireless access point, in this chapter we will use the term network device or switch. These are the common terms used in the industry when referring to these objects. Note that the Floodlight controller uses the term device to refer to end user device. Hence, in sections of code using Floodlight or supporting text, the word device used alone means end user node.

End user node: There are many examples of end user devices, including desktop computers, laptops, printers, tablets, and servers. The most common terms for these end user devices are host and end user node; we will use end user node throughout this chapter.

Flow: In the early days of SDN, the only controller-device protocol was OpenFlow, and the only way to affect the forwarding behavior of network devices was through setting flows. With the recent trends toward NETCONF and other protocols that effect forwarding behavior by setting routes and paths, there is now no single term for what the controller changes in the device. The controller’s actions decompose into the fundamental create, modify, and delete actions, but the entity acted upon may be a flow, a path, a route, or, as we shall see, an optical channel. For the sake of simplicity, we sometimes refer to all these things generically as flows. The reader should understand this to mean setting flows, configuring static routes, modifying the RIB, or setting MPLS LSPs, among other things, depending on the context.


URL: https://www.sciencedirect.com/science/article/pii/B9780128045558000120

Security in low-power wide-area networks: state-of-the-art and development toward the 5G

Radek Fujdiak, ... Petr Mlynek, in LPWAN Technologies for IoT and M2M Applications, 2020

17.3 Future vision: Internet of things and 5G core network—security overview

LPWA technology has many attractive features for the growing IoT deployments across multiple sectors such as logistics and transportation, utilities, smart cities, and agriculture. The 4C (capacity, consumption, cost, coverage) model, which describes the key characteristics of an LPWA network, justifies its appropriateness for IoT and M2M applications that need to transmit small chunks of data over long ranges with long-lasting battery life. The reduced complexity and high scalability of LPWA technologies will also reduce the need for human intervention in the IoT applications of the future.

The adoption of different LPWA technologies in numerous application areas depends on their individual needs. For instance, emerging LPWA applications in manufacturing include machine auto-diagnosis, asset control and location reporting, monitoring, and item tracking. NB-IoT modules can be used to achieve high-precision monitoring and operations within factory premises. The single cloud of Sigfox makes this technology beneficial for various pancontinental tracking applications. LoRaWAN can address resource management in smart agriculture and various utility applications. Likewise, each technology has its advantages in IoT applications and their deployments. Therefore it is foreseen that 5G wireless mobile communication will lead to a connected world of humans and devices while providing global LPWA solutions for IoT applications. It is not yet entirely clear how exactly this will look, which technologies will form the basis of 5G, and which will join later in the process of 5G evolution. Nonetheless, it is worth looking at how security for 5G is seen now.

The 5G core network comprises the most important network control elements: mobility, user information, and charging elements and functions. The core network of 4G or LTE consisted of elements such as the MME, the Policy and Charging Rules Function, and the HSS. In some of the core network elements, the data and control parts are bundled together, as shown in Fig. 17–5. One of the major shifts that occurs in 5G is the cloudification of the core network elements by separating the network control functions from the data forwarding planes. This cloudification logically centralizes the core network elements into high-end servers, enabling cost-effective scalability, service provisioning, and availability. The detailed architecture and its elements are described in the latest 3GPP specifications [39,40]. The 5G core network is IP-based, ensures QoS and quality of experience, and is more dynamic due to novel technological concepts such as cloud technologies, software-defined networking (SDN), and network function virtualization. However, it will bring forth some potential security challenges, especially critical for low-power IoT devices.


Figure 17–5. 4G EPC architecture simplified, showing control and data planes.

The Next Generation Mobile Networks consortium has provided several key insights into the possible security challenges in 5G in the form of recommendations, as described in [41]. A few of the main security challenges highly relevant to LPWA networks are:

1. Flash network traffic: It is projected that the number of end-user devices (e.g., IoT EDs) will grow exponentially in 5G, which will cause significant changes in network traffic patterns, either accidentally or with malicious intent. As a result, large swings and bursts in traffic will be very common.

2. DoS attacks: DoS and distributed DoS attacks can exhaust various network resources such as energy, storage, and computing. Sporadic or specifically crafted requests generated toward the network in huge numbers (e.g., by a massive number of compromised IoT EDs or nonauthentic subscribers) can be highly challenging and can possibly bring the network to a halt.

3. Security of radio interface keys: In previous wireless network generations, including 4G, the radio interface encryption keys are generated in the home network and sent to the visited network over insecure links, causing a clear point of exposure of keys.

The first two attacks are particularly challenging and interrelated. Due to the massive number of devices, it will be tough to differentiate between legitimate requests and malicious requests meant for resource exhaustion attacks. Moreover, most of the signaling involves the core network elements, which are now either physically or logically centralized. Hence, signaling-oriented DoS attacks will be one of the critical challenges. This will be even more challenging since LPWA IoT devices may not have enough resources to protect content against integrity or man-in-the-middle attacks through proper encryption or hashing. The signaling-oriented challenge in 4G, highlighted in [42], has been difficult to counter due to the penetration of IP traffic in cellular networks. However, 4G networks have mostly distributed control planes, where a security loophole in a system will cause local damage (e.g., a DoS attack on a control point such as a gateway). In contrast, the core elements in 5G (shown in Fig. 17–6) are centralized; thus security challenges or loopholes will have more adverse consequences, since more control points of the network are centralized into singular nodes.


Figure 17–6. Simplified network architecture of 5G, showing control and data planes.

It is worth noting that LPWA IoT devices will mostly comprise low-power embedded systems. In a large-scale analysis of the firmware of low-power embedded devices, the authors in [43] show that most of the firmware is rife with security vulnerabilities. Therefore, low-power embedded systems are highly vulnerable to being compromised and masqueraded for security attacks. Since the domain of IoT is developing and evolving very fast and is still to be fully explored in the context of security, further challenges cannot all be easily identified and responded to. Sensitive systems that are supposed to be highly secure can be exposed to security vulnerabilities by combining and using insecure IoT for different functionalities, especially when the Internet of hacked things is on the rise. For example, in 2015, 2.2 million BMWs were found vulnerable, where the flaw allowed remote unlocking of the car. Similarly, 1.4 million Chryslers had a vulnerability in their dashboard computers, which allowed hackers to steer the vehicle, apply brakes, and control the transmission [44]. Security loopholes in such critical systems can cause direct harm to humans.

Connecting infected systems to a network might expose the network to security loopholes [45]. One example is using the compromised devices to launch insider attacks or DoS attacks on the system these devices operate in, for instance, the 5G core network [46]. Resource-constrained devices in significant numbers will require processing and storage in the cloud. The cloud systems will serve a diverse and significant number of services and will possibly be shared through virtualization among different stakeholders. Since the 5G core network is cloudified, IoT will bring many challenges into the signaling plane in the cloud. In LTE, the HSS has been the main point of attacks under the guise of requests for authentication and authorization [45]. 3GPP suggests that IoT devices should periodically update their security credentials; however, frequent updates will increase the burden on the control plane, making it prone to resource exhaustion attacks. In such scenarios, compromised IoT devices can induce vulnerabilities into the whole system.

On the other hand, it is worth mentioning that the 5G core network is supposed to be highly resourced, with strict access control procedures. For example, core network elements such as the MME are now represented as network functions in software. The MME is represented as the Access and Mobility management Function (AMF) and the Session Management Function (SMF), with clearly stated protocols and reference points for interaction among them, as highlighted in the 3GPP specification release 15 [40]. This solves the scalability issues and enables dynamically scaling the resources based on need from highly resourced cloud infrastructures, making resource exhaustion less likely. Furthermore, to effectively handle the signaling, two approaches are discussed by 5GPP [47]: first, using lightweight AKA protocols for massive IoT communication; second, using group-based AKA protocols to group IoT devices together, which will minimize the individual signaling traffic [47]. Hence, there are a number of group-based authentication schemes for IoT, such as authentication for NB-IoT [48]. The authentication scheme groups IoT devices with similar attributes together and selects a group leader. The group leader aggregates sensitive information and sends it to the core network, which verifies each node independently. The proposed mechanism also preserves identity privacy besides minimizing the signaling involved in authentication in the core network. The same kind of group-based authentication is proposed for vehicular IoT in [49] using the concepts of SDN.

Moreover, a secure core network or secure network control points are highly important for the smooth, optimized, efficient, and secure operation of LPWA IoT devices. Furthermore, robust load-balancing mechanisms in the core network will still be needed, due to the emergence of big data through IoT, to ensure timely authentication and authorization of resources for IoT devices. A secure core network or network control plane that oversees the behavior of connected things and the statistics of network components, with the capability to remotely monitor the entire ecosystem, can increase security. For instance, an SDN-enabled centralized control plane that can oversee and control the entire network and see the statistics of the traffic passing through each node can significantly improve network security. Compromised LPWA IoT nodes sending excessive data can be recognized at the data plane by using monitoring applications in the SDN application plane. A simple monitoring application in the SDN application plane that gathers statistics from the data plane can help recognize malicious activity within the network by comparing the statistics against predefined thresholds for different services or registered devices. Hence, centralized monitoring, as enabled by the centralized core network in 5G, can greatly improve the security of not only the network connecting IoT EDs but also that of the IoT EDs themselves (Fig. 17–6).
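A minimal sketch of such a monitoring application is given below (hypothetical names, not tied to any particular SDN controller): it compares per-device byte counts gathered from the data plane against predefined per-service thresholds and flags the devices that exceed them.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical SDN application-plane monitor: compares per-device traffic
// statistics collected from the data plane against predefined thresholds.
// Names and threshold values are our own illustration, not a real controller API.
public class LpwaTrafficMonitor {

    // Predefined per-service thresholds, e.g. bytes allowed per reporting window.
    private final Map<String, Long> bytesPerWindowThreshold = new HashMap<>();

    public LpwaTrafficMonitor() {
        bytesPerWindowThreshold.put("metering", 2_000L);        // assumed service profiles
        bytesPerWindowThreshold.put("asset-tracking", 5_000L);
    }

    /**
     * Returns the IDs of devices whose observed traffic exceeds the threshold
     * registered for their service profile.
     */
    public List<String> flagSuspiciousDevices(Map<String, String> deviceService,
                                              Map<String, Long> observedBytes) {
        return observedBytes.entrySet().stream()
                .filter(e -> {
                    Long limit = bytesPerWindowThreshold.get(deviceService.get(e.getKey()));
                    return limit != null && e.getValue() > limit;
                })
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        LpwaTrafficMonitor monitor = new LpwaTrafficMonitor();
        Map<String, String> deviceService = Map.of("ed-001", "metering", "ed-002", "metering");
        Map<String, Long> observedBytes = Map.of("ed-001", 800L, "ed-002", 50_000L);
        System.out.println("flagged: " + monitor.flagSuspiciousDevices(deviceService, observedBytes));
    }
}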


URL: https://www.sciencedirect.com/science/article/pii/B9780128188804000181

What are 3 types of end devices?

Some examples of end devices are: computers, laptops, file servers, and web servers; network printers; and VoIP phones.

Which devices would be described as intermediary devices?

Examples of intermediary network devices are: switches and wireless access points (network access), routers (internetworking), and firewalls (security).

Which two devices would be described as end devices? (Choose two answers.)

An end device is a device that serves as the interface between users and the underlying communication network; it is the source or destination of a message on a network. A server and a user's PC, for instance, are both examples of end devices.

Is a phone an end device?

An endpoint device is an Internet-capable computer hardware device on a TCP/IP network. The term can refer to desktop computers, laptops, smartphones, tablets, thin clients, printers, or other specialized hardware such as sensors, actuators, point-of-sale (POS) terminals, and smart meters.