Title: Dynamic Spectrum Access Decisions
Author: George F. Elmasry
Publisher: John Wiley & Sons Limited
Genre: Industry publications
ISBN: 9781119573791
With the concept of providing DSA as a set of cloud services, the design should be able to go through an iterative process before the model is deemed workable. The design should include the following steps:
1 Create an initial service agreement driven from requirements and design analysis.
2 Run scripted scenarios to evaluate how the agreement is met during runtime through created metrics.
3 Run post‐processing analysis of these scripted scenarios to gain further knowledge of the properties of the selected metrics.
4 Refine the service agreement.
Figure 5.7 illustrates this iterative concept. The outcome of this processing is a defined service agreement with measurable metrics that a deployed system is expected to meet.
Figure 5.7 Iterative process to create a workable DSA service agreement.
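The four iterative steps above can be sketched in code. The following is a minimal hypothetical Python sketch, not an implementation from this book: the toy scenario model, the single response-time target, and the refinement rule (relaxing the target to the worst observed value) are all illustrative assumptions.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ServiceAgreement:
    max_response_ms: float  # hypothetical single-metric agreement target

def run_scripted_scenario(agreement, load):
    # Toy model (step 2): measured response time grows with offered load.
    return 2.0 * load  # milliseconds

def refine_service_agreement(agreement, scenario_loads, max_iterations=10):
    """Iterate steps 2-4 until the agreement is met in every scripted scenario."""
    for _ in range(max_iterations):
        measured = [run_scripted_scenario(agreement, l) for l in scenario_loads]
        worst = max(measured)                      # step 3: post-processing analysis
        if worst <= agreement.max_response_ms:
            return agreement                       # workable agreement reached
        # Step 4: refine the agreement toward what post-processing observed.
        agreement = replace(agreement, max_response_ms=worst)
    return agreement

sla = refine_service_agreement(ServiceAgreement(max_response_ms=50.0), [10, 30, 40])
print(sla.max_response_ms)  # 80.0 — relaxed to the worst observed response time
```

In a real design the refinement step would adjust policies and configuration parameters rather than simply relaxing the target, but the loop structure is the same.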
With standard cloud services, a customer should be able to compare service agreements from two different providers and select the provider that best meets the customer's needs. An IaaS provider attempts to optimize the use of infrastructure resources dynamically in order to offer an attractive service agreement. If the scripted scenarios in Figure 5.7 are selected to represent deployed scenarios accurately, and if the iterative process in Figure 5.7 is run for enough iterations and with enough samples, the resulting service agreement should be met by the deployed system. However, there should still be room to refine the cognitive algorithms, policies, rule sets, and configuration parameters after deployment if post‐processing analysis necessitates such a change. A good system design should require only the refining of policies, rule sets, and configuration parameters, without the need for software modification. Such a design allows the deployed cognitive engine to morph based on post‐processing analysis results.
5.3.3 Examples of DSA Cloud Services Metrics
This section presents some examples of DSA cloud services metrics that can be considered in DSA design. Note that these are examples and the designer can choose to add more metrics depending on the system requirements and design analysis.
5.3.3.1 Response Time
Metric name: Response time.
Metric description: Response time between when an entity requests a DSA service and when the service is granted.
Metric measured property: Time.
Metric scale: Milliseconds.
Metric source: Depends on the hierarchy of the networks. The source is always a DSA cognitive engine but the response can be local, distributed cooperative, or centralized. The response can also be deferred to a higher hierarchy DSA cognitive engine.
Note: The design can create more than one response time metric, since response time for a local decision is measured differently from response time obtained from a gateway or a central arbitrator.
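Keeping one response-time metric per decision tier, as the note suggests, can be sketched as follows. This is a hypothetical illustration: the tier names ("local", "central") and the `ResponseTimeTracker` class are assumptions, not part of any published DSA framework.

```python
from collections import defaultdict

class ResponseTimeTracker:
    """Collects request-to-grant latencies, keyed by decision tier."""

    def __init__(self):
        self.samples_ms = defaultdict(list)

    def record(self, tier, t_request, t_grant):
        # Store the latency in milliseconds for the given tier.
        self.samples_ms[tier].append((t_grant - t_request) * 1000.0)

    def average_ms(self, tier):
        samples = self.samples_ms[tier]
        return sum(samples) / len(samples)

tracker = ResponseTimeTracker()
tracker.record("local", t_request=0.000, t_grant=0.004)    # fast local decision
tracker.record("central", t_request=0.000, t_grant=0.125)  # deferred to central arbitrator
print(tracker.average_ms("central"))  # 125.0
```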
5.3.3.2 Hidden Node
Metric name: Hidden node detection/misdetection.
Metric description: Success or failure in detecting a hidden node.
Metric measured property: Success or failure.
Metric scale: Binary.
Metric source: An external entity, the primary user, files a complaint that the designed system is using its spectrum.
Note: Scripted scenarios are needed to evaluate this metric. It is evaluated by an external entity, not by the designed system.
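Because each scripted scenario yields a binary outcome, the metric can be aggregated into a detection rate across scenarios. A minimal sketch, assuming each scenario simply records whether the primary user filed an interference complaint (a complaint marks a misdetection):

```python
def hidden_node_detection_rate(scenario_complaints):
    """scenario_complaints: list of booleans, True = primary-user complaint filed.

    Returns the fraction of scripted scenarios with no complaint,
    i.e. the hidden-node detection success rate.
    """
    misdetections = sum(1 for complaint in scenario_complaints if complaint)
    return 1.0 - misdetections / len(scenario_complaints)

# Four scripted scenarios, one of which drew a primary-user complaint.
print(hidden_node_detection_rate([False, False, True, False]))  # 0.75
```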
5.3.3.3 Meeting Traffic Demand
Metric name: Global throughput.
Metric description: Traffic going through the system over time (global throughput efficiency).
Metric measured property: Throughput, averaged over time.
Metric scale: bps.
Metric source: Global measure of traffic going through the system. Successful use of spectrum resources dynamically should increase the wireless network's capacity to accommodate higher traffic in bps.
Note: This metric is system dependent. Some systems, such as cellular systems, tie traffic demand to revenue generation. For such systems the metric gives insight not only into achieving higher throughput, but also into accommodating the higher number of users that increases revenue. Some users' rates can be lowered, while their service continues, in order to accommodate more users as long as the service agreement is met.
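The global throughput measure described above reduces to total bits delivered divided by the observation window. A hypothetical sketch (node counts, bit totals, and the window length are illustrative numbers, not from this book):

```python
def global_throughput_bps(per_node_bits, window_seconds):
    """Global throughput in bps, averaged over the observation window.

    per_node_bits: bits delivered by each node in the system during the window.
    """
    return sum(per_node_bits) / window_seconds

# Three nodes observed over a 10-second window.
print(global_throughput_bps([4_000_000, 6_000_000, 10_000_000], 10.0))  # 2000000.0
```

Successful dynamic spectrum use should show up as a higher value of this measure under the same scripted offered load.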
5.3.3.4 Rippling
Metric name: Rippling.
Metric description: The stability of the assigned spectrum.
Metric measured property: Time.
Metric scale: Minutes.
Metric source: The DSA cognitive engine can track the time between two consecutive frequency updates.
Note: Rippling can negatively affect the previous metric (global throughput) by reducing network throughput. This metric can be measured at the node level, at the gateway level, and at the central arbitrator level. Rippling at higher levels (e.g., the central arbitrator) can have a much worse impact than rippling at a local node. Evaluation of this metric depends on where it is measured.
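Tracking the time between consecutive frequency updates, as the metric source describes, can be sketched as follows. This is a hypothetical illustration; the timestamps are assumed to be reassignment times logged by the DSA cognitive engine, in minutes.

```python
def mean_update_interval_minutes(update_times):
    """Average gap between consecutive frequency reassignments.

    update_times: sorted timestamps (in minutes) of frequency updates.
    A small value indicates an unstable (rippling) spectrum assignment.
    """
    gaps = [b - a for a, b in zip(update_times, update_times[1:])]
    return sum(gaps) / len(gaps)

# Reassignments at t = 0, 3, 4, and 10 minutes.
print(mean_update_interval_minutes([0.0, 3.0, 4.0, 10.0]))  # average gap of 10/3 minutes
```

The same function can be applied to update logs collected at the node, gateway, or central arbitrator level, yielding one rippling measurement per level.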
5.3.3.5 Co‐site Interference Impact
Metric name: Co‐site impact.
Metric description: The ability to reduce co‐site impact on the assigned spectrum.
Metric measured property: SNIR.
Metric scale: dB.
Metric source:
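The SNIR measurement underlying this metric can be sketched as follows. This is a standard signal-to-noise-plus-interference calculation, not code from this book; the power values are illustrative assumptions.

```python
import math

def snir_db(signal_w, noise_w, interference_w):
    """SNIR on the assigned channel in dB, from powers in watts."""
    return 10.0 * math.log10(signal_w / (noise_w + interference_w))

# Reducing co-site interference power should raise the measured SNIR.
before = snir_db(signal_w=1.0, noise_w=0.01, interference_w=0.09)   # about 10 dB
after = snir_db(signal_w=1.0, noise_w=0.01, interference_w=0.005)
print(round(after - before, 1))  # dB improvement from co-site interference mitigation
```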