Workgroup:
TVR
Internet-Draft:
draft-wqb-tvr-applicability-00
Published:
Intended Status:
Informational
Expires:
15 March 2025
Authors:
L. Zhang
Huawei
Q. Ma
Huawei
Q. Wu
Huawei
M. Boucadair
Orange

Applicability of YANG Data Models for Scheduling of Network Resources

Abstract

This document provides an applicability statement on how the Time-Variant Routing (TVR) data model may be used for scheduling in specific time-variant network cases, to meet the requirements of a set of representative use cases described in the "TVR (Time-Variant Routing) Requirements" (I-D.ietf-tvr-requirements).

It also presents a framework that elucidates various scheduling scenarios and identifies the entities involved in requesting scheduled changes of network resources.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 15 March 2025.


1. Introduction

Scheduling-related tasks are usually considered for efficient and deterministic network management. Such scheduling may take the form of simple recurrent tasks, such as the planned activation/deactivation of some network accesses, or can be more complex, such as the scheduled placement of requests and planned invocation of resources to satisfy service demands or adhere to local network policies. All these cases require many common building blocks for adequate management of schedules and related scheduled actions.

This document provides an applicability statement on how the Time-Variant Routing (TVR) data model may be used for scheduling in specific time-variant network cases, to meet the requirements of a set of representative TVR use cases described in [I-D.ietf-tvr-requirements]. By leveraging a reference framework presented in this document, it shows how the IETF data models in [I-D.ietf-tvr-schedule-yang] can fit into this framework and streamline the management and orchestration of network resources based on precise date and time parameters.

The document also provides guidelines for implementing scheduling capabilities across diverse network architectures, ensuring that resources are allocated and utilized in a timely and effective manner.

In addition, the document outlines several challenges that must be considered when deploying control mechanisms for scheduling network resources, including access control, atomic operations, rollback, and inter-dependencies between schedules (see Section 5).

Key use cases highlight how the proposed framework can be used for scheduling scenarios. The applicable IETF YANG modules are described, as well as other dependencies that are needed.

2. Conventions and Definitions

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

The following terms are used in this document:

  • Schedule Service Requester:

    An entity that requests scheduled changes of network resources (see Section 3.1.1).

  • Schedule Service Responder:

    An entity that handles scheduling orders and manages and coordinates network scheduling activities (see Section 3.1.2).

3. A Reference Architecture

Figure 1 presents a reference architecture for controlling the scheduling of network resources.

          +-------------------------------------------------+
          |            Schedule Service Requester           |
          +-----+------------------------------------^------+
                |                                    |
                |Request                             |Response
                |                                    |
          +-----v------------------------------------+------+
          |            Schedule Service Responder           |
          |                                                 |
          |   +---------+                     +---------+   |
          |   |         |                     |         |   |
          |   | Schedule|                     | Conflict|   |
          |   | Manager |                     | Resolver|   |
          |   |         |                     |         |   |
          |   +---------+                     +---------+   |
          |                                                 |
          |   +---------+                     +---------+   |
          |   |         |                     |         |   |
          |   | Resource|                     | Policy  |   |
          |   | Manager |                     | Engine  |   |
          |   |         |                     |         |   |
          |   +---------+                     +---------+   |
          |                                                 |
          +----------------------+--------------------------+
                                 |
                                 |
                                 |
                                 |
     +---------------------------+-----------------------------+
     |              Network Resources and Inventory            |
     +---------------------------------------------------------+
Figure 1: An Architecture for the Scheduled Network Scenarios

3.1. Functional Components

3.1.1. Schedule Service Requester

The entity requesting a resource schedule change can vary widely. For example, a network administrator may seek to restrict or limit access to specific network resources based on day or time to optimize performance and enhance security.

Additionally, higher-layer Operations Support System (OSS) components may impose restrictions on network resources in response to changing network conditions, ensuring service continuity and compliance with operational policies. Automated systems and AI-driven components can also request dynamic adjustments based on real-time data, facilitating predictive maintenance and optimizing resource usage to maintain peak network efficiency.

3.1.2. Schedule Service Responder

This component is responsible for handling scheduling orders. That is, this entity manages and coordinates all network scheduling activities. The exact internal structure of this entity is deployment-specific; however, this document describes an example internal decomposition of this entity:

  • Resource Manager:

    Manages the network resources that are subject to scheduling.

  • Schedule Manager:

    Handles creation, modification, deletion, and querying of schedules.

  • Conflict Resolver:

    Detects and resolves scheduling conflicts based on predefined policies and priorities.

  • Policy Engine:

    Enforces scheduling policies and rules, ensuring compliance with organizational requirements.

Examples of a schedule service responder include a network controller, a network management system, or the network device itself.

3.2. Functional Interfaces

To support the scheduling of network resources effectively, several functional interfaces are required. These interfaces interconnect the different components of the network scheduling system, ensuring seamless integration and operation. They include:

  • Schedule Service Requester and Responder API:

    Carries schedule resource order creation, modification, and deletion requests and responses, as well as queries of current and upcoming schedules, and conflict and alert notifications.

  • Schedule Service Responder and Network API:

    Manages interactions with the network resources, inventory systems, planning systems, etc. This interface supports querying available resources, allocating and deallocating resources based on the current schedule plan, and monitoring resource utilization.

3.3. Data Sources

When scheduling network resources, a variety of data sources are required to accurately assess the network state and make informed scheduling decisions. Here are some example data sources that will be required:

  • Network Topology Information:

    Connection details about the physical and logical layout of the network, including nodes, ports/links, and interconnections.

  • Network Resource Inventory:

    A comprehensive list of deployed network resources that are not currently in service, but may be available if enabled.

  • Current Network Utilization:

    Real-time data on the current usage of network resources, including bandwidth consumption, CPU load, memory usage, and power consumption.

  • Historic Network Utilization:

    Past data on the usage of network resources, including bandwidth consumption, CPU load, memory usage, and power consumption.

  • Scheduled Maintenance and Planned Outages:

    Information on planned maintenance activities, scheduled downtimes, and service windows.

It is critical to leverage these diverse data sources so that network administrators and automated systems can make well-informed scheduling decisions that optimize resource utilization, maintain network performance, and ensure service reliability.

3.4. State Management

The scheduling state is maintained in the schedule manager, which is responsible for the creation, deletion, modification, and query of scheduling information.

Groupings "schedule-status" and "schedule-status-with-name" in the "ietf-schedule" YANG module [I-D.ietf-netmod-schedule-yang] define common parameters for scheduling management, including status exposure.

3.5. Policy and Enforcement

Policies are a set of rules to administer, manage, and control access to network resources. For example, the following shows a scheduled interface policy:

Disable interface ethernet0/1/1 time 2025-12-01T08:00:00Z
/2025-12-15T18:00:00Z
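
Such a policy could equivalently be conveyed as instance data of the "ietf-tvr-node" YANG module [I-D.ietf-tvr-schedule-yang]. The following is a minimal, illustrative sketch that assumes the one-shot "period-start"/"period-end" form used for power schedules in Figure 4 (Section 4.5) applies equally to interface attribute schedules; the node and schedule identifiers are arbitrary. The JSON encoding is used only for illustration purposes.

{
   "ietf-tvr-node:node-schedule":[
      {
         "node-id":12345678,
         "interface-schedule":[
            {
               "name":"ethernet0/1/1",
               "default-available":true,
               "attribute-schedule":{
                  "schedules":[
                     {
                        "schedule-id":333333,
                        "period-start":"2025-12-01T08:00:00Z",
                        "period-end":"2025-12-15T18:00:00Z",
                        "attr-value":{
                           "available":false
                        }
                     }
                  ]
               }
            }
         ]
      }
   ]
}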

A set of scheduling policies and rules is maintained by the policy engine, which is responsible for policy enforcement. Policies are triggered to execute at a certain time based on the scheduling parameters. Each policy might be executed multiple times, depending on its scheduling type (one-shot vs. recurrence).

3.6. Synchronization

It is critical to ensure that all network schedule entities, including controllers and management systems, are synchronized to a common time reference. Any time inconsistency between entities that request/respond to policies or events based on time-varying parameters might cause system instability and unpredictability. Several methods are available to achieve such synchronization.
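
For example, the Network Time Protocol (NTP) may be used to synchronize each entity to a common reference clock. The following sketch configures an NTP server via the "ietf-system" YANG module (RFC 7317); the server name and address are illustrative assumptions. The JSON encoding is used only for illustration purposes.

{
   "ietf-system:system":{
      "ntp":{
         "enabled":true,
         "server":[
            {
               "name":"ntp-primary",
               "udp":{
                  "address":"192.0.2.1"
               },
               "association-type":"server",
               "iburst":true
            }
         ]
      }
   }
}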

4. TVR Use Case: Tidal Network

4.1. Overview

A tidal network is a typical scenario of the energy-efficient case (Section 3 of [I-D.ietf-tvr-use-cases]). The term means that the volume of traffic in the network changes periodically like the ocean tide. These changes are mainly driven by human activities; therefore, the tidal effect is obvious in human-populated areas, such as campuses and airports.

In the context of a tidal network, if all devices are kept up to guarantee maximum throughput at all times, a lot of power is wasted. Energy-saving methods may include the deactivation of some or all components of network nodes. These activities have the potential to alter the network topology and impact data routing/forwarding in a variety of ways. Interfaces on network nodes can be selectively disabled or enabled based on traffic patterns, thereby reducing the energy consumption of nodes during periods of low network traffic.

4.2. Architecture Example

As described in Section 3.1.2 of [I-D.ietf-tvr-requirements], the locality of schedule generation can be centralized or distributed. Depending on the locality of schedule generation, the architecture depicted in Figure 1 may be applied to a tidal network in two different ways, described in Section 4.2.1 and Section 4.2.2, respectively.

4.2.1. Centralized Architecture

With centralized schedule generation, the Schedule Service Requester in Figure 1 can be a network orchestrator, and the Schedule Service Responder can be one or more network controllers; the resulting architecture with the orchestrator and controller(s) is shown in Figure 2. Readers may refer to Section 4 of [RFC8309] for an overview of the "orchestrator" and "controller" roles in an SDN system. After generating schedules, the controller needs to determine whether to distribute these schedules based on the schedule Execution Locality defined in Section 3.1.3 of [I-D.ietf-tvr-requirements].

 +-------------------------------------------------+
 |                  Orchestrator                   |
 +-----+------------------------------------^------+
       |                                    |
       |Request                             |Response
       |                                    |
 +-----v------------------------------------+------+
 |               Network Controller(s)             |
 |                                                 |
 |   +---------+                     +---------+   |
 |   |         |                     |         |   |
 |   | Schedule|                     | Conflict|   |
 |   | Manager |                     | Resolver|   |
 |   |         |                     |         |   |
 |   +---------+                     +---------+   |
 |                                                 |
 |   +---------+                     +---------+   |
 |   |         |                     |         |   |
 |   | Resource|                     | Policy  |   |
 |   | Manager |                     | Engine  |   |
 |   |         |                     |         |   |
 |   +---------+                     +---------+   |
 |                                                 |
 +-------------------------------------------------+
Figure 2: An Architecture Example for Centralized Schedule Generation Scenario

4.2.2. Distributed Architecture

With distributed schedule generation, the Schedule Service Requester in Figure 1 can be a network controller, and the Schedule Service Responders are the network devices; the resulting architecture with the network controller and devices is shown in Figure 3. In this mode, schedules are generated and executed on the same devices, so no schedule distribution process is involved.

      +-----------------------------------------------------+
      |              Network Controller(s)                  |
      +-+--------------^------------------+---------------^-+
        |              |                  |               |
        |Request       |Response          |Request        |Response
        |              |                  |               |
+-------v--------------+-------+    +-----v---------------+--------+
|       Network Device A       |    |       Network Device B       |
|                              |    |                              |
|   +---------+  +---------+   |    |   +---------+  +---------+   |
|   | Schedule|  | Conflict|   |    |   | Schedule|  | Conflict|   |
|   | Manager |  | Resolver|   |    |   | Manager |  | Resolver|   |
|   +---------+  +---------+   |    |   +---------+  +---------+   |  …
|                              |    |                              |
|   +---------+  +---------+   |    |   +---------+  +---------+   |
|   | Resource|  | Policy  |   |    |   | Resource|  | Policy  |   |
|   | Manager |  | Engine  |   |    |   | Manager |  | Engine  |   |
|   +---------+  +---------+   |    |   +---------+  +---------+   |
|                              |    |                              |
+------------------------------+    +------------------------------+
Figure 3: An Architecture Example for Distributed Schedule Generation Scenario

4.3. Example Procedures

4.3.1. Building Traffic Profiles

The first step in performing schedules in a tidal network is to comprehensively analyze the traffic patterns at different network devices and links and then establish clear tidal points for lower and upper network traffic. It should be noted that the regularity of traffic changes may differ at different times (for example, the traffic regularity on workdays and weekends may be totally different); the selection of tidal points should take full account of these factors. How to analyze the traffic patterns and determine the tidal points is outside the scope of this document.

4.3.2. Establishing Minimum and Peak Topologies

An algorithm is required to calculate the minimum and peak topologies needed to service the expected demand in different time slots. Such a topology calculation algorithm is outside the scope of this document.

4.3.3. Generating Schedule

The schedule request is generated by the Schedule Service Requester according to the switching regularity of the minimum and peak topologies and is sent to the Schedule Service Responder. For example, if the minimum topology is in effect from 1 AM to 7 AM every day, then the network administrator needs to shut down some links or devices from 1 AM to 7 AM.

When the Schedule Service Responder receives the schedule request, it handles the request with the following procedure:

  • The Conflict Resolver checks whether the current schedule request conflicts with other schedules. If there is no conflict, processing proceeds to the next step. Otherwise, an error message is returned to the Schedule Service Requester, indicating that the conflict check fails. A typical failure scenario is that the resource triggered by the current schedule is occupied by another schedule. For example, an existing schedule requests a link to be available from 5 AM to 10 AM every day, but a new schedule disconnects it from 2 AM to 6 AM every two days (see the sketch after this list).

  • The Schedule Manager creates a list of schedules and holds it in a schedule database. For a recurrence schedule, the effective range of occurrence instances is generated by the Schedule Manager. It also handles the modification, deletion, and querying of schedule information.

  • The Policy Engine maintains the received scheduling policies and rules. It enforces the predefined policies when a time trigger maintained in the schedule database indicates that a scheduled time has occurred. Policy enforcement may also require interaction with other components, e.g., the Resource Manager.

  • The Resource Manager allocates or reclaims network resources (in this example, the resources are the specific interfaces associated with the links) when a request from the Policy Engine is received, and it also holds the network resources in a resource database. The allocation of network resources may require a variety of data sources, such as network topology information, the network resource inventory, and current network utilization.
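
As an illustration, the conflicting request in the Conflict Resolver example above might be encoded as follows. This is a sketch modeled on the example in Figure 4 (Section 4.5): the node and schedule identifiers are arbitrary, and the 4-hour disconnection starting at 2 AM every two days is expressed as a recurrence with a duration of 14400 seconds and an interval of 2. The JSON encoding is used only for illustration purposes.

{
   "ietf-tvr-node:node-schedule":[
      {
         "node-id":12345678,
         "interface-schedule":[
            {
               "name":"interface1",
               "default-available":true,
               "attribute-schedule":{
                  "schedules":[
                     {
                        "schedule-id":444444,
                        "recurrence-first":{
                           "utc-start-time":"2025-12-01T02:00:00Z",
                           "duration":14400
                        },
                        "frequency":"ietf-schedule:daily",
                        "interval":2,
                        "attr-value":{
                           "available":false
                        }
                     }
                  ]
               }
            }
         ]
      }
   ]
}

Upon receiving such a request, the Conflict Resolver would detect that the scheduled unavailability from 2 AM to 6 AM overlaps an existing schedule requiring the link to be available from 5 AM to 10 AM, and would return an error to the Schedule Service Requester.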

4.3.4. Distributing Schedule

Schedule distribution means that network schedules are distributed to the executing devices via dedicated management interfaces. Schedule distribution is not mandatory; it depends on where the schedules are executed. If the schedules are generated and executed on the same device, schedule distribution is not required. If schedules are generated and executed on different devices, schedule distribution is needed. Note that if a schedule affects the topology and a distributed routing protocol is used, then the schedule needs to be distributed to all the nodes in the topology, so that other nodes can consider the impact of the schedule when calculating and selecting routes.

4.3.5. Executing Schedule

Schedule execution means that a component (e.g., a device) undertakes an action (e.g., allocates or deallocates resources) at specified time points. In a tidal network, schedule execution means powering on/off specific network components (such as interfaces or entire network devices) directly or via commands.

The schedule executor should understand the consequences of the schedule execution. Powering network components on or off usually affects the network topology; topological additions and deletions need to be considered separately.

A link coming up or a node joining a topology should not cause any functional change until the change is proven to be fully operational. Routing paths may be pre-computed but should not be installed before all of the topology changes are confirmed to be operational. The benefits of this pre-computation appear to be very small. The network may choose not to do any pre-installation or pre-computation in reaction to topological additions, at a small cost in operational efficiency.

Topological deletions are an entirely different matter. If a link or node is to be removed from the topology, then the network should act before the anticipated change to route traffic around the expected topological change. Specifically, at some point before the planned topology change, the routing paths should be pre-computed and installed before the topology change takes place. The time required to perform such a planned action will vary depending on the exact network and configuration. When using an IGP or other distributed routing protocols, the affected links may be set to a high metric to direct traffic to alternate paths. This type of change does require some time to propagate through the network, so the metric change should be initiated far enough in advance that the network converges before the actual topological change.

4.4. Applicable Models

The following is a list of applicable YANG modules that can be used to exchange data between the schedule service requester and responder specified in Section 4.2:

  • The "ietf-tvr-topology" YANG module in [I-D.ietf-tvr-schedule-yang] is used to manage the network topology with time-varying attributes (e.g., node/link availability, link bandwidth, or delay).

  • The "ietf-tvr-node" YANG module in [I-D.ietf-tvr-schedule-yang], which is a device model, is designed to manage a single node with scheduled attributes (e.g., powered on/off).

  • [I-D.ietf-netmod-schedule-yang] defines the "ietf-schedule" YANG module, which provides common building blocks for the YANG modules described in this section. The module doesn't define any protocol-accessible nodes but a set of reusable groupings applicable to any scheduling context.

4.5. Code Examples

Figure 4 shows an example of a node schedule where the node is powered on from 12 AM, December 1, 2025 to 12 AM, December 1, 2026 in UTC, and its interface named "interface1" is scheduled to be enabled at 7:00 AM and disabled at 1:00 AM, every day, from December 1, 2025 to December 1, 2026 in UTC. The JSON encoding is used only for illustration purposes.

{
   "ietf-tvr-node:node-schedule":[
      {
         "node-id":12345678,
         "node-power-schedule":{
            "power-default":false,
            "schedules":[
               {
                  "schedule-id":111111,
                  "period-start":"2025-12-01T00:00:00Z",
                  "period-end":"2026-12-01T00:00:00Z",
                  "attr-value":{
                     "power-state":true
                  }
               }
            ]
         },
         "interface-schedule":[
            {
               "name":"interface1",
               "default-available":false,
               "default-bandwidth":1000000000,
               "attribute-schedule":{
                  "schedules":[
                     {
                        "schedule-id":222222,
                        "recurrence-first":{
                           "utc-start-time":"2025-12-01T07:00:00Z",
                           "duration":64800
                        },
                        "utc-until":"2026-12-01T00:00:00Z",
                        "frequency":"ietf-schedule:daily",
                        "interval":1,
                        "attr-value":{
                           "available":true
                        }
                     }
                  ]
               }
            }
         ]
      }
   ]
}
Figure 4: An Example of Interface Activation Scheduling

5. Other Dependencies

This section presents some outstanding dependencies that need to be considered when deploying the scheduling mechanism.

5.1. Access Control

Access control ensures that only authorized control entities have access to schedule information, including the querying, creation, modification, and deletion of schedules. Unauthorized access may lead to unintended consequences.

The Network Configuration Access Control Model (NACM) [RFC8341] provides standard mechanisms to restrict access for particular users to a preconfigured subset of all available NETCONF or RESTCONF protocol operations and content.
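
For example, the following NACM instance data permits members of a "schedule-admins" group to modify scheduled node attributes while write access is denied by default. This is a sketch: the group name and the target module are illustrative assumptions. The JSON encoding is used only for illustration purposes.

{
   "ietf-netconf-acm:nacm":{
      "enable-nacm":true,
      "write-default":"deny",
      "rule-list":[
         {
            "name":"schedule-admin-rules",
            "group":["schedule-admins"],
            "rule":[
               {
                  "name":"permit-node-schedule-write",
                  "module-name":"ietf-tvr-node",
                  "access-operations":"create read update delete",
                  "action":"permit"
               }
            ]
         }
      ]
   }
}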

5.2. Atomic Operations

Atomic operations are guaranteed to be either executed completely or not executed at all. Deployments based on scheduling must ensure that schedule changes based on recurrence rules are applied as atomic transactions: either all changes are successfully applied, or none at all. For example, a network policy may be scheduled to be active every Tuesday in January 2025. If the schedule is changed to every Wednesday in January 2025, the recurrence set changes from January 7, 14, 21, and 28 to January 1, 8, 15, 22, and 29. If some occurrences cannot be applied successfully (e.g., January 1 cannot be scheduled because of a conflict), the others in the recurrence set will not be applied either.
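
Reusing only the recurrence constructs shown in Figure 4, the modified schedule (every Wednesday in January 2025) might, for instance, be conveyed as a weekly recurrence anchored on the first Wednesday. The following fragment is an illustrative sketch; the "weekly" frequency identity is assumed by analogy with the "daily" identity used in Figure 4. It yields the recurrence set of January 1, 8, 15, 22, and 29, with the policy active for the whole day (86400 seconds) on each occurrence.

"recurrence-first":{
   "utc-start-time":"2025-01-01T00:00:00Z",
   "duration":86400
},
"utc-until":"2025-02-01T00:00:00Z",
"frequency":"ietf-schedule:weekly",
"interval":1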

In addition, the scheduling management of network events, policies, services, and resources may involve operations that are performed at particular future time(s). Multiple operations might be involved for each instance in the recurrence set; either all operations are successfully performed, or none at all.

5.3. Rollback Mechanism

A rollback mechanism is useful to ensure that, in case of an error, the system can revert to its previous state. Deployments are required to save checkpoints (manually or automatically) of network scheduling activities that can be used for rollback when necessary, to maintain network stability.

5.4. Inter-dependency

Enforcement of some scheduled actions may depend on other scheduled actions. Means to identify such dependencies are needed.

6. Manageability Considerations

6.1. Multiple Schedule Service Requesters

This document does not make any assumption about the number of schedule service requester entities that interact with a schedule service responder. This means that multiple schedule service requesters may send requests to the responder to schedule the same network resources, which may lead to conflicts. If scheduling conflicts occur, predefined policies or priorities may be useful to determine how schedules from different sources should be prioritized.

7. Security Considerations

Time synchronization may potentially lead to security threats; e.g., attackers may modify the system time, causing time inconsistencies that affect the normal functionalities for managing and coordinating network scheduling activities. In addition, care must be taken when defining recurrences that occur very frequently, as they can be an additional source of attacks by keeping the system permanently busy with the management of scheduling.

8. IANA Considerations

This document has no IANA actions.

9. References

9.1. Normative References

[RFC2119]
Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/rfc/rfc2119>.
[RFC8174]
Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017, <https://www.rfc-editor.org/rfc/rfc8174>.

9.2. Informative References

[I-D.contreras-opsawg-scheduling-oam-tests]
Contreras, L. M. and V. Lopez, "A YANG Data Model for Network Diagnosis by Scheduling Sequences of OAM Tests", Work in Progress, Internet-Draft, draft-contreras-opsawg-scheduling-oam-tests-02, <https://datatracker.ietf.org/doc/html/draft-contreras-opsawg-scheduling-oam-tests-02>.
[I-D.ietf-netmod-schedule-yang]
Ma, Q., Wu, Q., Boucadair, M., and D. King, "A Common YANG Data Model for Scheduling", Work in Progress, Internet-Draft, draft-ietf-netmod-schedule-yang-02, <https://datatracker.ietf.org/doc/html/draft-ietf-netmod-schedule-yang-02>.
[I-D.ietf-opsawg-ucl-acl]
Ma, Q., Wu, Q., Boucadair, M., and D. King, "A YANG Data Model and RADIUS Extension for Policy-based Network Access Control", Work in Progress, Internet-Draft, draft-ietf-opsawg-ucl-acl-05, <https://datatracker.ietf.org/doc/html/draft-ietf-opsawg-ucl-acl-05>.
[I-D.ietf-tvr-requirements]
King, D., Contreras, L. M., and B. Sipos, "TVR (Time-Variant Routing) Requirements", Work in Progress, Internet-Draft, draft-ietf-tvr-requirements-03, <https://datatracker.ietf.org/doc/html/draft-ietf-tvr-requirements-03>.
[I-D.ietf-tvr-schedule-yang]
Qu, Y., Lindem, A., Kinzie, E., Fedyk, D., and M. Blanchet, "YANG Data Model for Scheduled Attributes", Work in Progress, Internet-Draft, draft-ietf-tvr-schedule-yang-02, <https://datatracker.ietf.org/doc/html/draft-ietf-tvr-schedule-yang-02>.
[I-D.ietf-tvr-use-cases]
Birrane, E. J., Kuhn, N., Qu, Y., Taylor, R., and L. Zhang, "TVR (Time-Variant Routing) Use Cases", Work in Progress, Internet-Draft, draft-ietf-tvr-use-cases-09, <https://datatracker.ietf.org/doc/html/draft-ietf-tvr-use-cases-09>.
[RFC8309]
Wu, Q., Liu, W., and A. Farrel, "Service Models Explained", RFC 8309, DOI 10.17487/RFC8309, January 2018, <https://www.rfc-editor.org/rfc/rfc8309>.
[RFC8341]
Bierman, A. and M. Bjorklund, "Network Configuration Access Control Model", STD 91, RFC 8341, DOI 10.17487/RFC8341, March 2018, <https://www.rfc-editor.org/rfc/rfc8341>.
[RFC8413]
Zhuang, Y., Wu, Q., Chen, H., and A. Farrel, "Framework for Scheduled Use of Resources", RFC 8413, DOI 10.17487/RFC8413, July 2018, <https://www.rfc-editor.org/rfc/rfc8413>.
[RFC8519]
Jethanandani, M., Agarwal, S., Huang, L., and D. Blair, "YANG Data Model for Network Access Control Lists (ACLs)", RFC 8519, DOI 10.17487/RFC8519, March 2019, <https://www.rfc-editor.org/rfc/rfc8519>.

Acknowledgments

TODO acknowledge.

Contributors

Daniel King
Lancaster University
United Kingdom
Charalampos (Haris) Rotsos
Lancaster University
Peng Liu
China Mobile
Tony Li
Juniper Networks

Authors' Addresses

Li Zhang
Huawei
Qiufang Ma
Huawei
101 Software Avenue, Yuhua District
Nanjing, Jiangsu
210012
China
Qin Wu
Huawei
101 Software Avenue, Yuhua District
Nanjing, Jiangsu
210012
China
Mohamed Boucadair
Orange