DATA FUNCTIONS AND PROCEDURES IN THE NON-REAL TIME RADIO ACCESS NETWORK INTELLIGENT CONTROLLER (U.S. Patent Application Publication No. 2024/0196178, published Jun. 13, 2024)

CROSS-REFERENCE TO RELATED PATENT APPLICATION(S)

This application claims the benefit of U.S. Provisional Application No. 63/209,279, filed Jun. 10, 2021, the disclosure of which is incorporated by reference as set forth in full.

TECHNICAL FIELD

This disclosure generally relates to systems and methods for wireless communications and, more particularly, to data functions and procedures in the non-real time (RT) radio access network (RAN) intelligent controller (RIC).

BACKGROUND

Wireless devices are becoming widely prevalent and are increasingly requesting access to wireless channels. The Open RAN Alliance (O-RAN) is committed to evolving radio access networks. O-RAN networks will be deployed based on 3rd Generation Partnership Project (3GPP) defined network slicing technologies.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a network diagram illustrating an example network environment for data functions, in accordance with one or more example embodiments of the present disclosure.

FIGS. 2-7 depict illustrative schematic diagrams for data functions, in accordance with one or more example embodiments of the present disclosure.

FIG. 8 illustrates a flow diagram of an illustrative process for an illustrative data functions system, in accordance with one or more example embodiments of the present disclosure.

FIG. 9 illustrates a network, in accordance with one or more example embodiments of the present disclosure.

FIG. 10 illustrates a wireless network, in accordance with one or more example embodiments of the present disclosure.

FIG. 11 is a block diagram illustrating components, in accordance with one or more example embodiments of the present disclosure.

FIG. 12 illustrates an example Open RAN (O-RAN) system architecture.

FIG. 13 illustrates a logical architecture of the O-RAN system of FIG. 12.

DETAILED DESCRIPTION

The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, algorithm, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims.

The O-RAN architecture aims to enable intelligent RAN operation and optimization using artificial intelligence (AI) and machine learning (ML) in wireless communication networks. The Non-Real-Time RAN Intelligent Controller (Non-RT RIC) is developed to manage AI/ML-assisted solutions for RAN functions. To support third-party modular applications (rApps), the non-RT RIC framework provides functionalities and services to retrieve data from rApps and feed data into these rApps.

rApps are modular applications that leverage the functionality exposed by the non-RT RIC/service management and orchestration (SMO) Framework over the R1 interface to perform multi-vendor RAN optimization and assurance.

One of the expected benefits of this architecture is to bring new application developers into the emerging application ecosystem. rApps can be developed and delivered by any third party. Because rApps are based on open interfaces and are platform agnostic, they can run on any vendor's non-RT RIC.

The non-RT RIC can access data from different network domains such as the RAN, core, and transport, as well as from other external data sources. The non-RT RIC can also use data provided or enriched by rApps themselves. This makes the correlations and decisions made at the non-RT RIC more accurate, with broader visibility into and insight about network performance.

Various embodiments herein include data functions and services provided by the Non-RT RIC framework to enable data registration, discovery, subscription, collection, delivery, processing, and storage, etc. The disclosed data functions and services in the Non-RT RIC framework pave the way to introduce a data plane in the overall O-RAN architecture. In some embodiments, previous U.S. Provisional Patent Applications by the present inventors on data policy administration functions and services may be regarded as part of the data functions and services provided by the Non-RT RIC framework. A complete data plane design to enable data as a service (DaaS) in accordance with some embodiments may be found in another previously submitted U.S. Provisional Patent Application by the present inventors.

Example embodiments of the present disclosure relate to systems, methods, and devices for data functions and procedures in the non-real time (RT) RAN intelligent controller (RIC).

Various embodiments provide data functions and procedures in the Non-RT RIC. rApps may utilize data functionalities exposed by the Non-RT RIC framework for data registration, discovery, subscription, collection, delivery, processing, and storage, etc.

The above descriptions are for purposes of illustration and are not meant to be limiting. Numerous other examples, configurations, processes, algorithms, etc., may exist, some of which are described in greater detail below. Example embodiments will now be described with reference to the accompanying figures.

FIG. 1 depicts an illustrative schematic diagram for data functions, in accordance with one or more example embodiments of the present disclosure.

In one or more embodiments, a data functions system may provide data functions in the Non-RT RIC framework. These data functions in the Non-RT RIC framework interface with rApps to provide: data registration functionality, data discovery functionality, data subscription/request functionality, data collection functionality, data verification and security functionality, data delivery functionality, data processing functionality, data storage functionality, data policy administration functionality, etc.

When data consumer rApps register in the Non-RT RIC framework, they communicate information about the data types that they consume.

When data consumer rApps subscribe/request data from the Non-RT RIC framework, they communicate information about the data for consumption with specific data type, periodicity, and scope, etc. A “scope” declaration specifies the filter criteria applied to the “subject” of the requested data.

Data consumer rApps communicate information about how the subscribed/requested data is delivered from the Non-RT RIC framework, which can be summarized as a delivery policy. In one embodiment, a delivery policy includes:

    • Time interval between two data deliveries, if the subscribed/requested data is delivered periodically.
    • Event-triggering conditions, if the data delivery is event-triggered.
    • Number of data points/samples contained in one data delivery, etc.
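
By way of a non-limiting illustration, a data subscription request carrying such a delivery policy might be represented as in the following Python sketch; the field names (consumer_id, data_type, periodicity_s, scope, and the delivery-policy attributes) are assumptions made only for this example, and no particular encoding is mandated by this disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DeliveryPolicy:
    """How subscribed/requested data is delivered to a data consumer rApp."""
    interval_s: Optional[float] = None       # time interval between two periodic deliveries
    trigger_event: Optional[str] = None      # event-triggering condition, if event-triggered
    samples_per_delivery: int = 1            # number of data points/samples in one delivery

@dataclass
class DataSubscriptionRequest:
    """Subscription/request sent by a data consumer rApp over R1 termination."""
    consumer_id: str
    data_type: str                                   # registered data type being requested
    periodicity_s: Optional[float] = None            # requested data periodicity
    scope: dict = field(default_factory=dict)        # filter criteria on the data "subject"
    delivery: DeliveryPolicy = field(default_factory=DeliveryPolicy)

# Example: per-cell throughput produced every 5 s, delivered every 60 s, 10 samples at a time.
request = DataSubscriptionRequest(
    consumer_id="rapp-coverage-optimizer",
    data_type="cell-throughput",
    periodicity_s=5.0,
    scope={"cell_id": ["cell-1", "cell-2"]},
    delivery=DeliveryPolicy(interval_s=60.0, samples_per_delivery=10),
)
```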

When data producer rApps register in the Non-RT RIC framework, they communicate information about the data types that they produce.

Data producer rApps communicate information about how the registered data types are shared with data consumers (e.g., which data consumer can/cannot discover data types the data producer registers), which can be summarized into a data discovery policy.

Data producer rApps communicate information about how the data is collected by the Non-RT RIC framework, which can be summarized into a data collection policy. In one embodiment, a data collection policy includes:

    • Time interval between two data collections, if the data is collected periodically.
    • Event-triggering conditions, if the data collection is event-triggered.
    • Number of data points/samples within one data collection, etc.

rApps communicate information about the creation and configuration of data processing policies in the Non-RT RIC framework, for example to quantize data, label data, correlate data, etc.
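
A similar, purely illustrative sketch of a data collection policy and a data processing policy follows; the step names ("normalize", "quantize") and the processing functions are assumptions of this example rather than normative definitions.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class CollectionPolicy:
    """How the Non-RT RIC framework collects data from a data producer rApp."""
    interval_s: Optional[float] = None     # time interval between two periodic collections
    trigger_event: Optional[str] = None    # event-triggering condition, if event-triggered
    samples_per_collection: int = 1        # data points/samples within one collection

@dataclass
class ProcessingPolicy:
    """Ordered processing steps applied before data is stored or delivered."""
    steps: List[str] = field(default_factory=list)

# Hypothetical processing step implementations keyed by step name.
PROCESSORS: Dict[str, Callable[[List[float]], List[float]]] = {
    "normalize": lambda xs: [x / max(xs) for x in xs] if xs and max(xs) else list(xs),
    "quantize": lambda xs: [round(x, 2) for x in xs],
}

def apply_processing(policy: ProcessingPolicy, samples: List[float]) -> List[float]:
    """Apply each configured processing step in order, skipping unknown steps."""
    for step in policy.steps:
        processor = PROCESSORS.get(step)
        if processor is not None:
            samples = processor(samples)
    return samples

print(apply_processing(ProcessingPolicy(steps=["normalize", "quantize"]), [10.0, 20.0, 40.0]))
```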

In one embodiment, the data functions include a data management function, a data catalog, and data storage. The data management function interfaces with rApps for data registration, discovery, subscription, collection, delivery, processing, and policy administration. The data catalog tracks the data types available in the Non-RT RIC framework as produced by data producers. The data management function matches a registration request for data consumption against the known data types by checking the data catalog. The data storage stores data collected from data producers. The data management function interfaces with the data storage and performs data read and write operations.
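
The catalog check described above can be pictured with the following minimal sketch; the in-memory dictionary and method names are assumptions for illustration only and do not represent a required catalog schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class DataCatalog:
    """Tracks data types registered by data producer rApps (illustrative only)."""
    entries: Dict[str, str] = field(default_factory=dict)   # data type -> producer rApp id

    def register(self, data_type: str, producer_id: str) -> None:
        """Add a data type produced by a data producer rApp to the known data types."""
        self.entries[data_type] = producer_id

    def is_known(self, data_type: str) -> bool:
        """Used by the data management function to match a request against known types."""
        return data_type in self.entries

    def find_producer(self, data_type: str) -> Optional[str]:
        """Identify the data producer rApp able to satisfy a consumer subscription."""
        return self.entries.get(data_type)

catalog = DataCatalog()
catalog.register("cell-throughput", "rapp-kpi-producer")
assert catalog.is_known("cell-throughput")
assert catalog.find_producer("cell-throughput") == "rapp-kpi-producer"
```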

In one embodiment, the data management function is further decomposed into:

    • a data registration and subscription function for data registration and subscription;
    • a data verification and security function to verify the validity of data;
    • a data processing function to process data, such as data labelling, data normalization, data quantization, data correlation, and attaching attributes to data, etc.;
    • a data storage function to perform data read and write operations with the data storage; and
    • a data policy administration function to manage the various data policies.

Data functions can provide services to each other within the Non-RT RIC framework. In one embodiment, the data functions are connected via a service-based interface as illustrated in FIG. 1.
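
One way to picture this service-based interconnection is a simple service registry through which one data function exposes a service and a peer function invokes it; the registry API and the service name below are hypothetical and used only to illustrate the pattern.

```python
from typing import Any, Callable, Dict

class ServiceRegistry:
    """Illustrative service-based interface among data functions in the framework."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[..., Any]] = {}

    def expose(self, name: str, handler: Callable[..., Any]) -> None:
        """A data function registers a service it produces."""
        self._services[name] = handler

    def invoke(self, name: str, **kwargs: Any) -> Any:
        """A peer data function consumes a registered service."""
        return self._services[name](**kwargs)

registry = ServiceRegistry()

# The data policy administration function exposes a policy-creation service ...
registry.expose(
    "create-delivery-policy",
    lambda data_type, interval_s: {"data_type": data_type, "interval_s": interval_s},
)

# ... which the data registration and subscription function then consumes.
policy = registry.invoke("create-delivery-policy",
                         data_type="cell-throughput", interval_s=60.0)
print(policy)
```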

It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.

In one embodiment, the procedure for a data consumer rApp to discover data types that it consumes is illustrated in FIG. 2.

Step 1: data consumer rApp sends discovery request to data management function through R1 termination.

Step 2: The data management function checks the data catalog to determine whether the data type indicated in the discovery request is one of the known data types.

Step 3: Data management function sends the discovery response to the data consumer rApp through R1 termination.

In another embodiment, the procedure for a data consumer rApp to discover data types that it consumes is illustrated in FIG. 3.

Step 1: data consumer rApp sends discovery request to data registration and subscription function through R1 termination.

Step 2: The data registration and subscription function checks the data catalog to determine whether the data type indicated in the discovery request is one of the known data types.

Step 3: Data registration and subscription function checks the discovery policy for the requested data type to determine whether the data consumer rApp is allowed to discover this data type.

Step 4: Data registration and subscription function sends the discovery response to the data consumer rApp through R1 termination.
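
The FIG. 3 procedure can be summarized, for illustration only, as the following function; the discovery-policy callback and the response fields are assumptions of this sketch, not a normative R1 message format.

```python
from typing import Callable, Dict, Set

# Hypothetical discovery policy: returns True if the consumer may discover the data type.
DiscoveryPolicy = Callable[[str, str], bool]

def handle_discovery_request(consumer_id: str,
                             data_type: str,
                             known_types: Set[str],
                             discovery_policy: DiscoveryPolicy) -> Dict[str, str]:
    """Steps 2-4 of the FIG. 3 procedure as a single illustrative function.

    Step 2: check the data catalog for the requested data type.
    Step 3: check the discovery policy for this consumer/data-type pair.
    Step 4: build the discovery response returned over R1 termination.
    """
    if data_type not in known_types:                      # Step 2
        return {"result": "unknown-data-type"}
    if not discovery_policy(consumer_id, data_type):      # Step 3
        return {"result": "not-allowed"}
    return {"result": "discoverable", "data_type": data_type}   # Step 4

# Example policy: only the coverage optimizer may discover per-UE measurements.
def example_policy(consumer_id: str, data_type: str) -> bool:
    return data_type != "ue-measurements" or consumer_id == "rapp-coverage-optimizer"

print(handle_discovery_request("rapp-energy-saver", "ue-measurements",
                               {"ue-measurements"}, example_policy))
```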

In one embodiment, the procedure for a data producer to register data types that it produces is illustrated in FIG. 4.

Step 1: Data producer rApp sends registration request to data management function through R1 termination.

Step 2: Data management function updates data catalog and adds the data type indicated in the registration request into the list of known data types.

Step 3: Data management function sends the registration response to the data producer rApp through R1 termination.

In another embodiment, the procedure for a data producer to register data types that it produces is illustrated in FIG. 5.

Step 1: Data producer rApp sends registration request to data registration and subscription function through R1 termination.

Step 2: Data registration and subscription function creates/updates discovery policy for the registered data types, using the service provided by data policy administration function.

Step 3: Data registration and subscription function updates the data catalog and adds the data type indicated in the registration request into the list of “known” data types, based on the discovery policy (e.g., informing the data catalog about which data consumer rApps can or cannot “know” this data type).

Step 4: Data registration and subscription function sends the registration response to the data producer rApp through R1 termination.
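
For illustration only, the registration handling of FIG. 5 might look as follows; the per-consumer visibility encoding and the dictionary-based catalog are assumptions of this sketch.

```python
from typing import Dict, List

def handle_producer_registration(producer_id: str,
                                 data_type: str,
                                 allowed_consumers: List[str],
                                 catalog: Dict[str, dict],
                                 discovery_policies: Dict[str, set]) -> dict:
    """Steps 2-4 of the FIG. 5 procedure as a single illustrative function.

    Step 2: create/update the discovery policy for the registered data type.
    Step 3: update the data catalog, recording which consumers may "know" the type.
    Step 4: build the registration response sent over R1 termination.
    """
    discovery_policies[data_type] = set(allowed_consumers)            # Step 2
    catalog[data_type] = {"producer": producer_id,                    # Step 3
                          "visible_to": set(allowed_consumers)}
    return {"result": "registered", "data_type": data_type}           # Step 4

catalog: Dict[str, dict] = {}
policies: Dict[str, set] = {}
response = handle_producer_registration(
    producer_id="rapp-kpi-producer",
    data_type="cell-throughput",
    allowed_consumers=["rapp-coverage-optimizer"],
    catalog=catalog,
    discovery_policies=policies,
)
print(response, catalog)
```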

In one embodiment, the procedures for data subscription, collection, and delivery are illustrated in FIG. 6.

Step 1: Data consumer rApp sends subscription request to data management function through R1 termination.

Step 2: Data management function sends subscription response to the data consumer rApp through R1 termination.

Step 3: Data management function identifies the right data producer by checking the data catalog.

Step 4: Data management function sends subscription request to the data producer rApp through R1 termination.

Step 5: Data producer rApp sends subscription response to data management function through R1 termination.

Step 6: Data producer rApp sends a notification to the Non-RT RIC framework through R1 termination, after an event trigger (e.g., production data is ready to be collected). Data producer rApp pushes the data to the Non-RT RIC framework.

Step 7: Data management function conducts processing on the received data (e.g., correlate data with UE or cell IDs, add time stamps to the data, etc.).

Step 8: Data management function writes the data into data storage.

Step 9: Data management function reads the data from data storage after an event trigger (e.g., stored data should be delivered to the data consumer).

Step 10: Data management function conducts processing on the data (e.g., quantization, normalization, etc.).

Step 11: Data management function sends a notification to the data consumer rApp through R1 termination, and it pushes the data to the data consumer rApp.
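
The end-to-end FIG. 6 flow is sketched below as a single class; this is a schematic rendering only, in which R1 messages are modeled as dictionaries and the data storage is an in-memory map, neither of which is required by the disclosure.

```python
import time
from collections import defaultdict
from typing import Dict, List

class DataManagementFunction:
    """Illustrative data subscription, collection, and delivery flow of FIG. 6."""

    def __init__(self, catalog: Dict[str, str]) -> None:
        self.catalog = catalog                 # data type -> data producer rApp id
        self.storage = defaultdict(list)       # data type -> stored, processed samples

    def subscribe(self, consumer_id: str, data_type: str) -> dict:
        """Steps 1-5: accept a consumer subscription and forward it to the producer."""
        producer = self.catalog.get(data_type)             # Step 3: catalog lookup
        if producer is None:
            return {"result": "unknown-data-type"}
        # Steps 4-5 would be subscription request/response with the producer over R1.
        return {"result": "subscribed", "producer": producer}

    def collect(self, data_type: str, samples: List[float]) -> int:
        """Steps 6-8: producer pushes data; process it and write it to data storage."""
        processed = [{"value": s, "timestamp": time.time()} for s in samples]   # Step 7
        self.storage[data_type].extend(processed)                               # Step 8
        return len(processed)

    def deliver(self, data_type: str) -> dict:
        """Steps 9-11: read stored data, process it, and notify/push to the consumer."""
        stored = self.storage.pop(data_type, [])            # Step 9
        payload = [round(s["value"], 2) for s in stored]    # Step 10: e.g. quantization
        return {"notification": "data-available", "data": payload}   # Step 11

dmf = DataManagementFunction({"cell-throughput": "rapp-kpi-producer"})
print(dmf.subscribe("rapp-coverage-optimizer", "cell-throughput"))
dmf.collect("cell-throughput", [10.123, 20.456])
print(dmf.deliver("cell-throughput"))
```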

In another embodiment, the procedures for data subscription, collection, and delivery are illustrated in FIG. 7.

Step 1: Data consumer rApp sends subscription request to data registration and subscription function through R1 termination.

Step 2: Data registration and subscription function sends subscription response to the data consumer rApp through R1 termination.

Step 3: Data registration and subscription function identifies the right data producer by checking the data catalog.

Step 4: Data registration and subscription function creates/updates data delivery policy for subscribed data, using the services provided by data policy administration function.

Step 5: Data registration and subscription function creates/updates data processing policy for data consumer rApp, configuring data processing before the data is delivered.

Step 6: Data registration and subscription function sends subscription request to the data producer rApp through R1 termination.

Step 7: Data producer rApp sends subscription response to data registration and subscription function through R1 termination.

Step 8: Data registration and subscription function creates/updates data collection policy for subscribed data, using the services provided by data policy administration function.

Step 9: Data registration and subscription function creates/updates data processing policy for data producer rApp, configuring data processing before the data is stored.

Step 10: Data producer rApp sends a notification to data storage function through R1 termination, after an event trigger (e.g., production data is ready to be collected). Data producer rApp pushes the data to data storage function.

Step 11: Data processing function conducts processing on the collected data, based on the processing policy specified in Step 9.

Step 12: Data storage function writes the data into data storage.

Step 13: Data storage function reads the data from data storage after an event trigger (e.g., stored data should be delivered to the data consumer).

Step 14: Data processing function conducts processing on the data, based on the processing policy specified in Step 5.

Step 15: Data storage function sends a notification to the data consumer rApp through R1 termination, and it pushes the data to the data consumer rApp.

It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.

In some embodiments, the electronic device(s), network(s), system(s), chip(s) or component(s), or portions or implementations thereof, of FIGS. 9-13, or some other figure herein, may be configured to perform one or more processes, techniques, or methods as described herein, or portions thereof. One such process is depicted in FIG. 8.

For example, the process may include, at 802, identifying a first request received from a data consumer non-RT RIC application (rApp), wherein the first request is received over an R1 termination interface.

The process further includes, at 804, causing to send a first response to the data consumer rApp in response to the first request.

The process further includes, at 806, identifying a data producer rApp by checking a data catalog in order to satisfy the first request.

The process further includes, at 808, causing to send a notification frame to the data consumer rApp over the R1 termination interface indicating that data will be delivered to the data consumer rApp.

In one or more embodiments, the first request is a data subscription request and the first response is a data subscription response.

The process may further include causing to send a second request to the data producer rApp over the R1 termination interface and identifying a second response from the data producer rApp.

In one or more embodiments, the second request is a data subscription request and the second response is a data subscription response.

The process may further include identifying a registration request received from the data producer rApp and causing to send a registration response to the data producer rApp, wherein the registration response is sent after a data management function performs a data catalog check to determine whether the same data type is found in another data producer rApp.

The process may further include identifying a discovery request received from the data consumer rApp, and causing to send a discovery response to the data consumer rApp.

The process may further include checking a discovery policy associated with a data type of a data registration request received from the data consumer rApp, and determining whether the data consumer rApp is allowed to discover the data type.

In one or more embodiments, the data catalog comprises one or more registered data types associated with one or more data producer rApps.

The process may further include creating or updating a discovery policy for registered data types using a data policy administration service produced by a data policy administration function.

The process may further include updating the data catalog, and adding the data type indicated in a registration request received from the data producer rApp into a list of known data types, based on the discovery policy.

For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.

It is understood that the above descriptions are for purposes of illustration and are not meant to be limiting.

FIGS. 9-13 illustrate various systems, devices, and components that may implement aspects of disclosed embodiments.

FIG. 9 illustrates an example network architecture 900 according to various embodiments. The network 900 may operate in a manner consistent with 3GPP technical specifications for LTE or 5G/NR systems. However, the example embodiments are not limited in this regard and the described embodiments may apply to other networks that benefit from the principles described herein, such as future 3GPP systems, or the like.

The network 900 includes a UE 902, which is any mobile or non-mobile computing device designed to communicate with a RAN 904 via an over-the-air connection. The UE 902 is communicatively coupled with the RAN 904 by a Uu interface, which may be applicable to both LTE and NR systems. Examples of the UE 902 include, but are not limited to, a smartphone, tablet computer, wearable computer, desktop computer, laptop computer, in-vehicle infotainment system, in-car entertainment system, instrument cluster, head-up display (HUD) device, onboard diagnostic device, dashtop mobile equipment, mobile data terminal, electronic engine management system, electronic/engine control unit, electronic/engine control module, embedded system, sensor, microcontroller, control module, engine management system, networked appliance, machine-type communication device, machine-to-machine (M2M), device-to-device (D2D), machine-type communication (MTC) device, Internet of Things (IOT) device, and/or the like. The network 900 may include a plurality of UEs 902 coupled directly with one another via a D2D, ProSe, PC5, and/or sidelink (SL) interface. These UEs 902 may be M2M/D2D/MTC/IOT devices and/or vehicular systems that communicate using physical sidelink channels such as, but not limited to, PSBCH, PSDCH, PSSCH, PSCCH, PSFCH, etc. The UE 902 may perform blind decoding attempts of SL channels/links according to the various embodiments herein.

In some embodiments, the UE 902 may additionally communicate with an AP 906 via an over-the-air (OTA) connection. The AP 906 manages a WLAN connection, which may serve to offload some/all network traffic from the RAN 904. The connection between the UE 902 and the AP 906 may be consistent with any IEEE 802.11 protocol. Additionally, the UE 902, RAN 904, and AP 906 may utilize cellular-WLAN aggregation/integration (e.g., LWA/LWIP). Cellular-WLAN aggregation may involve the UE 902 being configured by the RAN 904 to utilize both cellular radio resources and WLAN resources.

The RAN 904 includes one or more access network nodes (ANs) 908. The ANs 908 terminate air-interface(s) for the UE 902 by providing access stratum protocols including RRC, PDCP, RLC, MAC, and PHY/L1 protocols. In this manner, the AN 908 enables data/voice connectivity between the CN 920 and the UE 902. The ANs 908 may be a macrocell base station or a low power base station for providing femtocells, picocells, or other like cells having smaller coverage areas, smaller user capacity, or higher bandwidth compared to macrocells, or some combination thereof. In these implementations, an AN 908 may be referred to as a BS, gNB, RAN node, eNB, ng-eNB, NodeB, RSU, TRxP, etc.

One example implementation is a “CU/DU split” architecture where the ANs 908 are embodied as a gNB-Central Unit (CU) that is communicatively coupled with one or more gNB-Distributed Units (DUs), where each DU may be communicatively coupled with one or more Radio Units (RUs) (also referred to as RRHs, RRUs, or the like) (see e.g., 3GPP TS 38.401 v16.1.0 (2020-03)). In some implementations, the one or more RUs may be individual RSUs. In some implementations, the CU/DU split may include an ng-eNB-CU and one or more ng-eNB-DUs instead of, or in addition to, the gNB-CU and gNB-DUs, respectively. The ANs 908 employed as the CU may be implemented in a discrete device or as one or more software entities running on server computers as part of, for example, a virtual network including a virtual Base Band Unit (BBU) or BBU pool, cloud RAN (CRAN), Radio Equipment Controller (REC), Radio Cloud Center (RCC), centralized RAN (C-RAN), virtualized RAN (vRAN), and/or the like (although these terms may refer to different implementation concepts). Any other type of architectures, arrangements, and/or configurations can be used.

The plurality of ANs may be coupled with one another via an X2 interface (if the RAN 904 is an LTE RAN or Evolved Universal Terrestrial Radio Access Network (E-UTRAN) 910) or an Xn interface (if the RAN 904 is a NG-RAN 914). The X2/Xn interfaces, which may be separated into control/user plane interfaces in some embodiments, may allow the ANs to communicate information related to handovers, data/context transfers, mobility, load management, interference coordination, etc.

The ANs of the RAN 904 may each manage one or more cells, cell groups, component carriers, etc. to provide the UE 902 with an air interface for network access. The UE 902 may be simultaneously connected with a plurality of cells provided by the same or different ANs 908 of the RAN 904. For example, the UE 902 and RAN 904 may use carrier aggregation to allow the UE 902 to connect with a plurality of component carriers, each corresponding to a Pcell or Scell. In dual connectivity scenarios, a first AN 908 may be a master node that provides an MCG and a second AN 908 may be a secondary node that provides an SCG. The first/second ANs 908 may be any combination of eNB, gNB, ng-eNB, etc.

The RAN 904 may provide the air interface over a licensed spectrum or an unlicensed spectrum. To operate in the unlicensed spectrum, the nodes may use LAA, eLAA, and/or feLAA mechanisms based on CA technology with PCells/Scells. Prior to accessing the unlicensed spectrum, the nodes may perform medium/carrier-sensing operations based on, for example, a listen-before-talk (LBT) protocol.

In V2X scenarios the UE 902 or AN 908 may be or act as a roadside unit (RSU), which may refer to any transportation infrastructure entity used for V2X communications. An RSU may be implemented in or by a suitable AN or a stationary (or relatively stationary) UE. An RSU implemented in or by: a UE may be referred to as a “UE-type RSU”; an eNB may be referred to as an “eNB-type RSU”; a gNB may be referred to as a “gNB-type RSU”; and the like. In one example, an RSU is a computing device coupled with radio frequency circuitry located on a roadside that provides connectivity support to passing vehicle UEs. The RSU may also include internal data storage circuitry to store intersection map geometry, traffic statistics, media, as well as applications/software to sense and control ongoing vehicular and pedestrian traffic. The RSU may provide very low latency communications required for high speed events, such as crash avoidance, traffic warnings, and the like. Additionally or alternatively, the RSU may provide other cellular/WLAN communications services. The components of the RSU may be packaged in a weatherproof enclosure suitable for outdoor installation, and may include a network interface controller to provide a wired connection (e.g., Ethernet) to a traffic signal controller or a backhaul network.

In some embodiments, the RAN 904 may be an E-UTRAN 910 with one or more eNBs 912. The E-UTRAN 910 provides an LTE air interface (Uu) with the following characteristics: SCS of 15 kHz; CP-OFDM waveform for DL and SC-FDMA waveform for UL; turbo codes for data and TBCC for control; etc. The LTE air interface may rely on CSI-RS for CSI acquisition and beam management; PDSCH/PDCCH DMRS for PDSCH/PDCCH demodulation; and CRS for cell search and initial acquisition, channel quality measurements, and channel estimation for coherent demodulation/detection at the UE. The LTE air interface may operate on sub-6 GHz bands.

In some embodiments, the RAN 904 may be a next generation (NG)-RAN 914 with one or more gNBs 916 and/or one or more ng-eNBs 918. The gNB 916 connects with 5G-enabled UEs 902 using a 5G NR interface. The gNB 916 connects with a 5GC 940 through an NG interface, which includes an N2 interface or an N3 interface. The ng-eNB 918 also connects with the 5GC 940 through an NG interface, but may connect with a UE 902 via the Uu interface. The gNB 916 and the ng-eNB 918 may connect with each other over an Xn interface.

In some embodiments, the NG interface may be split into two parts, an NG user plane (NG-U) interface, which carries traffic data between the nodes of the NG-RAN 914 and a UPF 948 (e.g., N3 interface), and an NG control plane (NG-C) interface, which is a signaling interface between the nodes of the NG-RAN 914 and an AMF 944 (e.g., N2 interface).

The NG-RAN 914 may provide a 5G-NR air interface (which may also be referred to as a Uu interface) with the following characteristics: variable SCS; CP-OFDM for DL, CP-OFDM and DFT-s-OFDM for UL; polar, repetition, simplex, and Reed-Muller codes for control and LDPC for data. The 5G-NR air interface may rely on CSI-RS and PDSCH/PDCCH DMRS similar to the LTE air interface. The 5G-NR air interface may not use a CRS, but may use PBCH DMRS for PBCH demodulation, PTRS for phase tracking for PDSCH, and a tracking reference signal for time tracking. The 5G-NR air interface may operate on FR1 bands that include sub-6 GHz bands or FR2 bands that include bands from 24.25 GHz to 52.6 GHz. The 5G-NR air interface may include an SSB that is an area of a downlink resource grid that includes PSS/SSS/PBCH.

The 5G-NR air interface may utilize BWPs for various purposes. For example, a BWP can be used for dynamic adaptation of the SCS. For example, the UE 902 can be configured with multiple BWPs where each BWP configuration has a different SCS. When a BWP change is indicated to the UE 902, the SCS of the transmission is changed as well. Another BWP use case is related to power saving. In particular, multiple BWPs can be configured for the UE 902 with different amounts of frequency resources (e.g., PRBs) to support data transmission under different traffic loading scenarios. A BWP containing a smaller number of PRBs can be used for data transmission with a small traffic load while allowing power saving at the UE 902 and, in some cases, at the gNB 916. A BWP containing a larger number of PRBs can be used for scenarios with higher traffic load.
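
As a rough, non-normative illustration of this power-saving use, the following sketch selects the narrowest configured BWP whose PRB count still covers the estimated traffic demand; the BWP parameters and the selection rule are assumptions of this example rather than 3GPP-specified behavior.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BWPConfig:
    bwp_id: int
    scs_khz: int       # subcarrier spacing configured for this BWP
    num_prbs: int      # frequency resources (PRBs) configured for this BWP

def select_bwp(configured: List[BWPConfig], required_prbs: int) -> BWPConfig:
    """Pick the smallest sufficient BWP to save power, or the widest one under heavy load."""
    sufficient = [bwp for bwp in configured if bwp.num_prbs >= required_prbs]
    if sufficient:
        return min(sufficient, key=lambda bwp: bwp.num_prbs)
    return max(configured, key=lambda bwp: bwp.num_prbs)

bwps = [BWPConfig(0, 15, 24), BWPConfig(1, 30, 51), BWPConfig(2, 30, 273)]
print(select_bwp(bwps, required_prbs=40).bwp_id)   # small traffic load -> BWP 1
```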

The RAN 904 is communicatively coupled to CN 920 that includes network elements and/or network functions (NFs) to provide various functions to support data and telecommunications services to customers/subscribers (e.g., UE 902). The components of the CN 920 may be implemented in one physical node or separate physical nodes. In some embodiments, NFV may be utilized to virtualize any or all of the functions provided by the network elements of the CN 920 onto physical compute/storage resources in servers, switches, etc. A logical instantiation of the CN 920 may be referred to as a network slice, and a logical instantiation of a portion of the CN 920 may be referred to as a network sub-slice.

The CN 920 may be an LTE CN 922 (also referred to as an Evolved Packet Core (EPC) 922). The EPC 922 may include MME 924, SGW 926, SGSN 928, HSS 930, PGW 932, and PCRF 934 coupled with one another over interfaces (or “reference points”) as shown. The NFs in the EPC 922 are briefly introduced as follows.

The MME 924 implements mobility management functions to track a current location of the UE 902 to facilitate paging, bearer activation/deactivation, handovers, gateway selection, authentication, etc.

The SGW 926 terminates an S1 interface toward the RAN 910 and routes data packets between the RAN 910 and the EPC 922. The SGW 926 may be a local mobility anchor point for inter-RAN node handovers and also may provide an anchor for inter-3GPP mobility. Other responsibilities may include lawful intercept, charging, and some policy enforcement.

The SGSN 928 tracks a location of the UE 902 and performs security functions and access control. The SGSN 928 also performs inter-EPC node signaling for mobility between different RAT networks; PDN and S-GW selection as specified by the MME 924; MME 924 selection for handovers; etc. The S3 reference point between the MME 924 and the SGSN 928 enables user and bearer information exchange for inter-3GPP access network mobility in idle/active states.

The HSS 930 includes a database for network users, including subscription-related information to support the network entities' handling of communication sessions. The HSS 930 can provide support for routing/roaming, authentication, authorization, naming/addressing resolution, location dependencies, etc. An S6a reference point between the HSS 930 and the MME 924 may enable transfer of subscription and authentication data for authenticating/authorizing user access to the EPC 922.

The PGW 932 may terminate an SGi interface toward a data network (DN) 936 that may include an application (app)/content server 938. The PGW 932 routes data packets between the EPC 922 and the data network 936. The PGW 932 is communicatively coupled with the SGW 926 by an S5 reference point to facilitate user plane tunneling and tunnel management. The PGW 932 may further include a node for policy enforcement and charging data collection (e.g., PCEF). Additionally, the SGi reference point may communicatively couple the PGW 932 with the same or different data network 936. The PGW 932 may be communicatively coupled with a PCRF 934 via a Gx reference point.

The PCRF 934 is the policy and charging control element of the EPC 922. The PCRF 934 is communicatively coupled to the app/content server 938 to determine appropriate QoS and charging parameters for service flows. The PCRF 934 also provisions associated rules into a PCEF (via the Gx reference point) with an appropriate TFT and QCI.

The CN 920 may be a 5GC 940 including an AUSF 942, AMF 944, SMF 946, UPF 948, NSSF 950, NEF 952, NRF 954, PCF 956, UDM 958, and AF 960 coupled with one another over various interfaces as shown. The NFs in the 5GC 940 are briefly introduced as follows.

The AUSF 942 stores data for authentication of the UE 902 and handles authentication-related functionality. The AUSF 942 may facilitate a common authentication framework for various access types.

The AMF 944 allows other functions of the 5GC 940 to communicate with the UE 902 and the RAN 904 and to subscribe to notifications about mobility events with respect to the UE 902. The AMF 944 is also responsible for registration management (e.g., for registering UE 902), connection management, reachability management, mobility management, lawful interception of AMF-related events, and access authentication and authorization. The AMF 944 provides transport for SM messages between the UE 902 and the SMF 946, and acts as a transparent proxy for routing SM messages. AMF 944 also provides transport for SMS messages between UE 902 and an SMSF. AMF 944 interacts with the AUSF 942 and the UE 902 to perform various security anchor and context management functions. Furthermore, AMF 944 is a termination point of a RAN-CP interface, which includes the N2 reference point between the RAN 904 and the AMF 944. The AMF 944 is also a termination point of NAS (N1) signaling, and performs NAS ciphering and integrity protection.

The AMF 944 also supports NAS signaling with the UE 902 over an N3IWF interface. The N3IWF provides access to untrusted entities. The N3IWF may be a termination point for the N2 interface between the (R)AN 904 and the AMF 944 for the control plane, and may be a termination point for the N3 reference point between the (R)AN 914 and the UPF 948 for the user plane. As such, the N3IWF handles N2 signalling from the SMF 946 and the AMF 944 for PDU sessions and QoS, encapsulates/de-encapsulates packets for IPsec and N3 tunnelling, marks N3 user-plane packets in the uplink, and enforces QoS corresponding to N3 packet marking, taking into account QoS requirements associated with such marking received over N2. The N3IWF may also relay UL and DL control-plane NAS signalling between the UE 902 and the AMF 944 via an N1 reference point between the UE 902 and the AMF 944, and relay uplink and downlink user-plane packets between the UE 902 and the UPF 948. The N3IWF also provides mechanisms for IPsec tunnel establishment with the UE 902. The AMF 944 may exhibit an Namf service-based interface, and may be a termination point for an N14 reference point between two AMFs 944 and an N17 reference point between the AMF 944 and a 5G-EIR (not shown by FIG. 9).

The SMF 946 is responsible for SM (e.g., session establishment, tunnel management between UPF 948 and AN 908); UE IP address allocation and management (including optional authorization); selection and control of UP function; configuring traffic steering at UPF 948 to route traffic to proper destination; termination of interfaces toward policy control functions; controlling part of policy enforcement, charging, and QoS; lawful intercept (for SM events and interface to LI system); termination of SM parts of NAS messages; downlink data notification; initiating AN specific SM information, sent via AMF 944 over N2 to AN 908; and determining SSC mode of a session. SM refers to management of a PDU session, and a PDU session or “session” refers to a PDU connectivity service that provides or enables the exchange of PDUs between the UE 902 and the DN 936.

The UPF 948 acts as an anchor point for intra-RAT and inter-RAT mobility, an external PDU session point of interconnect to the data network 936, and a branching point to support multi-homed PDU sessions. The UPF 948 also performs packet routing and forwarding, packet inspection, enforcement of the user plane part of policy rules, lawful intercept of packets (UP collection), traffic usage reporting, QoS handling for the user plane (e.g., packet filtering, gating, UL/DL rate enforcement), uplink traffic verification (e.g., SDF-to-QoS flow mapping), transport level packet marking in the uplink and downlink, and downlink packet buffering and downlink data notification triggering. The UPF 948 may include an uplink classifier to support routing traffic flows to a data network.

The NSSF 950 selects a set of network slice instances serving the UE 902. The NSSF 950 also determines allowed NSSAI and the mapping to the subscribed S-NSSAIs, if needed. The NSSF 950 also determines an AMF set to be used to serve the UE 902, or a list of candidate AMFs 944 based on a suitable configuration and possibly by querying the NRF 954. The selection of a set of network slice instances for the UE 902 may be triggered by the AMF 944 with which the UE 902 is registered by interacting with the NSSF 950; this may lead to a change of AMF 944. The NSSF 950 interacts with the AMF 944 via an N22 reference point; and may communicate with another NSSF in a visited network via an N31 reference point (not shown).

The NEF 952 securely exposes services and capabilities provided by 3GPP NFs for third parties, internal exposure/re-exposure, AFs 960, and edge computing or fog computing systems (e.g., edge compute nodes). In such embodiments, the NEF 952 may authenticate, authorize, or throttle the AFs. The NEF 952 may also translate information exchanged with the AF 960 and information exchanged with internal network functions. For example, the NEF 952 may translate between an AF-Service-Identifier and internal 5GC information. The NEF 952 may also receive information from other NFs based on exposed capabilities of other NFs. This information may be stored at the NEF 952 as structured data, or at a data storage NF using standardized interfaces. The stored information can then be re-exposed by the NEF 952 to other NFs and AFs, or used for other purposes such as analytics.

The NRF 954 supports service discovery functions, receives NF discovery requests from NF instances, and provides information of the discovered NF instances to the requesting NF instances. NRF 954 also maintains information of available NF instances and their supported services. The NRF 954 also supports service discovery functions, wherein the NRF 954 receives NF Discovery Request from NF instance or an SCP (not shown), and provides information of the discovered NF instances to the NF instance or SCP.

The PCF 956 provides policy rules to control plane functions to enforce them, and may also support a unified policy framework to govern network behavior. The PCF 956 may also implement a front end to access subscription information relevant for policy decisions in a UDR of the UDM 958. In addition to communicating with functions over reference points as shown, the PCF 956 exhibits an Npcf service-based interface.

The UDM 958 handles subscription-related information to support the network entities' handling of communication sessions, and stores subscription data of the UE 902. For example, subscription data may be communicated via an N8 reference point between the UDM 958 and the AMF 944. The UDM 958 may include two parts, an application front end and a UDR. The UDR may store subscription data and policy data for the UDM 958 and the PCF 956, and/or structured data for exposure and application data (including PFDs for application detection and application request information for multiple UEs 902) for the NEF 952. The Nudr service-based interface may be exhibited by the UDR to allow the UDM 958, PCF 956, and NEF 952 to access a particular set of the stored data, as well as to read, update (e.g., add, modify), delete, and subscribe to notification of relevant data changes in the UDR. The UDM may include a UDM-FE, which is in charge of processing credentials, location management, subscription management, and so on. Several different front ends may serve the same user in different transactions. The UDM-FE accesses subscription information stored in the UDR and performs authentication credential processing, user identification handling, access authorization, registration/mobility management, and subscription management. In addition to communicating with other NFs over reference points as shown, the UDM 958 may exhibit the Nudm service-based interface.

The AF 960 provides application influence on traffic routing, provides access to the NEF 952, and interacts with the policy framework for policy control. The AF 960 may influence UPF 948 (re)selection and traffic routing. Based on operator deployment, when the AF 960 is considered to be a trusted entity, the network operator may permit the AF 960 to interact directly with relevant NFs. Additionally, the AF 960 may be used for edge computing implementations.

The 5GC 940 may enable edge computing by selecting operator/3rd party services to be geographically close to a point that the UE 902 is attached to the network. This may reduce latency and load on the network. In edge computing implementations, the 5GC 940 may select a UPF 948 close to the UE 902 and execute traffic steering from the UPF 948 to DN 936 via the N6 interface. This may be based on the UE subscription data, UE location, and information provided by the AF 960, which allows the AF 960 to influence UPF (re)selection and traffic routing.

The data network (DN) 936 may represent various network operator services, Internet access, or third party services that may be provided by one or more servers including, for example, the application (app)/content server 938. The DN 936 may be an operator external public PDN, a private PDN, or an intra-operator packet data network, for example, for provision of IMS services. In this embodiment, the app server 938 can be coupled to an IMS via an S-CSCF or the I-CSCF. In some implementations, the DN 936 may represent one or more local area DNs (LADNs), which are DNs 936 (or DN names (DNNs)) that are accessible by a UE 902 in one or more specific areas. Outside of these specific areas, the UE 902 is not able to access the LADN/DN 936.

Additionally or alternatively, the DN 936 may be an Edge DN 936, which is a (local) Data Network that supports the architecture for enabling edge applications. In these embodiments, the app server 938 may represent the physical hardware systems/devices providing app server functionality and/or the application software resident in the cloud or at an edge compute node that performs server function(s). In some embodiments, the app/content server 938 provides an edge hosting environment that provides support required for Edge Application Server's execution.

In some embodiments, the 5GS can use one or more edge compute nodes to provide an interface and offload processing of wireless communication traffic. In these embodiments, the edge compute nodes may be included in, or co-located with one or more RAN 910, 914. For example, the edge compute nodes can provide a connection between the RAN 914 and UPF 948 in the 5GC 940. The edge compute nodes can use one or more NFV instances instantiated on virtualization infrastructure within the edge compute nodes to process wireless connections to and from the RAN 914 and UPF 948.

The interfaces of the 5GC 940 include reference points and service-based interfaces. The reference points include: N1 (between the UE 902 and the AMF 944), N2 (between RAN 914 and AMF 944), N3 (between RAN 914 and UPF 948), N4 (between the SMF 946 and UPF 948), N5 (between PCF 956 and AF 960), N6 (between UPF 948 and DN 936), N7 (between SMF 946 and PCF 956), N8 (between UDM 958 and AMF 944), N9 (between two UPFs 948), N10 (between the UDM 958 and the SMF 946), N11 (between the AMF 944 and the SMF 946), N12 (between AUSF 942 and AMF 944), N13 (between AUSF 942 and UDM 958), N14 (between two AMFs 944; not shown), N15 (between PCF 956 and AMF 944 in case of a non-roaming scenario, or between the PCF 956 in a visited network and AMF 944 in case of a roaming scenario), N16 (between two SMFs 946; not shown), and N22 (between AMF 944 and NSSF 950). Other reference point representations not shown in FIG. 9 can also be used. The service-based representation of FIG. 9 represents NFs within the control plane that enable other authorized NFs to access their services. The service-based interfaces (SBIs) include: Namf (SBI exhibited by AMF 944), Nsmf (SBI exhibited by SMF 946), Nnef (SBI exhibited by NEF 952), Npcf (SBI exhibited by PCF 956), Nudm (SBI exhibited by the UDM 958), Naf (SBI exhibited by AF 960), Nnrf (SBI exhibited by NRF 954), Nnssf (SBI exhibited by NSSF 950), Nausf (SBI exhibited by AUSF 942). Other service-based interfaces (e.g., Nudr, N5geir, and Nudsf) not shown in FIG. 9 can also be used. In some embodiments, the NEF 952 can provide an interface to edge compute nodes 936x, which can be used to process wireless connections with the RAN 914.

In some implementations, the system 900 may include an SMSF, which is responsible for SMS subscription checking and verification, and for relaying SM messages to/from the UE 902 to/from other entities, such as an SMS-GMSC/IWMSC/SMS-router. The SMSF may also interact with the AMF 944 and the UDM 958 for a notification procedure indicating that the UE 902 is available for SMS transfer (e.g., setting a UE-not-reachable flag, and notifying the UDM 958 when the UE 902 is available for SMS).

The 5GS may also include an SCP (or individual instances of the SCP) that supports indirect communication (see e.g., 3GPP TS 23.501 section 7.1.1); delegated discovery (see e.g., 3GPP TS 23.501 section 7.1.1); message forwarding and routing to destination NF/NF service(s), communication security (e.g., authorization of the NF Service Consumer to access the NF Service Producer API) (see e.g., 3GPP TS 33.501), load balancing, monitoring, overload control, etc.; and discovery and selection functionality for UDM(s), AUSF(s), UDR(s), PCF(s) with access to subscription data stored in the UDR based on UE's SUPI, SUCI or GPSI (see e.g., 3GPP TS 23.501 section 6.3). Load balancing, monitoring, overload control functionality provided by the SCP may be implementation specific. The SCP may be deployed in a distributed manner. More than one SCP can be present in the communication path between various NF Services. The SCP, although not an NF instance, can also be deployed distributed, redundant, and scalable.

FIG. 10 schematically illustrates a wireless network 1000 in accordance with various embodiments. The wireless network 1000 may include a UE 1002 in wireless communication with an AN 1004. The UE 1002 and AN 1004 may be similar to, and substantially interchangeable with, like-named components described with respect to FIG. 9.

The UE 1002 may be communicatively coupled with the AN 1004 via connection 1006. The connection 1006 is illustrated as an air interface to enable communicative coupling, and can be consistent with cellular communications protocols such as an LTE protocol or a 5G NR protocol operating at mmWave or sub-6 GHz frequencies.

The UE 1002 may include a host platform 1008 coupled with a modem platform 1010. The host platform 1008 may include application processing circuitry 1012, which may be coupled with protocol processing circuitry 1014 of the modem platform 1010. The application processing circuitry 1012 may run various applications for the UE 1002 that source/sink application data. The application processing circuitry 1012 may further implement one or more layer operations to transmit/receive application data to/from a data network. These layer operations may include transport (for example, UDP) and Internet (for example, IP) operations.

The protocol processing circuitry 1014 may implement one or more of layer operations to facilitate transmission or reception of data over the connection 1006. The layer operations implemented by the protocol processing circuitry 1014 may include, for example, MAC, RLC, PDCP, RRC and NAS operations.

The modem platform 1010 may further include digital baseband circuitry 1016 that may implement one or more layer operations that are “below” layer operations performed by the protocol processing circuitry 1014 in a network protocol stack. These operations may include, for example, PHY operations including one or more of HARQ acknowledgement (ACK) functions, scrambling/descrambling, encoding/decoding, layer mapping/de-mapping, modulation symbol mapping, received symbol/bit metric determination, multi-antenna port precoding/decoding, which may include one or more of space-time, space-frequency or spatial coding, reference signal generation/detection, preamble sequence generation and/or decoding, synchronization sequence generation/detection, control channel signal blind decoding, and other related functions.

The modem platform 1010 may further include transmit circuitry 1018, receive circuitry 1020, RF circuitry 1022, and RF front end (RFFE) 1024, which may include or connect to one or more antenna panels 1026. Briefly, the transmit circuitry 1018 may include a digital-to-analog converter, mixer, intermediate frequency (IF) components, etc.; the receive circuitry 1020 may include an analog-to-digital converter, mixer, IF components, etc.; the RF circuitry 1022 may include a low-noise amplifier, a power amplifier, power tracking components, etc.; the RFFE 1024 may include filters (for example, surface/bulk acoustic wave filters), switches, antenna tuners, beamforming components (for example, phase-array antenna components), etc. The selection and arrangement of the components of the transmit circuitry 1018, receive circuitry 1020, RF circuitry 1022, RFFE 1024, and antenna panels 1026 (referred to generically as “transmit/receive components”) may be specific to details of a specific implementation such as, for example, whether communication is TDM or FDM, in mmWave or sub-6 GHz frequencies, etc. In some embodiments, the transmit/receive components may be arranged in multiple parallel transmit/receive chains, may be disposed in the same or different chips/modules, etc.

In some embodiments, the protocol processing circuitry 1014 may include one or more instances of control circuitry (not shown) to provide control functions for the transmit/receive components.

A UE 1002 reception may be established by and via the antenna panels 1026, RFFE 1024, RF circuitry 1022, receive circuitry 1020, digital baseband circuitry 1016, and protocol processing circuitry 1014. In some embodiments, the antenna panels 1026 may receive a transmission from the AN 1004 by receive-beamforming signals received by a plurality of antennas/antenna elements of the one or more antenna panels 1026.

A UE 1002 transmission may be established by and via the protocol processing circuitry 1014, digital baseband circuitry 1016, transmit circuitry 1018, RF circuitry 1022, RFFE 1024, and antenna panels 1026. In some embodiments, the transmit components of the UE 1002 may apply a spatial filter to the data to be transmitted to form a transmit beam emitted by the antenna elements of the antenna panels 1026.

Similar to the UE 1002, the AN 1004 may include a host platform 1028 coupled with a modem platform 1030. The host platform 1028 may include application processing circuitry 1032 coupled with protocol processing circuitry 1034 of the modem platform 1030. The modem platform may further include digital baseband circuitry 1036, transmit circuitry 1038, receive circuitry 1040, RF circuitry 1042, RFFE circuitry 1044, and antenna panels 1046. The components of the AN 1004 may be similar to and substantially interchangeable with like-named components of the UE 1002. In addition to performing data transmission/reception as described above, the components of the AN 1004 may perform various logical functions that include, for example, RNC functions such as radio bearer management, uplink and downlink dynamic radio resource management, and data packet scheduling.

FIG. 11 illustrates components of a computing device 1100 according to some example embodiments, able to read instructions from a machine-readable or computer-readable medium (e.g., a non-transitory machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically, FIG. 11 shows a diagrammatic representation of hardware resources 1100 including one or more processors (or processor cores) 1110, one or more memory/storage devices 1120, and one or more communication resources 1130, each of which may be communicatively coupled via a bus 1140 or other interface circuitry. For embodiments where node virtualization (e.g., NFV) is utilized, a hypervisor 1102 may be executed to provide an execution environment for one or more network slices/sub-slices to utilize the hardware resources 1100.

The processors 1110 include, for example, processor 1112 and processor 1114. The processors 1110 include circuitry such as, but not limited to, one or more processor cores and one or more of cache memory, low drop-out voltage regulators (LDOs), interrupt controllers, serial interfaces such as SPI, I2C or a universal programmable serial interface circuit, a real time clock (RTC), timer-counters including interval and watchdog timers, general purpose I/O, memory card controllers such as secure digital/multi-media card (SD/MMC) or similar interfaces, mobile industry processor interface (MIPI) interfaces, and Joint Test Access Group (JTAG) test access ports. The processors 1110 may be, for example, a central processing unit (CPU), reduced instruction set computing (RISC) processors, Acorn RISC Machine (ARM) processors, complex instruction set computing (CISC) processors, graphics processing units (GPUs), one or more Digital Signal Processors (DSPs) such as a baseband processor, Application-Specific Integrated Circuits (ASICs), a Field-Programmable Gate Array (FPGA), a radio-frequency integrated circuit (RFIC), one or more microprocessors or controllers, another processor (including those discussed herein), or any suitable combination thereof. In some implementations, the processor circuitry 1110 may include one or more hardware accelerators, which may be microprocessors, programmable processing devices (e.g., FPGAs, complex programmable logic devices (CPLDs), etc.), or the like.

The memory/storage devices 1120 may include main memory, disk storage, or any suitable combination thereof. The memory/storage devices 1120 may include, but are not limited to, any type of volatile, non-volatile, or semi-volatile memory such as random access memory (RAM), dynamic RAM (DRAM), static RAM (SRAM), synchronous DRAM (SDRAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), Flash memory, solid-state storage, phase change RAM (PRAM), resistive memory such as magnetoresistive random access memory (MRAM), etc., and may incorporate three-dimensional (3D) cross-point (XPOINT) memories from Intel® and Micron®. The memory/storage devices 1120 may also comprise persistent storage devices, which may be temporal and/or persistent storage of any type, including, but not limited to, non-volatile memory, optical, magnetic, and/or solid state mass storage, and so forth.

The communication resources 1130 may include interconnection or network interface controllers, components, or other suitable devices to communicate with one or more peripheral devices 1104 or one or more databases 1106 or other network elements via a network 1108. For example, the communication resources 1130 may include wired communication components (e.g., for coupling via USB, Ethernet, Ethernet over GRE Tunnels, Ethernet over Multiprotocol Label Switching (MPLS), Ethernet over USB, Controller Area Network (CAN), Local Interconnect Network (LIN), DeviceNet, ControlNet, Data Highway+, PROFIBUS, or PROFINET, among many others), cellular communication components, NFC components, Bluetooth® (or Bluetooth® Low Energy) components, WiFi® components, and other communication components. Network connectivity may be provided to/from the computing device 1100 via the communication resources 1130 using a physical connection, which may be electrical (e.g., a “copper interconnect”) or optical. The physical connection also includes suitable input connectors (e.g., ports, receptacles, sockets, etc.) and output connectors (e.g., plugs, pins, etc.). The communication resources 1130 may include one or more dedicated processors and/or FPGAs to communicate using one or more of the aforementioned network interface protocols.

Instructions 1150 may comprise software, a program, an application, an applet, an app, or other executable code for causing at least one of the processors 1110 to perform any one or more of the methodologies discussed herein. The instructions 1150 may reside, completely or partially, within at least one of the processors 1110 (e.g., within the processor's cache memory), the memory/storage devices 1120, or any suitable combination thereof. Furthermore, any portion of the instructions 1150 may be transferred to the hardware resources 1100 from any combination of the peripheral devices 1104 or the databases 1106. Accordingly, the memory of the processors 1110, the memory/storage devices 1120, the peripheral devices 1104, and the databases 1106 are examples of computer-readable and machine-readable media.

FIG. 12 provides a high-level view of an Open RAN (O-RAN) architecture 1200. The O-RAN architecture 1200 includes four O-RAN defined interfaces, namely the A1 interface, the O1 interface, the O2 interface, and the Open Fronthaul Management (M)-plane interface, which connect the Service Management and Orchestration (SMO) framework 1202 to O-RAN network functions (NFs) 1204 and the O-Cloud 1206. The SMO 1202 (described in [O13]) also connects with an external system 1210, which provides enrichment data to the SMO 1202. FIG. 12 also illustrates that the A1 interface terminates at an O-RAN Non-Real Time (RT) RAN Intelligent Controller (RIC) 1212 in or at the SMO 1202 and at the O-RAN Near-RT RIC 1214 in or at the O-RAN NFs 1204. The O-RAN NFs 1204 can be VNFs such as VMs or containers, sitting above the O-Cloud 1206, and/or Physical Network Functions (PNFs) utilizing customized hardware. All O-RAN NFs 1204 are expected to support the O1 interface when interfacing the SMO framework 1202. The O-RAN NFs 1204 connect to the NG-Core 1208 via the NG interface (which is a 3GPP defined interface). The Open Fronthaul M-plane interface between the SMO 1202 and the O-RAN Radio Unit (O-RU) 1216 supports the O-RU 1216 management in the O-RAN hybrid model as specified in [O16]. The Open Fronthaul M-plane interface is an optional interface to the SMO 1202 that is included for backward compatibility purposes as per [O16], and is intended for management of the O-RU 1216 in hybrid mode only. The management architecture of flat mode and its relation to the O1 interface for the O-RU 1216 is for future study. The O-RU 1216 terminates the O1 interface towards the SMO 1202 as specified in [O12].

FIG. 13 shows an O-RAN logical architecture 1300 corresponding to the O-RAN architecture 1200 of FIG. 12. In FIG. 13, the SMO 1302 corresponds to the SMO 1202, the O-Cloud 1306 corresponds to the O-Cloud 1206, the non-RT RIC 1312 corresponds to the non-RT RIC 1212, the near-RT RIC 1314 corresponds to the near-RT RIC 1214, and the O-RU 1316 corresponds to the O-RU 1216 of FIG. 12, respectively. The O-RAN logical architecture 1300 includes a radio portion and a management portion.

The management portion/side of the architecture 1300 includes the SMO Framework 1302 containing the non-RT RIC 1312, and may include the O-Cloud 1306. The O-Cloud 1306 is a cloud computing platform including a collection of physical infrastructure nodes to host the relevant O-RAN functions (e.g., the near-RT RIC 1314, O-CU-CP 1321, O-CU-UP 1322, and the O-DU 1315), supporting software components (e.g., OSs, VMMs, container runtime engines, ML engines, etc.), and appropriate management and orchestration functions.

The radio portion/side of the logical architecture 1300 includes the near-RT RIC 1314, the O-RAN Distributed Unit (O-DU) 1315, the O-RU 1316, the O-RAN Central Unit-Control Plane (O-CU-CP) 1321, and the O-RAN Central Unit-User Plane (O-CU-UP) 1322 functions. The radio portion/side of the logical architecture 1300 may also include the O-e/gNB 1310.

The O-DU 1315 is a logical node hosting RLC, MAC, and higher PHY layer entities/elements (High-PHY layers) based on a lower layer functional split. The O-RU 1316 is a logical node hosting lower PHY layer entities/elements (Low-PHY layer) (e.g., FFT/iFFT, PRACH extraction, etc.) and RF processing elements based on a lower layer functional split. Virtualization of the O-RU 1316 is for further study (FFS). The O-CU-CP 1321 is a logical node hosting the RRC and the control plane (CP) part of the PDCP protocol. The O-CU-UP 1322 is a logical node hosting the user plane part of the PDCP protocol and the SDAP protocol.
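
By way of illustration only, the following sketch (Python, with hypothetical names not taken from any O-RAN specification) records the lower layer functional split described above as a simple mapping from each logical node to the protocol-layer entities it hosts.

    # Illustrative sketch only: hypothetical mapping of O-RAN logical nodes to the
    # protocol-layer entities each hosts under the lower layer functional split.
    FUNCTIONAL_SPLIT = {
        "O-DU": ["RLC", "MAC", "High-PHY"],
        "O-RU": ["Low-PHY (FFT/iFFT, PRACH extraction)", "RF"],
        "O-CU-CP": ["RRC", "PDCP (control plane)"],
        "O-CU-UP": ["PDCP (user plane)", "SDAP"],
    }

    def hosting_node(layer: str) -> str:
        """Return the logical node that hosts a given protocol-layer entity."""
        for node, hosted in FUNCTIONAL_SPLIT.items():
            if any(layer.lower() in entry.lower() for entry in hosted):
                return node
        raise KeyError(f"no node hosts layer {layer!r}")

    if __name__ == "__main__":
        print(hosting_node("SDAP"))  # O-CU-UP
        print(hosting_node("MAC"))   # O-DU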

An E2 interface terminates at a plurality of E2 nodes. The E2 nodes are logical nodes/entities that terminate the E2 interface. For NR/5G access, the E2 nodes include the O-CU-CP 1321, O-CU-UP 1322, O-DU 1315, or any combination of elements as defined in [O15]. For E-UTRA access, the E2 nodes include the O-e/gNB 1310. As shown in FIG. 13, the E2 interface also connects the O-e/gNB 1310 to the Near-RT RIC 1314. The protocols over the E2 interface are based exclusively on Control Plane (CP) protocols. The E2 functions are grouped into the following categories: (a) near-RT RIC 1314 services (REPORT, INSERT, CONTROL and POLICY, as described in [O15]); and (b) near-RT RIC 1314 support functions, which include E2 Interface Management (E2 Setup, E2 Reset, Reporting of General Error Situations, etc.) and Near-RT RIC Service Update (e.g., capability exchange related to the list of E2 Node functions exposed over E2).
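
For readability, a minimal sketch (Python; the enum members and helper below are hypothetical) of the two groupings of E2 functions listed above, i.e., near-RT RIC services and near-RT RIC support functions.

    from __future__ import annotations

    from enum import Enum

    class NearRtRicService(Enum):
        """Near-RT RIC services exposed over the E2 interface (per the description above)."""
        REPORT = "report"
        INSERT = "insert"
        CONTROL = "control"
        POLICY = "policy"

    class NearRtRicSupportFunction(Enum):
        """Near-RT RIC support functions carried over the E2 interface."""
        E2_INTERFACE_MANAGEMENT = "e2-interface-management"   # E2 Setup, E2 Reset, error reporting
        SERVICE_UPDATE = "near-rt-ric-service-update"          # capability exchange over E2

    def is_support_function(name: str) -> bool:
        # Hypothetical helper: classify an E2 function name into one of the two groups.
        return name in {f.value for f in NearRtRicSupportFunction}

    if __name__ == "__main__":
        print([s.name for s in NearRtRicService])              # ['REPORT', 'INSERT', 'CONTROL', 'POLICY']
        print(is_support_function("e2-interface-management"))  # True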

FIG. 13 shows the Uu interface between a UE 1301 and O-e/gNB 1310 as well as between the UE 1301 and O-RAN components. The Uu interface is a 3GPP defined interface (see e.g., sections 5.2 and 5.3 of [O07]), which includes a complete protocol stack from L1 to L3 and terminates in the NG-RAN or E-UTRAN. The O-e/gNB 1310 is an LTE eNB [O04], a 5G gNB, or an ng-eNB that supports the E2 interface. The O-e/gNB 1310 may be the same as or similar to eNB 912, gNB 916, and/or AN 1004 discussed previously. The UE 1301 may correspond to UEs 902 and/or 1002 discussed previously, and/or the like. There may be multiple UEs 1301 and/or multiple O-e/gNBs 1310, each of which may be connected to one another via respective Uu interfaces. Although not shown in FIG. 13, the O-e/gNB 1310 supports O-DU 1315 and O-RU 1316 functions with an Open Fronthaul interface between them.

The Open Fronthaul (OF) interface(s) is/are between the O-DU 1315 and O-RU 1316 functions [O17]. The OF interface(s) includes the Control, User and Synchronization (CUS) Plane and Management (M) Plane. FIGS. 12 and 13 also show that the O-RU 1316 terminates the OF M-Plane interface towards the O-DU 1315 and optionally towards the SMO 1302 as specified in [O16]. The O-RU 1316 terminates the OF CUS-Plane interface towards the O-DU 1315 and the SMO 1302.

The F1-c interface connects the O-CU-CP 1321 with the O-DU 1315. As defined by 3GPP, the F1-c interface is between the gNB-CU-CP and gNB-DU nodes [O10]. However, for purposes of O-RAN, the F1-c interface is adopted between the O-CU-CP 1321 and the O-DU 1315 functions while reusing the principles and protocol stack defined by 3GPP and the definition of interoperability profile specifications.

The F1-u interface connects the O-CU-UP 1322 with the O-DU 1315. As defined by 3GPP, the F1-u interface is between the gNB-CU-UP and gNB-DU nodes [O10]. However, for purposes of O-RAN, the F1-u interface is adopted between the O-CU-UP 1322 and the O-DU 1315 functions while reusing the principles and protocol stack defined by 3GPP and the definition of interoperability profile specifications.

The NG-c interface is defined by 3GPP as an interface between the gNB-CU-CP and the AMF in the 5GC [O06]. The NG-c is also referred to as the N2 interface (see [O06]). The NG-u interface is defined by 3GPP as an interface between the gNB-CU-UP and the UPF in the 5GC [O06]. The NG-u interface is also referred to as the N3 interface (see [O06]). In O-RAN, NG-c and NG-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.

The X2-c interface is defined in 3GPP for transmitting control plane information between eNBs or between eNB and en-gNB in EN-DC. The X2-u interface is defined in 3GPP for transmitting user plane information between eNBs or between eNB and en-gNB in EN-DC (see e.g., [O05], [O06]). In O-RAN, X2-c and X2-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.

The Xn-c interface is defined in 3GPP for transmitting control plane information between gNBs, ng-eNBs, or between an ng-eNB and gNB. The Xn-u interface is defined in 3GPP for transmitting user plane information between gNBs, ng-eNBs, or between ng-eNB and gNB (see e.g., [O06], [O08]). In O-RAN, Xn-c and Xn-u protocol stacks defined by 3GPP are reused and may be adapted for O-RAN purposes.

The E1 interface is defined by 3GPP as being an interface between the gNB-CU-CP and the gNB-CU-UP (see e.g., [O07], [O09]). In O-RAN, E1 protocol stacks defined by 3GPP are reused and adapted as being an interface between the O-CU-CP 1321 and the O-CU-UP 1322 functions.

The O-RAN Non-Real Time (RT) RAN Intelligent Controller (RIC) 1312 is a logical function within the SMO framework 1202, 1302 that enables non-real-time control and optimization of RAN elements and resources; AI/machine learning (ML) workflow(s) including model training, inferences, and updates; and policy-based guidance of applications/features in the Near-RT RIC 1314.

The O-RAN near-RT RIC 1314 is a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained data collection and actions over the E2 interface. The near-RT RIC 1314 may include one or more AI/ML workflows including model training, inferences, and updates.

The non-RT RIC 1312 can be an ML training host to host the training of one or more ML models. ML training can be performed offline using data collected from the RIC, O-DU 1315 and O-RU 1316. For supervised learning, non-RT RIC 1312 is part of the SMO 1302, and the ML training host and/or ML model host/actor can be part of the non-RT RIC 1312 and/or the near-RT RIC 1314. For unsupervised learning, the ML training host and ML model host/actor can be part of the non-RT RIC 1312 and/or the near-RT RIC 1314. For reinforcement learning, the ML training host and ML model host/actor may be co-located as part of the non-RT RIC 1312 and/or the near-RT RIC 1314. In some implementations, the non-RT RIC 1312 may request or trigger ML model training in the training hosts regardless of where the model is deployed and executed. ML models may be trained and not currently deployed.

In some implementations, the non-RT RIC 1312 provides a query-able catalog for an ML designer/developer to publish/install trained ML models (e.g., executable software components). In these implementations, the non-RT RIC 1312 may provide a discovery mechanism to determine whether a particular ML model can be executed in a target ML inference host (MF), and what number and type of ML models can be executed in the MF. For example, there may be three types of ML catalogs made discoverable by the non-RT RIC 1312: a design-time catalog (e.g., residing outside the non-RT RIC 1312 and hosted by some other ML platform(s)), a training/deployment-time catalog (e.g., residing inside the non-RT RIC 1312), and a runtime catalog (e.g., residing inside the non-RT RIC 1312). The non-RT RIC 1312 supports the necessary capabilities for ML model inference in support of ML-assisted solutions running in the non-RT RIC 1312 or some other ML inference host. These capabilities enable executable software, such as VMs, containers, etc., to be installed. The non-RT RIC 1312 may also include and/or operate one or more ML engines, which are packaged software executable libraries that provide methods, routines, data types, etc., used to run ML models. The non-RT RIC 1312 may also implement policies to switch and activate ML model instances under different operating conditions.
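
As a non-normative illustration, the sketch below (Python, with hypothetical class and field names) models a query-able ML model catalog of the kind described above, together with a simple check of whether a published model can be executed in a target ML inference host (MF).

    from __future__ import annotations

    from dataclasses import dataclass, field

    @dataclass
    class ModelEntry:
        name: str
        version: str
        required_runtime: str          # e.g., "container" or "vm"
        cpu_cores: int

    @dataclass
    class MlCatalog:
        catalog_type: str              # "design-time", "training/deployment-time", or "runtime"
        entries: list[ModelEntry] = field(default_factory=list)

        def publish(self, entry: ModelEntry) -> None:
            self.entries.append(entry)

        def find(self, name: str) -> list[ModelEntry]:
            return [e for e in self.entries if e.name == name]

    def can_execute(entry: ModelEntry, mf_runtime: str, mf_free_cores: int) -> bool:
        # Hypothetical discovery check: can the target inference host (MF) run this model?
        return entry.required_runtime == mf_runtime and entry.cpu_cores <= mf_free_cores

    if __name__ == "__main__":
        runtime_catalog = MlCatalog("runtime")
        runtime_catalog.publish(ModelEntry("traffic-predictor", "1.2", "container", 2))
        model = runtime_catalog.find("traffic-predictor")[0]
        print(can_execute(model, mf_runtime="container", mf_free_cores=4))  # True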

The non-RT RIC 1312 is able to access feedback data (e.g., FM and PM statistics) over the O1 interface on ML model performance and perform the necessary evaluations. If the ML model fails during runtime, an alarm can be generated as feedback to the non-RT RIC 1312. How well the ML model is performing in terms of prediction accuracy or other operating statistics it produces can also be sent to the non-RT RIC 1312 over O1. The non-RT RIC 1312 can also scale ML model instances running in a target MF over the O1 interface by observing resource utilization in the MF. The environment where the ML model instance is running (e.g., the MF) monitors resource utilization of the running ML model. This can be done, for example, using an ORAN-SC component called ResourceMonitor in the near-RT RIC 1314 and/or in the non-RT RIC 1312, which continuously monitors resource utilization. If resources are low or fall below a certain threshold, the runtime environment in the near-RT RIC 1314 and/or the non-RT RIC 1312 provides a scaling mechanism to add more ML instances. The scaling mechanism may include a scaling factor such as a number, percentage, and/or other like data used to scale up/down the number of ML instances. ML model instances running in the target ML inference hosts may be automatically scaled by observing resource utilization in the MF. For example, the Kubernetes® (K8s) runtime environment typically provides an auto-scaling feature.
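
The scaling mechanism described above can be pictured with the following sketch (Python; the thresholds, scaling factor, and instance limits are hypothetical values chosen only for illustration), which adds or removes ML model instances based on the resource utilization reported by the inference-host environment.

    def scale_instances(current_instances: int,
                        utilization: float,           # 0.0 .. 1.0, as reported by the MF
                        high_threshold: float = 0.80,
                        low_threshold: float = 0.30,
                        scaling_factor: int = 1,
                        min_instances: int = 1,
                        max_instances: int = 10) -> int:
        """Return the new ML model instance count after applying the scaling mechanism."""
        if utilization >= high_threshold:
            # Resources are running low: add more ML instances.
            return min(current_instances + scaling_factor, max_instances)
        if utilization <= low_threshold:
            # Utilization is low: scale the number of instances back down.
            return max(current_instances - scaling_factor, min_instances)
        return current_instances

    if __name__ == "__main__":
        print(scale_instances(2, utilization=0.92))  # 3 (scale up)
        print(scale_instances(2, utilization=0.10))  # 1 (scale down)
        print(scale_instances(2, utilization=0.55))  # 2 (unchanged)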

The A1 interface is between the non-RT RIC 1312 (within or outside the SMO 1302) and the near-RT RIC 1314. The A1 interface supports three types of services as defined in [O14], including a Policy Management Service, an Enrichment Information Service, and an ML Model Management Service. A1 policies have the following characteristics compared to persistent configuration [O14]: A1 policies are not critical to traffic; A1 policies have temporary validity; A1 policies may handle individual UEs or dynamically defined groups of UEs; A1 policies act within and take precedence over the configuration; and A1 policies are non-persistent, i.e., they do not survive a restart of the near-RT RIC.
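
For illustration only, the sketch below (Python, with hypothetical class and field names) captures the A1 policy characteristics listed above: temporary validity, scope over an individual UE or a dynamically defined group of UEs, and non-persistence across a near-RT RIC restart.

    from __future__ import annotations

    import time
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class A1Policy:
        policy_id: str
        scope: dict                      # e.g., {"ue_id": "..."} or {"ue_group": [...]}
        statement: dict                  # the policy content itself
        validity_seconds: float = 300.0  # temporary validity
        created_at: float = field(default_factory=time.time)

        def is_valid(self, now: Optional[float] = None) -> bool:
            now = time.time() if now is None else now
            return (now - self.created_at) < self.validity_seconds

    class NearRtRicPolicyStore:
        """Holds A1 policies in memory only, so a restart drops them (non-persistent)."""

        def __init__(self) -> None:
            self._policies: dict[str, A1Policy] = {}

        def put_policy(self, policy: A1Policy) -> None:
            self._policies[policy.policy_id] = policy

        def restart(self) -> None:
            self._policies.clear()       # A1 policies do not survive a restart

        def count(self) -> int:
            return len(self._policies)

    if __name__ == "__main__":
        store = NearRtRicPolicyStore()
        store.put_policy(A1Policy("p1", {"ue_group": ["group-1"]}, {"objective": "load-balance"}))
        store.restart()
        print(store.count())             # 0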

    • [O04] 3GPP TS 36.401 v15.1.0 (2019-01-09).
    • [O05] 3GPP TS 36.420 v15.2.0 (2020-01-09).
    • [O06] 3GPP TS 38.300 v16.0.0 (2020-01-08).
    • [O07] 3GPP TS 38.401 v16.0.0 (2020-01-09).
    • [O08] 3GPP TS 38.420 v15.2.0 (2019-01-08).
    • [O09] 3GPP TS 38.460 v16.0.0 (2020-01-09).
    • [O10] 3GPP TS 38.470 v16.0.0 (2020-01-09).
    • [O12] O-RAN Alliance Working Group 1, O-RAN Operations and Maintenance Architecture Specification, version 2.0 (December 2019) (“O-RAN-WG1.OAM-Architecture-v02.00”).
    • [O13] O-RAN Alliance Working Group 1, O-RAN Operations and Maintenance Interface Specification, version 2.0 (December 2019) (“O-RAN-WG1.O1-Interface-v02.00”).
    • [O14] O-RAN Alliance Working Group 2, O-RAN A1 interface: General Aspects and Principles Specification, version 1.0 (October 2019) (“ORAN-WG2.A1.GA&P-v01.00”).
    • [O15] O-RAN Alliance Working Group 3, Near-Real-time RAN Intelligent Controller Architecture & E2 General Aspects and Principles (“ORAN-WG3.E2GAP.0-v0.1”).
    • [O16] O-RAN Alliance Working Group 4, O-RAN Fronthaul Management Plane Specification, version 2.0 (July 2019) (“ORAN-WG4.MP.0-v02.00.00”).
    • [O17] O-RAN Alliance Working Group 4, O-RAN Fronthaul Control, User and Synchronization Plane Specification, version 2.0 (July 2019) (“ORAN-WG4.CUS.0-v02.00”).

For one or more embodiments, at least one of the components set forth in one or more of the preceding figures may be configured to perform one or more operations, techniques, processes, and/or methods as set forth in the example section below. For example, the baseband circuitry as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below. For another example, circuitry associated with a UE, base station, network element, etc. as described above in connection with one or more of the preceding figures may be configured to operate in accordance with one or more of the examples set forth below in the example section.

Additional examples of the presently described embodiments include the following, non-limiting implementations. Each of the following non-limiting examples may stand on its own or may be combined in any permutation or combination with any one or more of the other examples provided below or throughout the present disclosure.

The following examples pertain to further embodiments.

Example 1 may include an apparatus of a non-real time radio access network intelligent controller (non-RT RIC) network node in an open radio access network (O-RAN) comprising processing circuitry coupled to storage, the processing circuitry configured to: identify a first request received from a data consumer non-RT RIC application (rApp), wherein the first request may be received over an R1 termination interface; cause to send a first response to the data consumer rApp in response to the first request; identify a data producer rApp by checking a data catalog in order to satisfy the first request; and cause to send a notification frame to the data consumer rApp over the R1 termination interface indicating that data will be delivered to the data consumer rApp.
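
By way of illustration only, and not as part of the example itself, the following sketch (Python, with hypothetical names such as DataManagementFunction and handle_subscription) traces the flow of Example 1: a data management function in the non-RT RIC receives a request from a data consumer rApp over the R1 termination interface, responds, identifies a data producer rApp by checking the data catalog, and then notifies the consumer that data will be delivered.

    from __future__ import annotations

    from dataclasses import dataclass

    @dataclass
    class SubscriptionRequest:
        consumer_id: str
        data_type: str

    class DataManagementFunction:
        """Hypothetical data management function inside the non-RT RIC framework."""

        def __init__(self, data_catalog: dict[str, str]) -> None:
            # data_catalog maps a registered data type to the producing rApp's identifier.
            self.data_catalog = data_catalog
            self.notifications: list[tuple[str, str]] = []

        def handle_subscription(self, request: SubscriptionRequest) -> dict:
            # Identify a data producer rApp by checking the data catalog.
            producer = self.data_catalog.get(request.data_type)
            if producer is None:
                # First response: the subscription cannot be satisfied.
                return {"status": "rejected", "reason": "no producer for data type"}
            # First response: acknowledge the subscription over R1.
            response = {"status": "accepted", "data_type": request.data_type, "producer": producer}
            # Notification over R1 that data will be delivered to the consumer rApp.
            self.notifications.append((request.consumer_id, request.data_type))
            return response

    if __name__ == "__main__":
        dmf = DataManagementFunction({"cell-kpi": "producer-rapp-7"})
        print(dmf.handle_subscription(SubscriptionRequest("consumer-rapp-3", "cell-kpi")))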

Example 2 may include the device of example 1 and/or some other example herein, wherein the first request may be a data subscription request, and wherein the first response may be a data subscription response.

Example 3 may include the device of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to: cause to send a second request to the data producer rApp over the R1 termination interface; and identify a second response from the data producer rApp.

Example 4 may include the device of example 3 and/or some other example herein, wherein the second request may be a data subscription request, and wherein the second response may be a data subscription response.

Example 5 may include the device of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to: identify a registration request received from the data producer rApp; and cause to send a registration response to the data producer rApp, wherein the registration response may be sent after a data management function performs a data catalog check to determine whether a same data type may be found in another data producer rApp.
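
A minimal sketch (Python, with hypothetical names such as DataCatalog and handle_registration) of the registration handling in Example 5 follows: before the registration response is sent, the data management function checks the data catalog to determine whether the same data type is already registered by another data producer rApp.

    from __future__ import annotations

    class DataCatalog:
        """Hypothetical data catalog mapping registered data types to producer rApps."""

        def __init__(self) -> None:
            self._by_type: dict[str, set[str]] = {}

        def producers_for(self, data_type: str) -> set[str]:
            return self._by_type.get(data_type, set())

        def register(self, data_type: str, producer_id: str) -> None:
            self._by_type.setdefault(data_type, set()).add(producer_id)

    def handle_registration(catalog: DataCatalog, producer_id: str, data_type: str) -> dict:
        # Data catalog check: is the same data type already offered by another producer rApp?
        already_registered = catalog.producers_for(data_type) - {producer_id}
        catalog.register(data_type, producer_id)
        # The registration response is sent only after the catalog check above.
        return {"status": "registered",
                "data_type": data_type,
                "also_offered_by": sorted(already_registered) or None}

    if __name__ == "__main__":
        catalog = DataCatalog()
        print(handle_registration(catalog, "producer-A", "cell-kpi"))
        print(handle_registration(catalog, "producer-B", "cell-kpi"))  # flags producer-A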

Example 6 may include the device of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to: identify a discover request received from the data consumer rApp; and cause to send a discover response to the data consumer rApp.

Example 7 may include the device of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to: check a discovery policy associated with a data type of a data registration request received from the data consumer rApp; and determine whether the data consumer rApp may be allowed to discover the data type.

Example 8 may include the device of example 1 and/or some other example herein, wherein the data catalog comprises one or more registered data types associated with one or more data producer rApps.

Example 9 may include the device of example 1 and/or some other example herein, wherein the processing circuitry may be further configured to create or update a discovery policy for registered data types using a service provided by a data policy administration service produced by data policy administration functions.

Example 10 may include the device of example 9 and/or some other example herein, wherein the processing circuitry may be further configured to: update the data catalog; and add the data type indicated in a registration request received from the data producer rApp into a list of known data types, based on the discovery policy.
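
The discovery-policy behavior of Examples 7, 9, and 10 can be sketched as follows (Python, with hypothetical names such as DataPolicyAdministration and add_known_type): a data policy administration service creates or updates a discovery policy per registered data type, the policy determines which data consumer rApps are allowed to discover that type, and a data type covered by a policy is added to the list of known data types in the data catalog.

    from __future__ import annotations

    class DataPolicyAdministration:
        """Hypothetical data policy administration service for discovery policies."""

        def __init__(self) -> None:
            self._policies: dict[str, set[str]] = {}   # data type -> allowed consumer rApp ids

        def create_or_update_policy(self, data_type: str, allowed_consumers: set[str]) -> None:
            self._policies[data_type] = set(allowed_consumers)

        def has_policy(self, data_type: str) -> bool:
            return data_type in self._policies

        def may_discover(self, consumer_id: str, data_type: str) -> bool:
            return consumer_id in self._policies.get(data_type, set())

    def add_known_type(known_types: set[str], data_type: str,
                       policies: DataPolicyAdministration) -> set[str]:
        # Add the newly registered data type to the list of known data types,
        # based on the discovery policy created for it.
        if policies.has_policy(data_type):
            known_types.add(data_type)
        return known_types

    if __name__ == "__main__":
        policies = DataPolicyAdministration()
        policies.create_or_update_policy("cell-kpi", {"consumer-rapp-3"})
        print(policies.may_discover("consumer-rapp-3", "cell-kpi"))  # True
        print(policies.may_discover("consumer-rapp-9", "cell-kpi"))  # False
        print(add_known_type(set(), "cell-kpi", policies))           # {'cell-kpi'}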

Example 11 may include a computer-readable medium storing computer-executable instructions which when executed by one or more processors of a non-real time radio access network intelligent controller (non-RT RIC) network node in an open radio access network (O-RAN) result in performing operations comprising: identifying a first request received from a data consumer non-RT RIC application (rApp), wherein the first request may be received over an R1 termination interface; causing to send a first response to the data consumer rApp in response to the first request; identifying a data producer rApp by checking a data catalog in order to satisfy the first request; and causing to send a notification frame to the data consumer rApp over the R1 termination interface indicating that data will be delivered to the data consumer rApp.

Example 12 may include the computer-readable medium of example 11 and/or some other example herein, wherein the first request may be a data subscription request, and wherein the first response may be a data subscription response.

Example 13 may include the computer-readable medium of example 11 and/or some other example herein, wherein the operations further comprise: causing to send a second request to the data producer rApp over the R1 termination interface; and identifying a second response from the data producer rApp.

Example 14 may include the computer-readable medium of example 13 and/or some other example herein, wherein the second request may be a data subscription request, and wherein the second response may be a data subscription response.

Example 15 may include the computer-readable medium of example 11 and/or some other example herein, wherein the operations further comprise: identifying a registration request received from the data producer rApp; and causing to send a registration response to the data producer rApp, wherein the registration response may be sent after a data management function performs a data catalog check to determine whether a same data type may be found in another data producer rApp.

Example 16 may include the computer-readable medium of example 11 and/or some other example herein, wherein the operations further comprise: identifying a discover request received from the data consumer rApp; and causing to send a discover response to the data consumer rApp.

Example 17 may include the computer-readable medium of example 11 and/or some other example herein, wherein the operations further comprise: checking a discovery policy associated with a data type of a data registration request received from the data consumer rApp; and determining whether the data consumer rApp may be allowed to discover the data type.

Example 18 may include the computer-readable medium of example 11 and/or some other example herein, wherein the data catalog comprises one or more registered data types associated with one or more data producer rApps.

Example 19 may include the computer-readable medium of example 11 and/or some other example herein, wherein the operations further comprise creating or updating a discovery policy for registered data types using a service provided by a data policy administration service produced by data policy administration functions.

Example 20 may include the computer-readable medium of example 19 and/or some other example herein, wherein the operations further comprise: updating the data catalog; and adding the data type indicated in a registration request received from the data producer rApp into a list of known data types, based on the discovery policy.

Example 21 may include a method comprising: identifying, by one or more processors of a non-real time radio access network intelligent controller (non-RT RIC) network node in an open radio access network (O-RAN), a first request received from a data consumer non-RT RIC application (rApp), wherein the first request may be received over an R1 termination interface; causing to send a first response to the data consumer rApp in response to the first request; identifying a data producer rApp by checking a data catalog in order to satisfy the first request; and causing to send a notification frame to the data consumer rApp over the R1 termination interface indicating that data will be delivered to the data consumer rApp.

Example 22 may include the method of example 21 and/or some other example herein, wherein the first request may be a data subscription request, and wherein the first response may be a data subscription response.

Example 23 may include the method of example 21 and/or some other example herein, further comprising: causing to send a second request to the data producer rApp over the R1 termination interface; and identifying a second response from the data producer rApp.

Example 24 may include the method of example 23 and/or some other example herein, wherein the second request may be a data subscription request, and wherein the second response may be a data subscription response.

Example 25 may include the method of example 21 and/or some other example herein, further comprising: identifying a registration request received from the data producer rApp; and causing to send a registration response to the data producer rApp, wherein the registration response may be sent after a data management function performs a data catalog check to determine whether a same data type may be found in another data producer rApp.

Example 26 may include the method of example 21 and/or some other example herein, further comprising: identifying a discover request received from the data consumer rApp; and causing to send a discover response to the data consumer rApp.

Example 27 may include the method of example 21 and/or some other example herein, further comprising: checking a discovery policy associated with a data type of a data registration request received from the data consumer rApp; and determining whether the data consumer rApp may be allowed to discover the data type.

Example 28 may include the method of example 21 and/or some other example herein, wherein the data catalog comprises one or more registered data types associated with one or more data producer rApps.

Example 29 may include the method of example 21 and/or some other example herein, further comprising creating or updating a discovery policy for registered data types using a service provided by a data policy administration service produced by data policy administration functions.

Example 30 may include the method of example 29 and/or some other example herein, further comprising: updating the data catalog; and adding the data type indicated in a registration request received from the data producer rApp into a list of known data types, based on the discovery policy.

Example 31 may include an apparatus of a non-real time radio access network intelligent controller (non-RT RIC) network node in an open radio access network (O-RAN) comprising means for: identifying a first request received from a data consumer non-RT RIC application (rApp), wherein the first request may be received over an R1 termination interface; causing to send a first response to the data consumer rApp in response to the first request; identifying a data producer rApp by checking a data catalog in order to satisfy the first request; and causing to send a notification frame to the data consumer rApp over the R1 termination interface indicating that data will be delivered to the data consumer rApp.

Example 32 may include the apparatus of example 31 and/or some other example herein, wherein the first request may be a data subscription request, and wherein the first response may be a data subscription response.

Example 33 may include the apparatus of example 31 and/or some other example herein, further comprising: causing to send a second request to the data producer rApp over the R1 termination interface; and identifying a second response from the data producer rApp.

Example 34 may include the apparatus of example 33 and/or some other example herein, wherein the second request may be a data subscription request, and wherein the second response may be a data subscription response.

Example 35 may include the apparatus of example 31 and/or some other example herein, further comprising: identifying a registration request received from the data producer rApp; and causing to send a registration response to the data producer rApp, wherein the registration response may be sent after a data management function performs a data catalog check to determine whether a same data type may be found in another data producer rApp.

Example 36 may include the apparatus of example 31 and/or some other example herein, further comprising: identifying a discover request received from the data consumer rApp; and causing to send a discover response to the data consumer rApp.

Example 37 may include the apparatus of example 31 and/or some other example herein, further comprising: checking a discovery policy associated with a data type of a data registration request received from the data consumer rApp; and determining whether the data consumer rApp may be allowed to discover the data type.

Example 38 may include the apparatus of example 31 and/or some other example herein, wherein the data catalog comprises one or more registered data types associated with one or more data producer rApps.

Example 39 may include the apparatus of example 31 and/or some other example herein, further comprising creating or updating a discovery policy for registered data types using a service provided by a data policy administration service produced by data policy administration functions.

Example 40 may include the apparatus of example 39 and/or some other example herein, further comprising: updating the data catalog; and adding the data type indicated in a registration request received from the data producer rApp into a list of known data types, based on the discovery policy.

Example 41 may include an apparatus comprising means for performing any of the methods of examples 1-40.

Example 42 may include a network node comprising a communication interface and processing circuitry connected thereto and configured to perform the methods of examples 1-40.

Example 43 may include an apparatus comprising means to perform one or more elements of a method described in or related to any of examples 1-40, or any other method or process described herein.

Example 44 may include one or more non-transitory computer-readable media comprising instructions to cause an electronic device, upon execution of the instructions by one or more processors of the electronic device, to perform one or more elements of a method described in or related to any of examples 1-40, or any other method or process described herein.

Example 45 may include an apparatus comprising logic, modules, or circuitry to perform one or more elements of a method described in or related to any of examples 1-40, or any other method or process described herein.

Example 46 may include a method, technique, or process as described in or related to any of examples 1-40, or portions or parts thereof.

Example 47 may include an apparatus comprising: one or more processors and one or more computer-readable media comprising instructions that, when executed by the one or more processors, cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-40, or portions thereof.

Example 48 may include a signal as described in or related to any of examples 1-40, or portions or parts thereof.

Example 49 may include a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-40, or portions or parts thereof, or otherwise described in the present disclosure.

Example 50 may include a signal encoded with data as described in or related to any of examples 1-40, or portions or parts thereof, or otherwise described in the present disclosure.

Example 51 may include a signal encoded with a datagram, packet, frame, segment, protocol data unit (PDU), or message as described in or related to any of examples 1-40, or portions or parts thereof, or otherwise described in the present disclosure.

Example 52 may include an electromagnetic signal carrying computer-readable instructions, wherein execution of the computer-readable instructions by one or more processors is to cause the one or more processors to perform the method, techniques, or process as described in or related to any of examples 1-40, or portions thereof.

Example 53 may include a computer program comprising instructions, wherein execution of the program by a processing element is to cause the processing element to carry out the method, techniques, or process as described in or related to any of examples 1-40, or portions thereof.

Example 54 may include a signal in a wireless network as shown and described herein.

Example 55 may include a method of communicating in a wireless network as shown and described herein.

Example 56 may include a system for providing wireless communication as shown and described herein.

Example 57 may include a device for providing wireless communication as shown and described herein.

An example implementation is an edge computing system, including respective edge processing devices and nodes to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is a client endpoint node, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an aggregation node, network hub node, gateway node, or core data processing node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an access point, base station, road-side unit, street-side unit, or on-premise unit, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge provisioning node, service orchestration node, application orchestration node, or multi-tenant management node, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge node operating an edge provisioning service, application or service orchestration service, virtual machine deployment, container deployment, function deployment, and compute management, within or coupled to an edge computing system, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge computing system operable as an edge mesh, as an edge mesh with sidecar loading, or with mesh-to-mesh communications, operable to invoke or perform the operations of the examples above, or other subject matter described herein. Another example implementation is an edge computing system including aspects of network functions, acceleration functions, acceleration hardware, storage hardware, or computation hardware resources, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein. Another example implementation is an edge computing system adapted for supporting client mobility, vehicle-to-vehicle (V2V), vehicle-to-everything (V2X), or vehicle-to-infrastructure (V2I) scenarios, and optionally operating according to ETSI MEC specifications, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein. Another example implementation is an edge computing system adapted for mobile wireless communications, including configurations according to 3GPP 4G/LTE or 5G network capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein. Another example implementation is a computing system adapted for network communications, including configurations according to O-RAN capabilities, operable to invoke or perform the use cases discussed herein, with use of the examples above, or other subject matter described herein.

Any of the above-described examples may be combined with any other example (or combination of examples), unless explicitly stated otherwise. The foregoing description of one or more implementations provides illustration and description, but is not intended to be exhaustive or to limit the scope of embodiments to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

For the purposes of the present disclosure, the phrase “A and/or B” means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase “A, B, and/or C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). The description may use the phrases “in an embodiment,” or “In some embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.

The terms “coupled,” “communicatively coupled,” along with derivatives thereof are used herein. The term “coupled” may mean two or more elements are in direct physical or electrical contact with one another, may mean that two or more elements indirectly contact each other but still cooperate or interact with each other, and/or may mean that one or more other elements are coupled or connected between the elements that are said to be coupled with each other. The term “directly coupled” may mean that two or more elements are in direct contact with one another. The term “communicatively coupled” may mean that two or more elements may be in contact with one another by a means of communication including through a wire or other interconnect connection, through a wireless communication channel or link, and/or the like.

The term “circuitry” as used herein refers to, is part of, or includes hardware components such as an electronic circuit, a logic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group), an Application Specific Integrated Circuit (ASIC), a field-programmable device (FPD) (e.g., a field-programmable gate array (FPGA), a programmable logic device (PLD), a complex PLD (CPLD), a high-capacity PLD (HCPLD), a structured ASIC, or a programmable SoC), digital signal processors (DSPs), etc., that are configured to provide the described functionality. In some embodiments, the circuitry may execute one or more software or firmware programs to provide at least some of the described functionality. The term “circuitry” may also refer to a combination of one or more hardware elements (or a combination of circuits used in an electrical or electronic system) with the program code used to carry out the functionality of that program code. In these embodiments, the combination of hardware elements and program code may be referred to as a particular type of circuitry.

The term “processor circuitry” as used herein refers to, is part of, or includes circuitry capable of sequentially and automatically carrying out a sequence of arithmetic or logical operations, or recording, storing, and/or transferring digital data. Processing circuitry may include one or more processing cores to execute instructions and one or more memory structures to store program and data information. The term “processor circuitry” may refer to one or more application processors, one or more baseband processors, a physical central processing unit (CPU), a single-core processor, a dual-core processor, a triple-core processor, a quad-core processor, and/or any other device capable of executing or otherwise operating computer-executable instructions, such as program code, software modules, and/or functional processes. Processing circuitry may include one or more hardware accelerators, which may be microprocessors, programmable processing devices, or the like. The one or more hardware accelerators may include, for example, computer vision (CV) and/or deep learning (DL) accelerators. The terms “application circuitry” and/or “baseband circuitry” may be considered synonymous to, and may be referred to as, “processor circuitry.”

The term “memory” and/or “memory circuitry” as used herein refers to one or more hardware devices for storing data, including RAM, MRAM, PRAM, DRAM, and/or SDRAM, core memory, ROM, magnetic disk storage mediums, optical storage mediums, flash memory devices or other machine readable mediums for storing data. The term “computer-readable medium” may include, but is not limited to, memory, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing, containing or carrying instructions or data.

The term “interface circuitry” as used herein refers to, is part of, or includes circuitry that enables the exchange of information between two or more components or devices. The term “interface circuitry” may refer to one or more hardware interfaces, for example, buses, I/O interfaces, peripheral component interfaces, network interface cards, and/or the like.

The term “user equipment” or “UE” as used herein refers to a device with radio communication capabilities and may describe a remote user of network resources in a communications network. The term “user equipment” or “UE” may be considered synonymous to, and may be referred to as, client, mobile, mobile device, mobile terminal, user terminal, mobile unit, mobile station, mobile user, subscriber, user, remote station, access agent, user agent, receiver, radio equipment, reconfigurable radio equipment, reconfigurable mobile device, etc. Furthermore, the term “user equipment” or “UE” may include any type of wireless/wired device or any computing device including a wireless communications interface.

The term “network element” as used herein refers to physical or virtualized equipment and/or infrastructure used to provide wired or wireless communication network services. The term “network element” may be considered synonymous to and/or referred to as a networked computer, networking hardware, network equipment, network node, router, switch, hub, bridge, radio network controller, RAN device, RAN node, gateway, server, virtualized VNF, NFVI, and/or the like.

The term “computer system” as used herein refers to any type of interconnected electronic devices, computer devices, or components thereof. Additionally, the term “computer system” and/or “system” may refer to various components of a computer that are communicatively coupled with one another. Furthermore, the term “computer system” and/or “system” may refer to multiple computer devices and/or multiple computing systems that are communicatively coupled with one another and configured to share computing and/or networking resources.

The term “appliance,” “computer appliance,” or the like, as used herein refers to a computer device or computer system with program code (e.g., software or firmware) that is specifically designed to provide a specific computing resource. A “virtual appliance” is a virtual machine image to be implemented by a hypervisor-equipped device that virtualizes or emulates a computer appliance or otherwise is dedicated to provide a specific computing resource. The term “element” refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary, wherein an element may be any type of entity including, for example, one or more devices, systems, controllers, network elements, modules, etc., or combinations thereof. The term “device” refers to a physical entity embedded inside, or attached to, another physical entity in its vicinity, with capabilities to convey digital information from or to that physical entity. The term “entity” refers to a distinct component of an architecture or device, or information transferred as a payload. The term “controller” refers to an element or entity that has the capability to affect a physical entity, such as by changing its state or causing the physical entity to move.

The term “cloud computing” or “cloud” refers to a paradigm for enabling network access to a scalable and elastic pool of shareable computing resources with self-service provisioning and administration on-demand and without active management by users. Cloud computing provides cloud computing services (or cloud services), which are one or more capabilities offered via cloud computing that are invoked using a defined interface (e.g., an API or the like). The term “computing resource” or simply “resource” refers to any physical or virtual component, or usage of such components, of limited availability within a computer system or network. Examples of computing resources include usage/access to, for a period of time, servers, processor(s), storage equipment, memory devices, memory areas, networks, electrical power, input/output (peripheral) devices, mechanical devices, network connections (e.g., channels/links, ports, network sockets, etc.), operating systems, virtual machines (VMs), software/applications, computer files, and/or the like. A “hardware resource” may refer to compute, storage, and/or network resources provided by physical hardware element(s). A “virtualized resource” may refer to compute, storage, and/or network resources provided by virtualization infrastructure to an application, device, system, etc. The term “network resource” or “communication resource” may refer to resources that are accessible by computer devices/systems via a communications network. The term “system resources” may refer to any kind of shared entities to provide services, and may include computing and/or network resources. System resources may be considered as a set of coherent functions, network data objects or services, accessible through a server where such system resources reside on a single host or multiple hosts and are clearly identifiable. As used herein, the term “cloud service provider” (or CSP) indicates an organization which operates typically large-scale “cloud” resources comprised of centralized, regional, and edge data centers (e.g., as used in the context of the public cloud). In other examples, a CSP may also be referred to as a Cloud Service Operator (CSO). References to “cloud computing” generally refer to computing resources and services offered by a CSP or a CSO, at remote locations with at least some increased latency, distance, or constraints relative to edge computing.

As used herein, the term “data center” refers to a purpose-designed structure that is intended to house multiple high-performance compute and data storage nodes such that a large amount of compute, data storage and network resources are present at a single location. This often entails specialized rack and enclosure systems, suitable heating, cooling, ventilation, security, fire suppression, and power delivery systems. The term may also refer to a compute and data storage node in some contexts. A data center may vary in scale between a centralized or cloud data center (e.g., largest), regional data center, and edge data center (e.g., smallest).

As used herein, the term “edge computing” refers to the implementation, coordination, and use of computing and resources at locations closer to the “edge” or collection of “edges” of a network. Deploying computing resources at the network's edge may reduce application and network latency, reduce network backhaul traffic and associated energy consumption, improve service capabilities, improve compliance with security or data privacy requirements (especially as compared to conventional cloud computing), and improve total cost of ownership. As used herein, the term “edge compute node” refers to a real-world, logical, or virtualized implementation of a compute-capable element in the form of a device, gateway, bridge, system or subsystem, component, whether operating in a server, client, endpoint, or peer mode, and whether located at an “edge” of a network or at a connected location further within the network. References to a “node” used herein are generally interchangeable with a “device”, “component”, and “sub-system”; however, references to an “edge computing system” or “edge computing network” generally refer to a distributed architecture, organization, or collection of multiple nodes and devices, and which is organized to accomplish or offer some aspect of services or resources in an edge computing setting.

Additionally or alternatively, the term “Edge Computing” refers to a concept that enables operator and 3rd party services to be hosted close to the UE's access point of attachment, to achieve an efficient service delivery through the reduced end-to-end latency and load on the transport network. As used herein, the term “Edge Computing Service Provider” refers to a mobile network operator or a 3rd party service provider offering Edge Computing service. As used herein, the term “Edge Data Network” refers to a local Data Network (DN) that supports the architecture for enabling edge applications. As used herein, the term “Edge Hosting Environment” refers to an environment providing support required for Edge Application Server's execution. As used herein, the term “Application Server” refers to application software resident in the cloud performing the server function.

The term “Internet of Things” or “IoT” refers to a system of interrelated computing devices, mechanical and digital machines capable of transferring data with little or no human interaction, and may involve technologies such as real-time analytics, machine learning and/or AI, embedded systems, wireless sensor networks, control systems, automation (e.g., smarthome, smart building and/or smart city technologies), and the like. IoT devices are usually low-power devices without heavy compute or storage capabilities. “Edge IoT devices” may be any kind of IoT devices deployed at a network's edge.

As used herein, the term “cluster” refers to a set or grouping of entities as part of an edge computing system (or systems), in the form of physical entities (e.g., different computing systems, networks or network groups), logical entities (e.g., applications, functions, security constructs, containers), and the like. In some locations, a “cluster” is also referred to as a “group” or a “domain”. The membership of a cluster may be modified or affected based on conditions or functions, including from dynamic or property-based membership, from network or system management scenarios, or from various example techniques discussed below which may add, modify, or remove an entity in a cluster. Clusters may also include or be associated with multiple layers, levels, or properties, including variations in security features and results based on such layers, levels, or properties.

The term “application” may refer to a complete and deployable package or environment to achieve a certain function in an operational environment. The term “AI/ML application” or the like may be an application that contains some AI/ML models and application-level descriptions. The term “machine learning” or “ML” refers to the use of computer systems implementing algorithms and/or statistical models to perform specific task(s) without using explicit instructions, but instead relying on patterns and inferences. ML algorithms build or estimate mathematical model(s) (referred to as “ML models” or the like) based on sample data (referred to as “training data,” “model training information,” or the like) in order to make predictions or decisions without being explicitly programmed to perform such tasks. Generally, an ML algorithm is a computer program that learns from experience with respect to some task and some performance measure, and an ML model may be any object or data structure created after an ML algorithm is trained with one or more training datasets. After training, an ML model may be used to make predictions on new datasets. Although the term “ML algorithm” refers to different concepts than the term “ML model,” these terms as discussed herein may be used interchangeably for the purposes of the present disclosure.

The term “machine learning model,” “ML model,” or the like may also refer to ML methods and concepts used by an ML-assisted solution. An “ML-assisted solution” is a solution that addresses a specific use case using ML algorithms during operation. ML models include supervised learning (e.g., linear regression, k-nearest neighbor (KNN), decision tree algorithms, support vector machines, Bayesian algorithm, ensemble algorithms, etc.), unsupervised learning (e.g., K-means clustering, principal component analysis (PCA), etc.), reinforcement learning (e.g., Q-learning, multi-armed bandit learning, deep RL, etc.), neural networks, and the like. Depending on the implementation, a specific ML model could have many sub-models as components and the ML model may train all sub-models together. Separately trained ML models can also be chained together in an ML pipeline during inference. An “ML pipeline” is a set of functionalities, functions, or functional entities specific for an ML-assisted solution; an ML pipeline may include one or several data sources in a data pipeline, a model training pipeline, a model evaluation pipeline, and an actor. The “actor” is an entity that hosts an ML assisted solution using the output of the ML model inference. The term “ML training host” refers to an entity, such as a network function, that hosts the training of the model. The term “ML inference host” refers to an entity, such as a network function, that hosts the model during inference mode (which includes both the model execution as well as any online learning if applicable). The ML-host informs the actor about the output of the ML algorithm, and the actor takes a decision for an action (an “action” is performed by an actor as a result of the output of an ML assisted solution). The term “model inference information” refers to information used as an input to the ML model for determining inference(s); the data used to train an ML model and the data used to determine inferences may overlap, however, “training data” and “inference data” refer to different concepts.

The terms “instantiate,” “instantiation,” and the like as used herein refers to the creation of an instance. An “instance” also refers to a concrete occurrence of an object, which may occur, for example, during execution of program code. The term “information element” refers to a structural element containing one or more fields. The term “field” refers to individual contents of an information element, or a data element that contains content. As used herein, a “database object”, “data structure”, or the like may refer to any representation of information that is in the form of an object, attribute-value pair (AVP), key-value pair (KVP), tuple, etc., and may include variables, data structures, functions, methods, classes, database records, database fields, database entities, associations between data and/or database entities (also referred to as a “relation”), blocks and links between blocks in block chain implementations, and/or the like.

An “information object,” as used herein, refers to a collection of structured data and/or any representation of information, and may include, for example, electronic documents (or “documents”), database objects, data structures, files, audio data, video data, raw data, archive files, application packages, and/or any other like representation of information. The terms “electronic document” or “document,” may refer to a data structure, computer file, or resource used to record data, and includes various file types and/or data formats such as word processing documents, spreadsheets, slide presentations, multimedia items, webpage and/or source code documents, and/or the like. As examples, the information objects may include markup and/or source code documents such as HTML, XML, JSON, Apex®, CSS, JSP, MessagePack™, Apache® Thrift™, ASN.1, Google® Protocol Buffers (protobuf), or some other document(s)/format(s) such as those discussed herein. An information object may have both a logical and a physical structure. Physically, an information object comprises one or more units called entities. An entity is a unit of storage that contains content and is identified by a name. An entity may refer to other entities to cause their inclusion in the information object. An information object begins in a document entity, which is also referred to as a root element (or “root”). Logically, an information object comprises one or more declarations, elements, comments, character references, and processing instructions, all of which are indicated in the information object (e.g., using markup).

The term “data item” as used herein refers to an atomic state of a particular object with at least one specific property at a certain point in time. Such an object is usually identified by an object name or object identifier, and properties of such an object are usually defined as database objects (e.g., fields, records, etc.), object instances, or data elements (e.g., markup language elements/tags, etc.). Additionally or alternatively, the term “data item” as used herein may refer to data elements and/or content items, although these terms may refer to different concepts. The term “data element” or “element” as used herein refers to a unit that is indivisible at a given level of abstraction and has a clearly defined boundary. A data element is a logical component of an information object (e.g., electronic document) that may begin with a start tag (e.g., “<element>”) and end with a matching end tag (e.g., “</element>”), or only has an empty element tag (e.g., “<element/>”). Any characters between the start tag and end tag, if any, are the element's content (referred to herein as “content items” or the like).

The content of an entity may include one or more content items, each of which has an associated datatype representation. A content item may include, for example, attribute values, character values, URIs, qualified names (qnames), parameters, and the like. A qname is a fully qualified name of an element, attribute, or identifier in an information object. A qname associates a URI of a namespace with a local name of an element, attribute, or identifier in that namespace. To make this association, the qname assigns a prefix to the local name that corresponds to its namespace. The qname comprises a URI of the namespace, the prefix, and the local name. Namespaces are used to provide uniquely named elements and attributes in information objects. Content items may include text content (e.g., “<element>content item</element>”), attributes (e.g., “<element attribute="attributeValue">”), and other elements referred to as “child elements” (e.g., “<element1><element2>content item</element2></element1>”). An “attribute” may refer to a markup construct including a name-value pair that exists within a start tag or empty element tag. Attributes contain data related to their element and/or control the element's behavior.
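
As a short illustration (the XML content and the namespace URI below are hypothetical and not part of the disclosure), the following Python snippet, using the standard xml.etree.ElementTree module, shows a root element, a namespace-qualified name (qname), an attribute, and a content item.

import xml.etree.ElementTree as ET

doc = (
    '<ex:report xmlns:ex="urn:example:ns">'
    '<ex:cell id="1001">busy</ex:cell>'
    '</ex:report>'
)
root = ET.fromstring(doc)                  # document (root) element of the information object
cell = root.find("{urn:example:ns}cell")   # qname: namespace URI plus local name
print(cell.tag)                            # "{urn:example:ns}cell"
print(cell.attrib["id"])                   # attribute (name-value pair): "1001"
print(cell.text)                           # content item: "busy"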

The term “channel” as used herein refers to any transmission medium, either tangible or intangible, which is used to communicate data or a data stream. The term “channel” may be synonymous with and/or equivalent to “communications channel,” “data communications channel,” “transmission channel,” “data transmission channel,” “access channel,” “data access channel,” “link,” “data link,” “carrier,” “radiofrequency carrier,” and/or any other like term denoting a pathway or medium through which data is communicated. Additionally, the term “link” as used herein refers to a connection between two devices through a RAT for the purpose of transmitting and receiving information. As used herein, the term “radio technology” refers to technology for wireless transmission and/or reception of electromagnetic radiation for information transfer. The term “radio access technology” or “RAT” refers to the technology used for the underlying physical connection to a radio based communication network. As used herein, the term “communication protocol” (either wired or wireless) refers to a set of standardized rules or instructions implemented by a communication device and/or system to communicate with other devices and/or systems, including instructions for packetizing/depacketizing data, modulating/demodulating signals, implementation of protocols stacks, and/or the like.

Examples of wireless communication protocols that may be used in various embodiments include a Global System for Mobile Communications (GSM) radio communication technology, a General Packet Radio Service (GPRS) radio communication technology, an Enhanced Data Rates for GSM Evolution (EDGE) radio communication technology, and/or a Third Generation Partnership Project (3GPP) radio communication technology including, for example, 3GPP Fifth Generation (5G) or New Radio (NR), Universal Mobile Telecommunications System (UMTS), Freedom of Multimedia Access (FOMA), Long Term Evolution (LTE), LTE-Advanced (LTE Advanced), LTE Extra, LTE-A Pro, cdmaOne (2G), Code Division Multiple Access 2000 (CDMA 2000), Cellular Digital Packet Data (CDPD), Mobitex, Circuit Switched Data (CSD), High-Speed CSD (HSCSD), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), HSPA Plus (HSPA+), Time Division-Code Division Multiple Access (TD-CDMA), Time Division-Synchronous Code Division Multiple Access (TD-SCDMA), LTE LAA, MuLTEfire, UMTS Terrestrial Radio Access (UTRA), Evolved UTRA (E-UTRA), Evolution-Data Optimized or Evolution-Data Only (EV-DO), Advanced Mobile Phone System (AMPS), Digital AMPS (D-AMPS), Total Access Communication System/Extended Total Access Communication System (TACS/ETACS), Push-to-talk (PTT), Mobile Telephone System (MTS), Improved Mobile Telephone System (IMTS), Advanced Mobile Telephone System (AMTS), DataTAC, Integrated Digital Enhanced Network (iDEN), Personal Digital Cellular (PDC), Personal Handy-phone System (PHS), Wideband Integrated Digital Enhanced Network (WiDEN), iBurst, Unlicensed Mobile Access (UMA, also referred to as the 3GPP Generic Access Network or GAN standard), Bluetooth®, Bluetooth Low Energy (BLE), IEEE 802.15.4 based protocols (e.g., IPv6 over Low power Wireless Personal Area Networks (6LoWPAN), WirelessHART, MiWi, Thread, 802.11a, etc.), WiFi-direct, ANT/ANT+, ZigBee, Z-Wave, 3GPP device-to-device (D2D) or Proximity Services (ProSe), Universal Plug and Play (UPnP), Low-Power Wide-Area Network (LPWAN), Long Range Wide Area Network (LoRa) or LoRaWAN™ developed by Semtech and the LoRa Alliance, Sigfox, the Wireless Gigabit Alliance (WiGig) standard, Worldwide Interoperability for Microwave Access (WiMAX), mmWave standards in general (e.g., wireless systems operating at 10-300 GHz and above such as WiGig, IEEE 802.11ad, IEEE 802.11ay, etc.), V2X communication technologies (including 3GPP C-V2X), and Dedicated Short Range Communications (DSRC) communication systems such as Intelligent Transport Systems (ITS) including the European ITS-G5, ITS-G5B, ITS-G5C, etc.
In addition to the standards listed above, any number of satellite uplink technologies may be used for purposes of the present disclosure including, for example, radios compliant with standards issued by the International Telecommunication Union (ITU), or the European Telecommunications Standards Institute (ETSI), among others. The examples provided herein are thus understood as being applicable to various other communication technologies, both existing and not yet formulated.

The term “access network” refers to any network, using any combination of radio technologies, RATs, and/or communication protocols, used to connect user devices and service providers. In the context of WLANs, an “access network” is an IEEE 802 local area network (LAN) or metropolitan area network (MAN) between terminals and access routers connecting to provider services. The term “access router” refers to a router that terminates a medium access control (MAC) service from terminals and forwards user traffic to information servers according to Internet Protocol (IP) addresses.

The term “SMTC” refers to an SSB-based measurement timing configuration configured by SSB-MeasurementTimingConfiguration. The term “SSB” refers to a synchronization signal/Physical Broadcast Channel (SS/PBCH) block, which includes a Primary Synchronization Signal (PSS), a Secondary Synchronization Signal (SSS), and a PBCH. The term “Primary Cell” refers to the MCG cell, operating on the primary frequency, in which the UE either performs the initial connection establishment procedure or initiates the connection re-establishment procedure. The term “Primary SCG Cell” refers to the SCG cell in which the UE performs random access when performing the Reconfiguration with Sync procedure for DC operation. The term “Secondary Cell” refers to a cell providing additional radio resources on top of a Special Cell for a UE configured with CA. The term “Secondary Cell Group” refers to the subset of serving cells comprising the PSCell and zero or more secondary cells for a UE configured with DC. The term “Serving Cell” refers to the primary cell for a UE in RRC_CONNECTED not configured with CA/DC; in that case there is only one serving cell, comprising the primary cell. The term “serving cell” or “serving cells” refers to the set of cells comprising the Special Cell(s) and all secondary cells for a UE in RRC_CONNECTED configured with CA. The term “Special Cell” refers to the PCell of the MCG or the PSCell of the SCG for DC operation; otherwise, the term “Special Cell” refers to the PCell.

The term “A1 policy” refers to a type of declarative policy expressed using formal statements that enable the non-RT RIC function in the SMO to guide the near-RT RIC function, and hence the RAN, towards better fulfilment of the RAN intent.
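
By way of a hypothetical, non-normative illustration only (the field names below are invented for this sketch and are not taken from the disclosure or from any specific O-RAN A1 policy type definition), such a declarative policy can be represented as a small JSON object that states an intent rather than an explicit action, for example built and serialized in Python.

import json

a1_policy = {
    "policyId": "ts-policy-001",                        # hypothetical identifier
    "scope": {"cellIdList": ["cell-17", "cell-18"]},    # which part of the RAN the policy guides
    "statement": {"targetPrbUtilizationPct": 70},       # the intended outcome, not a prescribed action
}
print(json.dumps(a1_policy, indent=2))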

The term “A1 Enrichment Information” refers to information utilized by the near-RT RIC that is collected or derived at the SMO/non-RT RIC either from non-network data sources or from network functions themselves.

The term “A1-Policy Based Traffic Steering Process Mode” refers to an operational mode in which the Near-RT RIC is configured through A1 Policy to use Traffic Steering Actions to ensure a more specific notion of network performance (for example, applying to smaller groups of E2 Nodes and UEs in the RAN) than that which it ensures in the Background Traffic Steering.

The term “Background Traffic Steering Processing Mode” refers to an operational mode in which the Near-RT RIC is configured through O1 to use Traffic Steering Actions to ensure a general background network performance which applies broadly across E2 Nodes and UEs in the RAN.

The term “Baseline RAN Behavior” refers to the default RAN behavior as configured at the E2 Nodes by the SMO.

The term “E2” refers to an interface connecting the Near-RT RIC and one or more O-CU-CPs, one or more O-CU-UPs, one or more O-DUs, and one or more O-eNBs.

The term “E2 Node” refers to a logical node terminating the E2 interface. In this version of the specification, O-RAN nodes terminating the E2 interface are: for NR access: O-CU-CP, O-CU-UP, O-DU or any combination; and for E-UTRA access: O-eNB.

The term “Intents”, in the context of O-RAN systems/implementations, refers to a declarative policy used to steer or guide the behavior of RAN functions, allowing the RAN function to calculate the optimal result to achieve the stated objective.

The term “O-RAN non-real-time RAN Intelligent Controller” or “non-RT RIC” refers to a logical function that enables non-real-time control and optimization of RAN elements and resources, AI/ML workflow including model training and updates, and policy-based guidance of applications/features in Near-RT RIC.

The term “Near-RT RIC” or “O-RAN near-real-time RAN Intelligent Controller” refers to a logical function that enables near-real-time control and optimization of RAN elements and resources via fine-grained (e.g., UE basis, Cell basis) data collection and actions over E2 interface.

The term “O-RAN Central Unit” or “O-CU” refers to a logical node hosting RRC, SDAP and PDCP protocols.

The term “O-RAN Central Unit-Control Plane” or “O-CU-CP” refers to a logical node hosting the RRC and the control plane part of the PDCP protocol.

The term “O-RAN Central Unit-User Plane” or “O-CU-UP” refers to a logical node hosting the user plane part of the PDCP protocol and the SDAP protocol.

The term “O-RAN Distributed Unit” or “O-DU” refers to a logical node hosting RLC/MAC/High-PHY layers based on a lower layer functional split.

The term “O-RAN eNB” or “O-eNB” refers to an eNB or ng-eNB that supports E2 interface.

The term “O-RAN Radio Unit” or “O-RU” refers to a logical node hosting Low-PHY layer and RF processing based on a lower layer functional split. This is similar to 3GPP's “TRP” or “RRH” but more specific in including the Low-PHY layer (FFT/iFFT, PRACH extraction).

The term “O1” refers to an interface between orchestration & management entities (Orchestration/NMS) and O-RAN managed elements, for operation and management, by which FCAPS management, Software management, File management and other similar functions shall be achieved.

The term “RAN UE Group” refers to an aggregation of UEs whose grouping is set in the E2 nodes through E2 procedures, also based on the scope of A1 policies. These groups can then be the target of E2 CONTROL or POLICY messages.

The term “Traffic Steering Action” refers to the use of a mechanism to alter RAN behavior. Such actions include E2 procedures such as CONTROL and POLICY.

The term “Traffic Steering Inner Loop” refers to the part of the Traffic Steering processing, triggered by the arrival of periodic TS-related KPM (Key Performance Measurement) reports from an E2 Node, which includes UE grouping, setting additional data collection from the RAN, as well as selection and execution of one or more optimization actions to enforce Traffic Steering policies.
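
As a purely illustrative sketch (not the disclosed procedure; the report fields, threshold, and action names are hypothetical), an inner-loop style handler of this kind can be outlined in Python: it is triggered by a periodic KPM report, groups UEs, and selects an optimization action when the report violates a traffic steering target.

def on_kpm_report(kpm_report, prb_target_pct=70.0):
    """Triggered by a periodic (hypothetical) KPM report from an E2 node."""
    actions = []
    if kpm_report["prbUtilizationPct"] > prb_target_pct:
        # UE grouping: pick the two lowest-throughput UEs as steering candidates
        throughput = kpm_report["ueThroughputMbps"]
        ue_group = sorted(throughput, key=throughput.get)[:2]
        actions.append({
            "type": "E2-CONTROL",              # hypothetical Traffic Steering Action
            "cell": kpm_report["cellId"],
            "ueGroup": ue_group,
            "action": "steer-to-neighbour",
        })
    return actions

print(on_kpm_report({
    "cellId": "cell-17",
    "prbUtilizationPct": 83.0,
    "ueThroughputMbps": {"ue-1": 1.2, "ue-2": 14.0, "ue-3": 0.4},
}))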

The term “Traffic Steering Outer Loop” refers to the part of the Traffic Steering processing, triggered by the near-RT RIC setting up or updating a Traffic Steering aware resource optimization procedure based on information from A1 Policy setup or update, A1 Enrichment Information (EI), and/or the outcome of Near-RT RIC evaluation, and which includes the initial configuration (preconditions), the injection of related A1 policies, and the triggering conditions for TS changes.

The term “Traffic Steering Processing Mode” refers to an operational mode in which either the RAN or the Near-RT RIC is configured to ensure a particular network performance. This performance includes such aspects as cell load and throughput, and can apply differently to different E2 nodes and UEs. Throughout this process, Traffic Steering Actions are used to fulfill the requirements of this configuration.

The term “Traffic Steering Target” refers to the intended performance result that is desired from the network, which is configured to Near-RT RIC over O1.

Furthermore, any of the disclosed embodiments and example implementations can be embodied in the form of various types of hardware, software, firmware, middleware, or combinations thereof, including in the form of control logic, and using such hardware or software in a modular or integrated manner. Additionally, any of the software components or functions described herein can be implemented as software, program code, script, instructions, etc., operable to be executed by processor circuitry. These components, functions, programs, etc., can be developed using any suitable computer language such as, for example, Python, PyTorch, NumPy, Ruby, Ruby on Rails, Scala, Smalltalk, Java™, C++, C#, “C”, Kotlin, Swift, Rust, Go (or “Golang”), ECMAScript, JavaScript, TypeScript, Jscript, ActionScript, Server-Side JavaScript (SSJS), PHP, Perl, Lua, Torch/Lua with Just-In-Time compiler (LuaJIT), Accelerated Mobile Pages Script (AMPscript), VBScript, JavaServer Pages (JSP), Active Server Pages (ASP), Node.js, ASP.NET, JAMscript, Hypertext Markup Language (HTML), extensible HTML (XHTML), Extensible Markup Language (XML), XML User Interface Language (XUL), Scalable Vector Graphics (SVG), RESTful API Modeling Language (RAML), wiki markup or Wikitext, Wireless Markup Language (WML), JavaScript Object Notation (JSON), Apache® MessagePack™, Cascading Stylesheets (CSS), extensible stylesheet language (XSL), Mustache template language, Handlebars template language, Guide Template Language (GTL), Apache® Thrift, Abstract Syntax Notation One (ASN.1), Google® Protocol Buffers (protobuf), Bitcoin Script, EVM® bytecode, Solidity™, Vyper (Python derived), Bamboo, Lisp Like Language (LLL), Simplicity provided by Blockstream™, Rholang, Michelson, Counterfactual, Plasma, Plutus, Sophia, Salesforce® Apex®, and/or any other programming language or development tools including proprietary programming languages and/or development tools. The software code can be stored as computer- or processor-executable instructions or commands on a physical non-transitory computer-readable medium. Examples of suitable media include RAM, ROM, magnetic media such as a hard-drive or a floppy disk, or an optical medium such as a compact disk (CD) or DVD (digital versatile disk), flash memory, and the like, or any combination of such storage or transmission devices.

Unless used differently herein, terms, definitions, and abbreviations may be consistent with terms, definitions, and abbreviations defined in 3GPP TR 21.905 v16.0.0 (2019-06). For the purposes of the present document, the following abbreviations may apply to the examples and embodiments discussed herein.

TABLE 1 Abbreviations: 3GPP Third Generation IBE In-Band Emission PUSCH Physical Uplink Shared Partnership Project Channel 4G Fourth Generation IEEE Institute of Electrical QAM Quadrature Amplitude and Electronics Modulation Engineers 5G Fifth Generation IEI Information Element QCI QoS class of identifier Identifier 5GC 5G Core network IEIDL Information Element QCL Quasi co-location Identifier Data Length AC Application Client IETF Internet Engineering QFI QoS Flow ID, QoS Task Force Flow Identifier ACK Acknowledgement IF Infrastructure QoS Quality of Service ACID Application Client IM Interference QPSK Quadrature Identification Measurement, (Quaternary) Phase Intermodulation, IP Shift Keying Multimedia AF Application Function IMC IMS Credentials QZSS Quasi-Zenith Satellite System AM Acknowledged Mode IMEI International Mobile RA-RNTI Random Access RNTI Equipment Identity AMBR Aggregate Maximum Bit IMGI International mobile RAB Radio Access Bearer, Rate group identity Random Access Burst AMF Access and Mobility IMPI IP Multimedia Private RACH Random Access Management Function Identity Channel AN Access Network IMPU IP Multimedia PUblic RADIUS Remote Authentication identity Dial In User Service ANR Automatic Neighbour IMS IP Multimedia RAN Radio Access Network Relation Subsystem AP Application Protocol, IMSI International Mobile RAND RANDom number Antenna Port, Access Subscriber Identity (used for Point authentication) API Application Programming IOT Internet of Things RAR Random Access Response Interface APN Access Point Name IP Internet Protocol RAT Radio Access Technology ARP Allocation and Retention Ipsec IP Security, Internet RAU Routing Area Update Priority Protocol Security ARQ Automatic Repeat Request IP-CAN IP-Connectivity RB Resource block, Radio Access Network Bearer AS Access Stratum IP-M IP Multicast RBG Resource block group ASP Application Service IPv4 Internet Protocol REG Resource Element Provider Version 4 Group ASN.1 Abstract Syntax Notation IPv6 Internet Protocol Rel Release One Version 6 AUSF Authentication Server IR Infrared REQ REQuest Function AWGN Additive White Gaussian IS In Sync RF Radio Frequency Noise BAP Backhaul Adaptation IR Integration Reference RI Rank Indicator Protocol Point BCH Broadcast Channel ISDN Integrated Services RIV Resource indicator Digital Network value BER Bit Error Ratio ISIM IM Services Identity RL Radio Link Module BFD Beam Failure Detection ISO International RLC Radio Link Control, Organisation for Radio Link Control Standardisation layer BLER Block Error Rate ISP Internet Service RLC AM RLC Acknowledged Provider Mode BPSK Binary Phase Shift Keying IWF Interworking-Function RLC UM RLC Unacknowledged Mode BRAS Broadband Remote I-WLAN Interworking WLAN RLF Radio Link Failure Access Server BSS Business Support System Constraint length of the RLM Radio Link Monitoring convolutional code, USIM Individual key BS Base Station kB Kilobyte (1000 bytes) RLM-RS Reference Signal for RLM BSR Buffer Status Report kbps kilo-bits per second RM Registration Management BW Bandwidth Kc Ciphering key RMC Reference Measurement Channel BWP Bandwidth Part Ki Individual subscriber RMSI Remaining MSI, authentication key Remaining Minimum System Information C-RNTI Cell Radio Network KPI Key Performance RN Relay Node Temporary Identity Indicator CA Carrier Aggregation, KQI Key Quality Indicator RNC Radio Network Certification Authority Controller CAPEX CAPital Expenditure KSI Key Set Identifier RNL Radio Network Layer CBRA Contention Based Random ksps kilo-symbols 
per RNTI Radio Network Access second Temporary Identifier CC Component Carrier, KVM Kernel Virtual ROHC RObust Header Country Code, Machine Compression Cryptographic Checksum CCA Clear Channel Assessment L1 Layer 1 (physical RRC Radio Resource layer) Control, Radio Resource Control layer CCE Control Channel Element L1-RSRP Layer 1 reference RRM Radio Resource signal received power Management CCCH Common Control Channel L2 Layer 2 (data link RS Reference Signal layer) CE Coverage Enhancement L3 Layer 3 (network RSRP Reference Signal layer) Received Power CDM Content Delivery Network LAA Licensed Assisted RSRQ Reference Signal Access Received Quality CDMA Code-Division Multiple LAN Local Area Network RSSI Received Signal Access Strength Indicator CFRA Contention Free Random LADN Local Area Data RSU Road Side Unit Access Network CG Cell Group LBT Listen Before Talk RSTD Reference Signal Time difference CGF Charging Gateway LCM LifeCycle RTP Real Time Protocol Function Management CHF Charging Function LCR Low Chip Rate RTS Ready-To-Send CI Cell Identity LCS Location Services RTT Round Trip Time CID Cell-ID (e.g., positioning LCID Logical Channel ID Rx Reception, Receiving, method) Receiver CIM Common Information LI Layer Indicator S1AP S1 Application Model Protocol CIR Carrier to Interference LLC Logical Link Control, S1-MMES1 for the control plane Ratio Low Layer Compatibility CK Cipher Key LPLMN Local PLMN S1-U S1 for the user plane CM Connection Management, LPP LTE Positioning S-GW Serving Gateway Conditional Mandatory Protocol CMAS Commercial Mobile Alert LSB Least Significant Bit S-RNTI SRNC Radio Network Service Temporary Identity CMD Command LTE Long Term Evolution S-TMSI SAE Temporary Mobile Station Identifier CMS Cloud Management LWA LTE-WLAN SA Standalone operation System aggregation mode CO Conditional Optional LWIP LTE/WLAN Radio SAE System Architecture Level Integration with Evolution IPsec Tunnel COMP Coordinated Multi-Point LTE Long Term Evolution SAP Service Access Point CORESET Control Resource Set M2M Machine-to-Machine SAPD Service Access Point Descriptor COTS Commercial Off-The- MAC Medium Access SAPI Service Access Point Shelf Control (protocol Identifier layering context) CP Control Plane, Cyclic MAC Message authentication SCC Secondary Component Prefix, Connection Point code Carrier, Secondary CC (security/encryption context) CPD Connection Point MAC-A MAC used for SCell Secondary Cell Descriptor authentication and key agreement (TSG T WG3 context) CPE Customer Premise MAC-I MAC used for data SCEF Service Capability Equipment integrity of signalling Exposure Function messages (TSG T WG3 context) CPICH Common Pilot Channel MANO Management and SC-FDMA Single Carrier Orchestration Frequency Division Multiple Access CQI Channel Quality Indicator MBMS Multimedia Broadcast SCG Secondary Cell Group and Multicast Service CPU CSI processing unit, MBSFN Multimedia Broadcast SCM Security Context Central Processing Unit multicast service Management Single Frequency Network C/R Command/Response field MCC Mobile Country Code SCS Subcarrier Spacing bit CRAN Cloud Radio Access MCG Master Cell Group SCTP Stream Control Network, Cloud RAN Transmission Protocol CRB Common Resource Block MCOT Maximum Channel SDAP Service Data Occupancy Time Adaptation Protocol, Service Data Adaptation Protocol layer CRC Cyclic Redundancy Check MCS Modulation and coding SDL Supplementary scheme Downlink CRI Channel-State Information MDAF Management Data SDNF Structured Data Resource Indicator, CSI- Analytics 
Function Storage Network RS Resource Indicator Function C-RNTI Cell RNTI MDAS Management Data SDP Session Description Analytics Service Protocol CS Circuit Switched MDT Minimization of Drive SDSF Structured Data Tests Storage Function CSAR Cloud Service Archive ME Mobile Equipment SDU Service Data Unit CSI Channel-State Information MeNB master eNB SEAF Security Anchor Function CSI-IM CSI Interference MER Message Error Ratio SeNB secondary eNB Measurement CSI-RS CSI Reference Signal MGL Measurement Gap SEPP Security Edge Length Protection Proxy CSI-RSRP CSI reference signal MGRP Measurement Gap SFI Slot format indication received power Repetition Period CSI-RSRQ CSI reference signal MIB Master Information SFTD Space-Frequency Time received quality Block, Management Diversity, SFN and Information Base frame timing difference CSI-SINR CSI signal-to-noise and MIMO Multiple Input SFN System Frame Number interference ratio Multiple Output CSMA Carrier Sense Multiple MLC Mobile Location SgNB Secondary gNB Access Centre CSMA/CA CSMA with collision MM Mobility Management SGSN Serving GPRS Support avoidance Node CSS Common Search Space, MME Mobility Management S-GW Serving Gateway Cell-specific Search Space Entity CTF Charging Trigger Function MN Master Node SI System Information CTS Clear-to-Send MNO Mobile Network SI-RNTI System Information Operator RNTI CW Codeword MO Measurement Object, SIB System Information Mobile Originated Block CWS Contention Window Size MPBCH MTC Physical SIM Subscriber Identity Broadcast CHannel Module D2D Device-to-Device MPDCCH MTC Physical SIP Session Initiated Downlink Control Protocol CHannel DC Dual Connectivity, Direct MPDSCH MTC Physical SiP System in Package Current Downlink Shared CHannel DCI Downlink Control MPRACH MTC Physical SL Sidelink Information Random Access CHannel DF Deployment Flavour MPUSCH MTC Physical Uplink SLA Service Level Shared Channel Agreement DL Downlink MPLS MultiProtocol Label SM Session Management Switching DMTF Distributed Management MS Mobile Station SMF Session Management Task Force Function DPDK Data Plane Development MSB Most Significant Bit SMS Short Message Service Kit DM-RS, Demodulation MSC Mobile Switching SMSF SMS Function DMRS Reference Signal Centre DN Data network MSI Minimum System SMTC SSB-based Information, MCH Measurement Timing Scheduling Configuration Information DNN Data Network Name MSID Mobile Station SN Secondary Node, Identifier Sequence Number DNAI Data Network Access MSIN Mobile Station SoC System on Chip Identifier Identification Number DRB Data Radio Bearer MSISDN Mobile Subscriber SON Self-Organizing ISDN Number Network DRS Discovery Reference MT Mobile Terminated, SpCell Special Cell Signal Mobile Termination DRX Discontinuous Reception MTC Machine-Type SP-CSI-RNTI Semi-Persistent Communications CSI RNTI DSL Domain Specific mMTC massive MTC, massive SPS Semi-Persistent Language. 
Digital Machine-Type Scheduling Subscriber Line Communications DSLAM DSL Access Multiplexer MU-MIMO Multi User MIMO SQN Sequence number DwPTS Downlink Pilot Time Slot MWUS MTC wake-up signal, SR Scheduling Request MTC WUS E-LAN Ethernet Local Area NACK Negative SRB Signalling Radio Network Acknowledgement Bearer E2E End-to-End NAI Network Access SRS Sounding Reference Identifier Signal ECCA extended clear channel NAS Non-Access Stratum, SS Synchronization Signal assessment, extended Non-Access Stratum CCA layer ECCE Enhanced Control NCT Network Connectivity SSB Synchronization Signal Channel Element, Topology Block Enhanced CCE ED Energy Detection NC-JT Non-Coherent Joint SSID Service Set Identifier Transmission EDGE Enhanced Datarates for NEC Network Capability SS/PBCH Block GSM Evolution (GSM Exposure Evolution) EAS Edge Application Server NE-DC NR-E-UTRA Dual SSBRI SS/PBCH Block Connectivity Resource Indicator, Synchronization Signal Block Resource Indicator EASID Edge Application Server NEF Network Exposure SSC Session and Service Identification Function Continuity ECS Edge Configuration Server NF Network Function SS-RSRP Synchronization Signal based Reference Signal Received Power ECSP Edge Computing Service NFP Network Forwarding SS-RSRQ Synchronization Signal Provider Path based Reference Signal Received Quality EDN Edge Data Network NFPD Network Forwarding SS-SINR Synchronization Signal Path Descriptor based Signal to Noise and Interference Ratio EEC Edge Enabler Client NFV Network Functions SSS Secondary Virtualization Synchronization Signal EECID Edge Enabler Client NFVI NFV Infrastructure SSSG Search Space Set Identification Group EES Edge Enabler Server NFVO NFV Orchestrator SSSIF Search Space Set Indicator EESID Edge Enabler Server NG Next Generation, Next SST Slice/Service Types Identification Gen EHE Edge Hosting NGEN-DC NG-RAN E-UTRA- SU-MIMO Single User MIMO Environment NR Dual Connectivity EGMF Exposure Governance NM Network Manager SUL Supplementary Uplink tableManagement Function EGPRS Enhanced GPRS NMS Network Management TA Timing Advance, System Tracking Area EIR Equipment Identity N-POP Network Point of TAC Tracking Area Code Register Presence eLAA enhanced Licensed NMIB, N-MIB Narrowband MIB TAG Timing Advance Group Assisted Access, enhanced LAA EM Element Manager NPBCH Narrowband Physical TAI Tracking Area Identity Broadcast CHannel eMBB Enhanced Mobile NPDCCH Narrowband Physical TAU Tracking Area Update Broadband Downlink Control CHannel EMS Element Management NPDSCH Narrowband Physical TB Transport Block System Downlink Shared CHannel eNB evolved NodeB, E- NPRACH Narrowband Physical TBS Transport Block Size UTRAN Node B Random Access CHannel EN-DC E-UTRA-NR Dual NPUSCH Narrowband Physical TBD To Be Defined Connectivity Uplink Shared CHannel EPC Evolved Packet Core NPSS Narrowband Primary TCI Transmission Synchronization Signal Configuration Indicator EPDCCH enhanced PDCCH, NSSS Narrowband TCP Transmission enhanced Physical Secondary Communication Downlink Control Cannel Synchronization Signal Protocol EPRE Energy per resource NR New Radio, Neighbour TDD Time Division Duplex element Relation EPS Evolved Packet System NRF NF Repository TDM Time Division Function Multiplexing EREG enhanced REG, enhanced NRS Narrowband Reference TDMA Time Division Multiple resource element groups Signal Access ETSI European NS Network Service TE Terminal Equipment Telecommunications Standards Institute ETWS Earthquake and Tsunami NSA Non-Standalone TEID Tunnel End Point Warning 
System operation mode Identifier eUICC embedded UICC, NSD Network Service TFT Traffic Flow Template embedded Universal Descriptor Integrated Circuit Card E-UTRA Evolved UTRA NSR Network Service TMSI Temporary Mobile Record Subscriber Identity E-UTRAN Evolved UTRAN NSSAI Network Slice TNL Transport Network Selection Assistance Layer Information EV2X Enhanced V2X S-NNSAI Single-NSSAI TPC Transmit Power Control F1AP F1 Application Protocol NSSF Network Slice TPMI Transmitted Precoding Selection Function Matrix Indicator F1-C F1 Control plane interface NW Network TR Technical Report F1-U F1 User plane interface NWUS Narrowband wake-up TRP, TRxP Transmission signal, Narrowband Reception Point WUS FACCH Fast Associated Control NZP Non-Zero Power TRS Tracking Reference CHannel Signal FACCH/F Fast Associated Control O&M Operation and TRx Transceiver Channel/Full rate Maintenance FACCH/H Fast Associated Control ODU2 Optical channel Data TS Technical Channel/Half rate Unit-type 2 Specifications, Technical Standard FACH Forward Access Channel OFDM Orthogonal Frequency TTI Transmission Time Division Multiplexing Interval FAUSCH Fast Uplink Signalling OFDMA Orthogonal Frequency Tx Transmission, Channel Division Multiple Transmitting, Access Transmitter FB Functional Block OOB Out-of-band U-RNTI UTRAN Radio Network Temporary Identity FBI Feedback Information OOS Out of Sync UART Universal Asynchronous Receiver and Transmitter FCC Federal Communications OPEX OPerating EXpense UCI Uplink Control Commission Information FCCH Frequency Correction OSI Other System UE User Equipment CHannel Information FDD Frequency Division OSS Operations Support UDM Unified Data Duplex System Management FDM Frequency Division OTA over-the-air UDP User Datagram Multiplex Protocol FDMA Frequency Division PAPR Peak-to-Average UDSF Unstructured Data Multiple Access Power Ratio Storage Network Function FE Front End PAR Peak to Average Ratio UICC Universal Integrated Circuit Card FEC Forward Error Correction PBCH Physical Broadcast UL Uplink Channel FFS For Further Study PC Power Control, UM Unacknowledged Personal Computer Mode FFT Fast Fourier PCC Primary Component UML Unified Modelling Transformation Carrier, Primary CC Language feLAA further enhanced Licensed PCell Primary Cell UMTS Universal Mobile Assisted Access, further Telecommunications enhanced LAA System FN Frame Number PCI Physical Cell ID, UP User Plane Physical Cell Identity FPGA Field-Programmable Gate PCEF Policy and Charging UPF User Plane Function Array Enforcement Function FR Frequency Range PCF Policy Control URI Uniform Resource Function Identifier FQDN Fully Qualified Domain PCRFPolicy Control and URL Uniform Resource Name Charging Rules Locator Function G-RNTI GERAN Radio Network PDCP Packet Data URLLC Ultra-Reliable and Low Temporary Identity Convergence Protocol, Latency Packet Data Convergence Protocol layer GERAN GSM EDGE RAN, GSM PDCCH Physical Downlink USB Universal Serial Bus EDGE Radio Access Control Channel Network GGSN Gateway GPRS Support PDCP Packet Data USIM Universal Subscriber Node Convergence Protocol Identity Module GLONASS GLObal'naya PDN Packet Data Network, USS UE-specific search NAvigatsionnaya Public Data Network space Sputnikovaya Sistema (Engl.: Global Navigation Satellite System) gNB Next Generation NodeB PDSCH Physical Downlink UTRA UMTS Terrestrial Shared Channel Radio Access gNB-CUg NB-centralized unit, Next PDU Protocol Data Unit UTRAN Universal Terrestrial Generation NodeB Radio Access Network centralized unit gNB-DUg 
NB-distributed unit, Next PEI Permanent Equipment UwPTS Uplink Pilot Time Slot Generation NodeB Identifiers distributed unit GNSS Global Navigation PFD Packet Flow V2I Vehicle-to- Satellite System Description Infrastruction GPRS General Packet Radio P-GW PDN Gateway V2P Vehicle-to-Pedestrian Service GPSI Generic Public PHICH Physical hybrid-ARQ V2V Vehicle-to-Vehicle Subscription Identifier indicator channel GSM Global System for Mobile PHY Physical layer V2X Vehicle-to-everything Communications, Groupe Special Mobile GTP GPRS Tunneling Protocol PLMN Public Land Mobile VIM Virtualized Network Infrastructure Manager GTP-U GPRS Tunnelling Protocol PIN Personal Identification VL Virtual Link, for User Plane Number GTS Go To Sleep Signal PM Performance VLAN Virtual LAN, Virtual (related to WUS) Measurement Local Area Network GUMMEI Globally Unique MME PMI Precoding Matrix VM Virtual Machine Identifier Indicator GUTI Globally Unique PNF Physical Network VNF Virtualized Network Temporary UE Identity Function Function HARQ Hybrid ARQ, Hybrid PNFD Physical Network VNFFG VNF Forwarding Automatic Repeat Request Function Descriptor Graph HANDO Handover PNFR Physical Network VNFFGD VNF Forwarding Function Record Graph Descriptor HFN HyperFrame Number POC PTT over Cellular VNFM VNF Manager HHO Hard Handover PP, PTP Point-to-Point VOIP Voice-over-IP, Voice- over-Internet Protocol HLR Home Location Register PPP Point-to-Point Protocol VPLMN Visited Public Land Mobile Network HN Home Network PRACH Physical RACH VPN Virtual Private Network HO Handover PRB Physical resource VRB Virtual Resource Block block HPLMN Home Public Land Mobile PRG Physical resource WiMAX Worldwide Network block group Interoperability for Microwave Access HSDPA High Speed Downlink ProSe Proximity Services, WLAN Wireless Local Area Packet Access Proximity-Based Network Service HSN Hopping Sequence PRS Positioning Reference WMAN Wireless Metropolitan Number Signal Area Network HSPA High Speed Packet Access PRR Packet Reception WPAN Wireless Personal Area Radio Network HSS Home Subscriber Server PS Packet Services X2-C X2-Control plane HSUPA High Speed Uplink Packet PSBCH Physical Sidelink X2-U X2-User plane Access Broadcast Channel HTTP Hyper Text Transfer PSDCH Physical Sidelink XML extensible Markup Protocol Downlink Channel Language HTTPS Hyper Text Transfer PSCCH Physical Sidelink XRES EXpected user Protocol Secure (https is Control Channel RESponse http/1.1 over SSL, i.e. port 443) I-Block Information Block PSSCH Physical Sidelink XOR exclusive OR Shared Channel ICCID Integrated Circuit Card PSCell Primary SCell ZC Zadoff-Chu Identification IAB Integrated Access and PSS Primary ZP Zero Po Backhaul Synchronization Signal ICIC Inter-Cell Interference PSTN Public Switched Coordination Telephone Network ID Identity, identifier PT-RS Phase-tracking reference signal IDFT Inverse Discrete Fourier PTT Push-to-Talk Transform IE Information element PUCCH Physical Uplink Control Channel

The foregoing description provides illustration and description of various example embodiments, but is not intended to be exhaustive or to limit the scope of embodiments to the precise forms disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of various embodiments. Where specific details are set forth in order to describe example embodiments of the disclosure, it should be apparent to one skilled in the art that the disclosure can be practiced without, or with variation of, these specific details. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
