This repository is the central repository for the microservices NWDAF

merce-fra/5G-NWDAF

5G NWDAF

This project implements a microservices-based 5G NWDAF (Network Data Analytics Function) following the 3GPP specifications.

It is developed entirely in Python and uses Apache Kafka for internal communication. Each microservice runs in its own Docker container, and the whole system is deployed with Docker Compose.

The project is structured to maximise modularity and reusability through two libraries: nwdaf-api and nwdaf-libcommon.

Dependencies

The project uses two different internal libraries:

  1. nwdaf-api, a package based on the nwdaf-3gpp-apis repository

    • Contains model classes generated from official 3GPP OpenAPI YAML files.
    • Primarily used for message payloads, ensuring compliance with 3GPP-defined data structures.
  2. nwdaf-libcommon, a package based on the nwdaf-libcommon repository

    • Provides platform-level code to streamline the development of new NWDAF services (e.g., AnLF, MTLF).
    • Includes utility functions, common patterns, and boilerplate code to minimise the effort required for implementing new microservices. It contains all the Kafka-related code.

You will need to have Docker and Docker Compose installed on your system to build and deploy the NWDAF.

Overview

Services

The project consists of multiple microservices, each with a specific role within the NWDAF ecosystem.

  • API Gateway: Acts as the single point of entry for incoming 3GPP-compliant requests, handling analytics subscriptions and coordinating the flow between services. It is based on the ApiGatewayService class from nwdaf-libcommon.

  • Throughput AnLF: Computes UE_LOC_THROUGHPUT analytics using data collected from GMLC and RAN, and an LSTM model that predicts throughput from location data. It is based on the AnlfService class from nwdaf-libcommon.

  • Throughput MTLF: Provides and manages the ML model used by the Throughput AnLF to make inferences, ensuring the model remains up-to-date and efficient. It is based on the MtlfService class from nwdaf-libcommon.

NB: The UE_LOC_THROUGHPUT analytics type and the RAN Event Exposure service are not defined in the 3GPP specifications; they have been implemented here for demonstration purposes.

In addition to these main NWDAF services, several additional services are deployed in the system:

  • GMLC: a GMLC stub serving an event exposure endpoint compliant with 3GPP specifications. It can either generate random location data, or provide data received from the CSV File Player.
  • RAN: a RAN stub serving a non-3GPP-compliant event exposure endpoint. It can either generate random RSRP data, or provide data received from the CSV File Player.
  • CSV File Player: a service that reads a CSV file line by line and pushes the data to other services (in our case, the GMLC and RAN stubs). If this service is not running, the NF stubs fall back to randomly generated data.
  • Notification Client: a service that can receive analytics notifications, display their content and push it to other services (e.g., Prometheus, Grafana, etc.)
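To make the Notification Client's role concrete, here is a minimal sketch of a notification receiver built only on Python's standard library. The handler, the endpoint path, and the 204 response code are illustrative assumptions, not the project's actual implementation; port 8181 matches the example below.

```python
# Hypothetical sketch of a minimal analytics-notification receiver:
# accept notifications over HTTP, print their content, acknowledge them.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and decode the notification body sent by the NWDAF.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        print("Received analytics notification:", body)
        # Acknowledge with 204 No Content (an assumption for this sketch).
        self.send_response(204)
        self.end_headers()

    def log_message(self, *args):
        # Silence the default per-request logging.
        pass

def make_server(port=8181):
    """Bind the receiver; call .serve_forever() on the result to run it."""
    return HTTPServer(("", port), NotificationHandler)
```

Running `make_server().serve_forever()` would keep the receiver listening on port 8181; a real Notification Client would forward the data to Prometheus instead of just printing it.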

Finally, a few technical services are also deployed:

  • Zookeeper: a centralised server for maintaining configuration information, naming, and providing distributed synchronisation and group services. Kafka relies on Zookeeper under the hood, so it must be deployed in order to use Kafka.
  • Kafka: an event-streaming platform enabling topic-based producer/consumer interactions. In this project, Kafka is used for internal messaging between microservices.
  • Kafka Topics Init: a service that creates all the Kafka topics before the NWDAF services are launched. Initialising the topics in advance avoids time-consuming rebalancing operations when a message is first sent on a given topic.
  • Grafana: a data visualisation tool that can read data from HTTP endpoints and display it. In our case, it is used to read the data pushed to Prometheus by the Notification Client.
  • Prometheus: a metrics storage system that provides easy-to-use data push/pull mechanisms. In our case, it simply acts as an intermediary between the Notification Client and Grafana.
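The role of the Kafka Topics Init service can be sketched as follows, assuming the standard kafka-topics CLI that ships with Kafka. The topic names, partition settings, and helper functions below are hypothetical placeholders, not the project's actual topic list.

```python
# Hypothetical sketch: pre-create Kafka topics before the NWDAF services
# start. Topic names are illustrative only, not the project's real topics.
import subprocess

TOPICS = ["analytics-requests", "analytics-results", "model-updates"]

def topic_create_command(topic, bootstrap_server="kafka:9092"):
    """Build the kafka-topics CLI invocation that creates one topic."""
    return [
        "kafka-topics", "--create",
        "--topic", topic,
        "--bootstrap-server", bootstrap_server,
        "--partitions", "1",
        "--replication-factor", "1",
    ]

def init_topics(run=subprocess.run):
    """Create every topic; `run` is injectable so the logic is testable."""
    for topic in TOPICS:
        run(topic_create_command(topic), check=True)
```

In the actual deployment this kind of logic runs once, inside the Kafka Topics Init container, before the other services come up.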

Kafka

All these services communicate with each other via Apache Kafka. The topics that are used and the way microservices interact with Kafka are detailed in the Kafka Topics Specification document.

Services configuration

Each service's hostname, port and log level can be configured through the .env file used by the Docker Compose deployment. For example, these are the API Gateway's default values:

API_GW_SERVICE_NAME=api-gateway
API_GW_SERVICE_PORT=5000
API_GW_LOG_LEVEL=INFO
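As an illustration, a service could pick up these values at startup roughly like this. The variable names match the README; the GatewayConfig class and loader function are hypothetical, not part of nwdaf-libcommon.

```python
# Hypothetical sketch of reading the .env-driven settings at runtime.
# The defaults mirror the API Gateway values shown above.
import os
from dataclasses import dataclass

@dataclass
class GatewayConfig:
    name: str
    port: int
    log_level: str

def load_gateway_config(env=os.environ):
    """Build a config object from environment variables, with defaults."""
    return GatewayConfig(
        name=env.get("API_GW_SERVICE_NAME", "api-gateway"),
        port=int(env.get("API_GW_SERVICE_PORT", "5000")),
        log_level=env.get("API_GW_LOG_LEVEL", "INFO"),
    )
```

Docker Compose injects the .env values into each container's environment, so inside the container they are plain environment variables.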

Build Docker images

Copy local packages

If you built nwdaf-3gpp-apis and nwdaf-libcommon locally, you will need to copy the resulting packages into the build context in order to build the services. For example:

mkdir -p local_packages
cp ../nwdaf-3gpp-apis/output/dist/nwdaf_api-0.0.0-py3-none-any.whl ./local_packages
cp ../nwdaf-libcommon/dist/nwdaf_libcommon-0.0.0-py3-none-any.whl ./local_packages

Of course, replace the paths and version numbers with values relevant to your own environment.

You will also need to set this environment variable in the .env file:

USE_LOCAL_PACKAGES=1

Build containers

To build all the microservices, run the following command:

docker compose build

If the containers need to be rebuilt from scratch, the --no-cache option can be passed to the command.

Deploy the services

This command deploys the NWDAF and all the aforementioned additional services:

docker compose up

The -d option can be added if you want to run your NWDAF in detached mode.

Test the analytics subscription

Here is an example analytics subscription payload that can be sent to the NWDAF in order to test it:

POST 127.0.0.1:5000/nnwdaf-eventssubscription/v1/subscriptions

{
  "eventSubscriptions": [
    {
      "event": "UE_LOC_THROUGHPUT",
      "tgtUe": {
        "supis": [
          "imsi-208890000000003"
        ]
      }
    }
  ],
  "notificationURI": "http://notification-client:8181/analytics-notification",
  "notifCorrId": "test"
}

The NWDAF should reply with a 3GPP-compliant response, and analytics notifications should be sent periodically to the Notification Client on port 8181.
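The same subscription can also be sent programmatically. Below is a sketch using only Python's standard library; the helper functions are illustrative, while the payload, endpoint path, and port match the example above.

```python
# Hypothetical sketch of sending the example subscription from Python.
import json
import urllib.request

def build_subscription(supi, notification_uri, corr_id):
    """Assemble the subscription payload shown in the README."""
    return {
        "eventSubscriptions": [
            {"event": "UE_LOC_THROUGHPUT", "tgtUe": {"supis": [supi]}}
        ],
        "notificationURI": notification_uri,
        "notifCorrId": corr_id,
    }

def send_subscription(base_url, payload):
    """POST the payload to the NWDAF's events-subscription endpoint."""
    req = urllib.request.Request(
        base_url + "/nnwdaf-eventssubscription/v1/subscriptions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

With the stack running, calling `send_subscription("http://127.0.0.1:5000", build_subscription("imsi-208890000000003", "http://notification-client:8181/analytics-notification", "test"))` should trigger the same behaviour as the raw request above.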
