Use Cases of Machine Learning in Queueing Theory Based on a GI/G/K System

Research output: Contribution to journal · Article · peer-review

Abstract

Machine learning (ML) in queueing theory combines the predictive and optimization capabilities of ML with the analytical frameworks of queueing models to improve performance in systems such as telecommunications, manufacturing, and service industries. In this paper we give an overview of how ML is applied in queueing theory, highlighting its use cases, benefits, and challenges. We consider a classical GI/G/K-type queueing system, which is complex enough that analytical results are difficult to obtain: it consists of K homogeneous servers, an arbitrary distribution of the time between arriving customers, and identically distributed service times that also follow an arbitrary distribution. Different simulation techniques are used to obtain the training and test samples needed to apply supervised ML algorithms to regression and classification problems, and some results from the approximate analysis of such a system are used to verify the outcomes. ML algorithms are also used to solve both parametric and dynamic optimization problems, the latter by means of a reinforcement learning approach. It is shown that the application of ML in queueing theory is a promising technique for handling the complexity and stochastic nature of such systems.
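As a rough illustration of the simulation step described in the abstract, the sketch below generates customer waiting times for a GI/G/K queue via the classical Kiefer-Wolfowitz workload recursion; such per-customer records could serve as training samples for a supervised model. The function names, the specific distributions, and the use of this particular recursion are illustrative assumptions, not taken from the paper itself.

```python
import random

def simulate_gi_g_k(n_customers, k, interarrival, service, seed=0):
    """Simulate a GI/G/K queue with the Kiefer-Wolfowitz recursion.

    `interarrival` and `service` are callables taking a random.Random
    instance and returning one random draw each.  Returns the waiting
    time experienced by each of the first `n_customers` customers.
    """
    rng = random.Random(seed)
    w = [0.0] * k                 # sorted residual-workload vector, one entry per server
    waits = []
    for _ in range(n_customers):
        waits.append(w[0])        # arriving customer joins the least-loaded server
        w[0] += service(rng)      # that server gains this customer's service time
        a = interarrival(rng)     # time until the next arrival
        w = sorted(max(x - a, 0.0) for x in w)  # workloads drain during the gap
    return waits

# Example: lognormal interarrivals, Erlang-2 services, K = 3 servers
waits = simulate_gi_g_k(
    10_000, 3,
    interarrival=lambda r: r.lognormvariate(0.0, 0.5),
    service=lambda r: r.gammavariate(2.0, 1.2),
)
mean_wait = sum(waits) / len(waits)
```

With these parameters the offered load is roughly 0.7 per server, so the system is stable and the empirical mean waiting time could be compared against the approximation results the paper mentions for verification.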
Original language: English
Article number: 776
Number of pages: 36
Journal: Mathematics
Volume: 13
Issue number: 5
DOIs
Publication status: Published - 25 Feb 2025

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 9 - Industry, Innovation, and Infrastructure

Fields of science

  • 101 Mathematics
  • 101019 Stochastics
  • 101018 Statistics
  • 101014 Numerical mathematics
  • 101024 Probability theory

JKU Focus areas

  • Digital Transformation
