COMP is a Python library designed for mathematical modeling and coordinated planning in two-level organizational-production systems. It provides tools to find compromise solutions that balance the interests of a central coordinating body (the "Center") and its subordinate elements (subsystems or production units). The library implements several mathematical models (currently linear, with preparatory work toward combinatorial models) and coordination strategies based on the research of Prof. O. A. Pavlov and Bachelor M. Ye. Kyselov. It also includes a graphical user interface (GUI) for easier interaction, data management, and visualization of results.
The package is available on PyPI as `lp-comp`.
- Getting Started
- Core Concepts & Theory
- Prerequisites
- Installation & Setup
- Running the Application
- Project Structure
- Code Overview
- Visualization of Empirical Analysis
- Testing
- Publishing to PyPI (For Maintainers)
- License
The COMP library provides a framework for addressing coordinated planning problems in hierarchical systems, specifically two-level organizational-production systems (Дворівнева Організаційно-Виробнича Система - ДОВС). In such systems, a central authority (Center) needs to coordinate the plans of several subordinate Elements, each with its own local goals and constraints. The core challenge is to find plans that are not only feasible but also represent a fair compromise between the Center's global goals and the Elements' local interests.
This project implements:
- Data models to represent the structure and parameters of the Center and Elements.
- Solver modules for different types of Element-level optimization problems (currently linear, with a foundation for future combinatorial models).
- Three distinct coordination strategies for the Center to derive compromise plans.
- Parallel execution of subproblems using a heuristic scheduling algorithm based on empirical complexity estimates of the simplex method.
- A PyQt5-based GUI for data loading, configuration, execution of planning tasks, and visualization/saving of results.
- Mathematical Modeling: Define two-level systems with a Center and multiple Elements.
- Element Models:
  - `ElementLinearFirst`: Standard linear programming model for an element.
  - `ElementLinearSecond`: Linear programming model with additional private decision variables for an element, allowing for more nuanced negotiation.
- Coordination Strategies:
  - `STRICT_PRIORITY`: Center prioritizes its goals first, then allows elements to optimize within those bounds.
  - `GUARANTEED_CONCESSION`: Center ensures elements achieve a certain percentage of their individual optimal performance.
  - `WEIGHTED_BALANCE`: Center uses a weighted sum approach to balance its goals with those of the elements, iterating to find a preferred solution.
- Solvers: Utilizes Google OR-Tools (specifically the GLOP solver) for underlying linear programming tasks.
- Parallelization: Employs a custom heuristic based on an empirical model of LP solver complexity to distribute tasks across multiple processor cores for faster computation.
- Data Handling:
- JSON format for loading and saving system configurations and results.
- Data generation capabilities for creating test scenarios.
- Graphical User Interface (GUI):
- Load system data from JSON files.
- Configure Center settings (coordination type, parallelization parameters).
- Inspect Element data.
- Run coordinated planning calculations.
- View results in a user-friendly format.
- Copy results to clipboard or save detailed results to a JSON file.
- Extensibility: Designed with a modular structure to facilitate the addition of new element models or coordination algorithms.
The system architecture considered is a two-level hierarchy:
- Center: The central decision-making unit responsible for overall system performance and coordination.
- Elements: Subordinate units (e.g., departments, production lines) that have their own local goals, resources,
and constraints.
Elements can be of different types (e.g., `DECENTRALIZED`, `NEGOTIATED`).
The fundamental problem is to determine plans $y_e$ for each element $e$ that reconcile the Center's functional $d_e^T y_e$ with the element's own functional $c_e^T y_e$:

$\max d_e^T y_e$ (the Center's goal) and $\max c_e^T y_e$ (the element's goal)

Subject to:

$A^e y_e \le b_e$, $0 \le b_{ej}^1 \le y_{ej} \le b_{ej}^2$,

where $y_e$ is element $e$'s plan vector, $A^e$ and $b_e$ define its resource constraints, $b_{ej}^1$ and $b_{ej}^2$ are lower and upper bounds on its decision variables, and $c_e$ and $d_e$ are the objective coefficient vectors of the element and the Center, respectively. Since both goals generally cannot be maximized simultaneously, a compromise is required.
The COMP library focuses on constructive methods to find such compromise plans based on modifying objective functions and constraints.
The library implements three primary strategies for linear models, based on the work of Prof. O. A. Pavlov and Bachelor M. Ye. Kyselov:
The Center first dictates terms based on its own goals.
- For each element $e$, the Center determines the optimal value of its own functional: $f_{c\_opt_e} = \max (d_e^T y_e)$, subject to element $e$'s constraints: $A^e y_e \le b_e$, $0 \le b_{ej}^1 \le y_{ej} \le b_{ej}^2$.
- Then, element $e$ optimizes its local objective $c_e^T y_e$ under the additional constraint that the Center's goal for it must be met: $d_e^T y_e \ge f_{c\_opt_e}$, subject to element $e$'s original constraints. (A sketch of this two-stage flow follows the list.)
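Below is a minimal, self-contained sketch of this two-stage flow using OR-Tools' GLOP (the LP solver the library relies on). The `solve_lp` helper and all data are illustrative, not the library's API:

```python
from ortools.linear_solver import pywraplp


def solve_lp(objective, A, b, lower, upper, priority=None):
    """Maximize objective^T y subject to A y <= b, variable bounds, and an
    optional priority constraint (coeffs, rhs) enforcing coeffs^T y >= rhs."""
    solver = pywraplp.Solver.CreateSolver("GLOP")
    n = len(objective)
    y = [solver.NumVar(lower[j], upper[j], f"y_{j}") for j in range(n)]
    for row, rhs in zip(A, b):
        solver.Add(sum(row[j] * y[j] for j in range(n)) <= rhs)
    if priority is not None:
        coeffs, rhs = priority
        solver.Add(sum(coeffs[j] * y[j] for j in range(n)) >= rhs)
    solver.Maximize(sum(objective[j] * y[j] for j in range(n)))
    assert solver.Solve() == pywraplp.Solver.OPTIMAL
    return solver.Objective().Value(), [v.solution_value() for v in y]


# Illustrative element data: d is the Center's objective, c the element's.
d, c = [3.0, 1.0], [1.0, 2.0]
A, b = [[1.0, 1.0]], [10.0]
lower, upper = [0.0, 0.0], [8.0, 8.0]

# Stage 1: the Center's optimum for this element.
f_c_opt, _ = solve_lp(d, A, b, lower, upper)

# Stage 2: the element optimizes its own goal with d^T y >= f_c_opt enforced.
f_el, y_plan = solve_lp(c, A, b, lower, upper, priority=(d, f_c_opt))
print(f_c_opt, f_el, y_plan)
```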
The Center ensures that each element achieves at least a certain fraction of its own best possible performance.
- For each element $e$, its individual optimal functional value $f_{el\_opt_e}$ is determined, subject to its own constraints:
  - For `ElementLinearFirst` type: $f_{el\_opt_e} = \max (c_e^T y_e)$
  - For `ElementLinearSecond` type (with private variables $y_e^*$): $f_{el\_opt_e} = \max (c_e^T y_e^*)$
- The Center then optimizes its functional $d_e^T y_e$ for element $e$, subject to element $e$'s original constraints and the concession constraint:
  - For `ElementLinearFirst` type: $c_e^T y_e \ge f_{el\_opt_e} \cdot (1 - \delta_e)$
  - For `ElementLinearSecond` type: $c_e^T y_e^* \ge f_{el\_opt_e} \cdot (1 - \delta_e)$

  where $\delta_e$ is the concession parameter ($0 \le \delta_e \le 1$). A sketch of this constraint appears below.
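A compact sketch of the concession constraint with illustrative data; `f_el_opt` would normally be computed by a first GLOP solve of the element's own problem (here it is hard-coded for brevity):

```python
from ortools.linear_solver import pywraplp

# Illustrative data only: c is the element's objective, d the Center's.
c, d = [1.0, 2.0], [3.0, 1.0]
delta = 0.2  # concession parameter, 0 <= delta <= 1

solver = pywraplp.Solver.CreateSolver("GLOP")
y = [solver.NumVar(0.0, 8.0, f"y_{j}") for j in range(2)]
solver.Add(y[0] + y[1] <= 10.0)

# Element's own optimum f_el_opt: max y_0 + 2*y_1 over the same feasible set
# (evaluates to 18.0 at y = (2, 8)); normally found by a separate GLOP solve.
f_el_opt = 18.0

# Concession constraint: the element keeps at least (1 - delta) of its optimum.
solver.Add(sum(c[j] * y[j] for j in range(2)) >= f_el_opt * (1 - delta))

# The Center then maximizes its own functional d^T y.
solver.Maximize(sum(d[j] * y[j] for j in range(2)))
if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print([v.solution_value() for v in y], solver.Objective().Value())
```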
The Center uses a weighted sum to combine its goals with the element's goals.
For each element $e$ and each candidate weight $w$ from `element_data.w`:

The following combined goal is maximized:

- For `ElementLinearFirst` type: $\max \left( w \cdot d_e^T y_e + (1 - w) \cdot c_e^T y_e \right)$
- For `ElementLinearSecond` type: $\max \left( w \cdot d_e^T y_e + (1 - w) \cdot c_e^T y_e^* \right)$

Subject to element $e$'s original constraints.

After solving for all candidate weights, the Center selects its preferred compromise solution (e.g., the candidate with the best value of its own functional). A sketch of this loop follows.
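A sketch of the weight-iteration loop, assuming (as the strategy description suggests) that the combined objective is a convex combination of the two functionals; the data and candidate weights are illustrative:

```python
from ortools.linear_solver import pywraplp

c, d = [1.0, 2.0], [3.0, 1.0]     # element's and Center's objectives
candidate_weights = [0.25, 0.5, 0.75]  # illustrative stand-in for element_data.w

solutions = []
for w in candidate_weights:
    solver = pywraplp.Solver.CreateSolver("GLOP")
    y = [solver.NumVar(0.0, 8.0, f"y_{j}") for j in range(2)]
    solver.Add(y[0] + y[1] <= 10.0)
    # Combined goal: w * (Center's functional) + (1 - w) * (element's functional).
    solver.Maximize(sum((w * d[j] + (1 - w) * c[j]) * y[j] for j in range(2)))
    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        plan = [v.solution_value() for v in y]
        solutions.append((w, plan, sum(d[j] * plan[j] for j in range(2))))

# The Center then picks its preferred candidate, e.g. by its own functional.
best = max(solutions, key=lambda s: s[2])
print(best)
```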
While the current implementation focuses on linear models,
the theoretical framework also encompasses combinatorial models for
elements, particularly for scheduling problems.
The Element's Objective (Combinatorial Model):

The unconditional criterion for the operational efficiency of the element $k$ is to minimize the total weighted completion time of its jobs:

$\min_{\sigma_k} \sum_{j=1}^{n_k} \omega_j^{el}(T_k) \, C_{kj}(\sigma_k)$

where:

- $\sigma_k$ is a feasible schedule for element $k$;
- $n_k$ is the number of jobs for element $k$;
- $\omega_j^{el}(T_k) > 0$ are weight coefficients for the $j$-th job of element $k$;
- $T_k$ is a target completion time or a parameter influencing the weights;
- $C_{kj}(\sigma_k)$ is the completion time of job $j$ of element $k$ under schedule $\sigma_k$.

This is an NP-hard combinatorial optimization problem, and its efficient solution (a PSC-algorithm) is presented in [10] of the article.
Center's Objective (for Combinatorial Elements): The criterion for the organizational-production system as a whole, when elements have combinatorial models, is:

$\min \sum_{k=1}^{m} \sum_{j=1}^{n_k} \omega_j^{c}(T_k) \, C_{kj}(\sigma_k)$

where:

- $m$ is the total number of elements;
- $\omega_j^{c}(T_k) > 0$ are weight coefficients set by the Center for job $j$ of element $k$.
The problem of finding a compromise solution for this model decomposes into $m$ independent element-level subproblems, since the Center's criterion is separable across elements.
Compromise Criteria (Combinatorial Model - Theoretical):
First Compromise Criterion (Strict Priority by Center):
The element's schedule is chosen by first finding the optimal value of the Center's criterion for element $k$:

$f_{c\_opt_k} = \min_{\sigma_k} \sum_{j=1}^{n_k} \omega_j^{c}(T_k) \, C_{kj}(\sigma_k)$

where the minimum is taken over all feasible schedules $\sigma_k$. And element $k$ then optimizes its own criterion over the schedules that attain this optimum:

$\min_{\sigma_k} \sum_{j=1}^{n_k} \omega_j^{el}(T_k) \, C_{kj}(\sigma_k)$ subject to $\sum_{j=1}^{n_k} \omega_j^{c}(T_k) \, C_{kj}(\sigma_k) = f_{c\_opt_k}$,

where the remaining notation is as defined above. This mirrors the strict-priority strategy of the linear case.
Second Compromise Criterion (Guaranteed Concession for Element):
The Center chooses a schedule for element $k$ that optimizes its own criterion:

$\min_{\sigma_k} \sum_{j=1}^{n_k} \omega_j^{c}(T_k) \, C_{kj}(\sigma_k)$

Subject to:

$\sum_{j=1}^{n_k} \omega_j^{el}(T_k) \, C_{kj}(\sigma_k) \le f_{el\_opt_k} \cdot (1 + \delta_k)$

Where $f_{el\_opt_k}$ is element $k$'s individually optimal criterion value and $\delta_k \ge 0$ is a concession parameter the Center can tune to manage the trade-off between its own criterion and element $k$'s performance.
This section outlines the compromise solution for a two-level system where elements have linear models and share a
common resource constraint.
The components of the non-negative vectors $b_l$ (the resource amounts allocated to each element $l$) are themselves decision variables of the Center, constrained by a shared resource budget $\mathbf{B}$.
Compromise Solution:
The Center seeks to find plans $y_l$ and resource allocations $b_l$ that maximize its aggregate functional $\sum_{l=1}^{m} d_l^T y_l$.

Subject to:
- Element constraints: $A^l y_l \le b_l$ for each element $l$.
- Shared resource constraint: $\sum_{l=1}^m b_l \le \mathbf{B}$.
- Non-negativity and bounds: $b_l \ge 0$, $0 \le b_{lj}^1 \le y_{lj} \le b_{lj}^2$ (original bounds on decision variables).
- Performance guarantee for each element: $c_l^T y_l \ge f_l$ for each element $l$, where $f_l$ are target performance levels set by the Center.
This formulation extends the basic linear models by introducing interdependent resource allocation decided by the Center to maximize its global goal while satisfying individual element performance targets.
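A minimal sketch of this linked formulation, where the allocations $b_l$ are LP variables alongside the plans $y_l$; dimensions and data are illustrative (one decision variable per element):

```python
from ortools.linear_solver import pywraplp

m = 2                # number of elements
d = [3.0, 2.0]       # Center's coefficient for each element's single variable
c = [1.0, 1.0]       # each element's own coefficient
f = [2.0, 1.0]       # performance targets f_l set by the Center
B = 12.0             # shared resource budget

solver = pywraplp.Solver.CreateSolver("GLOP")
y = [solver.NumVar(0.0, 10.0, f"y_{l}") for l in range(m)]
b = [solver.NumVar(0.0, solver.infinity(), f"b_{l}") for l in range(m)]

for l in range(m):
    solver.Add(y[l] <= b[l])       # A^l y_l <= b_l (A^l = [1] in this toy case)
    solver.Add(c[l] * y[l] >= f[l])  # performance guarantee c_l^T y_l >= f_l
solver.Add(sum(b) <= B)            # shared resource constraint

solver.Maximize(sum(d[l] * y[l] for l in range(m)))
if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print([v.solution_value() for v in y], [v.solution_value() for v in b])
```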
To efficiently parallelize the solving of multiple element subproblems, the COMP library uses a heuristic scheduling algorithm implemented in the `get_order` function. This algorithm distributes the LP tasks (element subproblems) among the available processor threads, aiming to minimize the makespan (the total time to complete all tasks).
The effectiveness of such a heuristic depends on accurately estimating the duration of each subproblem.
The duration of each LP task is estimated using an empirical formula derived from statistical analysis of the standard simplex method's performance:

$T(m, n) = a \, m^b \, n^c \, (\ln n)^d + k \, m^g \, n^h$

Where $m$ is the number of constraints, $n$ is the number of variables, and $a, b, c, d, k, g, h$ are empirically fitted coefficients (their values are given below).
Derivation of the Empirical Formula:
This formula was developed through dedicated research to model the computational complexity of the simplex method:
- Problem: The core challenge was to find an analytical expression for the number of arithmetic operations (as a proxy for execution time) based on the LP problem's dimensions ($m$ constraints, $n$ variables).
- Models Explored: Various models were analyzed, including those based on known theoretical estimates (e.g., interpretations of Borgwardt's model, smoothed analysis, Adler-Megiddo) and proposed generalized linear and mixed interpretations.
- Best Fit Model: A mixed generalized model of the form $a m^b n^c (\ln n)^d + k m^g n^h$ was found to provide the best fit to the empirical data.
- Experimental Validation: The model and its coefficients were validated through extensive simulation experiments. This involved:
  - Five series of independent simulations using two distinct datasets (one for parameter estimation, one for verification).
  - Varying $m$ (constraints) and $n$ (variables) over a wide range (e.g., 200 to 2000).
  - Generating 5 independent LP problems for each $(m, n)$ combination, resulting in a large dataset (e.g., 13,690 LP tasks per series).
  - Recording the exact number of arithmetic operations for each solved LP task.
- Resulting Coefficients: The statistical analysis yielded the coefficients $a \approx 0.63$, $b \approx 2.96$, $c \approx 0.02$, $d \approx 1.62$, $k \approx 4.04$, $g \approx -4.11$, $h \approx 2.92$, leading to the formula above.
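For concreteness, a small sketch of the estimate as a plain Python function, assuming the fitted coefficients above; the library's own `empiric()` implementation may differ in details such as units or rounding:

```python
from math import log


def estimate_operations(m: int, n: int) -> float:
    """Estimated simplex operation count for an LP with m constraints, n variables."""
    a, b, c, d = 0.63, 2.96, 0.02, 1.62
    k, g, h = 4.04, -4.11, 2.92
    return a * m**b * n**c * log(n) ** d + k * m**g * n**h


# Example: relative cost estimates used to order tasks for scheduling.
for m, n in [(200, 400), (1000, 1500), (2000, 2000)]:
    print((m, n), f"{estimate_operations(m, n):.3e}")
```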
The `get_order` Heuristic using Estimated Durations:

The `get_order` function uses these estimated durations within the `get_multi_device_order_A0` scheduling heuristic as follows:
- Operation Representation: Each element's LP subproblem, characterized by its size $(m, n)$, is treated as an `Operation` object. The `empiric(size_tuple)` function calculates its estimated duration using the formula.
- Initial Assignment (LPT - Longest Processing Time):
  - The `get_multi_device_heuristic_order` function performs an initial assignment.
  - All `Operation` objects (LP tasks) are sorted by their estimated durations in descending order.
  - Each operation, starting with the longest, is assigned to the device (representing a thread) that currently has the minimum accumulated total processing time. (A sketch of this step appears after this section.)
- Iterative Refinement (Load Balancing):
  - After the initial LPT assignment, `get_multi_device_order_A0` attempts to further balance the load across devices.
  - It calculates an `average_deadline` (total estimated work divided by the number of threads).
  - The heuristic then iteratively identifies the most "lagged" device (whose total workload significantly exceeds the average deadline) and "advanced" devices (whose total workload is significantly below the average).
  - It attempts to perform task swaps between the most lagged device and the advanced devices using various permutation strategies:
    - `make_permutation_1_1`: swap one task from the lagged device with one from an advanced device.
    - `make_permutation_1_2`: swap one task from the lagged device with two from an advanced device.
    - `make_permutation_2_1`: swap two tasks from the lagged device with one from an advanced device.
    - `make_permutation_2_2`: swap two tasks from the lagged device with two from an advanced device.
  - A swap is made if it reduces the lagged device's end time sufficiently without causing it to finish too early relative to the average deadline.
  - This iterative refinement continues until the end times of all devices are balanced within a specified tolerance, or no further beneficial permutations can be found.
- Output Schedule: The `get_order` function returns a list of lists, where each inner list contains the original indices of the tasks assigned to a specific thread.

`ParallelExecutor` uses this generated schedule to distribute the actual execution of the element subproblems across the available processor cores.
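To make the LPT step concrete, here is a self-contained sketch of the initial assignment. It is not the library's code (which layers the `make_permutation_*` refinement on top), just an illustration of the longest-first, least-loaded rule:

```python
from heapq import heappush, heappop


def lpt_assign(durations: list[float], num_threads: int) -> list[list[int]]:
    """Assign task indices to threads: longest task first, onto the least-loaded thread."""
    # Min-heap of (accumulated load, thread id) so the least-loaded thread pops first.
    heap = [(0.0, t) for t in range(num_threads)]
    schedule = [[] for _ in range(num_threads)]
    # Iterate task indices sorted by estimated duration, descending.
    for idx in sorted(range(len(durations)), key=lambda i: -durations[i]):
        load, t = heappop(heap)
        schedule[t].append(idx)
        heappush(heap, (load + durations[idx], t))
    return schedule


# Durations would come from the empirical estimate of each LP subproblem.
print(lpt_assign([9.0, 7.0, 6.0, 5.0, 2.0, 1.0], num_threads=2))
```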
- Operating System: Windows, macOS, or Linux (Python is cross-platform; the GUI has been tested on Windows).
- CPU: Modern multi-core processor recommended for parallel features.
- RAM: 4GB+, 8GB or more recommended for larger problems.
- Python: Version 3.12 or newer (developed with 3.12).
- Pip: For installing Python packages.
- Dependencies as listed in `requirements.txt` (or installed automatically via pip from PyPI):
  - `numpy~=2.2.5`
  - `PyQt5~=5.15.11`
  - `tabulate~=0.9.0`
  - `ortools~=9.12.4544`
The `lp-comp` package is available on PyPI and TestPyPI. You can install it using pip:
From PyPI (stable releases):
pip install lp-comp
This will install the latest stable version of the library and its dependencies.
- Project on PyPI: https://pypi.org/project/lp-comp/
From TestPyPI (for testing pre-releases):
pip install -i https://test.pypi.org/simple/ lp-comp
- Project on TestPyPI: https://test.pypi.org/project/lp-comp/
After installation, you can verify it by checking your installed packages (e.g., with `pip list` or in your IDE).
If you want to contribute to the project, run examples directly from the source code, or modify the library, follow these steps:
git clone https://github.com/TheMegistone4Ever/COMP.git
cd COMP
It is highly recommended to use a virtual environment to manage project dependencies.
# Create a virtual environment (e.g., named .venv)
python -m venv .venv
# Activate the virtual environment
# On Windows:
.venv\Scripts\activate
# On macOS/Linux:
source .venv/bin/activate
Install the required Python packages using pip:
pip install -r requirements.txt
For development, including building and publishing the package, you might also want to install:
pip install build twine
The COMP library can be used programmatically or via its GUI.
The `examples/` directory contains scripts that demonstrate core functionality.
- Data Generation & Solver Execution: The `examples/main.py` script demonstrates how to:
  - Generate sample `CenterData` and save it to `examples/center_data_generated.json` if it does not exist.
  - Load `CenterData` from the generated JSON file.
  - Configure the Center with a specific coordination strategy (e.g., `WEIGHTED_BALANCE`).
  - Instantiate and run the `CenterSolver`.
  - Print results to the console and save detailed results to `examples/center_results_output.json`.
To run this example (from the cloned repository, within the activated virtual environment):
python examples/main.py
Console Output Example (partial):
*(Note: this image shows the GUI results tab, but the console output is similar in structure.)*
If you have installed `lp-comp` via pip into an environment, you can test its core functionality by adapting `examples/main.py` to run in that environment. The images in the Publishing to PyPI section show this kind of verification.
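For orientation, a condensed sketch of that programmatic flow. The import paths below follow the Code Overview section, but the exact function and method names (`load_center_data`, `solve`) are assumptions, not verified API; treat `examples/main.py` as the authoritative reference:

```python
# Sketch only: signatures below are assumptions inferred from this README.
from comp.io.json_io import load_center_data          # hypothetical name
from comp.solvers.factories import new_center_solver  # named in Code Overview
from comp.utils.json_base_serializer import save_to_json

center_data = load_center_data("examples/center_data_generated.json")
solver = new_center_solver(center_data)  # picks a solver per the configured strategy
results = solver.solve()                 # hypothetical method name
save_to_json(results, "examples/center_results_output.json")
```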
The GUI provides an interactive way to work with the COMP library.
- Launching the GUI:
  - From source code (e.g., within the cloned repository and activated virtual environment):

    python examples/run_gui.py

  - After installing the package via pip, if an entry point script (`comp-gui`) was configured during installation:

    comp-gui

    Alternatively, you can usually run the GUI module directly:

    python -m comp.gui_launcher
- Data Loading Tab: Allows users to load system data from a `.json` file.

  Initial View:

  File Dialog for Loading Data:
Data Loaded Confirmation:
- Configuration & Run Tab: After loading data, this tab allows users to:
- Select Elements for display.
- Configure Center parameters (number of threads, parallelization threshold, coordination type).
- View data for selected elements or the entire center.
- Run the coordinated planning calculation.
- Result Tab: Displays the results of the calculation:
  - Textual summary of the solution, including chosen parameters and objective values;
  - Button to "Copy to Clipboard";
  - Button to "Save to .json file".
Results Display:
Copy to Clipboard Action:
Save Results Action:
COMP/
├── .gitattributes
├── .gitignore
├── LICENSE.md
├── MANIFEST.in
├── README.md
├── pyproject.toml
├── requirements.txt
├── setup.cfg
├── setup.py
├── version.txt
├── comp/
│ ├── __init__.py
│ ├── _version.py
│ ├── gui_launcher.py
│ ├── io/
│ │ ├── __init__.py
│ │ └── json_io.py
│ ├── media/
│ │ ├── __init__.py
│ │ └── COMP.ico
│ ├── models/
│ │ ├── __init__.py
│ │ ├── base.py
│ │ ├── center.py
│ │ └── element.py
│ ├── parallelization/
│ │ ├── __init__.py
│ │ ├── heuristic.py
│ │ ├── parallel_executor.py
│ │ └── core/
│ │ ├── __init__.py
│ │ ├── device.py
│ │ ├── empiric.py
│ │ └── operation.py
│ ├── solvers/
│ │ ├── __init__.py
│ │ ├── factories.py
│ │ ├── center/
│ │ │ ├── __init__.py
│ │ │ ├── linear/
│ │ │ │ ├── __init__.py
│ │ │ │ ├── first.py
│ │ │ │ ├── second.py
│ │ │ │ └── third.py
│ │ │ └── linked/
│ │ │ ├── __init__.py
│ │ │ └── first.py
│ │ ├── core/
│ │ │ ├── __init__.py
│ │ │ ├── base.py
│ │ │ ├── center.py
│ │ │ └── element.py
│ │ └── element/
│ │ ├── __init__.py
│ │ └── linear/
│ │ ├── __init__.py
│ │ ├── first.py
│ │ └── second.py
│ ├── ui/
│ │ ├── __init__.py
│ │ ├── config_run_tab.py
│ │ ├── data_load_tab.py
│ │ ├── main_window.py
│ │ ├── results_tab.py
│ │ ├── styles.py
│ │ └── worker.py
│ └── utils/
│ ├── __init__.py
│ ├── assertions.py
│ ├── helpers.py
│ └── json_base_serializer.py
├── examples/
│ ├── __init__.py
│ ├── center_data_generated.json
│ ├── center_results_output.json
│ ├── main.py
│ ├── run_gui.py
│ └── data/
│ ├── __init__.py
│ └── generator.py
├── images/
│ ├── General_Mixed_W1_linear_2d.png
│ ├── General_Mixed_W1_linear_3d.png
│ ├── General_Mixed_W1_log_2d.png
│ ├── General_Mixed_W1_log_3d.png
│ ├── General_Mixed_W2_linear_2d.png
│ ├── General_Mixed_W2_linear_3d.png
│ ├── General_Mixed_W2_log_2d.png
│ ├── General_Mixed_W2_log_3d.png
│ ├── pypi_build_success.png
│ ├── pypi_building.png
│ ├── pypi_clearing.png
│ ├── pypi_env_check.png
│ ├── pypi_packages_installed.png
│ ├── pypi_sol_check.png
│ ├── pypi_test_env_check.png
│ ├── pypi_test_sol_check.png
│ ├── pypi_test_upload.png
│ ├── pypi_test_uploaded.png
│ ├── pypi_upload.png
│ ├── pypi_uploaded.png
│ ├── ui_copied_to_buffer.png
│ ├── ui_copied_to_buffer_msg.png
│ ├── ui_data_loaded.png
│ ├── ui_dialog_load_data.png
│ ├── ui_dialog_save_results.png
│ ├── ui_results_saved.png
│ ├── ui_results_saved_msg.png
│ ├── ui_tab_load_data.png
│ ├── ui_tab_results.png
│ ├── ui_tab_setup.png
│ └── unit_tests_results.png
└── tests/
├── __init__.py
└── test_core.py
- `comp/_version.py`: Defines the package version (`__version__`).
- `comp/__init__.py`: Makes `__version__` available at the package level (`from comp import __version__`).
- `comp/gui_launcher.py`: Contains the `main_app_entry` function for launching the PyQt5 GUI. This is the target for `python -m comp.gui_launcher`.
- `comp.models`: Contains dataclasses defining the structure of the system:
  - `BaseConfig`, `BaseData`: Base classes for configuration and data.
  - `CenterConfig`, `CenterData`, `CenterType`: Define the Center's properties, data, and coordination strategies.
  - `ElementConfig`, `ElementData`, `ElementType`, `ElementSolution`: Define Element properties, data, types, and solution structure.
- `comp.io`: Handles data input/output.
  - `json_io.py`: Functions to load `CenterData` from JSON files, with custom parsing for enums, numpy arrays, and nested dataclasses.
- `comp.solvers`: Core logic for optimization.
  - `core/`: Abstract base classes and common solver logic.
    - `BaseSolver`: Abstract base for all solvers.
    - `ElementSolver`: Base for element-level solvers, integrating with OR-Tools.
    - `CenterSolver`: Base for center-level solvers, managing element solvers and parallel execution.
  - `element/linear/`: Concrete implementations of element solvers.
    - `first.py` (`ElementLinearFirst`): Implements the first linear model for an element.
    - `second.py` (`ElementLinearSecond`): Implements the second linear model with private decision variables.
  - `center/linear/`: Concrete implementations of center coordination strategies.
    - `first.py` (`CenterLinearFirst`): Implements the "Strict Priority" strategy.
    - `second.py` (`CenterLinearSecond`): Implements the "Guaranteed Concession" strategy.
    - `third.py` (`CenterLinearThird`): Implements the "Weighted Balance" strategy.
  - `center/linked/`: Concrete implementations of linked center coordination strategies.
    - `first.py` (`CenterLinkedFirst`): Implements the first linked strategy.
  - `factories.py`: Factory functions (`new_element_solver`, `new_center_solver`) to create appropriate solver instances based on configuration.
- `comp.parallelization`: Logic for parallel execution of element subproblems.
  - `core/empiric.py`: Implements the empirical formula to estimate LP solving time.
  - `core/device.py`, `core/operation.py`: Data structures for the scheduling heuristic.
  - `heuristic.py`: Implements the `get_order` function, a multi-device scheduling heuristic (LPT + iterative refinement) to determine task distribution among threads.
  - `parallel_executor.py`: `ParallelExecutor` class using `concurrent.futures.ProcessPoolExecutor` to run tasks in parallel according to the order from the heuristic.
- `comp.utils`: Utility functions.
  - `assertions.py`: Custom assertion functions for input validation.
  - `helpers.py`: String formatting (`stringify`, `tab_out`) and LP sum utilities.
  - `json_base_serializer.py`: Custom JSON serializer for numpy arrays, enums, etc., and a `save_to_json` utility.
- `comp.ui`: PyQt5-based graphical user interface.
  - `main_window.py` (`MainWindow`): The main application window, hosting the tabs.
  - `data_load_tab.py` (`DataLoadTab`): Tab for loading data from JSON.
  - `config_run_tab.py` (`ConfigRunTab`): Tab for configuring center parameters and running calculations.
  - `results_tab.py` (`ResultsTab`): Tab for displaying results and providing copy/save functionality.
  - `worker.py` (`SolverWorker`): QObject for running solver calculations in a background thread to keep the GUI responsive.
  - `styles.py`: Stylesheet for the GUI.
  - `media/COMP.ico`: Application icon.
- `main.py`: Demonstrates CLI usage: data generation, loading, solver instantiation, running coordination, and saving results.
- `run_gui.py`: Entry point to launch the PyQt5 GUI application when running from source. It typically imports and calls `main_app_entry` from `comp.gui_launcher`.
- `data/generator.py` (`DataGenerator`): Class for generating random `CenterData` for testing and examples.
- `center_data_generated.json`: Example data file that can be generated by `main.py`.
- `center_results_output.json`: Example results file that can be generated by `main.py`.
- `test_core.py`: Contains unit tests for various components, including assertions, helpers, JSON serialization, models, the data generator, parallelization, and solver factories/types.
The plots below visualize data from the empirical complexity analysis of the simplex method, which informs the parallelization heuristic used in COMP. `W1` and `W2` refer to the two distinct datasets used during the empirical study.
3D Plot - General Mixed Model (Linear Scale) - W1:
3D Plot - General Mixed Model (Logarithmic Scale) - W1:
3D Plot - General Mixed Model (Linear Scale) - W2:
3D Plot - General Mixed Model (Logarithmic Scale) - W2:
2D Plots - Operations vs. m (Linear Scale) - W1:
2D Plots - Operations vs. m (Logarithmic Scale) - W1:
2D Plots - Operations vs. m (Linear Scale) - W2:
2D Plots - Operations vs. m (Logarithmic Scale) - W2:
Unit tests are implemented using Python's `unittest` framework and can be found in the `tests/` directory. They cover core utilities, data models, data generation, parallelization logic, and solver factories.
To run tests (from the root of the cloned repository, with the virtual environment activated):
python -m unittest tests.test_core
Example Test Output:
This section outlines the steps to build and publish the `lp-comp` package to PyPI and TestPyPI. Ensure you have `build` and `twine` installed (`pip install build twine`).
Before creating new distribution files, it is good practice to remove any old ones from the `dist/`, `build/`, and `*.egg-info` directories. (Example PowerShell commands below; adapt as needed for your OS/shell.)
if (Test-Path -Path "build") { Remove-Item -Recurse -Force "build" }
if (Test-Path -Path "dist") { Remove-Item -Recurse -Force "dist" }
if (Test-Path -Path "lp_comp.egg-info") { Remove-Item -Recurse -Force "lp_comp.egg-info" } # Adjust egg-info name if needed
Use the `build` package to create the source distribution (`.tar.gz`) and wheel (`.whl`). This process typically uses `pyproject.toml` for build configuration.
python -m build
This will create the distribution files in the `dist/` directory.
Upload the distributions to TestPyPI to ensure everything works correctly before publishing to the main PyPI.
twine upload --repository testpypi dist/*
You will be prompted for your TestPyPI username and password (or an API token).
Create a fresh virtual environment, install the package from TestPyPI, and run some checks.
# In a new, clean virtual environment
pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple lp-comp
pip list # Check if lp-comp and dependencies are installed
Run an example script (e.g., `examples/main.py`, ensuring it uses the installed package) to verify functionality. You may need to adapt paths or copy the necessary example files (`examples/main.py`, `examples/center_data_generated.json`) to your test location if running outside the source tree.
Once you are confident that the package works correctly, upload it to the official Python Package Index (PyPI).
twine upload dist/*
You will be prompted for your PyPI username and password (or an API token).
As a final check, install the package from the main PyPI in a clean environment and verify.
# In a new, clean virtual environment
pip install lp-comp
pip list # Check if lp-comp and dependencies are installed
You can also run an example to confirm, similar to the TestPyPI verification.
The project is licensed under the CC BY-NC 4.0 License.