# Multi-Domain Model-Driven Optimization for Networked Control Systems

This repository contains the experimental artifacts for our paper on multi-domain model-driven optimization for networked control systems. It covers:

- plant-side dynamical models,
- communication-delay models,
- neural-network surrogate training and evaluation,
- objective-function analysis for accuracy-versus-computation trade-offs, and
- optional Hailo deployment and latency measurements.

The repository is research-code oriented: most scripts are standalone, rely on local relative paths, and write their outputs into the directory they are run from. The root Python environment files capture the core end-to-end Python dependencies for the plant-model, training, and objective-analysis workflow.

## Repository Purpose

The project studies how models of varying fidelity from two domains can be combined and selected jointly:

- plant models with increasing physical realism,
- communication models with increasing wireless/network realism,
- learned surrogate models for both domains, and
- a downstream optimization stage that balances prediction quality against execution cost.

Alongside source code, the repository also keeps many checked-in artifacts such as CSV datasets, TFLite models, figures, calibration arrays, Hailo compilation outputs, and latency logs.

## High-Level Workflow

The main workflow is:

1. Generate plant-model datasets in Python.
2. Generate communication-delay datasets in MATLAB.
3. Train TensorFlow fully connected regressors and export them to TFLite.
4. Evaluate surrogate error with `training_nn_models/test_nn_models.py`.
5. Combine error and complexity in `objective_func_calc/graphs.py`.
6. Optionally compile and benchmark compact models for Hailo hardware.
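For the Python stages, the workflow can be driven end to end with a small script. The sketch below is a hypothetical driver, not part of the repository: it assumes the folder layout described in this README, runs each script from its own folder (see the reproduction notes), and leaves the MATLAB step 2 to be run separately.

```python
# Hypothetical driver for the Python stages (steps 1 and 3-5).
# Step 2 is MATLAB and must be run separately. Assumes this file
# lives in the repository root and the layout described below.
import subprocess
import sys
from pathlib import Path

ROOT = Path(__file__).resolve().parent

STAGES = [
    ("plant_models", "plant_models.py"),
    *[("training_nn_models", f"create_nn_model{i}.py") for i in range(1, 8)],
    ("training_nn_models", "test_nn_models.py"),
    ("objective_func_calc", "graphs.py"),
]

for folder, script in STAGES:
    # Use each script's own folder as the working directory, since the
    # scripts rely on plain relative paths.
    subprocess.run([sys.executable, script], cwd=ROOT / folder, check=True)
```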

## Python Setup

Two root-level environment definitions are provided:

- `requirements.txt` for `pip install -r requirements.txt`
- `environment.yml` for `conda env create -f environment.yml`

They cover the core Python toolchain used by the repository:

- `numpy`
- `pandas`
- `scipy`
- `matplotlib`
- `plotly`
- `scikit-learn`
- `tensorflow`
- `control`

These files do not install MATLAB toolboxes or the Hailo SDK/HailoRT stack.

## Top-Level Layout

### `communication_models_matlab/`

MATLAB implementations of wireless communication delay models. These scripts generate regression-ready CSV datasets for increasingly detailed network-delay surrogates.

Main files:

- `run_all_models.m`: top-level entry point that generates all communication-model CSV files from a shared 802.11ax scenario.
- `ax_common_scenario.m`: creates the common HE-SU scenario used by the communication models.
- `make_regression_table.m`: converts raw delay traces into supervised-learning tables with lag features and a one-step-ahead target (sketched in Python below).
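The following Python sketch illustrates the same idea as `make_regression_table.m`: build lagged delay features and a one-step-ahead target. Column names and the lag count are illustrative assumptions, not values taken from the MATLAB code.

```python
# Sketch of lag-feature construction with a one-step-ahead target.
import pandas as pd

def make_regression_table(delays: pd.Series, n_lags: int = 3) -> pd.DataFrame:
    # Features: the current delay and n_lags - 1 past delays.
    table = pd.DataFrame({f"lag{k}": delays.shift(k) for k in range(n_lags)})
    # Target: the next delay sample (one step ahead).
    table["target"] = delays.shift(-1)
    # Drop rows made incomplete by the shifting at either end.
    return table.dropna().reset_index(drop=True)
```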

Communication models in this folder:

| Script | Description | Dataset output |
| --- | --- | --- |
| `model1_deterministic_phy_service_time.m` | Deterministic PHY service-time approximation with fixed MAC and processing overhead | `model1_deterministic_phy_service_time.csv` |
| `model2_awgn_link_retransmission.m` | AWGN-based retransmission model with retries, success flags, and SNR margin | `model2_awgn_link_retransmission.csv` |
| `model3_tgax_link_retransmission.m` | TGax plus AWGN retransmission model with received-power features | `model3_tgax_link_retransmission.csv` |
| `model4_queue_aware_dcf_tgax.m` | Queue-aware DCF/TGax model with contention, backoff, waiting, and service time | `model4_queue_aware_dcf_tgax.csv` |

These scripts depend on MATLAB wireless APIs such as `wlanHESUConfig`, `wlanWaveformGenerator`, `wlanPacketDetect`, and `wlanTGaxChannel`, so reproducing them requires the relevant MATLAB toolboxes.

### `plant_models/`

Python code for the plant-side dynamical models.

Current contents:

- `plant_models.py`: generates the plant datasets and comparison figure.
- `out/`: checked-in generated datasets and figures.

The plant script builds three fidelity levels for a mass-spring chain with a PMDC motor actuator (a minimal sketch of the lowest level follows this list):

- a linear discrete-time state-space model,
- a reduced nonlinear model, and
- a fuller nonlinear truth model with temperature and wear effects.
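As a flavor of the lowest fidelity level, the sketch below simulates a linear discrete-time state-space model of a single mass-spring-damper using the `control` package from the root environment. The actual script models a mass-spring chain driven by a PMDC motor; all parameters here are illustrative.

```python
# Minimal linear discrete-time state-space sketch (illustrative parameters).
import numpy as np
import control

m, k, c, dt = 1.0, 10.0, 0.5, 0.01            # mass, stiffness, damping, step
A = np.array([[0.0, 1.0], [-k / m, -c / m]])  # states: position, velocity
B = np.array([[0.0], [1.0 / m]])              # input: actuator force
C = np.eye(2)
D = np.zeros((2, 1))

sys_d = control.c2d(control.ss(A, B, C, D), dt)  # zero-order-hold discretization
t = np.arange(0.0, 2.0, dt)
u = np.ones_like(t)                              # unit step force
response = control.forced_response(sys_d, T=t, U=u)
print(response.outputs.shape)                    # (2, len(t)) state trajectories
```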

Generated outputs in `plant_models/out/` include:

- `linear_model_data.csv`
- `nonlinear_reduced_model_data.csv`
- `nonlinear_full_truth_data.csv`
- `three_model_comparison.png`

### `training_nn_models/`

TensorFlow training scripts for the main surrogate models used in the plant and communication domains. These scripts train fully connected regressors, save loss curves, export TFLite models, and include a model-evaluation pass that computes the mean absolute percentage error (MAPE).
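The training scripts share a common pattern; a minimal sketch of that pattern is shown below. The layer sizes, target-column convention, and filenames are illustrative assumptions rather than values read from the actual scripts.

```python
# Sketch of the shared train-and-export pattern (illustrative hyperparameters).
import pandas as pd
import tensorflow as tf

df = pd.read_csv("linear_model_data.csv")
X = df.iloc[:, :-1].values                      # assume last column is the target
y = df.iloc[:, -1].values

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=100, validation_split=0.2, verbose=0)

# Export the trained regressor to TFLite.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open("model1.tflite", "wb") as f:
    f.write(tflite_model)
```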

The seven main surrogate models are:

| Script | Domain | Training data |
| --- | --- | --- |
| `create_nn_model1.py` | Plant | `linear_model_data.csv` |
| `create_nn_model2.py` | Plant | `nonlinear_reduced_model_data.csv` |
| `create_nn_model3.py` | Plant | `nonlinear_full_truth_data.csv` |
| `create_nn_model4.py` | Communication | `model1_deterministic_phy_service_time.csv` |
| `create_nn_model5.py` | Communication | `model2_awgn_link_retransmission.csv` |
| `create_nn_model6.py` | Communication | `model3_tgax_link_retransmission.csv` |
| `create_nn_model7.py` | Communication | `model4_queue_aware_dcf_tgax.csv` |

Important files in this folder:

- `test_nn_models.py`: evaluates exported TFLite models and writes `mape_errors.csv` (see the sketch after this list).
- `model1.tflite` through `model7.tflite`: exported surrogate models.
- `loss_curve1.html` through `loss_curve7.html`: saved training-loss plots.
- local CSV copies of the plant and communication datasets used by the training scripts.
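The sketch below shows the kind of evaluation `test_nn_models.py` performs: load a TFLite model with the TFLite interpreter, run held-out inputs through it, and compute MAPE. The data handling and single-output assumption are illustrative.

```python
# Sketch of TFLite evaluation with MAPE (illustrative I/O assumptions).
import numpy as np
import tensorflow as tf

def tflite_mape(model_path: str, X: np.ndarray, y: np.ndarray) -> float:
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    preds = []
    for row in X.astype(np.float32):
        interpreter.set_tensor(inp["index"], row[np.newaxis, :])
        interpreter.invoke()
        preds.append(interpreter.get_tensor(out["index"]).ravel()[0])

    preds = np.asarray(preds)
    return float(np.mean(np.abs((y - preds) / y)) * 100.0)  # MAPE in percent
```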

### `objective_func_calc/`

Analysis code for the multi-domain objective function.

Main files:

- `graphs.py`: fits a cost model, scores all plant and communication model combinations, and generates the comparison plots (an illustrative sketch follows this list).
- `mpc_plus_adaptive_smith_pred-example.py`: a worked control example that combines MPC, a Smith predictor, and a neural network delay predictor.
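To make the trade-off concrete, the sketch below fits a power-law cost model and folds it into a weighted accuracy-versus-cost score. The functional form, the latency data, and the weight `alpha` are illustrative assumptions; the actual objective is defined in `graphs.py`.

```python
# Illustrative power-law cost fit and weighted objective (assumed form).
import numpy as np

# Hypothetical (model size, measured latency) pairs for the cost fit.
sizes = np.array([1e3, 5e3, 2e4, 1e5])
latency_ms = np.array([0.1, 0.35, 1.1, 4.8])

# Fit log(latency) = log(a) + b * log(size) by least squares.
b, log_a = np.polyfit(np.log(sizes), np.log(latency_ms), 1)
a = np.exp(log_a)

def cost(n_params: float) -> float:
    return a * n_params ** b  # power-law execution-cost model

def score(mape: float, n_params: float, alpha: float = 0.5) -> float:
    # Lower is better: weighted sum of prediction error and predicted cost.
    return alpha * mape + (1.0 - alpha) * cost(n_params)
```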

Checked-in outputs in this folder include:

- `combined_scores.csv`
- `combined_scores_plot.png`
- `pareto_front.csv`
- `pareto_front.png`
- `power_law_fit.png`
- `sensitivity_all_combinations.png`
- `sensitivity_best_per_P.csv`
- `sensitivity_best_vs_P.png`

### `hailo_nn_models/`

Hailo-oriented neural-network model generation and calibration scripts.

This folder is separate from the seven main plant and communication surrogate models. The `create_nn_model_t*.py` scripts use `dummy_data.csv` and generate compact fully connected TFLite models under `model_t1/` through `model_t14/`. The matching `create_calib_npy_t*.py` scripts generate calibration arrays for quantization and compilation.
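A calibration array for post-training quantization can be as simple as a representative slice of the model's input features saved as `.npy`. The sketch below illustrates that pattern; the sample count and data handling are assumptions, not values from the actual scripts.

```python
# Sketch of producing a calibration array for quantization (assumed shapes).
import numpy as np
import pandas as pd

features = pd.read_csv("dummy_data.csv").iloc[:, :-1].values.astype(np.float32)
calib = features[:1024]               # a representative subset of model inputs
np.save("calib_set_t1.npy", calib)    # consumed by the Hailo quantization flow
```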

Typical contents:

- `create_nn_model_t*.py`: train and export small TFLite models for Hailo experiments.
- `create_calib_npy_t*.py`: create `calib_set_t*.npy` arrays.
- `model_t*/`: per-model Hailo artifacts such as `.tflite`, `.har`, optimized `.har`, `.hef`, allocator logs, and calibration arrays.
- `complexity_calc.ipynb`: notebook related to complexity analysis.

### `hailo_nn_results/`

Deployment and benchmarking outputs for the Hailo experiments.

This folder mirrors parts of `hailo_nn_models/` and adds runtime outputs, latency logs, and visualization utilities. For example, `model_t1/` contains compiled model artifacts, latency CSV files, histogram and CDF plots, `run_model_v2.cpp`, and `visualize.py`.
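A latency histogram and empirical CDF can be produced from one of the latency CSV files along the lines of the sketch below. The file path and column name are illustrative assumptions; the repository's own plotting lives in `visualize.py`.

```python
# Sketch of latency histogram and empirical CDF plots (assumed CSV schema).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

lat = pd.read_csv("model_t1/latency.csv")["latency_us"].values

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(lat, bins=50)
ax1.set(xlabel="latency (us)", ylabel="count", title="Histogram")

xs = np.sort(lat)
ax2.plot(xs, np.arange(1, len(xs) + 1) / len(xs))
ax2.set(xlabel="latency (us)", ylabel="P(latency <= x)", title="Empirical CDF")
fig.savefig("latency_plots.png", dpi=150)
```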

## Practical Reproduction Notes

This repository is best understood as a paper artifact with a reproducible core Python environment, not as a library-style package.

- Many scripts assume they are executed from their own folder because they use plain relative paths (see the note after this list).
- Some generated CSV files are duplicated across folders so later stages can run without path rewriting.
- MATLAB communication scripts require the relevant wireless and communication toolboxes.
- Hailo experiments require the Hailo SDK/HailoRT toolchain and compatible hardware/software setup.
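If the working-directory assumption becomes a problem, one conventional fix is to resolve paths relative to the script file rather than the shell's current directory. This is a suggested pattern, not something the repository scripts currently do:

```python
# Resolve data paths relative to the script file, not the caller's cwd.
from pathlib import Path

HERE = Path(__file__).resolve().parent
data_path = HERE / "out" / "linear_model_data.csv"  # example artifact path
```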

## Suggested Entry Points

If you want to understand or rerun the repository in a practical order, start here:

1. `plant_models/plant_models.py` to inspect the plant-side datasets.
2. `communication_models_matlab/run_all_models.m` to generate communication datasets.
3. `training_nn_models/create_nn_model1.py` through `create_nn_model7.py` to train the surrogate models.
4. `training_nn_models/test_nn_models.py` to compute MAPE across the trained TFLite models.
5. `objective_func_calc/graphs.py` to reproduce the trade-off plots and Pareto analysis.
6. `objective_func_calc/mpc_plus_adaptive_smith_pred-example.py` for the control example.
7. `hailo_nn_models/` and `hailo_nn_results/` for the hardware-deployment side experiments.

## Summary

At a high level, this repository is a multi-stage workflow for comparing plant-model and communication-model fidelity, learning neural surrogates for each domain, and selecting model pairs using a weighted objective that trades off prediction accuracy against computational cost. The checked-in datasets, figures, TFLite models, and Hailo artifacts make it possible to inspect most of the paper workflow without rerunning every stage from scratch.
