AMPL - solve multiple models in parallel
Description: Solve multiple AMPL models in parallel in Python with amplpy and the multiprocessing module.
Tags: AMPL, Python, amplpy, multiprocess, Parallel Computing, Stochastic Programming
Notebook author: Nicolau Santos <nfbvs@ampl.com>
Motivation
A common task is to analyze the results of a model given different combinations of some input parameters. This can be done in parallel with amplpy and the multiprocessing module.
For our demonstration we will use a stochastic programming model available at https://colab.ampl.com/.
The model has three parameters of major interest for our example, namely:
- alpha: parameter to manage the conditional value at risk (CVaR)
- beta: parameter that manages the contribution of the CVaR and the average profit to the objective function
- demand: unknown parameter for which we will generate multiple scenarios
Our main objective is to study the impact of different alpha and beta combinations for different scenarios of the demand.
Implementation
First we create a worker function that:
- takes as input a list with the values of alpha, beta, run, and seed
- generates data for the demand parameter using the given seed
- instantiates an AMPL object and loads the data, including the provided alpha and beta values
- solves the problem and returns a list with the initial alpha, beta, and run parameters, together with the obtained objective value and the wall time used
In our case the model is the same for every run. However, it is possible to pass the name of the model as a parameter to the worker function and solve different models in parallel.
Note that the chosen solver may use more than one process/thread by default. Unless you are managing the number of AMPL and solver processes manually, you should set the solver's number of processes/threads to 1, so that the workers do not oversubscribe the available processors.
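A minimal sketch of such a worker function is shown below. The model file name (`model.mod`), the parameter names, the demand generation, and the solver choice are all assumptions for illustration; the notebook's actual model may differ. It assumes amplpy's snake_case API.

```python
import time


def solve_worker(config):
    """Solve one model instance; returns [alpha, beta, run, objective, worker time]."""
    # imported inside the function so each process builds its own AMPL session
    from amplpy import AMPL
    import numpy as np

    alpha, beta, run, seed = config
    start = time.time()

    # hypothetical demand scenario generated from the given seed
    rng = np.random.default_rng(seed)
    demand = rng.uniform(50, 150, size=10)

    ampl = AMPL()
    ampl.read("model.mod")                 # hypothetical model file name
    ampl.get_parameter("alpha").set(alpha)
    ampl.get_parameter("beta").set(beta)
    ampl.get_parameter("demand").set_values(demand)

    ampl.set_option("solver", "highs")     # hypothetical solver choice
    # keep each worker single-threaded so workers do not compete for cores
    ampl.set_option("highs_options", "threads=1")
    ampl.solve()

    objective = ampl.get_current_objective().value()
    return [alpha, beta, run, objective, time.time() - start]
```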
Afterwards we create a list with the different parameter combinations to send as input to each process.
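The combinations above can be built with `itertools.product`; here each configuration also gets a distinct seed (simply its index) so every demand scenario is reproducible:

```python
from itertools import product

alphas = [0.7, 0.8]
betas = [0.6, 0.7]
n_runs = 3

# one (alpha, beta, run, seed) tuple per worker
configurations = [
    (alpha, beta, run, seed)
    for seed, (alpha, beta, run) in enumerate(product(alphas, betas, range(n_runs)))
]
```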
Parallelization is obtained with the multiprocessing module: we create a pool with a given number of processes and map the worker function over the list of parameter combinations.
At the end we print the obtained results.
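The pool pattern can be sketched as follows. To keep the example self-contained, the worker here is a stand-in that only echoes its inputs plus a dummy value; in the notebook it would be the AMPL-solving worker described above:

```python
import multiprocessing as mp


def worker(config):
    # stand-in for the AMPL worker: echoes the inputs plus a dummy
    # "objective" so the parallel pattern can be shown without a solver
    alpha, beta, run, seed = config
    return [alpha, beta, run, alpha * beta + seed]


def run_parallel(configurations, n_procs=4):
    # map the worker over all configurations using a pool of n_procs processes
    with mp.Pool(n_procs) as pool:
        return pool.map(worker, configurations)


if __name__ == "__main__":
    configurations = [(0.7, 0.6, run, seed) for seed, run in enumerate(range(3))]
    results = run_parallel(configurations)
    print("alpha beta run objective")
    for alpha, beta, run, objective in results:
        print(alpha, beta, run, objective)
```

`pool.map` returns the results in the same order as the input list, which is why the results table below lines up with the configurations table.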
Configurations:
alpha beta run seed
0.7 0.6 0 0
0.7 0.6 1 1
0.7 0.6 2 2
0.7 0.7 0 3
0.7 0.7 1 4
0.7 0.7 2 5
0.8 0.6 0 6
0.8 0.6 1 7
0.8 0.6 2 8
0.8 0.7 0 9
0.8 0.7 1 10
0.8 0.7 2 11
Number of workers: 12
4 processors available
Results:
alpha beta run objective workertime
0.7 0.6 0 3525.832209279827 25.238504648208618
0.7 0.6 1 3524.356178682977 25.41237211227417
0.7 0.6 2 3541.325355197152 25.553394317626953
0.7 0.7 0 3565.9342984444384 27.238896131515503
0.7 0.7 1 3585.2380403500324 21.48252558708191
0.7 0.7 2 3402.66236960429 20.380152225494385
0.8 0.6 0 3265.1446466955886 20.773118257522583
0.8 0.6 1 3390.28915038965 20.527676343917847
0.8 0.6 2 3422.6281549721944 19.438149213790894
0.8 0.7 0 3290.5810949315205 20.264755964279175
0.8 0.7 1 3233.292842109562 17.839205741882324
0.8 0.7 2 3282.2282040925375 18.890254974365234
Main time: 66.715
Average worker time: 65.760
Total time: 263.039
In our example a nearly linear speedup was achieved: the total worker time was about 263 seconds, while the main process took only about 67 seconds of wall time on 4 processors. The achievable speedup depends on the ratio between the number of workers and the number of available processors.
Statistical analysis of the results is beyond the scope of this notebook.