To use AMPL on any notebook platform (e.g., Google Colab), you just need the following two code blocks at the beginning of your notebook:
```python
# Install dependencies
!pip install -q amplpy
```
```python
# Google Colab & Kaggle integration
from amplpy import AMPL, tools

ampl = tools.ampl_notebook(
    modules=["coin", "highs", "gokestrel"],  # modules to install
    license_uuid="your-license-uuid",  # license to use
)
```
In the `modules` list you can specify the AMPL solvers you want to use in your notebook.
For more information on the AMPL Modules for Python, see the Python Modules Documentation. For more information on how to use `amplpy`, see the Python API Documentation.
In these notebooks there are `%%ampl_eval` cells that allow you to run AMPL code directly from the notebook. They are equivalent to calling `ampl.eval()` with the contents of the cell.
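As a minimal sketch (the model statements here are hypothetical, and the `ampl` object is assumed to have been created by `tools.ampl_notebook` as above):

```python
# In a notebook, a cell such as:
#
#   %%ampl_eval
#   set PROD;
#   param profit {PROD} >= 0;
#
# is shorthand for passing the cell body to ampl.eval():
model_text = """
set PROD;
param profit {PROD} >= 0;
"""
# Assumes `ampl` was created by tools.ampl_notebook(...) earlier:
# ampl.eval(model_text)
```

The magic form is convenient interactively, while `ampl.eval()` is what you would use in a plain Python script.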
Learn more: [Python Modules Documentation] [Python API Documentation]
AMPL is free on Colab
On Google Colab there is a default AMPL Community Edition license that gives you unlimited access to AMPL with open-source solvers (e.g., HiGHS, CBC, Couenne, Ipopt, Bonmin) or with commercial solvers from the NEOS Server, as described in the Kestrel documentation.
AMPL for Courses is another free license for full-featured AMPL, with no limitations on problem size and a selection of popular commercial and open-source solvers. This license can be used on Google Colab and similar platforms for teaching.
To access commercial solvers, you can use the solver trials associated with your AMPL Community Edition license.
Learn more: [AMPL Community Edition] [AMPL for Courses]
AMPL Python API: amplpy
amplpy is an interface that allows developers to access the features of AMPL from within Python. For a quick introduction to AMPL, see Quick Introduction to AMPL.
In the same way that AMPL’s syntax naturally matches the mathematical description of the model, its input and output data map naturally to Python lists, sets, and dictionaries.
All model generation and solver interaction is handled directly by AMPL, which leads to great stability and speed. The library acts only as an intermediary, and the added overhead (in memory and CPU usage) depends mostly on how much data is sent to and read back from AMPL; the size of the expanded model itself is irrelevant.
With amplpy you can model and solve large-scale optimization problems in Python with the performance of heavily optimized C code, without losing model readability. The same model can be deployed in applications built in different languages just by switching the API used.
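As an illustrative sketch of this data mapping (the `FOOD`/`cost` names are hypothetical; it assumes the `ampl` object from `tools.ampl_notebook` above and an installed solver module such as HiGHS):

```python
# Hypothetical tiny model; plain Python structures become AMPL data.
foods = ["bread", "milk"]           # -> AMPL set FOOD
cost = {"bread": 2.0, "milk": 1.5}  # -> AMPL param cost {FOOD}

# With the `ampl` object created by tools.ampl_notebook(...) above:
# ampl.eval("set FOOD; param cost {FOOD} >= 0;")
# ampl.set["FOOD"] = foods      # Python list -> AMPL set
# ampl.param["cost"] = cost     # Python dict -> indexed AMPL param
# ampl.solve(solver="highs")
# print(ampl.get_value("sum {f in FOOD} cost[f]"))
```

Reading results back works the same way: solution values come out as ordinary Python numbers, dictionaries, or DataFrames, so no manual parsing of solver output is needed.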
Learn more: [Python API Documentation]