
Graver Augmentation Multiseed Algorithm (Julia)

David E. Bernal Neira
Davidson School of Chemical Engineering, Purdue University

Pedro Maciel Xavier
Davidson School of Chemical Engineering, Purdue University

Azain Khalid
Department of Computer Science, Purdue University
Undergraduate Researcher

Environment setup

For local execution from the repository root, run uv sync --group qubo and make setup-julia NOTEBOOK=notebooks_jl/3-GAMA.ipynb before launching Jupyter. This notebook reuses the repo-local Python environment for the D-Wave Ocean stack instead of relying on Julia’s CondaPkg resolver.

This notebook should open directly with the Julia runtime in Google Colab. If it does not, switch the runtime before running the next cell. The setup cell will clone SECQUOIA/QuIP into the Colab workspace when needed, activate notebooks_jl/envs/3-GAMA/Project.toml, install the Python D-Wave Ocean packages into the Colab Python runtime, and print progress for clone, activate, instantiate, and package-load steps. Package precompilation still happens as needed during setup. The first Colab run can still take several minutes.

function load_notebook_bootstrap()
    candidates = (
        joinpath(pwd(), "scripts", "notebook_bootstrap.jl"),
        joinpath(pwd(), "..", "scripts", "notebook_bootstrap.jl"),
        joinpath(pwd(), "QuIP", "scripts", "notebook_bootstrap.jl"),
        joinpath("/content", "QuIP", "scripts", "notebook_bootstrap.jl"),
    )

    for candidate in candidates
        if isfile(candidate)
            include(candidate)
            return nothing
        end
    end

    in_colab = haskey(ENV, "COLAB_RELEASE_TAG") || haskey(ENV, "COLAB_JUPYTER_IP") || isdir(joinpath("/content", "sample_data"))
    if in_colab
        repo_dir = get(ENV, "QUIP_REPO_DIR", joinpath(pwd(), "QuIP"))
        if !isdir(repo_dir)
            println("[bootstrap] Cloning SECQUOIA/QuIP into $repo_dir")
            run(`git clone --depth 1 https://github.com/SECQUOIA/QuIP.git $repo_dir`)
        end
        include(joinpath(repo_dir, "scripts", "notebook_bootstrap.jl"))
        return nothing
    end

    error("Could not locate scripts/notebook_bootstrap.jl from $(pwd()).")
end

load_notebook_bootstrap()

BOOTSTRAP = QuIPNotebookBootstrap.bootstrap_notebook("3-GAMA")

QUIP_REPO_DIR = BOOTSTRAP.repo_dir
JULIA_NOTEBOOKS_DIR = BOOTSTRAP.notebooks_dir
JULIA_PROJECT_DIR = BOOTSTRAP.project_dir
IN_COLAB = BOOTSTRAP.in_colab
[22:59:35] Notebook project key: 3-GAMA
[22:59:36] Google Colab runtime detected: false
[22:59:36] Using Python runtime: /home/bernalde/repos/QuIP/.venv/bin/python3
[22:59:36] Activating project at `/home/bernalde/repos/QuIP/notebooks_jl/envs/3-GAMA`
  Activating project at `~/repos/QuIP/notebooks_jl/envs/3-GAMA`
[22:59:36] Instantiating Julia packages
  3.167800 seconds (1.02 M allocations: 76.558 MiB, 0.92% gc time, 16.73% compilation time: 69% of which was recompilation)
[22:59:40] Loading notebook packages
[22:59:57] Working directory set to /home/bernalde/repos/QuIP/notebooks_jl
[22:59:57] Notebook bootstrap complete
false

Introduction to GAMA

The Graver Augmentation Multiseed Algorithm (GAMA) was proposed by Alghassi, Dridi, and Tayur in the works listed in Reference [1] and Reference [2]. The three main ingredients of this algorithm, designed to solve integer programs with linear constraints and nonlinear objective, are:

  • Computing the Graver basis (or a subset of it) of an integer program.

  • Performing an augmentation.

  • Initializing the algorithm from several starting points, since Graver augmentation is guaranteed to find a globally optimal solution only for certain objective functions.

This algorithm can be adapted to take advantage of quantum computers by leveraging them as black-box Ising/QUBO solvers. In particular, obtaining several feasible solution points for the augmentation and computing the kernel of the constraint matrix can both be posed as QUBO problems. After obtaining these solutions, the remaining routines run on classical computers, making this a hybrid quantum-classical algorithm.
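The interplay of these three ingredients can be sketched in a few lines of Julia. The sketch below is illustrative only: `gama_sketch` is a hypothetical helper that takes single unit steps and omits box bounds; the notebook's actual implementation, with proper step and choice rules, appears later. The data is taken from the small Example 1 defined further down.

```julia
# Schematic GAMA loop (illustrative only): from each feasible seed, take
# single unit steps along Graver directions that improve the objective,
# then keep the best local optimum found. Box bounds are omitted here.
function gama_sketch(f, G, seeds)
    best_x, best_y = nothing, Inf
    for seed in seeds
        x = copy(seed)
        improved = true
        while improved
            improved = false
            for g in eachrow(G), α in (-1, 1)
                if f(x + α * g) < f(x)
                    x += α * g
                    improved = true
                end
            end
        end
        if f(x) < best_y
            best_x, best_y = x, f(x)
        end
    end
    return best_x, best_y
end

# Data from Example 1 below: minimize Σᵢ |xᵢ - 5| subject to Ax = b.
A_demo = [1 1 1 1; 1 5 10 25]
b_demo = [21, 156]
G_demo = [5 -6 0 1; 5 -9 4 0; 0 3 -4 1; 5 -3 -4 2; 5 0 -8 3]
f_demo(x) = sum(abs.(x .- 5))

x_best, y_best = gama_sketch(f_demo, G_demo, [[1, 15, 3, 2]])
@show A_demo * x_best == b_demo   # Graver moves preserve Ax = b
```

Because every Graver direction lies in the kernel of the constraint matrix, each augmentation step preserves feasibility of $A\mathbf{x} = \mathbf{b}$.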

Introduction to Graver basis computation

This notebook performs simple Graver basis computations. Because of the complexity of these computations, we suggest that for more complicated problems you install the excellent 4ti2 software, an open-source implementation of several routines useful for studying integer programming through algebraic geometry. It can be used as a stand-alone library and called from C++ or directly from Julia, where a binding is provided by lib4ti2_jll.

A Graver basis is defined as

$$\mathcal{G}(\mathbf{A}) = \bigcup_{j} \mathcal{H}_{j}(\mathbf{A})$$

where $\mathcal{H}_{j}(\mathbf{A})$ is the minimal Hilbert basis of $\mathbf{A}$ in the $j$-th orthant.

Equivalently, we can define the Graver basis as the $\sqsubseteq$-minimal set of the lattice

$$\mathcal{L}(\mathbf{A}) = \left\lbrace \mathbf{x} \in \mathbb{Z}^{n} : \mathbf{A} \mathbf{x} = 0 \right\rbrace \setminus \left\lbrace 0 \right\rbrace = \left( \ker \mathbf{A} \cap \mathbb{Z}^{n} \right) \setminus \left\lbrace 0 \right\rbrace$$

where the partial ordering $\mathbf{x} \sqsubseteq \mathbf{y}$ holds whenever $x_i y_i \geq 0$ and $\left\vert x_i \right\vert \leq \left\vert y_i \right\vert$ for all $i$.
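This partial order is easy to check numerically; a minimal helper (not part of the notebook's pipeline, just a concrete rendering of the definition) looks like this:

```julia
# x ⊑ y: componentwise same sign (xᵢyᵢ ≥ 0) and no larger magnitude (|xᵢ| ≤ |yᵢ|).
sqsubseteq(x, y) = all(x .* y .≥ 0) && all(abs.(x) .≤ abs.(y))

@show sqsubseteq([1, 0, -1], [2, 0, -3])   # same signs, smaller magnitudes
@show sqsubseteq([1, -1, 0], [2, 1, 0])    # fails: opposite signs in component 2
```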

Here we won’t interact with the quantum computer. Instead, we obtain the Graver basis of a problem using the 4ti2 package. This notebook studies the behavior of the search algorithm when only a subset of the Graver basis is available.

Problem statement

By default, the GAMA computation below uses Example 4 from the code, which corresponds to Case 2 in the original GAMA paper. The problem is derived from finance and deals with the maximization of expected returns on investments and the minimization of the variance.

$$\begin{array}{rl} \min & \displaystyle -\sum_{i = 1}^{n} \mu_{i} x_{i} + \sqrt{\frac{1 - \varepsilon}{\varepsilon} \sum_{i = 1}^{n} \sigma_{i}^{2} x_{i}^{2}} \\ \textrm{s.t.} & A \mathbf{x} = \mathbf{b} \\ & \mathbf{x} \in \left\lbrace -2, -1, 0, 1, 2 \right\rbrace^{n} \end{array}$$

This particular instance of a convex INLP has $n = 25$, $\varepsilon = 0.01$, $\mu_i = \textrm{rand}[0, 1]$, and $\sigma_i = \textrm{rand}[0, \mu_i]$. The matrix $A$ has entries in $\{0, 1\}$, and the vector $b$ is half of the sum of the rows of $A$; in this case $b = (9, 8, 7, 5, 5)^\top$.

To make this Example 4 run reproducible, the notebook loads one fixed realization of the coefficient vectors from notebooks_data/3-GAMA_example4_coefficients.csv, reuses precomputed feasible starts from notebooks_data/3-GAMA_example4_feasible_starts.csv, and traverses a saved Graver-direction order from notebooks_data/3-GAMA_example4_graver_order.csv.

The Graver basis of this matrix AA has 29789 elements, which on a standard laptop using 4ti2 takes about 5 seconds to compute. With lib4ti2_jll available, the notebook can compute that basis directly in Colab or locally; the bundled graver.npy file is only a fallback when the graver binary is unavailable.

SELECTED_GAMA_EXAMPLE = 4  # Change to 1, 2, 3, or 4, then rerun from this cell downward.
4

Class examples

The original GAMA class notebook used EXAMPLE = 1, EXAMPLE = 2, EXAMPLE = 3, and EXAMPLE = 4. Examples 1-3 are retained here as class-reference problem definitions. By default, the executable GAMA workflow below continues with the saved Example 4 portfolio instance.

Example 1: illustrative Graver augmentation

This example uses

$$A = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 5 & 10 & 25 \end{bmatrix},\quad b = \begin{bmatrix} 21 \\ 156 \end{bmatrix},\quad x^{(0)} = \begin{bmatrix} 1 & 15 & 3 & 2 \end{bmatrix}.$$

The variable bounds are $0 \le x_i \le 15$. In the original class notebook, the objective is to minimize the distance to 5, i.e. $\sum_i \left\vert x_i - 5 \right\vert$.
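As a quick sanity check (with the values copied from above, using suffixed names to avoid clashing with the notebook's globals), the documented starting point is feasible:

```julia
A1  = [1 1 1 1; 1 5 10 25]
b1  = [21, 156]
x01 = [1, 15, 3, 2]

@show A1 * x01 == b1         # x⁽⁰⁾ satisfies Ax = b
@show sum(abs.(x01 .- 5))    # class objective (distance to 5) at x⁽⁰⁾
```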

Example 2: four-variable linear example

This example uses

$$A = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 3 & 4 \end{bmatrix},\quad b = \begin{bmatrix} 10 \\ 21 \end{bmatrix},\quad c = \begin{bmatrix} 0 & 1 & 0 & 2 \end{bmatrix},\quad x^{(0)} = \begin{bmatrix} 1 & 8 & 0 & 1 \end{bmatrix}.$$

The variable bounds are $0 \le x_i \le 21$.

Example 3: alternate four-variable linear example

This example uses

$$A = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{bmatrix},\quad b = \begin{bmatrix} 10 \\ 15 \end{bmatrix},\quad c = \begin{bmatrix} 1 & 3 & 14 & 17 \end{bmatrix},\quad x^{(0)} = \begin{bmatrix} 3 & 0 & 6 & 1 \end{bmatrix}.$$

The variable bounds are $0 \le x_i \le 10$.

Example 4: saved portfolio instance

Let

$$A = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0
\end{bmatrix}$$

This particular instance of a convex INLP has $m = 5$, $n = 25$, $\varepsilon = 0.01$, $\mu_{i} = \textrm{rand}[0, 1]$, and $\sigma_{i} = \textrm{rand}[0, \mu_{i}]$. Here $A \in \mathbb{B}^{m \times n}$ and each $b_{j}$ is half the sum of the $j$-th row of $A$; in this example, $\mathbf{b} = (9, 8, 7, 5, 5)^\top$.

The remaining GAMA workflow below uses this saved Example 4 instance: complete-basis greedy augmentation, a 10-direction partial-basis greedy augmentation, and several larger fractions of the basis.

Eventually we will rewrite this problem as an unconstrained one by penalizing the linear constraints as quadratics in the objective. Let’s first define the problem parameters.

To switch among the class examples, change SELECTED_GAMA_EXAMPLE to 1, 2, 3, or 4, then rerun the notebook from that cell downward. Example 4 remains the default because it is the saved portfolio instance used for the executed results below.

"""Return one of the class-reference GAMA examples."""
function load_gama_example(example_id::Integer)
    if example_id == 1
        return (
            id = 1,
            name = "Example 1: illustrative Graver augmentation",
            A = [1 1 1 1; 1 5 10 25],
            b = [21, 156],
            x0 = [1, 15, 3, 2],
            xl = zeros(Int, 4),
            xu = fill(15, 4),
            c = nothing,
            objective = :distance_to_5,
            graver_directions = [
                5 -6 0 1
                5 -9 4 0
                0 3 -4 1
                5 -3 -4 2
                5 0 -8 3
            ],
        )
    elseif example_id == 2
        return (
            id = 2,
            name = "Example 2: four-variable linear example",
            A = [1 1 1 1; 1 2 3 4],
            b = [10, 21],
            x0 = [1, 8, 0, 1],
            xl = zeros(Int, 4),
            xu = fill(21, 4),
            c = [0, 1, 0, 2],
            objective = :linear,
            graver_directions = [
                1 -2 1 0
                2 -3 0 1
                1 -1 -1 1
                0 1 -2 1
                1 0 -3 2
            ],
        )
    elseif example_id == 3
        return (
            id = 3,
            name = "Example 3: alternate four-variable linear example",
            A = [1 1 1 1; 0 1 2 3],
            b = [10, 15],
            x0 = [3, 0, 6, 1],
            xl = zeros(Int, 4),
            xu = fill(10, 4),
            c = [1, 3, 14, 17],
            objective = :linear,
            graver_directions = [
                2 -3 0 1
                1 -2 1 0
                1 -1 -1 1
                0 1 -2 1
                1 0 -3 2
            ],
        )
    elseif example_id == 4
        A = [
            1 1 1 1 1 1 1 1 1 1 0 1 0 1 0 1 0 1 1 1 0 1 0 1 0
            1 1 1 1 0 1 0 1 0 0 1 0 0 0 1 0 0 1 0 1 1 1 1 1 1
            0 1 0 0 0 1 0 1 0 1 1 0 1 1 0 1 1 0 0 1 0 0 1 1 1
            0 0 0 0 0 0 0 1 0 1 1 1 0 1 1 1 1 0 0 1 0 0 0 0 0
            0 1 1 1 1 1 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 1 0 1 0
        ]
        n = size(A, 2)
        return (
            id = 4,
            name = "Example 4: saved portfolio instance",
            A = A,
            b = vec(ceil.(Int, sum(A; dims = 2) / 2)),
            x0 = [1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, -2, 1, 0, -1, 0, 1, -1, 1, -2, -2, 1, 1, 1],
            xl = fill(-2, n),
            xu = fill(2, n),
            c = nothing,
            objective = :portfolio,
            graver_directions = nothing,
        )
    else
        error("Unknown GAMA example $(example_id); choose 1, 2, 3, or 4.")
    end
end

gama_example = load_gama_example(SELECTED_GAMA_EXAMPLE)
A = gama_example.A
b = gama_example.b
x0 = gama_example.x0
xl = gama_example.xl
xu = gama_example.xu
c = gama_example.c
m, n = size(A)

println("Running $(gama_example.name) with A size $(size(A)) and $(length(x0)) variables.")
Running Example 4: saved portfolio instance with A size (5, 25) and 25 variables.
println("Initial point is feasible: ", A * x0 == b)
println("Variable bounds: [", minimum(xl), ", ", maximum(xu), "]")
Initial point is feasible: true

Variable bounds: [-2, 2]
if !@isdefined(DelimitedFiles)
    using DelimitedFiles
end

if !@isdefined(Random)
    using Random
end

if !@isdefined(LinearAlgebra)
    using LinearAlgebra
end

"""Return the path to a shared Notebook 3 data file."""
shared_data_path(filename::AbstractString) = joinpath(QUIP_REPO_DIR, "notebooks_data", filename)

"""
Return the fixed `μ` and `σ` vectors shared by both notebooks.

Returns
-------
Tuple{Vector{Float64}, Vector{Float64}}
    The expected-return vector `μ` and the volatility vector `σ`.
"""
function load_shared_coefficients(num_variables::Integer)
    coeff_path = shared_data_path("3-GAMA_example4_coefficients.csv")
    if !isfile(coeff_path)
        rng = MersenneTwister(1729)
        μ = rand(rng, num_variables)
        σ = rand(rng, num_variables) .* μ
        println("Shared coefficient file not found; generating deterministic coefficients with seed 1729.")
        return (μ, σ)
    end

    coeffs = readdlm(coeff_path, ',', Float64)
    size(coeffs) == (2, num_variables) || error("Expected coefficient array with shape (2, $(num_variables)), found $(size(coeffs)).")
    return (vec(coeffs[1, :]), vec(coeffs[2, :]))
end

ε = 0.01
if SELECTED_GAMA_EXAMPLE == 4
    μ, σ = load_shared_coefficients(n)
else
    μ = zeros(Float64, n)
    σ = zeros(Float64, n)
end

function f(x)
    if SELECTED_GAMA_EXAMPLE == 1
        return sum(abs.(x .- 5))
    elseif SELECTED_GAMA_EXAMPLE == 4
        return -dot(μ, x) + sqrt(((1 - ε) / ε) * dot(σ .^ 2, x .^ 2))
    else
        return dot(c, x)
    end
end
f (generic function with 1 method)
using DelimitedFiles
import lib4ti2_jll
import BinaryWrappers
using NPZ
const lib4ti2_bin = BinaryWrappers.@generate_wrappers(lib4ti2_jll)

"""Return `true` when the bundled 4ti2 `graver` executable is available."""
function has_graver()::Bool
    try
        return success(`$(lib4ti2_bin)/graver --help`)
    catch
        return false
    end
end

"""Print the help text for the bundled 4ti2 `graver` executable."""
function graver()
    run(`$(lib4ti2_bin)/graver --help`)

    return nothing
end

"""Run the bundled 4ti2 `graver` executable on the given project path."""
function graver(proj_path::AbstractString; silent::Bool = true)
    if silent
        run(`$(lib4ti2_bin)/graver -q $(proj_path)`)
    else
        run(`$(lib4ti2_bin)/graver $(proj_path)`)
    end

    return nothing
end

"""Write an integer matrix in the text format expected by 4ti2."""
function write_mat(path::AbstractString, A)
    m, n = size(A)

    open(path, "w") do io
        println(io, "$m $n")

        join(io, (join(@view(A[i, :]), " ") for i = 1:m), "\n")
    end

    return nothing
end

"""Read a matrix written in 4ti2's plain-text format."""
function read_mat(path::AbstractString, type = Int)
    m, n = parse.(Int, split(readline(path)))

    A = Matrix{type}(undef, m, n)

    open(path, "r") do io
        readline(io)

        for (i, line) in enumerate(eachline(io))
            A[i, :] .= parse.(Int, split(line))
        end
    end

    return A
end

"""
Return the Graver basis of `A` by calling the bundled 4ti2 `graver` executable.

Only the constraint matrix is written because the Graver basis depends on `A` alone;
the box bounds `xl` and `xu` are enforced later during augmentation.
"""
function compute_graver_basis_local(A)
    G = nothing

    mktempdir() do path
        proj_path = joinpath(path, "proj")

        write_mat("$(proj_path).mat", A)
        graver(proj_path)

        G = read_mat("$(proj_path).gra")
    end

    return G
end

"""Load the bundled `graver.npy` file when the 4ti2 executable is unavailable."""
function download_graver_basis()
    npy_path = joinpath(JULIA_NOTEBOOKS_DIR, "graver.npy")
    G = NPZ.npyread(npy_path)
    return Array{Int}(G)
end

"""Return the Graver basis of `A`, preferring the bundled 4ti2 binary when available."""
function graver_basis(A)
    if has_graver()
        return compute_graver_basis_local(A)
    else
        return download_graver_basis()
    end
end
graver_basis
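The `.mat` text format that `write_mat` and `read_mat` handle is simply a header line with the dimensions followed by whitespace-separated integer rows. A self-contained round-trip (re-implemented here with `_demo` suffixes so it runs independently of the cell above) illustrates it:

```julia
# Write a matrix in 4ti2's plain-text format and read it back.
function write_mat_demo(path, A)
    m, n = size(A)
    open(path, "w") do io
        println(io, "$m $n")                            # header: "rows cols"
        join(io, (join(A[i, :], " ") for i in 1:m), "\n")
    end
end

function read_mat_demo(path)
    lines = readlines(path)
    m, n = parse.(Int, split(lines[1]))
    return [parse(Int, split(lines[1 + i])[j]) for i in 1:m, j in 1:n]
end

mktempdir() do dir
    path = joinpath(dir, "proj.mat")
    M = [1 1 1 1; 1 5 10 25]                            # Example 1 matrix
    write_mat_demo(path, M)
    print(read(path, String))                           # header plus two rows
    @assert read_mat_demo(path) == M
end
```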
function graver_basis_for_selected_example(A)
    if gama_example.graver_directions !== nothing
        println("Using $(size(gama_example.graver_directions, 1)) built-in Graver directions for Example $(SELECTED_GAMA_EXAMPLE).")
        return gama_example.graver_directions
    end

    return graver_basis(A)
end

G = graver_basis_for_selected_example(A)
29789×25 Matrix{Int64} (Graver basis of A; full output elided)
# Define rules to choose augmentation element, either the best one (argmin) or the first one that is found
function argmin_rule(a)
    local i, y, α = (nothing, Inf, 0)

    # Keep the (objective, step) pair with the smallest objective value
    for (_i, (_y, _α)) in enumerate(a)
        if _y < y
            i, y, α = (_i, _y, _α)
        end
    end

    return (i, (y, α))
end

function greedy_rule(a)
    local i, y, α = (nothing, NaN, Inf)

    for (_i, (_y, _α)) in enumerate(a)
        if _α ≉ 0.0
            return (_i, (_y, _α))
        else
            i, y, α = (_i, _y, _α)
        end
    end

    return (i, (y, α))
end
greedy_rule (generic function with 1 method)
# Bisection rules for finding best step size
"""Compute the best integer step size along a Graver direction within the box bounds."""
function bisection_rule(f::Function, g, x, xl = nothing, xu = nothing, laststep = nothing)
    if g == laststep
        return (f(x), 0)
    end

    if xl === nothing
        xl = zeros(length(x))
    end

    if xu === nothing
        xu = fill(2 * maximum(x), length(x))
    end

    u = maximum(xu) - minimum(xl)
    l = -u

    for (i, gi) in enumerate(g)
        if gi >= 1
            u = min(u, floor(Int, (xu[i] - x[i]) / gi))
            l = max(l, ceil(Int, (xl[i] - x[i]) / gi))
        elseif gi <= -1
            u = min(u, floor(Int, (xl[i] - x[i]) / gi))
            l = max(l, ceil(Int, (xu[i] - x[i]) / gi))
        end
    end

    α = u

    while u - l > 1
        α = ifelse(f(x + l * g) < f(x + u * g), l, u)

        p1 = floor(Int, (l + u) / 2) - 1
        p2 = floor(Int, (l + u) / 2)
        p3 = floor(Int, (l + u) / 2) + 1

        if f(x + p1 * g) < f(x + p2 * g)
            u = floor(Int, (l + u) / 2)
        elseif f(x + p3 * g) < f(x + p2 * g)
            l = floor(Int, (l + u) / 2) + 1
        else
            α = p2
            break
        end
    end

    if f(x + l * g) < f(x + u * g) && f(x + l * g) < f(x + α * g)
        α = l
    elseif f(x + u * g) < f(x + α * g)
        α = u
    end

    return (f(x + α * g), α)
end

# We can just have a single step move (works well with greedy approach)
"""Evaluate whether a single forward or backward step improves the objective."""
function single_move_rule(f::Function, g, x, xl = nothing, xu = nothing, laststep = nothing)
    if xl === nothing
        xl = zeros(length(x))
    end

    if xu === nothing
        xu = fill(2 * maximum(x), length(x))
    end

    α = 0

    if all(x + g .<= xu) && all(x + g .>= xl)
        if f(x + g) < f(x)
            α = 1
        end
    elseif all(x - g .<= xu) && all(x - g .>= xl)
        if f(x - g) < f(x) && f(x - g) < f(x + g)
            α = -1
        end
    end

    return (f(x + α * g), α)
end
single_move_rule
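The box-feasibility bounds computed at the top of `bisection_rule` can be illustrated in isolation. The helper below (`step_bounds`, a standalone re-implementation of that loop, shown on Example 1's data) returns the largest interval of integer multiples $\alpha$ for which $x + \alpha g$ stays inside $[x^l, x^u]$:

```julia
# Integer step range [l, u] such that x + α*g stays inside [xl, xu] for
# every α in the range — the same bounds bisection_rule derives.
function step_bounds(x, g, xl, xu)
    u = maximum(xu) - minimum(xl)
    l = -u
    for (i, gi) in enumerate(g)
        if gi >= 1
            u = min(u, floor(Int, (xu[i] - x[i]) / gi))
            l = max(l, ceil(Int, (xl[i] - x[i]) / gi))
        elseif gi <= -1
            u = min(u, floor(Int, (xl[i] - x[i]) / gi))
            l = max(l, ceil(Int, (xu[i] - x[i]) / gi))
        end
    end
    return l, u
end

x_ex  = [1, 15, 3, 2]        # Example 1 starting point
g_ex  = [5, -6, 0, 1]        # one of its Graver directions
xl_ex = zeros(Int, 4)
xu_ex = fill(15, 4)
@show step_bounds(x_ex, g_ex, xl_ex, xu_ex)   # only α ∈ {0, 1, 2} stay in the box
```

The bisection then searches this interval for the step with the lowest objective value.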
"""Compute the Graver basis for `A` and run the augmentation experiment from `x0`."""
function augmentation(
    f::Function,
    A,
    b,
    x0,
    xl,
    xu;
    silent::Bool = true,
    max_iter::Integer = 1_000,
    step_rule::Function = bisection_rule,
    choice_rule::Function = argmin_rule,
)
    G = graver_basis_for_selected_example(A)

    return augmentation(
        f,
        G,
        A,
        b,
        x0,
        xl,
        xu;
        silent,
        max_iter,
        step_rule,
        choice_rule,
    )
end

"""Apply the selected augmentation rule to a fixed Graver basis until convergence."""
function augmentation(
    f::Function,
    G,
    A,
    b,
    x0,
    xl,
    xu;
    silent::Bool = true,
    max_iter::Integer = 1_000,
    step_rule::Function = bisection_rule,
    choice_rule::Function = argmin_rule,
)
    # Constraints definition
    isfeasible = (x) -> (A * x == b)

    Δ = 1
    n = length(x0)
    k = 1

    if !silent
        println("Initial point: $(x0)")
        println("Objective function: $(f(x0))")
    end

    y = nothing
    x = copy(x0)
    δ = Vector{Int}(undef, n)
    gprev = Vector{Int}(undef, n)

    while Δ != 0 && k < max_iter
        i, (y, Δ) = choice_rule((step_rule(f, g, x, xl, xu, gprev) for g in eachrow(G)))

        δ .= @view(G[i, :]) .* Δ
        x .+= δ
        gprev .= @view(G[i, :])

        if !silent
            println("Iteration ", k)
            println(i, (y, Δ))
            println("Augmentation direction:", gprev)
            println("Distanced moved:", Δ)
            println("Step taken:", δ)
            println("Objective function:", y)
            println(f(x))
            println("Current point:", x)
            println("Are constraints satisfied?", isfeasible(x))
        end

        k += 1
    end

    return (k, y, x)
end
augmentation

First, we compare our augmentation strategies: choosing the best direction or choosing greedily, and, in the greedy case, either computing the best step via bisection or taking a single move. In that order, the augmentation takes more iterations, but each augmentation step becomes cheaper.

println("Best-augmentation: Choosing among the best step that each element of G can do (via bisection), the one that reduces the most the objective")

iter, f_obj, xf = @time augmentation(
    f, G, A, b, x0, xl, xu;
    step_rule = bisection_rule, choice_rule = argmin_rule
)

println("$(iter) iterations")
println("solution: $(xf)")
println("objective: $(f_obj)")

println("Greedy-best-augmentation: Choosing among the best step that each element of G can do (via bisection), the first one encountered that reduces the objective")

iter, f_obj, xf = @time augmentation(
    f, G, A, b, x0, xl, xu;
    step_rule = bisection_rule, choice_rule = greedy_rule,
)

println("$(iter) iterations")
println("solution: $(xf)")
println("objective: $(f_obj)")

println("Greedy-augmentation: Choosing among the first element of G that with a single step reduces the objective")

iter, f_obj, xf = @time augmentation(
    f, G, A, b, x0, xl, xu;
    step_rule = single_move_rule, choice_rule = greedy_rule,
)

println("$(iter) iterations")
println("solution: $(xf)")
println("objective: $(f_obj)")
Best-augmentation: Choosing among the best step that each element of G can do (via bisection), the one that reduces the most the objective
 12.479642 seconds (66.27 M allocations: 3.646 GiB, 5.01% gc time, 10.32% compilation time)
18 iterations

solution: [-2, -2, -1, 1, 0, 1, 2, 0, 0, 1, 0, 0, 0, 0, 1, 2, -1, 1, 1, 2, 2, 1, 0, 2, 2]
objective: -0.6092901929830958
Greedy-best-augmentation: Choosing among the best step that each element of G can do (via bisection), the first one encountered that reduces the objective
  1.114163 seconds (4.96 M allocations: 279.168 MiB, 4.37% gc time, 19.02% compilation time)
42 iterations
solution: [0, 0, 1, 1, 0, 0, 2, -1, 0, 1, 1, 1, 0, 0, 1, 2, 0, 0, 0, 0, 1, 0, 0, 2, 2]
objective: -2.8809811115284694
Greedy-augmentation: Choosing among the first element of G that with a single step reduces the objective
  1.319782 seconds (2.91 M allocations: 153.189 MiB, 2.93% gc time, 75.39% compilation time)
42 iterations

solution: [0, 0, 1, 1, 0, 0, 2, -1, 0, 1, 1, 1, 0, 0, 1, 2, 0, 0, 0, 0, 1, 0, 0, 2, 2]
objective: -2.8809811115284694

Now we can highlight another feature of the algorithm: computing feasible starting solutions. For this we formulate a QUBO and solve it via annealing. We will not encode the integer variables here and will only look for feasible solutions with $x \in \{0,1\}^n$.

QUBO formulation for feasible starting points

We now formulate a QUBO to generate diverse feasible starting points for the augmentation stage. In this part of the notebook, we only search over binary points $x \in \{0,1\}^n$ before passing those feasible candidates to the classical augmentation routine.

To do that, we minimize the squared residual of the linear constraints over binary points:

$$\min_{x \in \{0,1\}^n} \|Ax - b\|_2^2 = \min_{x \in \{0,1\}^n} x^\top A^\top A x - 2 b^\top A x + b^\top b.$$

The constant term $b^\top b$ does not affect the minimizers, so the notebook keeps it as an offset while the binary quadratic model is built from $Q = A^\top A - 2 \operatorname{diag}(A^\top b)$. Any sample with objective value 0 is therefore a feasible starting point for the augmentation stage.
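Because $x_i^2 = x_i$ for binary variables, the linear term $-2 b^\top A x$ can be folded into the diagonal of $Q$. A small standalone check (with an illustrative toy matrix, not the notebook's $A$) confirms the identity:

```julia
using LinearAlgebra

Atoy = [1 1 0 1; 0 1 1 0]                  # illustrative binary constraint matrix
btoy = [2, 1]
Qtoy = Atoy' * Atoy - 2 * diagm(Atoy' * btoy)   # same construction as get_feasible
offset_toy = btoy' * btoy

for x in ([1, 1, 0, 0], [0, 1, 0, 1], [1, 1, 1, 1])
    # For binary x, x'Qx + b'b equals the squared residual ‖Ax - b‖².
    @assert x' * Qtoy * x + offset_toy == sum(abs2, Atoy * x - btoy)
end
println("QUBO energy matches ‖Ax - b‖² on binary points")
```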

We use simulated annealing here because the goal is not a single feasible point but a diverse set of feasible starts. A MIP solver would typically stop after one optimum, while annealing can return several distinct zero-energy states of the same QUBO in one run.

if !@isdefined(JuMP)
    using JuMP
end

if !@isdefined(DWave)
    using DWave
end

if !@isdefined(LinearAlgebra)
    using LinearAlgebra
end
"""Return the committed Example 4 feasible starts when available."""
function load_precomputed_feasible_starts()
    SELECTED_GAMA_EXAMPLE == 4 || return nothing

    feasible_path = shared_data_path("3-GAMA_example4_feasible_starts.csv")
    if !isfile(feasible_path)
        return nothing
    end

    X = readdlm(feasible_path, ',', Int)
    println("Loaded $(size(X, 1)) precomputed feasible solutions from 3-GAMA_example4_feasible_starts.csv.")
    return [vec(X[i, :]) for i in axes(X, 1)]
end

"""Return Example 4 starts, or use the documented start for smaller examples."""
function get_default_feasible_starts(A, b; num_reads = 20)
    if SELECTED_GAMA_EXAMPLE != 4
        println("Using the documented x0 as the feasible start for Example $(SELECTED_GAMA_EXAMPLE).")
        return [copy(x0)]
    end

    X = load_precomputed_feasible_starts()
    if X !== nothing
        return X
    end

    println("Shared feasible-start file not found; recomputing feasible solutions with simulated annealing.")
    return get_feasible(A, b; num_reads = num_reads)
end

"""Return the saved Example 4 Graver order, or the natural order for small examples."""
function load_graver_order(num_directions::Int)
    if SELECTED_GAMA_EXAMPLE != 4
        return collect(1:num_directions)
    end

    order_path = shared_data_path("3-GAMA_example4_graver_order.csv")
    if isfile(order_path)
        order = vec(readdlm(order_path, ',', Int)) .+ 1
        length(order) == num_directions || error("Expected $(num_directions) Graver indices, found $(length(order)).")
        return order
    end

    rng = MersenneTwister(271828)
    println("Shared Graver-order file not found; generating a deterministic permutation with seed 271828.")
    return randperm(rng, num_directions)
end

"""
Sample binary feasible starting points by minimizing the QUBO residual.

Arguments
---------
A, b:
    Constraint matrix and right-hand side.
num_reads:
    Number of reads requested from the annealer.

Returns
-------
Vector{Vector{Int}}
    Binary feasible starting points with one vector per sample.
"""
function get_feasible(A, b; num_reads = 1_000)
    m, n = size(A)

    Q = A' * A - 2 * diagm(A' * b)
    β = b' * b

    # Define Binary Quadratic Model using new DWave.jl
    model = Model(DWave.Neal.Optimizer)

    @variable(model, x[1:n], Bin)

    @objective(model, Min, x' * Q * x + β)

    set_optimizer_attribute(model, "num_reads", num_reads)

    optimize!(model)

    X = [
        round.(Int, value.(x; result = i))
        for i = 1:result_count(model)
        if objective_value(model; result = i) ≈ 0.0
    ]

    println("$(length(X)) feasible solutions found.")

    return X
end
get_feasible
X_feas = get_default_feasible_starts(A, b; num_reads = 20)
G_order = load_graver_order(size(G, 1));
Loaded 20 precomputed feasible solutions from 3-GAMA_example4_feasible_starts.csv.

By default the notebook loads the committed feasible starts from notebooks_data/3-GAMA_example4_feasible_starts.csv, so the experiment begins from the same saved 20 binary points on every run. If that file is unavailable, the notebook falls back to simulated annealing and regenerates the starts with DWave.Neal. We now apply the augmentation procedure to each feasible point and record the final objective, the runtime, and the number of iterations it takes. Here we use the greedy single-move augmentation rule because of runtime.

"""
Run the full-basis augmentation experiment for each feasible starting point in `X`.

Arguments
---------
f, G, A, b, X, xl, xu:
    Objective function, Graver basis, linear constraints, feasible starts, and box bounds.

Returns
-------
Tuple{Vector{Float64}, Vector{Int}, Vector{Float64}}
    Final objectives, iteration counts, and runtimes for each feasible start.
"""
function augmentation_experiment(f, G, A, b, X, xl, xu)
    K = length(X)

    Y = Vector{Float64}(undef, K)
    I = Vector{Int}(undef, K)
    T = Vector{Float64}(undef, K)

    for (i, x) in enumerate(X)
        result = @timed augmentation(
            f, G, A, b, x, xl, xu;
            step_rule   = single_move_rule,
            choice_rule = greedy_rule,
        )

        num_iter, obj_val, _ = result.value

        Y[i] = obj_val
        I[i] = num_iter
        T[i] = result.time
    end

    return (Y, I, T)
end
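The experiment relies on Base's `@timed`, which (since Julia 1.5) returns a named tuple with `value`, `time`, and allocation fields; a quick standalone check:

```julia
# `@timed` wraps an expression and reports both its result and its cost.
result = @timed sum(1:1_000)

result.value   # 500500, the wrapped expression's return value
result.time    # elapsed seconds (machine-dependent)
```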

Y_feas = f.(X_feas)

Y_aug, I_aug, T_aug = augmentation_experiment(f, G, A, b, X_feas, xl, xu);

We record the initial objective value, the objective value after augmentation, and the number of augmentation steps.

if !@isdefined(Measures)
    using Measures
end

if !@isdefined(Random)
    using Random
end

if !@isdefined(Plots)
    using Plots
end

if !@isdefined(StatsBase)
    using StatsBase
end

if !@isdefined(StatsPlots)
    using StatsPlots
end

if !@isdefined(Printf)
    using Printf
end
"""
Plot the initial and augmented objectives together with the iteration counts.

Arguments
---------
Y_feas, Y_aug, I_aug:
    Initial objectives, augmented objectives, and iteration counts.
experiment_name:
    Title shown on the plot.

Returns
-------
Plots.Plot
    The combined plot object.
"""
function plot_augmentation(Y_feas, Y_aug, I_aug; experiment_name = "Augmentation")
    plt = plot(;
        plot_title = experiment_name,
        top_margin = 6mm,
        bottom_margin = 15mm,
    )

    plot!(
        plt, Y_feas;
        label       = "Initial",
        color       = :blue,
        markershape = :square,
    )
    plot!(
        plt, Y_aug;
        label       = "Augmented",
        color       = :orange,
        markershape = :square,
        ylabel      = "Objective Value",
        xlabel      = "Solutions",
        legend      = (0.1, -0.25),
    )

    scatter!(
        twinx(plt), I_aug;
        label       = "Iterations",
        color       = :green,
        legend      = (0.85, -0.25),
        markershape = :diamond,
        ylabel      = "Iterations",
        ylims       = (0, maximum(I_aug) + 1),
    )

    return plt
end

plot_augmentation(Y_feas, Y_aug, I_aug; experiment_name = "Complete-basis greedy ($(size(G, 1)) directions)")

With the complete Graver basis, every starting point improves substantially, but the final objective values can still differ. Here, "complete basis" refers only to the direction set; the augmentation rule still uses greedy single-move selection (chosen for runtime) on this nonseparable convex objective, so different feasible starts can terminate at different local optima.

Now let’s try an extreme case where we keep only 10 elements of the Graver basis.

"""
Run the augmentation experiment on a shared subset of the Graver basis.

Arguments
---------
f, G, A, b, X, xl, xu:
    Objective function, Graver basis, linear constraints, feasible starts, and box bounds.
order:
    Shared ordering of the Graver directions used to build nested subsets.
num_samples:
    Number of Graver directions kept in the subset.

Returns
-------
Tuple{Vector{Float64}, Vector{Int}, Vector{Float64}}
    Final objectives, iteration counts, and runtimes for each feasible start.
"""
function partial_augmentation_experiment(f, G, A, b, X, xl, xu, order; num_samples = 10)
    num_samples = min(num_samples, length(order), size(G, 1))
    K = length(X)

    Y = Vector{Float64}(undef, K)
    I = Vector{Int}(undef, K)
    T = Vector{Float64}(undef, K)

    index = order[1:num_samples]
    G_idx = G[index, :]

    for (i, x) in enumerate(X)
        result = @timed augmentation(
            f, G_idx, A, b, x, xl, xu;
            step_rule   = single_move_rule,
            choice_rule = greedy_rule,
        )

        num_iter, obj_val, _ = result.value

        Y[i] = obj_val
        I[i] = num_iter
        T[i] = result.time
    end

    return (Y, I, T)
end

num_partial_directions = min(10, size(G, 1))
Y_feas = f.(X_feas)

Y_paug, I_paug, T_paug = partial_augmentation_experiment(f, G, A, b, X_feas, xl, xu, G_order; num_samples = num_partial_directions);
plot_augmentation(Y_feas, Y_paug, I_paug; experiment_name = "Partial-basis greedy ($(num_partial_directions) directions)")

With only 10 Graver directions, the augmentation runs quickly but often stalls after only a few improving moves.

"""
Return log-scale tick positions and labels spanning the positive entries of `values`,
optionally including a floor tick labeled "0" for lifted zero-gap entries.
"""
function log_ticks_for(values; include_zero_floor = nothing)
    positive = [Float64(v) for v in vec(values) if Float64(v) > 0]
    if isempty(positive)
        ticks = [1.0]
    else
        tick_lo = floor(Int, log10(minimum(positive)))
        tick_hi = ceil(Int, log10(maximum(positive)))
        ticks = 10.0 .^ collect(tick_lo:tick_hi)
    end

    if include_zero_floor !== nothing
        ticks = sort(unique(vcat([Float64(include_zero_floor)], ticks)))
    end

    labels = map(ticks) do t
        is_floor = include_zero_floor !== nothing && isapprox(
            t, Float64(include_zero_floor);
            atol = max(Float64(include_zero_floor) / 10, eps(Float64)), rtol = 0.0,
        )
        is_floor ? "0" : @sprintf("%.1e", t)
    end
    return ticks, labels
end
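As a standalone check of the decade-tick logic, with assumed sample values:

```julia
using Printf

values = [0.002, 0.5]                          # assumed runtimes in seconds
tick_lo = floor(Int, log10(minimum(values)))   # -3
tick_hi = ceil(Int, log10(maximum(values)))    #  0
ticks = 10.0 .^ collect(tick_lo:tick_hi)       # one tick per decade
labels = [@sprintf("%.1e", t) for t in ticks]
```

The ticks cover every decade touched by the data, so the log axis stays readable even when the two runtime series differ by orders of magnitude.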

"""
Plot the runtime comparison between the full and partial augmentation experiments.

Arguments
---------
T_aug, T_paug:
    Full-basis and partial-basis runtimes.
partial_label:
    Legend label used for the partial-basis series.

Returns
-------
Plots.Plot
    The runtime comparison plot.
"""
function plot_augmentation_runtime(T_aug, T_paug; partial_label = "10 sampled Graver directions")
    ticks, labels = log_ticks_for(vcat(T_aug, T_paug))
    plt = plot(;
        plot_title = "Augmentation Runtime",
        xlabel     = "Solutions",
        ylabel     = "Runtime (s)",
        legend     = :outertopright,
        right_margin = 18mm,
        size       = (760, 400),
        yscale     = :log10,
        yticks     = (ticks, labels),
    )

    plot!(T_aug; label = "Complete basis", markershape = :diamond)
    plot!(T_paug; label = partial_label, markershape = :diamond)

    return plt
end

plot_augmentation_runtime(T_aug, T_paug; partial_label = "$(num_partial_directions) sampled Graver directions")

This speed/quality tradeoff motivates the next experiment, which samples larger fractions of the Graver basis to look for a better middle ground.

"""
Run the partial-basis experiment over several fractions of the full Graver basis.

Arguments
---------
f, G, A, b, X, xl, xu:
    Objective function, Graver basis, linear constraints, feasible starts, and box bounds.
order:
    Shared ordering of the Graver directions used to build nested subsets.

Returns
-------
Tuple{Matrix{Float64}, Matrix{Int}, Matrix{Float64}}
    Final objectives, iteration counts, and runtimes for each sample-size fraction.
"""
function multiple_partial_augmentation_experiment(f, G, A, b, X, xl, xu, order)
    N = 10 # Discretization of the fractions of Graver considered
    M = length(X)
    K = size(G, 1)

    Y = Matrix{Float64}(undef, N, M)
    I = Matrix{Int}(undef, N, M)
    T = Matrix{Float64}(undef, N, M)

    for j in 1:N
        num_samples = max(1, trunc(Int, K / N * j))
        Y_p, I_p, T_p = partial_augmentation_experiment(
            f, G, A, b, X, xl, xu, order; num_samples = num_samples
        )

        Y[j, :] .= Y_p
        I[j, :] .= I_p
        T[j, :] .= T_p
    end

    return (Y, I, T)
end
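Because every fraction takes a prefix of the same shared `order`, the sampled direction sets are nested, and the schedule `max(1, trunc(Int, K / N * j))` steps the subset size from roughly 10% up to exactly 100% of the basis. A standalone illustration with an assumed basis size:

```julia
K, N = 37, 10   # assumed basis size and number of fractions
sizes = [max(1, trunc(Int, K / N * j)) for j in 1:N]
# the sizes grow by roughly K/N per step, and the last subset is the full basis
```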

Y_mpaug, I_mpaug, T_mpaug = multiple_partial_augmentation_experiment(f, G, A, b, X_feas, xl, xu, G_order);
"""Return log-safe objective gaps together with the floor used to display zero-gap entries."""
function lift_zero_gaps(Y, global_minimum)
    Y_gap = Y .- global_minimum
    positive = Y_gap[Y_gap .> 0]
    zero_floor = isempty(positive) ? eps(Float64) : minimum(positive) / 10
    adjusted = map(y -> y > 0 ? y : zero_floor, Y_gap)
    return adjusted, zero_floor
end
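For instance, on assumed objectives where several starts hit the best value, the same logic (inlined here so the snippet stands alone) lifts the zero gaps to one decade below the smallest positive gap so they remain drawable on a log axis:

```julia
# Inlined copy of the `lift_zero_gaps` logic on assumed values.
Y = [5.0, 3.0, 3.0]                 # two starts reach the assumed best objective 3.0
Y_gap = Y .- 3.0                    # gaps: [2.0, 0.0, 0.0]
positive = Y_gap[Y_gap .> 0]
zero_floor = isempty(positive) ? eps(Float64) : minimum(positive) / 10
adjusted = map(y -> y > 0 ? y : zero_floor, Y_gap)
```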

"""
Plot the log-scaled objective gap to the best full-basis result across sample sizes.

Arguments
---------
Y_feas, Y_mpaug, global_minimum:
    Initial objectives, partial-basis objectives, and the best full-basis objective.

Returns
-------
Plots.Plot
    The boxplot object with zero-gap entries lifted for the log axis.
"""
function plot_multiple_partial_augmentation(Y_feas, Y_mpaug, global_minimum)
    Y = hcat(Y_feas, Y_mpaug')
    Y_gap, zero_floor = lift_zero_gaps(Y, global_minimum)
    X = permutedims(["Initial"; ["$(10i)% |G|" for i = 1:size(Y_mpaug, 1)]])

    ticks, labels = log_ticks_for(Y_gap; include_zero_floor = zero_floor)

    plt = boxplot(X, Y_gap;
        plot_title = "Multiple partial augmentation samples",
        size       = (800, 400),
        margin     = 5mm,
        ylabel     = "Objective gap to best full-basis result",
        xlabel     = "Sample Sizes",
        yscale     = :log10,
        yticks     = (ticks, labels),
        legend     = false,
    )

    return plt
end

plot_multiple_partial_augmentation(Y_feas, Y_mpaug, minimum(Y_aug))

References