Algorithms

This section documents the main algorithms of the package. These are the ones typical users will call.

The typical signature for these algorithms is:

my_algorithm(f, grad!, lmo, x0)
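For instance, here is a minimal sketch of a full run, along the lines of the package README: least squares over the probability simplex, using the exported ProbabilitySimplexOracle, compute_extreme_point, and the Agnostic line search.

using FrankWolfe, LinearAlgebra

n = 1000
xp = rand(n)
xp ./= sum(xp)                        # reference point inside the simplex

f(x) = norm(x - xp)^2                 # objective to minimize
grad!(g, x) = (g .= 2 .* (x .- xp))   # in-place gradient

lmo = FrankWolfe.ProbabilitySimplexOracle(1.0)
x0 = FrankWolfe.compute_extreme_point(lmo, zeros(n))  # start from a vertex

x, v, primal, dual_gap, traj_data = FrankWolfe.frank_wolfe(
    f, grad!, lmo, x0;
    max_iteration=1000,
    line_search=FrankWolfe.Agnostic(),
    verbose=true,
)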

Standard algorithms

FrankWolfe.frank_wolfe — Method
frank_wolfe(f, grad!, lmo, x0; kwargs...)

Simplest form of the Frank-Wolfe algorithm.

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration (see the sketch after this list)
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed
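As referenced in the callback entry above, here is a sketch of a custom callback, reusing f, grad!, lmo, x0 from the first example. The state field names (t, primal, dual_gap) and the convention that returning false stops the run are assumptions to check against the CallbackState documentation:

# Logs every 100 iterations and stops once the dual gap is small enough.
function my_callback(state)
    if state.t % 100 == 0
        println("iteration $(state.t): primal=$(state.primal) dual_gap=$(state.dual_gap)")
    end
    return state.dual_gap > 1e-8   # assumed: returning false interrupts the run
end

x, v, primal, dual_gap, traj_data = FrankWolfe.frank_wolfe(
    f, grad!, lmo, x0;
    callback=my_callback,
)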

Return

Returns a tuple (x, v, primal, dual_gap, traj_data) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
FrankWolfe.lazified_conditional_gradient — Method
lazified_conditional_gradient(f, grad!, lmo_base, x0; kwargs...)

Similar to FrankWolfe.frank_wolfe but lazifying the LMO: each call is stored in a cache, which is looked up first for a good-enough direction. The cache used is a FrankWolfe.MultiCacheLMO or a FrankWolfe.VectorCacheLMO depending on whether the provided cache_size option is finite.
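With f, grad!, lmo, x0 as in the first example, a sketch of the cache_size effect (the keyword named above):

# A finite cache_size selects a MultiCacheLMO, an infinite one a VectorCacheLMO.
x, v, primal, dual_gap, traj_data = FrankWolfe.lazified_conditional_gradient(
    f, grad!, lmo, x0;
    cache_size=100,          # bounded cache of previously computed vertices
    max_iteration=10_000,
)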

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Return

Returns a tuple (x, v, primal, dual_gap, traj_data) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
FrankWolfe.stochastic_frank_wolfe — Method
stochastic_frank_wolfe(f::StochasticObjective, lmo, x0; kwargs...)

Stochastic version of Frank-Wolfe; the objective and gradient are evaluated stochastically, as implemented through the FrankWolfe.StochasticObjective interface.

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Specific keyword arguments

Keyword arguments include batch_size to pass a fixed batch size, or a batch_iterator implementing batch_size = FrankWolfe.batchsize_iterate(batch_iterator) for adaptive batch sizes, as in Variance-Reduced and Projection-Free Stochastic Optimization, E. Hazan, H. Luo, 2016.

Similarly, a constant momentum can be passed or replaced by a momentum_iterator implementing momentum = FrankWolfe.momentum_iterate(momentum_iterator).
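A sketch of a custom batch-size iterator implementing the interface quoted above; the doubling schedule and the type name are illustrative:

using FrankWolfe

# Doubles the batch size at each call, up to a cap; implements the
# batch_size = FrankWolfe.batchsize_iterate(batch_iterator) interface.
mutable struct DoublingBatch
    size::Int
    cap::Int
end

function FrankWolfe.batchsize_iterate(b::DoublingBatch)
    b.size = min(2 * b.size, b.cap)
    return b.size
end

# passed as: stochastic_frank_wolfe(f, lmo, x0; batch_iterator=DoublingBatch(8, 4096))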

Return

Returns a tuple (x, v, primal, dual_gap, traj_data) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
FrankWolfe.block_coordinate_frank_wolfe — Function
block_coordinate_frank_wolfe(f, grad!, lmo::ProductLMO{N}, x0; kwargs...) where {N}

Block-coordinate version of the Frank-Wolfe algorithm. Minimizes the objective f over the product of feasible domains specified by the lmo. The optional argument update_order is of type FrankWolfe.BlockCoordinateUpdateOrder and controls the order in which the blocks are updated. The argument update_step is a single instance or a tuple of FrankWolfe.UpdateStep and defines which FW algorithms are used to update the iterates in the different blocks.

See S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher 2013 and A. Beck, E. Pauwels and S. Sabach 2015 for more details about Block-Coordinate Frank-Wolfe.
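A sketch of a call over two blocks; ProductLMO is the constructor named in the signature, while CyclicUpdate and the BlockVector starting point are assumptions to check against their docstrings:

lmo = FrankWolfe.ProductLMO(
    FrankWolfe.ProbabilitySimplexOracle(1.0),
    FrankWolfe.UnitSimplexOracle(1.0),
)
# one block per domain (assumed BlockVector constructor)
x0 = FrankWolfe.BlockVector([fill(1/n, n), zeros(n)])

x, v, primal, dual_gap, traj_data = FrankWolfe.block_coordinate_frank_wolfe(
    f, grad!, lmo, x0;
    update_order=FrankWolfe.CyclicUpdate(),
)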

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Return

The method returns a tuple (x, v, primal, dual_gap, traj_data) with:

  • x: cartesian product of final iterates
  • v: cartesian product of last vertices of the LMOs
  • primal: final primal value f(x)
  • dual_gap: final Frank-Wolfe gap
  • traj_data: vector of trajectory information.

Active-set based methods

The following algorithms maintain the representation of the iterates as a convex combination of vertices.

Away-step

FrankWolfe.away_frank_wolfe — Method
away_frank_wolfe(f, grad!, lmo, x0; kwargs...)

Frank-Wolfe with away steps. The algorithm maintains the current iterate as a convex combination of vertices in the FrankWolfe.ActiveSet data structure. See M. Besançon, A. Carderera and S. Pokutta 2021 for illustrations of away steps.
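Reusing f, grad!, lmo, x0 from the first example, a sketch showing the extra active_set return value; iteration over (weight, vertex) pairs is an assumption to check against the ActiveSet docstring:

x, v, primal, dual_gap, traj_data, active_set = FrankWolfe.away_frank_wolfe(
    f, grad!, lmo, x0,
)

# x is the convex combination of the stored vertices
for (weight, vertex) in active_set
    # inspect the decomposition, e.g. count the nonzero weights
end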

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Return

Returns a tuple (x, v, primal, dual_gap, traj_data, active_set) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
  • active_set: the computed active set of vertices, of which the solution is a convex combination

Pairwise Frank-Wolfe

FrankWolfe.blended_pairwise_conditional_gradient — Method
blended_pairwise_conditional_gradient(f, grad!, lmo, x0; kwargs...)

Implements the BPCG algorithm from Tsuji, Tanaka, Pokutta (2021). The method uses an active set of current vertices. Unlike away-step, it transfers weight from an away vertex to another vertex of the active set.
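The call shape matches the other active-set methods; the sketch below uses the lazy keyword, an assumption to check against the full docstring of this method:

x, v, primal, dual_gap, traj_data, active_set = FrankWolfe.blended_pairwise_conditional_gradient(
    f, grad!, lmo, x0;
    lazy=true,   # assumed keyword: lazified vertex selection from the active set
)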

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Return

Returns a tuple (x, v, primal, dual_gap, traj_data, active_set) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
  • active_set: the computed active set of vertices, of which the solution is a convex combination
FrankWolfe.pairwise_frank_wolfe — Method
pairwise_frank_wolfe(f, grad!, lmo, x0; kwargs...)

Frank-Wolfe with pairwise steps. The algorithm maintains the current iterate as a convex combination of vertices in the FrankWolfe.ActiveSet data structure. See M. Besançon, A. Carderera and S. Pokutta 2021 for illustrations of away steps. Unlike away-step, it transfers weight from an away vertex to another vertex.

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Return

Returns a tuple (x, v, primal, dual_gap, traj_data, active_set) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
  • active_set: the computed active set of vertices, of which the solution is a convex combination

Blended Conditional Gradient

FrankWolfe.blended_conditional_gradient — Method
blended_conditional_gradient(f, grad!, lmo, x0; kwargs...)

Entry point for the Blended Conditional Gradient algorithm. See Braun, Gábor, et al. "Blended Conditional Gradients" ICML 2019. The method works on an active set like FrankWolfe.away_frank_wolfe, performing gradient descent over the convex hull of active vertices, removing vertices when their weight drops to 0 and adding new vertices by calling the linear oracle in a lazy fashion.

Common arguments

These positional arguments are common to most Frank-Wolfe variants:

  • f: a function f(x) computing the value of the objective to minimize at point x
  • grad!: a function grad!(g, x) overwriting g with the gradient of f at point x
  • lmo: a linear minimization oracle, subtyping LinearMinimizationOracle
  • x0: a starting point for the optimization

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Return

Returns a tuple (x, v, primal, dual_gap, traj_data, active_set) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
  • active_set: the computed active set of vertices, of which the solution is a convex combination
FrankWolfe.build_reduced_problem — Method
build_reduced_problem(atoms::AbstractVector{<:AbstractVector}, hessian, weights, gradient, tolerance)

Given an active set formed by the vectors in atoms, a (constant) hessian and a gradient, constructs a quadratic problem over the unit probability simplex that is equivalent to minimizing the original function over the convex hull of the active set. If λ are the barycentric coordinates, of dimension equal to the cardinality of the active set, the objective function is:

f(λ) = reduced_linear^T λ + 0.5 * λ^T reduced_hessian λ

If the current iterate already has a strong-Wolfe gap below tolerance over the convex hull of the active set, the function returns nothing (as there is nothing to do).
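For a quadratic f(x) = b^T x + 0.5 x^T H x, the reduction is a direct substitution x = Aλ with the atoms stacked as columns of A; a sketch with illustrative variable names:

# Stack the active atoms as columns of A, so that x = A * λ.
# Then f(A*λ) = (A'b)' * λ + 0.5 * λ' * (A'*H*A) * λ, matching the
# reduced problem above.
A = reduce(hcat, atoms)             # d × k matrix of active vertices
reduced_linear  = A' * b            # length-k vector
reduced_hessian = A' * hessian * A  # k × k matrix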

FrankWolfe.lp_separation_oracle — Method

Returns either a tuple (y, val), with y an atom from the active set satisfying the progress criterion and val the corresponding gap dot(y, direction), or the same tuple with y obtained from the LMO.

inplace_loop controls whether the iterate type allows in-place writes. kwargs are passed on to the LMO oracle.

FrankWolfe.minimize_over_convex_hull! — Method
minimize_over_convex_hull!

Given a function f with gradient grad! and an active set active_set, this function minimizes f over the convex hull of the active set until the strong-Wolfe gap over the active set is below tolerance.

It will either directly minimize over the convex hull using simplex gradient descent, or it will transform the problem to barycentric coordinates and minimize over the unit probability simplex using gradient descent or Nesterov's accelerated gradient descent.

FrankWolfe.simplex_gradient_descent_over_convex_hull — Method
simplex_gradient_descent_over_convex_hull(f, grad!, gradient, active_set, tolerance, t, time_start, non_simplex_iter)

Minimizes an objective function over the convex hull of the active set until the strong-Wolfe gap is below tolerance using simplex gradient descent.


Corrective Frank-Wolfe

FrankWolfe.corrective_frank_wolfe — Method
corrective_frank_wolfe(f, grad!, lmo, corrective_step, active_set::AS; kwargs...)

A corrective Frank-Wolfe variant with corrective step defined by corrective_step.

A corrective FW algorithm alternates between a standard FW step at which a vertex is added to the active set and a corrective step at which an update is performed in the convex hull of current vertices. Examples of corrective FW algorithms include blended (pairwise) conditional gradients, away-step Frank-Wolfe, and fully-corrective Frank-Wolfe.
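A sketch of the call shape; the ActiveSet constructor from (weight, vertex) pairs is part of the package, while the choice of corrective step object is left abstract (e.g. the HybridPairAwayStep documented below, whose constructor arguments are not shown here):

active_set = FrankWolfe.ActiveSet([(1.0, x0)])  # seed the convex decomposition

x, v, primal, dual_gap, traj_data, active_set = FrankWolfe.corrective_frank_wolfe(
    f, grad!, lmo, corrective_step, active_set,
)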

Common keyword arguments

These keyword arguments are common to most Frank-Wolfe variants.

Warning

The current variant may have additional keyword arguments, documented elsewhere, or it may only use a subset of the ones listed below. The default values of these arguments may also vary between variants, and thus are not part of the public API.

  • line_search::LineSearchMethod: an object specifying the line search and its parameters (see LineSearchMethod)
  • momentum::Union{Real,Nothing}=nothing: constant momentum to apply to the gradient
  • epsilon::Real: absolute dual gap threshold at which the algorithm is interrupted
  • max_iteration::Integer: maximum number of iterations after which the algorithm is interrupted
  • print_iter::Integer: interval between two consecutive log prints, expressed in number of iterations
  • trajectory::Bool=false: whether to record the trajectory of algorithm states (through callbacks)
  • verbose::Bool: whether to print periodic logs (through callbacks)
  • memory_mode::MemoryEmphasis: an object dictating whether the algorithm operates in-place or out-of-place (see MemoryEmphasis)
  • gradient=nothing: pre-allocated container for the gradient
  • callback=nothing: function called on a CallbackState at each iteration
  • traj_data=[]: pre-allocated storage for the trajectory of algorithm states
  • timeout::Real=Inf: maximum time in seconds after which the algorithm is interrupted
  • linesearch_workspace=nothing: pre-allocated workspace for the line search
  • dual_gap_compute_frequency::Integer=1: frequency, in iterations, at which the dual gap is computed

Return

Returns a tuple (x, v, primal, dual_gap, traj_data, active_set) with:

  • x: the final iterate
  • v: the last vertex from the linear minimization oracle
  • primal: the final primal value f(x)
  • dual_gap: the final Frank-Wolfe gap
  • traj_data: a vector of trajectory information, each element being the output of callback_state.
  • active_set: the computed active set of vertices, of which the solution is a convex combination
FrankWolfe.HybridPairAwayStep — Type

Compares a pairwise step and an away step and chooses the one making the most progress. The line search is computed for both steps. If one step incurs a drop (removing a vertex from the active set), it is favored; otherwise the step decreasing the primal value the most is favored.

FrankWolfe.prepare_corrective_step — Function
prepare_corrective_step(corrective_step::CS, f, grad!, gradient, active_set, t, lmo, primal, phi, tot_time) -> (should_compute_vertex, use_corrective)

should_compute_vertex is a boolean flag deciding whether a new vertex should be computed. use_corrective is a function that takes the vertex (the vertex is a valid new vertex only if should_compute_vertex was true).

FrankWolfe.run_corrective_step — Function
run_corrective_step(corrective_step, f, grad!, gradient, x, active_set, t, lmo, line_search, linesearch_workspace, primal, phi, tot_time, callback, renorm_interval) -> (x, phi, primal)

Corrective step method specific to the CS corrective_step type. The corrective step can perform any update over the current active set; the function should return the new iterate, whether a FW step should be run next with the boolean should_fw_step, and compute a new dual gap estimate phi.


Alternating Methods

Problems over intersections of convex sets, i.e.

\[\min_{x \in \bigcap_{i=1}^n P_i} f(x),\]

pose a challenge as one has to combine the information of two or more LMOs.

FrankWolfe.alternating_linear_minimization converts the problem into a series of subproblems over single sets. To find a point within the intersection, one minimizes both the distance to the iterates of the other subproblems and the original objective function.
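A sketch of such a call, passing FrankWolfe.block_coordinate_frank_wolfe as the inner block-coordinate method; the positional order follows the signature documented below:

x, v, primal, dual_gap, dist2, traj_data = FrankWolfe.alternating_linear_minimization(
    FrankWolfe.block_coordinate_frank_wolfe,
    f, grad!,
    (FrankWolfe.ProbabilitySimplexOracle(1.0), FrankWolfe.UnitSimplexOracle(1.0)),
    x0,  # per the docstring below, a tuple of starting points, one per domain
)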

FrankWolfe.alternating_projections solves feasibility problems over intersections of feasible regions.
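And the corresponding feasibility-only call, a sketch with the return shape taken from the docstring below:

x, v, dual_gap, dist2, traj_data = FrankWolfe.alternating_projections(
    (FrankWolfe.ProbabilitySimplexOracle(1.0), FrankWolfe.UnitSimplexOracle(1.0)),
    x0,
)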

FrankWolfe.alternating_linear_minimization — Method
alternating_linear_minimization(bc_algo::BlockCoordinateMethod, f, grad!, lmos::NTuple{N,LinearMinimizationOracle}, x0; ...) where {N}

Alternating Linear Minimization minimizes the objective f over the intersection of the feasible domains specified by lmos. The tuple x0 defines the initial points for each domain. Returns a tuple (x, v, primal, dual_gap, dist2, traj_data) with:

  • x: cartesian product of final iterates
  • v: cartesian product of last vertices of the LMOs
  • primal: final primal value f(x)
  • dual_gap: final Frank-Wolfe gap
  • dist2: 1/2 of the sum of squared pairwise distances between iterates
  • traj_data: vector of trajectory information.
FrankWolfe.alternating_projections — Method
alternating_projections(lmos::NTuple{N,LinearMinimizationOracle}, x0; ...) where {N}

Computes a point in the intersection of feasible domains specified by lmos. Returns a tuple (x, v, dual_gap, dist2, traj_data) with:

  • x: cartesian product of final iterates
  • v: cartesian product of last vertices of the LMOs
  • dual_gap: final Frank-Wolfe gap
  • dist2: 1/2 of the sum of squared pairwise distances between iterates
  • traj_data: vector of trajectory information.
