NLopt
(require nlopt) | package: nlopt |
I consider this package to be in a somewhat beta state. I don’t yet promise to keep the API from changing, and it still needs feedback. Feel free to comment on it.
This package provides a wrapper for the NLopt nonlinear optimization package[NLopt], which is a common interface for a number of different optimization routines.
The nlopt module currently reexports the contents of nlopt/highlevel. The interface provided by nlopt/highlevel will be more stable than the interface provided by nlopt. The goal of nlopt will be to facilitate interactive programming, scripting, and generally getting things done quickly with a minimal amount of typing. If nlopt/highlevel is no longer the best way to do that, nlopt will provide different stuff which better satisfies those goals, while nlopt/highlevel will be left alone. (Perhaps nlopt would reexport a hypothetical new "nlopt/highlevel2".)
1 Installation
This Racket wrapper was developed and tested against NLopt 2.4.2; I’d expect anything later in the 2.4.x series to suffice as well, but there are no guarantees.
You’ll need to get the appropriate NLopt shared library for your Racket installation. Precompiled Windows binaries are available from the NLopt website; make sure you download the appropriate bitness.
Many Linux distributions provide NLopt. Look for libnlopt0 or similar.
In FreeBSD, NLopt is available in the ports collection as nlopt.
And on Mac OS X, I believe you’re stuck compiling it yourself.
If you compile it yourself, or on Windows, where you’re handed a DLL, you’ll need to ensure that the shared library ends up somewhere Racket can find it with minimal effort. Placing it in the same directory as your code might work, or you can modify your PATH or LD_LIBRARY_PATH environment variables to point to it.
2 High Level Interface
(require nlopt/highlevel) | package: nlopt |
This is the most unstable part of the package. Not only will things here change, they might not even work right now.
procedure
(minimize/flvector ...) → ...
procedure
(maximize/flvector ...) → ...
procedure
(optimize/flvector fun
                   x0
                   #:maximize maximize
                   #:minimize minimize
                   #:method method
                   #:jac jac
                   #:bounds bounds
                   #:ineq-constraints ineq-constraints
                   #:eq-constraints eq-constraints
                   #:tolerance tolerance
                   #:epsilon epsilon
                   #:maxeval maxeval
                   #:maxtime maxtime)
 → real? flvector?
  fun : (-> flvector? any/c flonum?)
  x0 : flvector?
  maximize : boolean?
  minimize : boolean?
  method : (or/c symbol? #f)
  jac : (or/c (-> flonum? flvector? flvector? any/c) #f)
  bounds : (or/c (sequence/c (pair/c real? real?)) #f)
  ineq-constraints : (or/c #f
                           (sequence/c
                            (or/c (-> flvector? any/c flonum?)
                                  (cons/c (-> flvector? any/c flonum?)
                                          (-> flonum? flvector? flvector? any/c)))))
  eq-constraints : (or/c #f
                         (sequence/c
                          (or/c (-> flvector? any/c flonum?)
                                (cons/c (-> flvector? any/c flonum?)
                                        (-> flonum? flvector? flvector? any/c)))))
  tolerance : real?
  epsilon : real?
  maxeval : natural-number/c
  maxtime : (and/c positive? real?)
These /flvector variants require flonum? values (which is largely enforced by the flvectors themselves). /flvector objective functions should be of the form (fun x), where x is an flvector?, and Jacobians should be of the form (jac y x grad), where y is (fun x) and grad is an flvector? to be populated with the gradient.
fun is the procedure that will be optimized. It shouldn’t be invoked significantly more than maxeval times, nor for significantly longer than maxtime seconds. x0 is your initial guess for the optimization; some algorithms are more sensitive to the quality of your initial guess than others. data will be passed to every invocation of fun or jac. You may use force-stop inside of fun.
#:maximize and #:minimize determine whether a maximization or a minimization will be performed. Exactly one of them must be #t; anything else will result in an exception being raised.
method is a symbol indicating which optimization algorithm to run. See the Algorithms section for your options. If you omit it, or set it to #f, an algorithm will be chosen automatically based on jac, bounds, ineq-constraints, and eq-constraints. It should run without error; performance is not guaranteed.
jac is the Jacobian of fun. If you omit it, or supply #f, then a very simple approximation will be constructed by determining how much fun varies when the current x is varied by epsilon. If you provide a Jacobian, or the algorithm you select does not use one, then epsilon is unused.
bounds may be #f, in which case no upper or lower bounds are applied. If it isn’t #f, it should be a sequence of the same length as x0, and each element should be a pair: the car of each pair is the lower bound and the cdr the upper bound. You may supply +max.0 and -max.0 if you don’t wish to bound a dimension above or below, respectively.
ineq-constraints and eq-constraints are sequences of constraint functions (or #f). Constraint functions have the same interface as an objective function. An inequality constraint f constrains the optimization so that (<= (f x _) 0.0) is #t. An equality constraint requires that (= (f x _) 0.0) remain #t. You may provide just the constraint function itself, or a pair containing the constraint function in its car and the Jacobian of the constraint function in its cdr.
tolerance is not currently used. Sorry!
This procedure’s interface was based on scipy’s optimize function.
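As a sketch of how these pieces fit together (assuming, per the contract above, that optimize/flvector returns two values, the optimal objective value and the optimizing point):

```racket
#lang racket
(require nlopt/highlevel
         racket/flonum)

;; Sketch: minimize f(x, y) = (x - 1)^2 + (y + 2)^2 over a box,
;; subject to x + y >= 0, written as the <= 0 constraint 0 - (x + y).
(define (f v data)                  ; data is unused here
  (fl+ (flexpt (fl- (flvector-ref v 0) 1.0) 2.0)
       (flexpt (fl+ (flvector-ref v 1) 2.0) 2.0)))

(define (constraint v data)         ; must be <= 0.0 at the solution
  (fl- 0.0 (fl+ (flvector-ref v 0) (flvector-ref v 1))))

(define-values (y x)
  (optimize/flvector f
                     (flvector 0.0 0.0)
                     #:minimize #t
                     #:bounds '((-10.0 . 10.0) (-10.0 . 10.0))
                     #:ineq-constraints (list constraint)))
```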
procedure
(minimize/f64vector ...) → ...
procedure
(maximize/f64vector ...) → ...
procedure
(optimize/f64vector ...) → ...
The /f64vector variants should perform about as well as the /flvector variants. They accept any real? values. /f64vector objective functions should be of the form (fun x), where x is an f64vector?, and Jacobians should be of the form (jac y x grad), where y is (fun x) and grad is an f64vector? to be populated with the gradient.
procedure
(minimize/vector ...) → ...
procedure
(maximize/vector ...) → ...
procedure
(optimize/vector ...) → ...
The /vector variants will be less efficient than the /flvector variants. They accept any real? values. /vector objective functions should be of the form (fun x), where x is a vector?, and Jacobians should be of the form (jac y x grad), where y is (fun x) and grad is a vector? to be populated with the gradient.
procedure
(minimize/list ...) → ...
procedure
(maximize/list ...) → ...
procedure
(optimize/list ...) → ...
The /list variants will be the slowest. Like the /vector variants, they will handle any real? values. /list objective functions should be of the form (fun x), where x is a list?, and Jacobians should be of the form (jac y x), where y is (fun x). Jacobians should return the gradient as a list?.
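A sketch under those conventions (whether the objective also receives the data argument, as in the /flvector contract, is an assumption here):

```racket
#lang racket
(require nlopt/highlevel)

;; Sketch: minimize (x - 3)^2 using the /list variants, with a
;; Jacobian that returns the gradient as a list.
(define (f xs data)
  (expt (- (first xs) 3.0) 2))

(define (jac y xs)                  ; y is the precomputed (f xs ...)
  (list (* 2.0 (- (first xs) 3.0))))

(define-values (y x)
  (minimize/list f '(0.0)
                 #:jac jac
                 #:bounds '((-10.0 . 10.0))))
```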
procedure
(minimize/args ...) → ...
procedure
(maximize/args ...) → ...
procedure
(optimize/args ...) → ...
The /args variants will be among the slowest. Like the /vector and /list variants, they will handle any real? values. /args objective functions should be of the form (fun x ...), where x ... are the elements of the x vector, and Jacobians should be of the form (jac x ...). Jacobians should return the gradient as a list?. Note that /args Jacobian functions do not receive the precomputed y. This allows you to quickly set up an optimization problem using existing mathematical functions:
(maximize/args sin
               '(0.0)
               #:jac cos
               #:bounds '((-inf.0 0.0)))
3 Safe Interface
(require nlopt/safe) | package: nlopt |
This module is the safe, fully-contracted version of the interface to the C library. For the unsafe, contractless version, see nlopt/unsafe.
3.1 Basics
procedure
(create algorithm dimension) → nlopt-opt?
algorithm : symbol? dimension : (and/c natural-number/c positive?)
The general pattern for using this library is to create an options structure, apply the various setup options to it (making sure to include a stopping condition!), and then run optimize.
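A sketch of that pattern (this section does not show nlopt/safe’s objective setter, so the set-min-objective call and the 'LN_NELDERMEAD algorithm symbol below are assumptions, based on the unsafe interface and the C API):

```racket
#lang racket
(require nlopt/safe
         racket/flonum)

;; Sketch: create an options structure, configure it, then optimize.
(define opt (create 'LN_NELDERMEAD 2))       ; assumed algorithm symbol
(set-lower-bounds opt (flvector -5.0 -5.0))
(set-upper-bounds opt (flvector 5.0 5.0))
(set-min-objective opt                       ; assumed, mirroring nlopt/unsafe
                   (lambda (n x grad data)   ; n = dimension; grad may be #f
                     (fl+ (flexpt (flvector-ref x 0) 2.0)
                          (flexpt (flvector-ref x 1) 2.0)))
                   #f)
(set-xtol-rel opt 1e-6)                      ; don't forget a stopping condition!
(define x (flvector 1.0 1.0))                ; starting point; optimize mutates it
(define-values (res fval) (optimize opt x))  ; res is a result symbol
```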
procedure
(copy opt) → nlopt-opt?
opt : nlopt-opt?
procedure
(get-algorithm opt) → symbol?
opt : nlopt-opt?
procedure
(get-dimension opt) → (and/c natural-number/c positive?)
opt : nlopt-opt?
procedure
(optimize opt x) → [res symbol?] [f flonum?]
opt : nlopt-opt? x : flvector?
x must be at least as large as the dimension of opt.
procedure
(nlopt-opt? v) → boolean?
v : any/c
3.2 Constraints
procedure
(set-lower-bounds opt bounds) → nlopt-result/c
opt : nlopt-opt? bounds : (flvector/length? (get-dimension opt))
procedure
(set-upper-bounds opt bounds) → nlopt-result/c
opt : nlopt-opt? bounds : (flvector/length? (get-dimension opt))
procedure
(set-lower-bounds1 opt lower-bound) → nlopt-result/c
opt : nlopt-opt? lower-bound : real?
procedure
(set-upper-bounds1 opt upper-bound) → nlopt-result/c
opt : nlopt-opt? upper-bound : real?
procedure
(get-lower-bounds opt) → [res nlopt-result/c] [bounds (flvector/length? (get-dimension opt))]
opt : nlopt-opt?
procedure
(get-upper-bounds opt) → [res nlopt-result/c] [bounds (flvector/length? (get-dimension opt))]
opt : nlopt-opt?
procedure
(remove-inequality-constraints opt) → nlopt-result/c
opt : nlopt-opt?
procedure
(add-inequality-constraint opt f data tolerance) → nlopt-result/c
opt : nlopt-opt?
f : (-> (=/c (get-dimension opt)) flvector? (or/c flvector? #f) any/c flonum?)
data : any/c
tolerance : real?
procedure
(remove-equality-constraints opt) → nlopt-result/c
opt : nlopt-opt?
procedure
(add-equality-constraint opt f data tolerance) → nlopt-result/c
opt : nlopt-opt?
f : (-> (=/c (get-dimension opt)) flvector? (or/c flvector? #f) any/c flonum?)
data : any/c
tolerance : real?
3.3 Stopping Criteria
procedure
(set-stopval opt stopval) → nlopt-result/c
opt : nlopt-opt? stopval : real?
procedure
(get-stopval opt) → flonum?
opt : nlopt-opt?
procedure
(set-ftol-rel opt ftol-rel) → nlopt-result/c
opt : nlopt-opt? ftol-rel : real?
procedure
(get-ftol-rel opt) → flonum?
opt : nlopt-opt?
procedure
(set-ftol-abs opt ftol-abs) → nlopt-result/c
opt : nlopt-opt? ftol-abs : real?
procedure
(get-ftol-abs opt) → flonum?
opt : nlopt-opt?
procedure
(set-xtol-rel opt xtol-rel) → nlopt-result/c
opt : nlopt-opt? xtol-rel : real?
procedure
(get-xtol-rel opt) → flonum?
opt : nlopt-opt?
procedure
(set-xtol-abs1 opt xtol-abs) → nlopt-result/c
opt : nlopt-opt? xtol-abs : real?
procedure
(set-xtol-abs opt xtols) → nlopt-result/c
opt : nlopt-opt? xtols : (flvector/length? (get-dimension opt))
procedure
(get-xtol-abs opt) → nlopt-result/c (flvector/length? (get-dimension opt))
opt : nlopt-opt?
procedure
(set-maxeval opt maxeval) → nlopt-result/c
opt : nlopt-opt? maxeval : natural-number/c
procedure
(get-maxeval opt) → natural-number/c
opt : nlopt-opt?
procedure
(set-maxtime opt maxtime) → nlopt-result/c
opt : nlopt-opt? maxtime : real?
procedure
(get-maxtime opt) → flonum?
opt : nlopt-opt?
procedure
(force-stop opt) → nlopt-result/c
opt : nlopt-opt?
procedure
(set-force-stop opt value) → nlopt-result/c
opt : nlopt-opt? value : integer?
procedure
(get-force-stop opt) → integer?
opt : nlopt-opt?
3.4 Algorithm-Specific Parameters
procedure
(set-local-optimizer opt local-opt) → nlopt-result/c
opt : nlopt-opt? local-opt : nlopt-opt?
procedure
(set-population opt pop) → nlopt-result/c
opt : nlopt-opt? pop : natural-number/c
procedure
(get-population opt) → natural-number/c
opt : nlopt-opt?
procedure
(set-vector-storage opt vectors) → nlopt-result/c
opt : nlopt-opt? vectors : natural-number/c
procedure
(get-vector-storage opt) → natural-number/c
opt : nlopt-opt?
procedure
(set-default-initial-step opt bounds) → nlopt-result/c
opt : nlopt-opt? bounds : (flvector/length? (get-dimension opt))
procedure
(set-initial-step1 opt initial-step) → nlopt-result/c
opt : nlopt-opt? initial-step : real?
procedure
(set-initial-step opt bounds) → nlopt-result/c
opt : nlopt-opt? bounds : (flvector/length? (get-dimension opt))
procedure
(get-initial-step opt) → nlopt-result/c (flvector/length? (get-dimension opt))
opt : nlopt-opt?
4 Unsafe Interface
(require nlopt/unsafe) | package: nlopt |
This module is the unsafe, contractless version of the interface to the C library. For the safe, fully-contracted version, see nlopt/safe.
The one bit of safety provided by this module is that nlopt_opt structures will be cleaned up properly, and Racket values passed to NLopt procedures will be held onto until NLopt no longer refers to them.
The API of this module is probably the most stable thing in the whole package, as it is a direct mapping of the C API.
Failure to obey any of the requirements mentioned below will probably cause Racket to crash.
4.1 Differences in the Basics
procedure
(set-min-objective opt f data) → symbol?
opt : nlopt-opt?
f :
(-> natural-number/c cpointer? (or/c cpointer? #f) any/c flonum?) data : any/c
procedure
(set-max-objective opt f data) → symbol?
opt : nlopt-opt?
f :
(-> natural-number/c cpointer? (or/c cpointer? #f) any/c flonum?) data : any/c
4.2 Differences in the Constraints
procedure
(set-lower-bounds opt bounds) → symbol?
opt : nlopt-opt? bounds : cpointer?
procedure
(set-upper-bounds opt bounds) → symbol?
opt : nlopt-opt? bounds : cpointer?
procedure
(get-lower-bounds opt bounds) → symbol?
opt : nlopt-opt? bounds : cpointer?
procedure
(get-upper-bounds opt bounds) → symbol?
opt : nlopt-opt? bounds : cpointer?
procedure
(add-inequality-constraint opt f data tolerance) → symbol?
opt : nlopt-opt?
f : (-> nlopt-opt? cpointer? (or/c cpointer? #f) any/c flonum?)
data : any/c
tolerance : real?
procedure
(add-equality-constraint opt f data tolerance) → symbol?
opt : nlopt-opt?
f : (-> nlopt-opt? cpointer? (or/c cpointer? #f) any/c flonum?)
data : any/c
tolerance : real?
4.3 Differences in the Stopping Criteria
procedure
(set-xtol-abs opt xtols) → symbol?
opt : nlopt-opt? xtols : cpointer?
procedure
(get-xtol-abs opt bounds) → symbol?
opt : nlopt-opt? bounds : cpointer?
4.4 Differences in the Algorithm-Specific Parameters
procedure
(set-default-initial-step opt stepsizes) → symbol?
opt : nlopt-opt? stepsizes : cpointer?
procedure
(set-initial-step opt stepsizes) → symbol?
opt : nlopt-opt? stepsizes : cpointer?
procedure
(get-initial-step opt stepsizes) → symbol?
opt : nlopt-opt? stepsizes : cpointer?
5 Algorithms
This section is a rough sketch of what I intend it to be when the package is complete: Categorize the algorithms, list their Racket names, briefly indicate what they are, provide citations, and include DOI in the bibliography.
This section is not intended to describe the algorithms completely. Rather, the goal is to quickly give you a sense of what your options are, so that if you have simple needs, you won’t have to consult external documentation. If you anticipate needing to know the exact details of the optimization algorithm you use, consider consulting the NLopt website.
5.1 Global Optimization
value
GN_DIRECT
value
GN_DIRECT_L
value
GN_DIRECT_L_RAND
value
GN_DIRECT_NOSCAL
value
GN_DIRECT_L_NOSCAL
value
GN_DIRECT_L_RAND_NOSCAL
value
GN_ORIG_DIRECT
value
GN_ORIG_DIRECT_L
GN_DIRECT is the DIviding RECTangles algorithm described in [Jones93]. GN_DIRECT_L is the "locally biased" variant proposed in [Gablonsky01]. They are deterministic search algorithms based on systematic division of the search domain into smaller and smaller hyperrectangles. The NLopt documentation suggests starting with GN_DIRECT_L first.
value
GN_CRS2_LM
GN_CRS2_LM is Controlled Random Search with local mutation. It is one of the algorithms that makes use of set-population. The NLopt version is described in [Kaelo06].
value
G_MLSL
value
G_MLSL_LDS
value
GN_MLSL
value
GD_MLSL
value
GN_MLSL_LDS
value
GD_MLSL_LDS
These are all variations on the "Multi-Level Single-Linkage" (MLSL) algorithm proposed in [Kan87-1] and [Kan87-2].
5.2 Local derivative-free optimization
5.3 Local gradient-based optimization
5.4 Augmented Lagrangian algorithm
value
AUGLAG
value
AUGLAG_EQ
These require the specification of a subsidiary optimization algorithm via set-local-optimizer. AUGLAG converts the objective function and all constraints into a single function, and uses the subsidiary algorithm to optimize it. AUGLAG_EQ only converts the objective and equality constraints; the inequality constraints are passed through to the subsidiary algorithm (which must be able to handle them).
Described in [Conn91] and [Birgin08].
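Using the safe interface, pairing AUGLAG with a subsidiary optimizer might look like this (the algorithm symbols are assumptions):

```racket
#lang racket
(require nlopt/safe)

;; Sketch: AUGLAG delegates the reformulated problem to a subsidiary
;; local optimizer.  'AUGLAG and 'LN_NELDERMEAD are assumed symbols.
(define opt (create 'AUGLAG 2))
(define subsidiary (create 'LN_NELDERMEAD 2))
(set-xtol-rel subsidiary 1e-6)        ; stopping condition for the inner runs
(set-local-optimizer opt subsidiary)  ; AUGLAG will invoke this algorithm
(set-maxeval opt 10000)               ; stopping condition for AUGLAG itself
```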
Bibliography
[NLopt] | “The NLopt nonlinear-optimization package.” http://ab-initio.mit.edu/nlopt | |
[Jones93] | D. R. Jones, C. D. Perttunen, and B. E. Stuckmann, “Lipschitzian optimization without the lipschitz constant,” J. Optimization Theory and Applications 79, pp. 157–181, 1993. http://dx.doi.org/10.1007/BF00941892 | |
[Gablonsky01] | J. M. Gablonsky and C. T. Kelley, “A locally-biased form of the DIRECT algorithm,” Journal of Global Optimization 21(1), pp. 27–37, 2001. http://dx.doi.org/10.1023/A:1017930332101 | |
[Kaelo06] | P. Kaelo and M. M. Ali, “Some variants of the controlled random search algorithm for global optimization,” J. Optim. Theory Appl. 130(2), pp. 253–264, 2006. http://dx.doi.org/10.1007/s10957-006-9101-0 | |
[Kan87-1] | A. H. G. Rinnooy Kan and G. T. Timmer, “Stochastic global optimization methods part I: Clustering methods,” Mathematical Programming 39(1), pp. 27–56, 1987. http://dx.doi.org/10.1007/BF02592070 | |
[Kan87-2] | A. H. G. Rinnooy Kan and G. T. Timmer, “Stochastic global optimization methods part II: Multi level methods,” Mathematical Programming 39(1), pp. 57–78, 1987. http://dx.doi.org/10.1007/BF02592071 | |
[Conn91] | Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint, “A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds,” SIAM J. Numer. Anal. 28(2), pp. 545–572, 1991. http://dx.doi.org/10.1137/0728030 | |
[Birgin08] | E. G. Birgin and J. M. Martínez, “Improving ultimate convergence of an augmented Lagrangian method,” Optimization Methods and Software 23(2), pp. 177–195, 2008. http://dx.doi.org/10.1080/10556780701577730 |