Releases · relf/egobox
0.6.0
- `gp`: Kriging derivative predictions are implemented (#44, #45); derivatives for GP with linear regression are implemented (#47). See the sketch below this list.
  - `predict_derivatives`: prediction of the derivatives of the output y with respect to the input x
  - `predict_variance_derivatives`: prediction of the derivatives of the output variance with respect to the input x
- `moe`: as above, derivatives for smooth and hard predictions are implemented (#46)
- `ego`: when available, derivatives are used to optimize the infill criterion with SLSQP (#44)
- `egobox` Python binding: add `GpMix`/`Gpx` to the `egobox` Python module, the Python binding of `egobox-moe::Moe` (#31)
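
For illustration, here is a minimal sketch of how the new derivative predictions might be called from the `egobox-gp` crate. It assumes the `Kriging` shortcut (constant regression, squared-exponential correlation) and the linfa `Dataset`/`Fit` workflow; the method names come from this release note, but the exact signatures and return types are assumptions, not verified against 0.6.0.

```rust
use egobox_gp::Kriging;
use linfa::prelude::*;
use ndarray::array;

fn main() {
    // Training data (assumed shapes: inputs (n, nx), outputs (n, 1))
    let xt = array![[0.0], [1.0], [2.0], [3.0], [4.0]];
    let yt = array![[0.0], [1.0], [1.5], [0.9], [1.0]];

    // Kriging: GP with constant regression and squared-exponential correlation
    let gp = Kriging::params()
        .fit(&Dataset::new(xt, yt))
        .expect("GP fitting");

    // Points where derivatives are requested
    let x = array![[1.5], [2.5]];

    // New in 0.6.0 (method names from this release note; return types assumed)
    let dy_dx = gp.predict_derivatives(&x); // derivatives of the output y wrt x
    let dvar_dx = gp.predict_variance_derivatives(&x); // derivatives of the output variance wrt x
    println!("dy/dx = {:?}\ndvar/dx = {:?}", dy_dx, dvar_dx);
}
```

These derivatives are what allow `ego` to optimize the infill criterion with SLSQP using gradient information, as noted above.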
0.5.0
- Enable `Egor` optimizer interruption with Ctrl+C (#30)
- API improvements: generalize the use of `ArrayBase<...>`, better `linfa` integration by implementing the `PredictInplace` trait (#37)
- Minor performance improvement in mixture of experts clustering (#29)
- Documentation improvements: JOSS paper submission review (#34, #36, #38, #39, #40, #42)
0.4.0
- Generate the Python `egobox` module for Linux (#20)
- Improve `Egor` robustness by adding LHS optimization (#21)
- Improve `moe` with automatic determination of the number of clusters (#22)
- Use `linfa` 0.6.0, making the BLAS dependency optional (#23)
- Improve `Egor` by implementing automatic reclustering after every 10-point addition (#25)
- Fix `Egor` parallel infill strategy (qEI): incorrect update of the objective and constraint GP models (#26)
0.3.0
0.2.1
0.2.0
0.1.0
Initial version contains:
- `doe`: LHS, FullFactorial, Random sampling
- `gp`: Gaussian process models with 3 regression models (constant, linear, quadratic) and 4 correlation models (squared exponential, absolute exponential, matern32, matern52)
- `moe`: mixture of experts: finds the best mix of GPs for a given number of clusters, with smooth or hard recombination
- `ego`: contains the `egor` optimizer, a super EGO algorithm implemented on top of the previous elements.
  It implements several infill strategies (EI, WB2, WB2S) and uses either COBYLA or SLSQP for internal optimization.
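
For orientation, a minimal sketch of driving the `egor` optimizer on a 1D test function. The builder and result names used here (`EgorBuilder::optimize`, `min_within`, `run`, `x_opt`/`y_opt`) follow the later `egobox-ego` API and are assumptions; the 0.1.0 interface may have differed.

```rust
use egobox_ego::EgorBuilder;
use ndarray::{array, Array2, ArrayView2};

// One-dimensional test objective: (x - 3.5) * sin((x - 3.5) / pi)
fn xsinx(x: &ArrayView2<f64>) -> Array2<f64> {
    ((x - 3.5) / std::f64::consts::PI).mapv(f64::sin) * (x - 3.5)
}

fn main() {
    // Minimize over x in [0, 25]; at each iteration Egor fits GP surrogates
    // and optimizes an infill criterion (EI, WB2, WB2S) internally.
    let res = EgorBuilder::optimize(xsinx)
        .min_within(&array![[0.0, 25.0]])
        .run()
        .expect("Egor minimization");
    println!("min y = {} at x = {}", res.y_opt, res.x_opt);
}
```

The infill strategy and internal optimizer (COBYLA or SLSQP) mentioned above are left at their defaults in this sketch.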