Currently, the oscillators in the SB optimizer have the same dtype as the IsingCore model, which itself inherits its dtype from the polynomial model defined by the user. Although it makes sense to create a polynomial model with an integer dtype (int8, int16, ...) and to cast the SB results to this integer dtype to allow a full-integer computation, it is counter-productive to use this same dtype for the SB optimization itself, because the oscillators take values in [-1, 1], which cannot be represented with integer values.
Thus, it would be nice to allow the user to choose a dtype for the model and a dtype for the optimization.
Several options are available to remedy this problem:
Option 1: int to float mapping
The dtype provided in the sb.optimize, sb.minimize and sb.maximize functions is used for the model, and the SB computation dtype is derived from it:
if the dtype is a float (float8, float16, float32, float64) it is also used for SB
if the dtype is an integer (int8, int16, int32, int64), SB uses the float dtype encoded on the same number of bits (int8 -> float8, int16 -> float16, etc.)
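The mapping above can be sketched as follows. Dtype names are plain strings here for illustration only; a real implementation would work with torch.dtype objects (and note that PyTorch has no plain float8 dtype, so that entry follows the proposal as written rather than an existing type):

```python
# Sketch of Option 1: derive the SB computation dtype from the model dtype.
# Float dtypes pass through; integer dtypes map to the float dtype with the
# same bit width, as proposed above.
_INT_TO_FLOAT = {
    "int8": "float8",
    "int16": "float16",
    "int32": "float32",
    "int64": "float64",
}

def sb_computation_dtype(model_dtype: str) -> str:
    """Return the dtype the SB oscillators should use for a given model dtype."""
    if model_dtype.startswith("float"):
        return model_dtype
    if model_dtype in _INT_TO_FLOAT:
        return _INT_TO_FLOAT[model_dtype]
    raise ValueError(f"Unsupported model dtype: {model_dtype}")
```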
Option 2: dtype is only for SB computation
The dtype passed is only used for the SB computation (a float dtype is required). If the model to optimize is created first, it can have any dtype, but the equivalent Ising model will have its own dtype. If the polynomial is provided directly to sb.maximize or sb.minimize, its dtype will also be the SB computation dtype.
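A minimal sketch of Option 2's semantics, with hypothetical names and string dtypes: the dtype argument only governs the SB computation, while a pre-built model keeps whatever dtype it was created with:

```python
# Illustrative sketch of Option 2: the dtype argument is decoupled from the
# model's own dtype and applies only to the SB oscillators.
class Model:
    def __init__(self, dtype: str):
        self.dtype = dtype

def optimize(model: Model, dtype: str = "float32") -> str:
    """Return the dtype the SB oscillators will use."""
    if not dtype.startswith("float"):
        raise ValueError("SB requires a float dtype")
    return dtype  # independent of model.dtype

model = Model("int32")      # the model keeps an integer dtype...
sb_dtype = optimize(model)  # ...while SB runs in float32
```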
Option 3: use two parameters in functions
The optimization functions take 2 parameters, model_dtype and computation_dtype, which are used respectively to create the model and to run SB.
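A sketch of Option 3's two-parameter API. The parameter names follow the proposal; the helper name and string dtypes are purely illustrative:

```python
# Hypothetical sketch of Option 3: model_dtype and computation_dtype are
# passed independently. The model may use any dtype, including integers,
# but SB itself must run in a float dtype (oscillators live in [-1, 1]).
def resolve_dtypes(model_dtype: str = "float32",
                   computation_dtype: str = "float32") -> tuple:
    """Validate the pair and return (model_dtype, computation_dtype)."""
    if not computation_dtype.startswith("float"):
        raise ValueError("computation_dtype must be a float dtype")
    return model_dtype, computation_dtype
```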
The SB backend must run with float dtypes because the values of the oscillators are in [-1, 1]. During tests carried out for #61 it appeared that some key PyTorch functions are not defined for float16. Thus, option 2 would be the best one with torch.float32 and torch.float64 being the only two accepted dtypes.
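The restriction described above could be enforced with a simple check; this is a sketch with string dtype names, not the library's actual validation code:

```python
# Only float32 and float64 are accepted for the SB computation: integer
# dtypes cannot represent oscillators in [-1, 1], and float16 lacks some
# of the PyTorch operations SB relies on.
ACCEPTED_DTYPES = ("float32", "float64")

def check_computation_dtype(dtype: str) -> str:
    if dtype not in ACCEPTED_DTYPES:
        raise ValueError(
            f"SB computation dtype must be one of {ACCEPTED_DTYPES}, got {dtype!r}"
        )
    return dtype
```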