Thanks for your code. However, I have a question about setting the hyper-parameters.
When using CIFAR10, the DQ paper has the following description:
According to the above description, I set the hyper-parameters as below, although they differ from the sample command in the README. Is the following correct? (I changed `-se` from 0 to 10 and removed `--pretrained`.)

```shell
# Dataset bin generation (by default we use a bin number of 10)
CUDA_VISIBLE_DEVICES=0 python -u quantize_sample.py \
    --fraction 0.1 --dataset CIFAR10 --data_path ~/data_cifar \
    --num_exp 10 --workers 10 -se 10 --selection Submodular --model ResNet18 \
    -sp ../results/bin_cifar_010 \
    --batch 128 --submodular GraphCut --submodular_greedy NaiveGreedy
```
Thanks for pointing out the inconsistency between the paper and the implementation. In the paper we kept the setting consistent with the original Deepcore configuration, which conducts 10 epochs of pre-training. Directly adopting a pre-trained model is also practical. Please try running both scripts and compare the performance.
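For reference, the two configurations being compared could be sketched as below. This is only an illustration assembled from the flags quoted in the question: the `-se 0 --pretrained` variant is assumed to be the README-style command the question describes, and the distinct output paths under `-sp` are illustrative, not taken from the repository.

```shell
# Variant A (paper / Deepcore setting): 10 epochs of pre-training during selection
CUDA_VISIBLE_DEVICES=0 python -u quantize_sample.py \
    --fraction 0.1 --dataset CIFAR10 --data_path ~/data_cifar \
    --num_exp 10 --workers 10 -se 10 --selection Submodular --model ResNet18 \
    -sp ../results/bin_cifar_010_se10 \
    --batch 128 --submodular GraphCut --submodular_greedy NaiveGreedy

# Variant B (assumed README-style setting): no selection-time pre-training,
# directly adopting a pre-trained model instead
CUDA_VISIBLE_DEVICES=0 python -u quantize_sample.py \
    --fraction 0.1 --dataset CIFAR10 --data_path ~/data_cifar \
    --num_exp 10 --workers 10 -se 0 --selection Submodular --model ResNet18 --pretrained \
    -sp ../results/bin_cifar_010_se0 \
    --batch 128 --submodular GraphCut --submodular_greedy NaiveGreedy
```

Writing the results to separate directories keeps the two runs distinguishable when comparing their downstream performance.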