
Running train multiple times (same inputs) results in different models #91

mathisloevenich opened this issue Jul 15, 2021 · 1 comment


@mathisloevenich

Hey, I am using your tool for my Bachelor thesis on matching social network profiles.
It works fine, but as you might know, reproducibility is a major factor when writing a thesis.
However, I have noticed that when I run the setup multiple times with the same inputs,
the training behaves differently and produces different kinds of models (sometimes astonishingly good ones, sometimes comparably bad ones).

I did not find any documentation about this. Is randomness a key part of the process, or did I get something wrong?
If you have any documentation with deeper insights into the process, please point me to it.
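For context: run-to-run variation like this usually comes from unseeded randomness (e.g. random weight initialization or data shuffling). The issue does not say which framework the tool uses, so here is a minimal stdlib sketch of the general idea; the `train` function and its `seed` parameter are hypothetical stand-ins, not part of the tool:

```python
import random

# Hypothetical stand-in for a training run whose result depends on
# random initialization (a common source of run-to-run variation).
def train(seed=None):
    rng = random.Random(seed)
    # "Weights" drawn from an unseeded (or seeded) RNG.
    weights = [rng.uniform(-1, 1) for _ in range(4)]
    return weights

# Without a seed, two runs generally produce different results.
a, b = train(), train()

# With a fixed seed, runs are reproducible.
c, d = train(seed=42), train(seed=42)
assert c == d
```

If the tool exposes a seed option (or uses a framework whose global seed can be set), fixing that seed should make repeated runs produce the same model.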

@mathisloevenich (Author)

I want to mention that I am not retraining the existing model.
