multi gpu inference with run_rm.py #95
Comments
Hey @SeungoneKim -- we just haven't needed it yet (the biggest classifiers are 34B). Happy to add it.
Thanks for your response @natolambert! I was trying to test generative reward modeling (with GPT-4, Prometheus, Auto-J), and it seems like run_rm.py doesn't support this at the moment. Considering that generative RMs require generating CoT-ish feedback before their scoring decision, I think it would be best to integrate vllm and add an additional script for generative RMs. If this makes sense to you, I'll leave a pull request for this and try to keep the style of the code as similar to run_rm.py as possible.
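(For reference, a minimal sketch of what vllm-backed multi-GPU generation for a generative RM could look like. Everything here is an assumption rather than the eventual script: the model name is only an example, the prompt template and score parsing are hypothetical, and only vLLM's documented LLM/SamplingParams interface is used.)

```python
# Hedged sketch: multi-GPU generation for a generative reward model with vLLM.
# tensor_parallel_size shards the weights across GPUs; the prompt template and
# the "Score: <n>" parsing are illustrative, not an agreed-upon format.
import re

from vllm import LLM, SamplingParams

llm = LLM(
    model="kaist-ai/prometheus-13b-v1.0",  # example generative RM
    tensor_parallel_size=2,                # split weights across 2 GPUs
)
sampling = SamplingParams(temperature=0.0, max_tokens=1024)

prompt = (
    "Evaluate the response below. Give step-by-step feedback, "
    "then finish with 'Score: <1-5>'.\n\nResponse: ..."
)
feedback = llm.generate([prompt], sampling)[0].outputs[0].text

# Pull the final decision out of the CoT-style feedback.
match = re.search(r"Score:\s*([1-5])", feedback)
score = int(match.group(1)) if match else None
```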
@SeungoneKim generative RMs (via API) are being added in #86, but adding full local generation is another can of worms. I agree with your path; I just worry a bit about complexity. It's prolly worth having though. The API implementation should be closer to what you want to build off of. Here are preliminary results:
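(For readers following along: an API-based generative RM call, in the spirit of what #86 adds, could look roughly like the sketch below. The judge prompt and rating parsing are hypothetical, not the actual #86 implementation; only the standard OpenAI Python client is assumed.)

```python
# Hypothetical sketch of API-based generative reward modeling (GPT-4 as judge).
import re

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

judge_prompt = (
    "Rate the assistant response below on a 1-10 scale. "
    "Explain your reasoning, then end with 'Rating: <n>'.\n\n"
    "Response: ..."
)
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": judge_prompt}],
    temperature=0,
)
feedback = completion.choices[0].message.content

# Extract the final rating from the judge's feedback.
match = re.search(r"Rating:\s*(\d+)", feedback)
rating = int(match.group(1)) if match else None
```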
Hello Nathan,
Thank you for this valuable resource! I strongly believe we need more standardized benchmarks to evaluate reward/evaluator models.
I think submit_eval_jobs.py (using AI2's Beaker) supports multi-GPU inference, but run_rm.py doesn't at the moment.
I was wondering if this is intended (correct me if I'm wrong)!
Best,
Seungone
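(For context on the multi-GPU question: one minimal way a script like run_rm.py could shard a large classifier RM across several GPUs is transformers' accelerate-backed device_map="auto" loading. The sketch below assumes that approach; the model name is only an example, and this is not the repo's actual code.)

```python
# Sketch: sharding a sequence-classifier reward model across all visible GPUs
# with accelerate's device_map="auto" (requires `pip install accelerate`).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "OpenAssistant/reward-model-deberta-v3-large-v2"  # example RM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    device_map="auto",          # place layers across available GPUs
    torch_dtype=torch.float16,
)

inputs = tokenizer("prompt + response text", return_tensors="pt")
with torch.no_grad():
    # Inputs go to the first shard's device; accelerate routes activations
    # between devices during the forward pass.
    reward = model(**inputs.to(model.device)).logits.squeeze().item()
print(reward)
```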