Apply replica id to model seed default for vllm_module_v2.
charles9304 committed Dec 27, 2024
1 parent 8fc4805 commit 5658312
Showing 2 changed files with 7 additions and 0 deletions.
6 changes: 6 additions & 0 deletions chatlearn/models/vllm_module_v2.py
@@ -101,10 +101,16 @@ def setup_vllm(self, workers):
         else:
             model_loader_extra_config = None
 
+        if self.model_args.get("apply_replica_id_to_seed", True):
+            seed = self.model_args.get("seed", 0) + self.replica_id
+        else:
+            seed = self.model_args.get("seed", 0)
+
         self.llm = LLM(
             model=self.model_args['tokenizer'],
             tokenizer=self.model_args['tokenizer'],
             max_seq_len_to_capture=self.model_args.get("seq_length"),
+            seed=seed,
             # load model: 'dummy' for megatron ckpt or mock weight; others for hf ckpt.
             load_format=load_format,
             model_loader_extra_config=model_loader_extra_config,
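The added logic can be read in isolation as the short sketch below. resolve_seed is a hypothetical helper written for illustration only; the commit itself does this inline in setup_vllm, and model_args / replica_id stand in for the module attributes of the same names.

# Illustration only: standalone rendering of the seed selection added above.
def resolve_seed(model_args: dict, replica_id: int) -> int:
    base_seed = model_args.get("seed", 0)
    if model_args.get("apply_replica_id_to_seed", True):
        # Default behaviour: offset the base seed by the replica id so that
        # parallel vLLM replicas sample with distinct seeds.
        return base_seed + replica_id
    # Opt-out: every replica shares the configured seed.
    return base_seed

# With the defaults, three replicas and base seed 0 end up with seeds 0, 1, 2.
assert [resolve_seed({}, r) for r in range(3)] == [0, 1, 2]

The second changed file, a YAML model config, registers the matching default: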
@@ -47,3 +47,4 @@ tensor_model_parallel_size: ${policy_tp}
 pipeline_model_parallel_size: ${policy_pp}
 
 vllm_load_format: ${vllm_load_format:dummy}
+apply_replica_id_to_seed: ${apply_replica_id_to_seed:True}
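Note that the new key uses the same ${var:default} interpolation as the other entries in this file and defaults to True. Assuming the usual ChatLearn environment-variable substitution applies to it, per-replica seeding can be switched off by setting apply_replica_id_to_seed to False in the environment or directly in the config, in which case every replica falls back to the shared seed value.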
