Prepare the annotation files:

- `train_caption_file`: training corpus, refer to this file
- `val_caption_file`: validation corpus, refer to this file
- `eval_gt_file_for_grounding`: validation ground-truth file for video grounding, refer to this file
- `dict_file`: vocabulary file of your dataset, refer to this file
Prepare the features: Gather each video's features into a .npy file, with the format L * D, where L denotes temporal resolution and D represents the feature dimension. Store these files in a single designated folder for streamlined access.
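As a minimal sketch of this step, the snippet below saves one `(L, D)` array per video into a shared folder; the folder name, video id, and feature dimension are placeholders, and it assumes the features are already available as NumPy arrays:

```python
import os
import numpy as np

feature_dir = "data/features"  # placeholder: one .npy file per video goes here
os.makedirs(feature_dir, exist_ok=True)

def save_video_features(video_id, features):
    """Save one video's features as an (L, D) array: L = temporal resolution, D = feature dim."""
    features = np.asarray(features, dtype=np.float32)
    assert features.ndim == 2, "expected shape (L, D)"
    np.save(os.path.join(feature_dir, f"{video_id}.npy"), features)

# Example: 120 temporal steps of hypothetical 1024-d features for video "v_abc123"
save_video_features("v_abc123", np.random.randn(120, 1024))
```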
Prepare the .yaml file: Create a configuration file for training by modifying the existing cfg file. You can start with the template provided at: Configuration File Template and adjust it using the annotation details mentioned above.
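As a rough illustration, the template can be loaded with PyYAML and the annotation paths from above swapped in; the file paths below are placeholders, and the exact key layout should be checked against the template itself:

```python
import yaml

# Load the provided template (path is a placeholder; use the actual cfg file from the repo)
with open("cfgs/template.yaml") as f:
    cfg = yaml.safe_load(f)

# Point the annotation-related keys at your own dataset files
# (key names follow the annotation list above; verify them against the template)
cfg["train_caption_file"] = "data/my_dataset/train.json"
cfg["val_caption_file"] = "data/my_dataset/val.json"
cfg["eval_gt_file_for_grounding"] = "data/my_dataset/val_grounding.json"
cfg["dict_file"] = "data/my_dataset/vocabulary.json"

# Write out a new config for your dataset
with open("cfgs/my_dataset.yaml", "w") as f:
    yaml.safe_dump(cfg, f, sort_keys=False)
```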
Could you please share a pipeline for preparing a new dataset for training?