Hello, I used the COCO dataset for model fine-tuning. Following the MMEngine user manual, I set metric=['bbox', 'segm'] in val_evaluator; the bbox metric was computed during evaluation, but then this error was raised:
File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1777, in train
model = self.train_loop.run() # type: ignore
File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/runner/loops.py", line 102, in run
self.runner.val_loop.run()
File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/runner/loops.py", line 374, in run
metrics = self.evaluator.evaluate(len(self.dataloader.dataset))
File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/evaluator/evaluator.py", line 79, in evaluate
_results = metric.evaluate(size)
File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmengine/evaluator/metric.py", line 133, in evaluate
_metrics = self.compute_metrics(results) # type: ignore
File "/home/work/miniforge3/envs/yoloworld/lib/python3.9/site-packages/mmdet/evaluation/metrics/coco_metric.py", line 446, in compute_metrics
raise KeyError(f'{metric} is not in results')
KeyError: 'segm is not in results'
How can I modify the code so that evaluation reports the recognition accuracy of each category, as the picture shows? Thank you!
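For reference, per-category results can usually be requested through the classwise option of MMDetection's CocoMetric. A minimal sketch of such a val_evaluator config follows; the annotation path is a placeholder, and 'segm' should only be listed when the model actually produces mask predictions (the KeyError above is typical of a detector without a mask head):

```python
# Sketch of a val_evaluator config for MMDetection's CocoMetric.
# Assumptions: paths are placeholders; 'segm' requires a model with mask outputs.
val_evaluator = dict(
    type='CocoMetric',
    ann_file='data/coco/annotations/instances_val2017.json',  # adjust to your dataset
    metric=['bbox'],   # add 'segm' only for models that predict masks
    classwise=True,    # print a per-category AP table during evaluation
)
```

With classwise=True, the COCO evaluation log includes a table listing AP for every category in addition to the overall summary.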
wowangle97 changed the title from the Chinese "如何使用模型评估时显示各个类别的识别准确度" to its English translation, "How to show the recognition accuracy of each category when using model evaluation?", on Nov 28, 2024.