From 7b985f746e09e0077f6f0ac357a37cea2f6b398e Mon Sep 17 00:00:00 2001
From: Alina <67658835+alozowski@users.noreply.github.com>
Date: Thu, 9 Jan 2025 18:21:02 +0100
Subject: [PATCH] Update leaderboard-emissions-analysis.md

Co-authored-by: Pedro Cuenca
---
 leaderboard-emissions-analysis.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/leaderboard-emissions-analysis.md b/leaderboard-emissions-analysis.md
index bd8c5129c4..0864d835be 100644
--- a/leaderboard-emissions-analysis.md
+++ b/leaderboard-emissions-analysis.md
@@ -207,7 +207,7 @@ Here is only the end of the answer (the full answer is 5,821 characters long). I
 ## Conclusion
 
 Fine-tuning large language models like `Qwen2-72B` and `Meta-Llama-3.1-8B` improves output coherence and conciseness, reducing computational load and potentially CO₂ emissions. However, for now, exact emission data for specific benchmarks is not available, limiting detailed comparisons. Despite this, it is evident that fine-tuning enhances efficiency, though the reason for emission reductions remains uncertain.
 
-# Future Questions
+## Open Questions
 
 Several open questions remain, for interested individuals in the community to explore!