Journal of Northeastern University(Natural Science) ›› 2025, Vol. 46 ›› Issue (11): 37-47.DOI: 10.12068/j.issn.1005-3026.2025.20240187

• Information & Control •

Fine-Tuned Large Language Model EcoPowerGPT for the Multi-energy Power Generation Field

Wen-jun TAN1, Yan-liang GUO1,2, Rui-ting QU3, Qing SONG3

  1. School of Computer Science & Engineering, Northeastern University, Shenyang 110169, China
    2. Shenyang Fire Science and Technology Research Institute of MEM, Shenyang 110034, China
    3. State Grid Liaoning Electric Power Co., Ltd., Shenyang 110004, China
  • Received: 2024-10-21  Online: 2025-11-15  Published: 2026-02-07
  • Contact: Wen-jun TAN

Abstract:

Question answering (QA) in the multi-energy power generation field suffers from poor performance owing to the lack of high-quality domain datasets, and current models generalize poorly in their Chinese responses. To address these issues, a fine-tuned large language model named EcoPowerGPT, based on the Llama architecture, was proposed for the multi-energy power generation field. Literature and reports in the field were organized and processed with classification filtering and multi-dimensional scoring to construct a multi-energy power generation fine-tuning dataset, which was then used to fine-tune the large language model. Comparative experiments between EcoPowerGPT and six other dialogue models were conducted on a multi-energy power generation QA test set and on single-answer multiple-choice test sets. The results demonstrate that EcoPowerGPT outperforms the existing dialogue models in both the accuracy and the comprehensiveness of its responses.
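The page itself gives no training details, but the workflow the abstract describes (domain corpus, classification filtering and multi-dimensional scoring into an instruction dataset, then supervised fine-tuning of a Llama-family model) can be sketched in Python. This is a minimal sketch assuming a Hugging Face Transformers/PEFT stack with LoRA; the checkpoint name, data file, record fields, and hyperparameters below are hypothetical, not the authors' configuration.

from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "meta-llama/Llama-2-7b-hf"   # hypothetical base checkpoint
DATA_FILE = "multi_energy_qa.jsonl"       # hypothetical fine-tuning set

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship no pad token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Parameter-efficient fine-tuning via LoRA, one common choice; the paper's
# exact fine-tuning scheme is not stated on this page.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

def to_features(example):
    # Each record is assumed to carry "instruction" and "response" fields.
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")
    return tokenizer(text, truncation=True, max_length=1024)

dataset = (load_dataset("json", data_files=DATA_FILE, split="train")
           .map(to_features, remove_columns=["instruction", "response"]))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ecopowergpt-sft",
                           per_device_train_batch_size=4,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=dataset,
    # Causal-LM collation: pads each batch and copies input_ids into labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

LoRA appears here only because it is a common, low-cost way to adapt a Llama-scale model; full-parameter fine-tuning would follow the same data flow with a larger compute budget.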

Key words: generative large language model, question answering, natural language processing, multi-energy power generation, instruction fine-tuning
