Optimization of Large Language Models (LLMs) through Prompt Engineering
Abstract
This article explored the impact of prompt engineering on optimizing the performance of large language models (LLMs) such as GPT and BERT. Prompt engineering was introduced as an approach that involves designing specific instructions to guide a model's responses, enhancing their accuracy and relevance without modifying the model's internal parameters. The study evaluated methodologies for constructing effective prompts, compared strategies such as few-shot and zero-shot learning, and analyzed practical cases in areas such as text generation, question answering, and sentiment analysis. The results showed that strategic prompt design can significantly improve response quality, reduce errors, and expand the range of LLM applications.
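To illustrate the zero-shot and few-shot strategies compared in the study, the sketch below constructs both kinds of prompts for a sentiment-analysis task. The helper names, labels, and example reviews are illustrative assumptions, not material from the article; in zero-shot prompting only the task description is given, while in few-shot prompting a handful of labeled examples precede the new input.

```python
# Illustrative sketch (hypothetical helpers and examples, not from the article):
# building zero-shot and few-shot prompts for sentiment classification.

def zero_shot_prompt(text: str) -> str:
    """Zero-shot: the task is described, but no solved examples are shown."""
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: labeled examples precede the new input, demonstrating the task."""
    shots = "\n".join(
        f"Review: {review}\nSentiment: {label}" for review, label in examples
    )
    return (
        "Classify the sentiment of each review as Positive or Negative.\n"
        f"{shots}\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

# Hypothetical labeled examples used as the "shots".
demo_examples = [
    ("The interface is intuitive and fast.", "Positive"),
    ("It crashes every time I open a file.", "Negative"),
]

print(zero_shot_prompt("Great documentation."))
print(few_shot_prompt("Great documentation.", demo_examples))
```

Either prompt would then be sent to an LLM as-is; the few-shot variant typically yields more reliable labels because the examples fix the expected output format.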
Copyright (c) 2025 Innovation and Software

This work is licensed under a Creative Commons Attribution 4.0 International License.
The authors exclusively grant the right to publish their article to the Innovation and Software Journal, which may formally edit or modify the approved text to comply with its own editorial standards and with universal grammatical standards prior to publication. Likewise, the journal may translate approved manuscripts into as many languages as it deems necessary and disseminate them in several countries, always giving public recognition to the author or authors of the research.