Optimization of Large Language Models (LLMs) through Prompt Engineering

Keywords: Few-shot learning, generative models, LLMs, prompt engineering, zero-shot learning

Abstract

This article explores the impact of prompt engineering on the performance of large language models (LLMs) such as GPT and BERT. Prompt engineering is introduced as an approach in which specific instructions are designed to guide a model's responses, improving their accuracy and relevance without modifying the model's internal parameters. The study evaluates methodologies for constructing effective prompts, compares strategies such as few-shot and zero-shot learning, and analyzes practical cases in text generation, question answering, and sentiment analysis. The results show that strategic prompt design can significantly improve response quality, reduce errors, and broaden the range of LLM applications.
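
To make the compared strategies concrete, below is a minimal, self-contained sketch (not drawn from the article itself) of how zero-shot and few-shot prompts for the sentiment-analysis case might be constructed in Python. The prompt wording, the labeled examples, and the function names are illustrative assumptions; in practice the assembled string would be sent to an LLM API, which the sketch deliberately leaves out since the article does not tie its experiments to a specific provider.

    # Sketch of zero-shot vs. few-shot prompt construction for sentiment
    # analysis. The prompts are plain strings; the call to an actual LLM
    # is intentionally omitted (provider-specific, and an assumption here).

    ZERO_SHOT_TEMPLATE = (
        "Classify the sentiment of the following review as "
        "positive, negative, or neutral.\n"
        "Review: {review}\n"
        "Sentiment:"
    )

    # Hypothetical labeled demonstrations; a real few-shot prompt would
    # draw these from the task's own data.
    FEW_SHOT_EXAMPLES = [
        ("The battery lasts all day and the screen is gorgeous.", "positive"),
        ("It stopped working after a week. Total waste of money.", "negative"),
        ("It does what it says, nothing more, nothing less.", "neutral"),
    ]

    def build_zero_shot_prompt(review: str) -> str:
        """Zero-shot: the instruction alone, with no solved examples."""
        return ZERO_SHOT_TEMPLATE.format(review=review)

    def build_few_shot_prompt(review: str) -> str:
        """Few-shot: the same instruction preceded by solved examples,
        so the model can infer the expected format and label set."""
        demonstrations = "\n".join(
            f"Review: {text}\nSentiment: {label}"
            for text, label in FEW_SHOT_EXAMPLES
        )
        return (
            "Classify the sentiment of each review as "
            "positive, negative, or neutral.\n"
            f"{demonstrations}\n"
            f"Review: {review}\n"
            "Sentiment:"
        )

    if __name__ == "__main__":
        review = "Setup was painless and support answered within minutes."
        print(build_zero_shot_prompt(review))
        print("---")
        print(build_few_shot_prompt(review))

The only difference between the two strategies is the presence of the demonstrations; in neither case are the model's parameters updated, which is precisely the property the abstract highlights.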

Received: 2024-08-10
Accepted: 2024-09-16
Published: 2025-09-30
How to Cite
C. B. Paz Fernández, S. H. Diaz Sifuentes, and M. Torres Villanueva, “Optimization of Large Language Models (LLMs) through Prompt Engineering,” Innov. softw., vol. 6, no. 2, pp. 6–11, Sep. 2025.