
Data curation, Writing – review and editing: Marcelino Torres Villanueva.