Automatic Code Generation using Pre-Trained Language Models

Published on arXiv, 2021

Recommended citation: L. Perez, L. Ottens, and S. Viswanathan. "Automatic Code Generation using Pre-Trained Language Models." arXiv preprint arXiv:2102.10535, 2021. https://arxiv.org/abs/2102.10535

Recent advancements in natural language processing have led to near-human performance on multiple natural language tasks. In this paper, we seek to understand whether similar techniques can be applied to a highly structured environment with strict syntax rules. Specifically, we propose an end-to-end machine learning model for code generation in the Python language built on top of pre-trained language models. We demonstrate that a fine-tuned model can perform well in code generation tasks, achieving a BLEU score of 0.22, an improvement of 46% over a reasonable sequence-to-sequence baseline. All results and related code used for training and data processing are available on GitHub.
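
For context, a minimal sketch of how a corpus-level BLEU score like the one reported above can be computed for generated code using NLTK. The token sequences below are hypothetical examples, and the paper's exact tokenization and evaluation pipeline may differ.

```python
# Minimal BLEU scoring sketch for code generation outputs (assumed setup;
# not the paper's exact evaluation pipeline).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical token sequences: one reference snippet per generated candidate.
references = [[["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]]]
candidates = [["def", "add", "(", "x", ",", "y", ")", ":", "return", "x", "+", "y"]]

# Smoothing avoids zero scores when higher-order n-grams have no overlap,
# which is common for short code snippets.
score = corpus_bleu(references, candidates,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.2f}")
```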
