Copilot, as the code synthesizer is called, has been developed in collaboration with OpenAI and leverages Codex, a new AI system trained on publicly available source code and natural language, to translate a user's comments and code into auto-generated code snippets.
Despite its function as an AI-based autocomplete for writing boilerplate code, GitHub, the Microsoft-owned software repository hosting and version control platform, reiterated that Copilot is not designed to write code on behalf of the developer, noting that users can cycle through alternative suggestions and manually edit the suggested code.
Given that the code suggestions are drawn from English-language text and source code in publicly available repositories on GitHub, the company explicitly spelled out the security consequences that may arise from low-quality code in the training set, which can lead to "insecure coding patterns, bugs, or references to outdated APIs or idioms."
In other words, the code suggested by GitHub Copilot “should be carefully tested, reviewed, and vetted, like any other code.”
However, if it's any consolation, code auto-filled by Copilot is largely unique: in a test performed by GitHub, only 0.1% of generated code could be found verbatim in the training set. The company also said it has filters in place to block offensive words and avoid generating suggestions in sensitive contexts.
GitHub Copilot is currently available as an extension for Microsoft's cross-platform code editor Visual Studio Code, whether running locally on the machine or in the cloud.