Generative AI: Good or bad news for software developers?

In the past, software development was repetitive, laborious and primitive, requiring large amounts of custom code even for very simple programs.

Developers used low-level languages such as assembly, or even raw machine code, to enter hardware-specific instructions and manage memory by hand.

Fortunately, software development has evolved over time, with technology advances enabling development teams to build high-quality applications more quickly. The creation of Stack Overflow, the question-and-answer website for programmers, in 2008 was already a big step. Joining such collaborative software communities and gaining access to shared code libraries made programming much more efficient. Developers could ask their peers for guidance and support, and even copy and paste code snippets to build their applications.

Generative AI (artificial intelligence) tools such as ChatGPT are the latest revolutionary development in programming. These tools go beyond what any individual developer could do alone, generating vast amounts of code in multiple programming languages on demand. But such powerful tools come with significant security risks that shouldn’t be overlooked.

Just like humans, AI tools are not perfect and might recommend code snippets that contain security vulnerabilities. If human developers simply copy and paste these snippets, they can introduce vulnerabilities into their program, with potentially major consequences for security. So, although coding is increasingly being automated with AI-assisted tools, some form of human intervention is still necessary to double-check every piece of code generated by a machine. Developers must not forget that security practices are vital. Using tools like ChatGPT in software development doesn’t mean human programmers are no longer accountable for the code; it simply means new skill sets are required to improve identity security and ensure AI-generated code does not put the business at risk.
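To make the point concrete, here is a minimal, hypothetical illustration of the kind of review this implies. The first function is the sort of snippet an AI assistant might plausibly suggest, building a SQL query with string formatting and leaving it open to SQL injection; the second is the parameterised version a human reviewer should insist on. The table and column names are invented for the example.

```python
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of snippet an AI assistant might plausibly produce:
    # SQL built with string formatting, which is open to SQL injection.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # The fix a human reviewer should insist on: a parameterised query,
    # letting the database driver handle the untrusted input safely.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```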

Coding more quickly with AI tools

One of the aspects I find most enjoyable about software development is its constant evolution. As a developer, you are always seeking ways to improve efficiency and avoid duplicating code, following the principle of “don’t repeat yourself.” Throughout history, humans have looked for ways to automate repetitive tasks. From a developer’s perspective, eliminating repetitive coding lets us build better, more sophisticated applications.

AI bots are not the first technology to assist us in this endeavour. Instead, they represent the next phase in the advancement of application development, building upon previous achievements.

Can software developers blindly trust ChatGPT?

Prior to AI-powered tools, developers would search on platforms like Google and Stack Overflow for code solutions, comparing multiple answers to find the most suitable one. With ChatGPT, developers specify the programming language and required functionality, receiving what the AI tool deems the best answer. This saves time by reducing the amount of code developers need to write. By automating repetitive tasks, ChatGPT enables developers to focus on higher-level concepts, resulting in advanced applications and faster development cycles.

However, there are caveats to using AI tools. They provide a single answer with no validation from other sources, unlike the vetting you would see in a collaborative software development community, so developers need to validate any AI-generated solution themselves. In addition, because the tool is still in a beta stage, the code served by ChatGPT should be evaluated and cross-checked before being used in any application.

There are plenty of examples of breaches that started with someone copying code and not checking it thoroughly. Think back to the Heartbleed exploit, a security bug in OpenSSL, a widely used cryptography library, that led to the exposure of hundreds of thousands of websites, servers and other devices that used the code.

Because the library was so widely used, the assumption was that someone, surely, had already checked it for vulnerabilities. Instead, the vulnerability persisted for years, quietly used by attackers to exploit vulnerable systems.

This is the darker side of ChatGPT: attackers have access to the tool too. While OpenAI has built in safeguards to prevent it from answering questions on problematic subjects such as code injection, the CyberArk Labs team has already uncovered ways in which the tool can be used for malicious purposes, for example to create polymorphic malware or to produce malicious code more rapidly. Even with safeguards in place, developers must exercise caution.

ChatGPT generates the code, but developers are accountable for it

With these potential security risks in mind, there are some important best practices to follow when using code generated by AI tools like ChatGPT. Start by checking the solution ChatGPT generates against another source, such as a community or peers you trust. Then make sure the code follows best practices for granting access to databases and other critical resources: the principle of least privilege, secrets management, and auditing and authenticating access to sensitive resources.
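As a minimal sketch of what that looks like in practice, the snippet below pulls database credentials from the environment (where a secrets manager or vault agent would place them) instead of hardcoding them, and connects with an account that should hold only the rights it needs. The variable names, the reporting database and the use of the psycopg2 PostgreSQL driver are all assumptions made for the example.

```python
import os

import psycopg2  # assumed PostgreSQL driver; any database client works the same way


def get_reporting_connection():
    # Credentials come from the environment, populated by a secrets manager
    # or vault agent, rather than being hardcoded in the source the way
    # AI-generated snippets often present them.
    user = os.environ["REPORTING_DB_USER"]          # hypothetical variable names
    password = os.environ["REPORTING_DB_PASSWORD"]
    host = os.environ.get("REPORTING_DB_HOST", "localhost")

    # Least privilege: this account should hold only SELECT rights on the
    # reporting schema, not broad administrative access to the database.
    return psycopg2.connect(host=host, dbname="reporting",
                            user=user, password=password)
```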

Make sure you double-check the code for any potential vulnerabilities, and be aware of what you’re putting into ChatGPT as well. There is a real question about how securely the information you enter into ChatGPT is handled, so be careful with highly sensitive inputs. Ensure you’re not accidentally exposing any personally identifiable information that could run afoul of compliance regulations.
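One low-tech precaution, sketched below, is to scrub obvious personal data from a snippet before it ever reaches the prompt. The patterns and the scrub_before_prompting helper are hypothetical and deliberately simple; a real deployment would follow your organisation’s data-classification rules rather than two regular expressions.

```python
import re

# Hypothetical patterns for the most obvious personal data: email addresses
# and long digit runs such as account or card numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
LONG_NUMBER = re.compile(r"\b\d{8,}\b")


def scrub_before_prompting(text: str) -> str:
    """Mask obvious personal data before the text is sent to an external tool."""
    text = EMAIL.sub("<EMAIL>", text)
    return LONG_NUMBER.sub("<NUMBER>", text)


snippet = "Customer jane.doe@example.com, account 1234567890, cannot log in."
print(scrub_before_prompting(snippet))
# Customer <EMAIL>, account <NUMBER>, cannot log in.
```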

Although developers increasingly don’t write every line of code themselves, the responsibility still lies with them: they cannot simply trust a machine blindly. To prevent issues or breaches, they must collaborate with security teams and make sure they understand and adopt identity security best practices. Human users ultimately bear the consequences of any insecure code, so they are accountable, not the machine that generated it. Careful evaluation and adherence to cybersecurity practices are therefore essential when using ChatGPT; only then will software developers be able to improve efficiency with AI-assisted tools without jeopardising security.


About the Author

John Walsh is Senior Product Marketing Manager, Developer Security at CyberArk. CyberArk is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads and throughout the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets.