Using Artificial Intelligence

The use of artificial intelligence (AI) has exploded in recent years, and the technology has advanced to the point where it can be a useful tool for brainstorming solutions, writing, debugging, or evaluating code. But what does this mean for security?

The use of AI in projects raises a number of issues we must address, including:

  • Who owns the code and data, and who owns the result from AI?
  • What options do we have for addressing breaches or violations of agreements?
  • What can go wrong? Can data or code be exposed or can the tool make changes we don’t understand or control?
  • How do we handle secrets?

Remember that Bouvet and most clients have guidelines for the use of AI that must be followed. It is not permitted to use AI tools without explicit approval from Bouvet or the client!

What we are allowed to do

At Bouvet and on Bouvet equipment, we are only allowed to use AI tools that are explicitly permitted in Bouvet’s AI policy; on client equipment, we may only use tools approved by the client. These restrictions are in place because of the complexity surrounding AI tools.

They often run in their own environments, handle potentially sensitive information, and can make changes that affect us or the client.

Even though we have the technical ability to run an AI tool, that doesn’t necessarily mean we should run it. If you believe a tool could improve the development process or benefit your project, submit a BSD ticket so that it can be properly evaluated.

New tool? Consider the following

If you want to start using a new tool, it’s important to clarify who owns the results produced by that tool.

Many free or non-enterprise versions of AI tools include license terms that allow the provider to use input data for training purposes. This is never acceptable for Bouvet or for our clients.

We must also maintain control over where data and information flow, to ensure privacy and compliance with our obligations under data protection laws.

What does the AI tool have access to?

If you have been authorized to use an AI tool in your development project, you must have control over the following:

  • What are you allowed to share with the tool?
  • What have you actually shared with the tool?
  • How can you ensure you don’t share more than you’re permitted to?

How you use the tool will vary: some AI coding tools run as assistants within your IDE, while others connect to GitHub and suggest changes in separate branches based on your prompts.

Unless explicitly approved, the tool must not have access to any data beyond the codebase.

Check that you are not including data files, secrets, or other sensitive information in the repository, and exclude them in .gitignore if necessary. Use key vaults wherever possible to avoid secrets ending up in the repository by accident.
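As an illustration, here is a minimal sketch of fetching a secret at runtime instead of keeping it in the repository. It assumes Azure Key Vault with the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders, and the same principle applies to other vaults or to environment variables.

    # Minimal sketch: read a secret from Azure Key Vault at runtime instead of
    # storing it in the repository. Vault URL and secret name are placeholders.
    import os

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    VAULT_URL = os.environ.get("KEY_VAULT_URL", "https://my-vault.vault.azure.net")

    def get_database_password() -> str:
        # DefaultAzureCredential picks up credentials from the environment,
        # a managed identity, or a developer login - never from the codebase.
        client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
        return client.get_secret("database-password").value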

Sensitive data disclosure

Be aware that some AI tools used in the IDE can commit and push code to GitHub automatically, and that precautions are required to avoid uploading sensitive information such as keys, certificates, and data.
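One possible safeguard is a client-side pre-commit hook that refuses to commit files that look like secrets, regardless of whether a human or an AI assistant initiated the commit (a tool that bypasses hooks with --no-verify would still get around it). Below is a rough sketch in Python; the blocked file names and the regex are illustrative only, and dedicated scanners such as gitleaks or git-secrets do a far more thorough job.

    #!/usr/bin/env python3
    # Rough sketch of a pre-commit hook (.git/hooks/pre-commit) that blocks
    # commits staging files or content that look like secrets. The file name
    # list and the regex are illustrative, not exhaustive.
    import re
    import subprocess
    import sys

    BLOCKED_SUFFIXES = (".env", ".pem", ".pfx", ".p12", "id_rsa")
    SECRET_PATTERN = re.compile(r"(api[_-]?key|secret|password)\s*[:=]\s*\S+", re.IGNORECASE)

    # List files that are staged for this commit (added, copied, or modified).
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    problems = []
    for path in staged:
        if path.endswith(BLOCKED_SUFFIXES):
            problems.append(f"{path}: file name suggests key material or local configuration")
            continue
        # Inspect the staged version of the file, not the working tree copy.
        show = subprocess.run(["git", "show", f":{path}"], capture_output=True, text=True)
        if show.returncode == 0 and SECRET_PATTERN.search(show.stdout):
            problems.append(f"{path}: contains something that looks like a hard-coded secret")

    if problems:
        print("Commit blocked, review the following before committing:")
        for problem in problems:
            print(f"  - {problem}")
        sys.exit(1)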

Quality assurance of AI contributions

AI solutions can have a positive effect on progress, but they must always be treated as third-party code and quality assured accordingly. Code generated by AI often does what you intended, but just as often in unnecessarily complex ways. There have been countless examples of AI introducing weaknesses or vulnerabilities, or hallucinating solutions that don’t work in practice.

As a developer, you must know how to properly instruct the tool, and be aware of its limitations. To make things easier, here are a few basic principles:

  • AI must not make design or architectural decisions. It should only be used to solve specific tasks within a human-defined architecture.
  • AI should be treated like a junior developer: everything it produces must be reviewed, understood, and tested. AI-based contributions must be traceable and verifiable.
  • Tasks should be divided into small, reviewable components that you can fully validate. Avoid large code blocks without human insight.

AI can make us more productive, but it’s crucial that we understand the results these tools produce. There have been many examples of AI-generated code being used uncritically, only for serious vulnerabilities to be discovered later—vulnerabilities that can be exploited to manipulate or extract data. Security testing should always be part of the development process, but it becomes even more important when using AI tools for coding. AI-generated code must never be deployed to production without being reviewed, understood, and tested.

You should also consider implementing safeguards to prevent unintended or harmful consequences, such as rule-based files with additional AI instructions, access restrictions preventing AI from merging code automatically, and other measures ensuring that AI cannot make changes without human review and approval.

More information