Once the solution is designed and the code is written, it's "just" a matter of deploying, and then we're done, right? We often use CI/CD solutions to build, deploy, run tests, and much more.
If someone compromises the pipelines, the build agent that builds the solution, or the connection to the resources we deploy to, we have a major problem.
Even though it's challenging to cover everything in a few short articles, we still try to provide insights into the issues that delivery teams should address.
1 - CI/CD
When we create solutions, they need to be built and deployed consistently. Using CI/CD eliminates human errors from the process and ensures that we can reliably reproduce both artifacts and deployments.
When setting up runtime environments, it is important to consider how the solution we are developing can be built and deployed in a way that is easy to repeat and does not require a person to spend time and energy doing the same thing every time.
Using CI/CD
Continuous Integration and Continuous Delivery, often abbreviated as CI/CD, are common approaches for how software is built and deployed to runtime environments, often using scripts in the form of pipelines or actions. Names and terms here depend on the tools used, but the principle is much the same.
Such a pipeline can do much more than just build; it can also perform other tasks such as running automated testing, vulnerability scanning, secret scanning, and much more. Regardless of what it is used for, it is important to be aware of the risk elements associated with CI/CD.
The big advantage of CI/CD is the automation built into the solution. Each time you run a pipeline, the run and all artifacts are archived and linked to the version control system so you can go back and see which commit was used. Running CI/CD should be safe as long as you have control over and protect the branch used as the basis for deployment to the production environment.
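To make this traceability concrete, below is a minimal, tool-agnostic sketch of a build step that records the commit SHA and a checksum alongside the artifact, so any deployment can be traced back to the exact source it was built from. The file names and the GIT_COMMIT variable are assumptions; most CI systems expose the commit through their own environment variables.

```python
# Minimal sketch: record which commit an artifact came from so a deployment
# can always be traced back to its source. Names are illustrative only.
import hashlib
import json
import os
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def write_build_metadata(artifact: Path, output: Path) -> dict:
    """Store the artifact checksum, commit SHA and build time next to the artifact."""
    metadata = {
        "artifact": artifact.name,
        "sha256": hashlib.sha256(artifact.read_bytes()).hexdigest(),
        # Assumption: the CI system exposes the commit as GIT_COMMIT;
        # fall back to asking git directly for local builds.
        "commit": os.environ.get("GIT_COMMIT")
        or subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
        ).stdout.strip(),
        "built_at": datetime.now(timezone.utc).isoformat(),
    }
    output.write_text(json.dumps(metadata, indent=2))
    return metadata


if __name__ == "__main__":
    # Hypothetical artifact name used for illustration.
    print(write_build_metadata(Path("app.zip"), Path("app.zip.metadata.json")))
```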
When building a solution, several choices must be made. Is it acceptable to the customer that builds run in third-party managed cloud environments, or must they run in our own or the customer's environments?
Building is often the first step in the process and is typically done only once per release. The build environments used in a CI/CD process, often called build agents, usually come in two forms:
Cloud provider-managed agents
Self-hosted agents - these can be hosted either in the cloud or on-premises
With cloud provider-managed agents, standard images pre-configured for this task are used. They are deployed when you start a build process and contain all the tools needed for building. Once deployed, they check out your source code, build it, store the artifact in a suitable system, and then the instance is stopped and deleted.
Self-hosted agents are more complex because you are responsible for all maintenance and configuration. In return, you have dedicated agents used only by the teams or projects granted access to them.
Although the first option is often good enough, it is important to be aware of the alternatives and when to consider them. Regardless of the solution chosen, remember that the build environment is a critical point: if it is compromised, an attacker can potentially make changes that affect everything built there.
This is especially important when using third-party packages; at a minimum, packages should be pinned to specific versions, and you should never fetch the latest version of a package automatically.
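As an illustration, here is a small sketch of a check that could run as an early pipeline step: it fails the build if any entry in a requirements.txt file is not pinned to an exact version. The file name and the pinning rule are assumptions tied to Python's pip; the same idea applies to any package manager, and lock files and hash checking take it further.

```python
# Minimal sketch: fail the build if any dependency in requirements.txt is not
# pinned to an exact version. Adjust the rules to the package manager in use.
import re
import sys
from pathlib import Path

# A pinned requirement looks like "package==1.2.3", optionally with extras.
PINNED = re.compile(r"^[A-Za-z0-9._-]+(\[[A-Za-z0-9,._-]+\])?==\S+")


def unpinned_requirements(path: Path) -> list[str]:
    offenders = []
    for raw in path.read_text().splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments and whitespace
        if not line or line.startswith("-"):  # skip pip options such as -r / --hash
            continue
        if not PINNED.match(line):
            offenders.append(line)
    return offenders


if __name__ == "__main__":
    bad = unpinned_requirements(Path("requirements.txt"))
    if bad:
        print("Unpinned dependencies found:", ", ".join(bad))
        sys.exit(1)  # a non-zero exit code makes the pipeline step fail
    print("All dependencies are pinned.")
```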
When we deploy a solution, we move it from an artifact repository into the runtime environments. How this happens depends on the platform used.
When deploying an application, you start with the artifact that was built, which is then uploaded to the desired runtime environment. To ensure consistency, it is common to build only once so that the same artifact is deployed to multiple locations - if the environments are the same and the artifact is the same, we should see the same result everywhere.
It is common to have several steps in the pipeline that handle deployment to different environments, so that you only deploy to the next environment if the previous step succeeded. If unexpected errors occur, you can also rerun a step in the pipeline to rule out the deployment itself as the cause.
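Below is a minimal sketch of this "build once, promote through environments" idea. The deploy_to function is a placeholder for whatever deployment mechanism the platform provides, and the environment names are purely illustrative; the point is that the same verified artifact moves from one environment to the next, and the chain stops at the first failure.

```python
# Minimal sketch of "build once, deploy many": the same artifact is promoted
# through the environments in order, and it is verified against the checksum
# recorded at build time before the promotion starts.
import hashlib
import sys
from pathlib import Path

ENVIRONMENTS = ["dev", "test", "prod"]  # assumed names, purely illustrative


def checksum(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def deploy_to(environment: str, artifact: Path) -> bool:
    """Placeholder: call the platform's deployment API or CLI here."""
    print(f"Deploying {artifact.name} to {environment} ...")
    return True


def promote(artifact: Path, expected_sha256: str) -> None:
    if checksum(artifact) != expected_sha256:
        sys.exit("Artifact does not match the checksum from the build step.")
    for environment in ENVIRONMENTS:
        if not deploy_to(environment, artifact):
            # Stop the promotion chain; later environments are never touched.
            sys.exit(f"Deployment to {environment} failed, aborting.")
        print(f"{environment}: OK")


if __name__ == "__main__":
    artifact = Path("app.zip")  # hypothetical artifact from the build step
    promote(artifact, checksum(artifact))
```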
In a deployment pipeline, it is important to consider when it is appropriate to deploy. Running a deployment should not be dangerous, as the entire process is automated. However, in many cases, you want to avoid rolling out changes or new functionality in certain environments before this has been cleared with the product owner. To prevent someone from accidentally deploying to the wrong environment, there should be approval steps along the way that require others on the team to approve a deployment before it can start.
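Most CI/CD tools offer approval gates as a built-in feature, and that is normally what you should use. Purely to illustrate the principle, here is a small sketch where a production deployment is only allowed once a minimum number of other team members have approved it; the approvals.json file and its format are assumptions made for the example.

```python
# Minimal sketch of an approval gate: production deployments require a minimum
# number of approvals from other team members, read here from a JSON file.
import json
from pathlib import Path

REQUIRED_APPROVALS = 2


def approved_for_production(approvals_file: Path, requester: str) -> bool:
    approvals = json.loads(approvals_file.read_text())
    # The person triggering the deployment cannot approve it themselves.
    others = {a["user"] for a in approvals if a["user"] != requester and a["approved"]}
    return len(others) >= REQUIRED_APPROVALS


if __name__ == "__main__":
    if approved_for_production(Path("approvals.json"), requester="alice"):
        print("Approved - production deployment may start.")
    else:
        print("Not enough approvals - production deployment blocked.")
```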
2 - Pentesting
Penetration testing, often referred to as pentesting, is the art of testing a system to find weaknesses that can be exploited and to assess the risk those weaknesses pose to the owner of the solution.
Security testing and pentesting have many similarities, but while approaches like DAST primarily focus on web applications and more automated tests, a pentest is more comprehensive and typically also includes underlying infrastructure and networks. In some cases, it may also have a physical element where pentesters will attempt to gain access to premises to uncover weaknesses in physical security or routines.
A penetration test will always have an agreed scope that regulates what the pentesters can do, when they can do it, and which resources and services they can test.
Why Pentest?
It is not possible to prove that a solution is secure, only that it is not vulnerable to certain attacks. If you deliver a solution with strict security requirements, or under an agreement that demands it, a pentest is a useful tool for gaining confidence that the solution and its surrounding environment hold up.
After the testing is completed, a report is usually delivered that describes what was tested and how, along with an assessment of all findings. In some cases, findings are described as vulnerabilities but do not necessarily need to be addressed, either because other mitigating measures are in place or because the risk or consequence is low.
What is Required to Conduct a Pentest?
First and foremost, you need one or more pentesters. This is not something you do on your own after watching a few videos on YouTube! A pentest requires expertise in several areas, as some attacks depend on exploiting multiple vulnerabilities that are not particularly serious on their own.
As a development team, you must ensure that the environment to be tested is clearly identified so that everyone understands where the testing is taking place. The scope of the test must be defined - remember that it must be possible to distinguish an actual attack from a pentest if both occur simultaneously: if you see signs of an attack on an environment that is not part of the test and your environments are segregated, you should take action!
As part of the planning, it is important to check with the customer what routines they have for pentesting. In many cases, they will have a Security Operations Center (SOC) and/or a Network Operations Center (NOC) that continuously monitors the infrastructure. These must be part of the planning to avoid misunderstandings or problems when the test begins.
In some cases, it is desirable to conduct a pentest without notifying anyone, as you want to see if such a test is detected - remember that a pentest is, in practice, an attack.
When to Conduct a Pentest, and What to Do While It Is Ongoing?
In a perfect world, you should conduct a pentest with every major change, but this is not feasible except for a few actors with special requirements. Each customer will have different requirements and expectations, so it is important to establish guidelines for this before planning to conduct the test.
If the test is announced in advance, it is a great opportunity to monitor logs and other monitoring tools to see if you notice anything unusual. If you can correlate this information with the tests reported afterward, you have a good opportunity to create automatic alerting routines that detect deviations from the norm.
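As a simple illustration of "detecting deviations from the norm", the sketch below compares the number of suspicious events (here, failed logins) in the current hour with a baseline learned from earlier hours and raises an alert on a clear spike. The event format and thresholds are assumptions; in practice, you would typically build such rules in the monitoring platform itself.

```python
# Minimal sketch of a "deviation from the norm" alert: compare the number of
# suspicious events in the last hour against a baseline from earlier traffic.
from collections import Counter
from datetime import datetime, timedelta
from statistics import mean, pstdev


def hourly_counts(events: list[datetime]) -> Counter:
    return Counter(e.replace(minute=0, second=0, microsecond=0) for e in events)


def is_anomalous(events: list[datetime], now: datetime, sigmas: float = 3.0) -> bool:
    counts = hourly_counts(events)
    current_hour = now.replace(minute=0, second=0, microsecond=0)
    history = [count for hour, count in counts.items() if hour != current_hour]
    if len(history) < 2:
        return False  # not enough history to form a baseline
    baseline, spread = mean(history), pstdev(history)
    return counts[current_hour] > baseline + sigmas * max(spread, 1.0)


if __name__ == "__main__":
    now = datetime(2024, 5, 1, 14, 30)
    failed_logins = [now - timedelta(hours=h) for h in range(1, 25)]   # ~1 per hour baseline
    failed_logins += [now - timedelta(minutes=m) for m in range(30)]   # burst in the current hour
    print("Alert!" if is_anomalous(failed_logins, now) else "Normal traffic.")
```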
What to Do After a Pentest?
When the team receives the report after a completed test, it is important to review it with the product owner. Always remember that security is never the responsibility of individuals alone - it is the delivery manager’s responsibility to ensure that security measures are implemented, but it is the team’s collective responsibility to ensure that what is built meets the set requirements.
Identified findings must be classified and added to the backlog. Each finding must then be assessed for how urgent it is to address: some can wait, while others must be fixed as quickly as possible. This will vary from delivery to delivery and from finding to finding.
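One simple way to support that prioritization is to give each finding a rough risk score and sort the backlog by it, as in the sketch below. The 1-5 scales and the "fix now" cut-off are assumptions; many teams use CVSS scores or the customer's own risk matrix instead.

```python
# Minimal sketch: rank pentest findings by a simple risk score
# (likelihood x impact) so the most urgent items end up at the top.
from dataclasses import dataclass


@dataclass
class Finding:
    title: str
    likelihood: int  # 1 (unlikely) - 5 (almost certain)
    impact: int      # 1 (negligible) - 5 (critical)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact


def prioritize(findings: list[Finding], fix_now_threshold: int = 15) -> None:
    for f in sorted(findings, key=lambda f: f.risk, reverse=True):
        action = "fix as soon as possible" if f.risk >= fix_now_threshold else "plan in backlog"
        print(f"{f.title}: risk {f.risk} -> {action}")


if __name__ == "__main__":
    prioritize([
        Finding("Outdated TLS configuration", likelihood=3, impact=3),
        Finding("SQL injection in search endpoint", likelihood=4, impact=5),
        Finding("Verbose error messages", likelihood=2, impact=2),
    ])
```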
Remember
You should never conduct a pentest yourself unless you know _very_ well what you are doing. Running tools used for pentesting on Bouvet machines or in Bouvet's network is not allowed unless it has been cleared with Internal IT & Security in advance.