End-to-end tests in the project

In every CI build, a set of end-to-end tests is run to verify, as far as possible, that the changes don’t introduce regressions from a user’s point of view. Please refer to the CI documentation for further information. Currently, the end-to-end tests consist solely of browser tests.

These tests are run by the script/e2e-test.sh script. In particular, this script:

  1. Installs Kubeapps using the images built during the CI process (cf. the CI config file) by passing the proper arguments to the Helm command.
    1. If USE_MULTICLUSTER_OIDC_ENV is enabled, a set of flags is passed to configure the Kubeapps installation in a multicluster environment (see the sketch after this list).
  2. Waits for:
    1. the different deployments to be ready.
    2. the Bitnami repository sync job to complete.
  3. Installs some dependencies:
    1. ChartMuseum.
    2. The Operator Framework (not on GKE).
  4. Runs the web browser tests.

If all of the above succeeds, control is returned to the CI with the proper exit code.
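As an illustration of the multicluster setup mentioned in step 1, the Helm invocation can receive additional cluster entries via --set flags. The following is only a hedged sketch: the exact flags and values used live in script/e2e-test.sh, and the cluster names and API server URL shown here are illustrative.

# Illustrative values only: the real flags are set by script/e2e-test.sh.
helm install kubeapps bitnami/kubeapps \
  --namespace kubeapps --create-namespace \
  --set "clusters[0].name=default" \
  --set "clusters[1].name=second-cluster" \
  --set "clusters[1].apiServiceURL=https://172.18.0.3:6443"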

Web browser tests

Apart from the basic functionality tests run by the chart tests, this project contains web browser tests that you can find in the integration folder.

These tests are based on Playwright, a NodeJS library that provides a high-level API to control browsers such as Chromium, Firefox, and WebKit (in headless mode by default).

The aforementioned integration folder is self-contained; that is, it contains every dependency required to run the browser tests in a separate package.json. Furthermore, a Dockerfile is used to generate an image with all the dependencies needed to run the browser tests.

These tests can be run either locally or in a container environment.

You can set up a configured Kubeapps instance in your cluster with the script/setup-kubeapps.sh script.
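For example, assuming your current kubectl context points at a disposable test cluster (the script may accept extra options; check its header):

# Set up a configured Kubeapps instance in the current cluster.
./script/setup-kubeapps.sh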

Running browser tests locally

To run the tests locally, you just need to install the dependencies and set the required environment variables:

cd integration
yarn install
INTEGRATION_ENTRYPOINT=http://kubeapps.local USE_MULTICLUSTER_OIDC_ENV=false ADMIN_TOKEN=foo1 VIEW_TOKEN=foo2 EDIT_TOKEN=foo3 yarn test

If a test fails, besides the test logs, a screenshot is generated and saved in the reports/screenshots folder.
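When iterating on a single test, you can usually pass a path filter to the test runner. This is a sketch, assuming the runner configured in the integration package.json accepts file paths; the spec path shown is hypothetical:

cd integration
# Hypothetical spec path; point it at a real file under the integration folder.
INTEGRATION_ENTRYPOINT=http://kubeapps.local USE_MULTICLUSTER_OIDC_ENV=false ADMIN_TOKEN=foo1 VIEW_TOKEN=foo2 EDIT_TOKEN=foo3 yarn test tests/deploy.spec.js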

Running browser tests in a pod

Since the CI environment doesn’t have the required dependencies, and to provide a reproducible environment, the browser tests can be run in a Kubernetes pod.

To do so, you can spin up a pod running the kubeapps/integration-tests image. This image contains all the required dependencies, and it waits indefinitely, so you can run commands inside it.

The goal of this setup is that you can copy the latest tests into the pod, run them, and extract the screenshots in case of failure.
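A minimal sketch of that workflow, assuming a pod named integration in the default namespace and an /app workdir inside the image (the pod name, image tag, and paths are assumptions, not values fixed by the image):

# Start a pod from the image; name and tag are illustrative.
kubectl run integration --image=kubeapps/integration-tests:v1.0.1 --restart=Never

# Copy the latest tests into the pod (assumed /app workdir).
kubectl cp ./integration/tests integration:/app/tests

# Run the tests inside the pod.
kubectl exec integration -- yarn test

# In case of failure, extract the screenshots (assumed reports path).
kubectl cp integration:/app/reports/screenshots ./reports/screenshots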

Building the “kubeapps/integration-tests” image

Our CI system relies on the kubeapps/integration-tests image to run the browser tests (cf. the CI config file and the CI documentation). Consequently, this image should be properly versioned to avoid CI issues.

The kubeapps/integration-tests image is built using this Makefile, which uses the IMAGE_TAG variable to set the version with which the image is built. It is important to increase the version each time the image is built and pushed:

# Get the latest tag from https://hub.docker.com/r/kubeapps/integration-tests/tags?page=1&ordering=last_updated
# and then increment the patch version of the latest tag to get the IMAGE_TAG that you'll use below.
cd integration
IMAGE_TAG=v1.0.1 make build
IMAGE_TAG=v1.0.1 make push

This will build and push the image using this Dockerfile (we use the same base image as in the Kubeapps Dashboard build image). The dependencies of this image are defined in the package.json.

When pushing a new image, also update the version field in the package.json.
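For instance, assuming Yarn 1.x (which supports these flags), you can bump the patch version in one step:

cd integration
# Updates the "version" field in package.json without creating a git tag.
yarn version --new-version 1.0.1 --no-git-tag-version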

To sum up, whenever a change requires a new kubeapps/integration-tests version (a new NodeJS base image, updated integration dependencies, other changes, etc.), you will have to release a new version. This process involves:

  • Checking if the integration Dockerfile is using the proper base version.
  • Ensuring we are not using any deprecated dependency in the package.json .
  • Updating the Makefile with the new version tag.
  • Running make build && make push to release the new image version.