# Bitnami package for Kubeapps

Kubeapps is a web-based UI for launching and managing applications on Kubernetes. It allows users to deploy trusted applications and operators, and to control user access to the cluster.

*Overview of Kubeapps (screenshot)*

## TL;DR

```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/kubeapps --namespace kubeapps --create-namespace
```

Note: The command above already uses the Bitnami registry and repository (`REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`); if you host the chart in a different Helm chart registry, substitute your own registry and repository. Check out the getting started guide to start deploying apps with Kubeapps.

Looking to use Kubeapps in production? Try VMware Tanzu Application Catalog, the enterprise edition of Bitnami Application Catalog.

## Introduction

This chart bootstraps a Kubeapps deployment on a Kubernetes cluster using the Helm package manager.

With Kubeapps you can deploy and manage applications on your Kubernetes cluster through a web-based dashboard.

Note: Kubeapps 2.0 and onwards supports Helm 3 only. While only the Helm 3 API is supported, in most cases, charts made for Helm 2 will still work.

It also packages the Bitnami PostgreSQL chart, which is required to satisfy the database requirements of the Kubeapps application.

## Prerequisites

- Kubernetes 1.23+
- Helm 3.8.0+
- Administrative access to the cluster to create Custom Resource Definitions (CRDs)
- PV provisioner support in the underlying infrastructure (required for the PostgreSQL database)

## Installing the Chart

To install the chart with the release name `my-release`:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --create-namespace
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

The command deploys Kubeapps on the Kubernetes cluster in the `kubeapps` namespace. The Parameters section lists the parameters that can be configured during installation.

Caveat: Only one Kubeapps installation is supported per namespace.

Once you have installed Kubeapps, follow the Getting Started Guide for additional information on how to access and use Kubeapps.

## Parameters

### Global parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.storageClass` | Global StorageClass for Persistent Volume(s) | `""` |

### Common parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `kubeVersion` | Override Kubernetes version | `""` |
| `nameOverride` | String to partially override `common.names.fullname` | `""` |
| `fullnameOverride` | String to fully override `common.names.fullname` | `""` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
| `enableIPv6` | Enable IPv6 configuration | `false` |
| `diagnosticMode.enabled` | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | `false` |
| `diagnosticMode.command` | Command to override all containers in the deployment | `["sleep"]` |
| `diagnosticMode.args` | Args to override all containers in the deployment | `["infinity"]` |

### Traffic Exposure Parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `ingress.enabled` | Enable ingress record generation for Kubeapps | `false` |
| `ingress.apiVersion` | Force Ingress API version (automatically detected if not set) | `""` |
| `ingress.hostname` | Default host for the ingress record | `kubeapps.local` |
| `ingress.path` | Default path for the ingress record | `/` |
| `ingress.pathType` | Ingress path type | `ImplementationSpecific` |
| `ingress.annotations` | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | `{}` |
| `ingress.tls` | Enable TLS configuration for the host defined at the `ingress.hostname` parameter | `false` |
| `ingress.selfSigned` | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | `false` |
| `ingress.extraHosts` | An array with additional hostname(s) to be covered with the ingress record | `[]` |
| `ingress.extraPaths` | An array with additional arbitrary paths that may need to be added to the ingress under the main host | `[]` |
| `ingress.extraTls` | TLS configuration for additional hostname(s) to be covered with this ingress record | `[]` |
| `ingress.secrets` | Custom TLS certificates as secrets | `[]` |
| `ingress.ingressClassName` | IngressClass that will be used to implement the Ingress (Kubernetes 1.18+) | `""` |
| `ingress.extraRules` | Additional rules to be covered with this ingress record | `[]` |
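These parameters can be combined in a custom values file applied with `helm install ... -f custom-values.yaml`. As a sketch, the following exposes Kubeapps through an Ingress with TLS; the hostname is a placeholder, the `nginx` class assumes an NGINX ingress controller is installed, and the cert-manager annotation assumes a `ClusterIssuer` named `letsencrypt-prod` already exists in your cluster:

```yaml
# custom-values.yaml (sketch): expose Kubeapps via Ingress with TLS
ingress:
  enabled: true
  hostname: kubeapps.example.com      # placeholder hostname
  ingressClassName: nginx             # assumes an NGINX ingress controller
  tls: true
  annotations:
    # hypothetical issuer name; adjust to your cert-manager setup
    cert-manager.io/cluster-issuer: letsencrypt-prod
```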

### Kubeapps packaging options

| Name | Description | Value |
| ---- | ----------- | ----- |
| `packaging.helm.enabled` | Enable the standard Helm packaging | `true` |
| `packaging.carvel.enabled` | Enable support for the Carvel (kapp-controller) packaging | `false` |
| `packaging.flux.enabled` | Enable support for Flux (v2) packaging | `false` |
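For instance, to serve Carvel packages alongside standard Helm charts, a values file might set the following; note that Carvel support relies on kapp-controller already being available in the cluster:

```yaml
# Sketch: enable Carvel packaging in addition to the default Helm support
packaging:
  helm:
    enabled: true
  carvel:
    enabled: true   # requires kapp-controller in the cluster
```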

### Frontend parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `frontend.image.registry` | NGINX image registry | `REGISTRY_NAME` |
| `frontend.image.repository` | NGINX image repository | `REPOSITORY_NAME/nginx` |
| `frontend.image.digest` | NGINX image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `frontend.image.pullPolicy` | NGINX image pull policy | `IfNotPresent` |
| `frontend.image.pullSecrets` | NGINX image pull secrets | `[]` |
| `frontend.image.debug` | Enable image debug mode | `false` |
| `frontend.proxypassAccessTokenAsBearer` | Use the `access_token` as the Bearer when talking to the k8s api server | `false` |
| `frontend.proxypassExtraSetHeader` | Set an additional proxy header for all requests proxied via NGINX | `""` |
| `frontend.largeClientHeaderBuffers` | Set `large_client_header_buffers` in the NGINX config | `4 32k` |
| `frontend.replicaCount` | Number of frontend replicas to deploy | `2` |
| `frontend.updateStrategy.type` | Frontend deployment strategy type | `RollingUpdate` |
| `frontend.resources.limits.cpu` | The CPU limits for the NGINX container | `250m` |
| `frontend.resources.limits.memory` | The memory limits for the NGINX container | `128Mi` |
| `frontend.resources.requests.cpu` | The requested CPU for the NGINX container | `25m` |
| `frontend.resources.requests.memory` | The requested memory for the NGINX container | `32Mi` |
| `frontend.extraEnvVars` | Array with extra environment variables to add to the NGINX container | `[]` |
| `frontend.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the NGINX container | `""` |
| `frontend.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the NGINX container | `""` |
| `frontend.containerPorts.http` | NGINX HTTP container port | `8080` |
| `frontend.podSecurityContext.enabled` | Enable frontend pods' Security Context | `true` |
| `frontend.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `frontend.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `frontend.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `frontend.podSecurityContext.fsGroup` | Set frontend pod's Security Context fsGroup | `1001` |
| `frontend.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `frontend.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `frontend.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `frontend.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `frontend.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `frontend.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `frontend.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `frontend.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `frontend.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `frontend.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `frontend.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `60` |
| `frontend.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `frontend.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `frontend.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `frontend.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `frontend.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `frontend.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `0` |
| `frontend.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `frontend.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `frontend.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `frontend.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `frontend.startupProbe.enabled` | Enable startupProbe | `false` |
| `frontend.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `frontend.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `frontend.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `frontend.startupProbe.failureThreshold` | Failure threshold for startupProbe | `6` |
| `frontend.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `frontend.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `frontend.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `frontend.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `frontend.lifecycleHooks` | Custom lifecycle hooks for frontend containers | `{}` |
| `frontend.command` | Override default container command (useful when using custom images) | `[]` |
| `frontend.args` | Override default container args (useful when using custom images) | `[]` |
| `frontend.podLabels` | Extra labels for frontend pods | `{}` |
| `frontend.podAnnotations` | Annotations for frontend pods | `{}` |
| `frontend.podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `frontend.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `frontend.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `frontend.nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set | `""` |
| `frontend.nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set | `[]` |
| `frontend.affinity` | Affinity for pod assignment | `{}` |
| `frontend.nodeSelector` | Node labels for pod assignment | `{}` |
| `frontend.tolerations` | Tolerations for pod assignment | `[]` |
| `frontend.priorityClassName` | Priority class name for frontend pods | `""` |
| `frontend.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `frontend.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
| `frontend.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `frontend.hostAliases` | Custom host aliases for frontend pods | `[]` |
| `frontend.extraVolumes` | Optionally specify extra list of additional volumes for frontend pods | `[]` |
| `frontend.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for frontend container(s) | `[]` |
| `frontend.sidecars` | Add additional sidecar containers to the frontend pod | `[]` |
| `frontend.initContainers` | Add additional init containers to the frontend pods | `[]` |
| `frontend.service.type` | Frontend service type | `ClusterIP` |
| `frontend.service.ports.http` | Frontend service HTTP port | `80` |
| `frontend.service.nodePorts.http` | Node port for HTTP | `""` |
| `frontend.service.clusterIP` | Frontend service Cluster IP | `""` |
| `frontend.service.loadBalancerIP` | Frontend service Load Balancer IP | `""` |
| `frontend.service.loadBalancerSourceRanges` | Frontend service Load Balancer sources | `[]` |
| `frontend.service.externalTrafficPolicy` | Frontend service external traffic policy | `Cluster` |
| `frontend.service.extraPorts` | Extra ports to expose (normally used with the `sidecar` value) | `[]` |
| `frontend.service.annotations` | Additional custom annotations for frontend service | `{}` |
| `frontend.service.sessionAffinity` | Session Affinity for Kubernetes service, can be `None` or `ClientIP` | `None` |
| `frontend.service.sessionAffinityConfig` | Additional settings for the sessionAffinity | `{}` |
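As an alternative to an Ingress, the frontend service can be published directly. A minimal sketch, assuming your cloud provider supports `LoadBalancer` services:

```yaml
# Sketch: publish the Kubeapps frontend through a cloud LoadBalancer
frontend:
  replicaCount: 2
  service:
    type: LoadBalancer
    ports:
      http: 80
    loadBalancerSourceRanges:
      - 10.0.0.0/8   # placeholder CIDR; restrict access to your network
```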

### Dashboard parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `dashboard.enabled` | Specifies whether the Kubeapps Dashboard should be deployed or not | `true` |
| `dashboard.image.registry` | Dashboard image registry | `REGISTRY_NAME` |
| `dashboard.image.repository` | Dashboard image repository | `REPOSITORY_NAME/kubeapps-dashboard` |
| `dashboard.image.digest` | Dashboard image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `dashboard.image.pullPolicy` | Dashboard image pull policy | `IfNotPresent` |
| `dashboard.image.pullSecrets` | Dashboard image pull secrets | `[]` |
| `dashboard.image.debug` | Enable image debug mode | `false` |
| `dashboard.customStyle` | Custom CSS injected to the Dashboard to customize Kubeapps look and feel | `""` |
| `dashboard.customAppViews` | Package names to signal a custom app view | `[]` |
| `dashboard.customComponents` | Custom Form components injected into the BasicDeploymentForm | `""` |
| `dashboard.remoteComponentsUrl` | Remote URL that can be used to load custom components vs loading from the local filesystem | `""` |
| `dashboard.skipAvailablePackageDetails` | Skip the package details view and go straight to the installation view of the latest version | `false` |
| `dashboard.customLocale` | Custom translations injected to the Dashboard to customize the strings used in Kubeapps | `""` |
| `dashboard.defaultTheme` | Default theme used in the Dashboard if the user has not selected any theme yet | `""` |
| `dashboard.replicaCount` | Number of Dashboard replicas to deploy | `2` |
| `dashboard.createNamespaceLabels` | Labels added to newly created namespaces | `{}` |
| `dashboard.updateStrategy.type` | Dashboard deployment strategy type | `RollingUpdate` |
| `dashboard.extraEnvVars` | Array with extra environment variables to add to the Dashboard container | `[]` |
| `dashboard.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the Dashboard container | `""` |
| `dashboard.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the Dashboard container | `""` |
| `dashboard.containerPorts.http` | Dashboard HTTP container port | `8080` |
| `dashboard.resources.limits.cpu` | The CPU limits for the Dashboard container | `250m` |
| `dashboard.resources.limits.memory` | The memory limits for the Dashboard container | `128Mi` |
| `dashboard.resources.requests.cpu` | The requested CPU for the Dashboard container | `25m` |
| `dashboard.resources.requests.memory` | The requested memory for the Dashboard container | `32Mi` |
| `dashboard.podSecurityContext.enabled` | Enable Dashboard pods' Security Context | `true` |
| `dashboard.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `dashboard.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `dashboard.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `dashboard.podSecurityContext.fsGroup` | Set Dashboard pod's Security Context fsGroup | `1001` |
| `dashboard.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `dashboard.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `dashboard.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `dashboard.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `dashboard.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `dashboard.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `dashboard.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `dashboard.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `dashboard.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `dashboard.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `dashboard.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `60` |
| `dashboard.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `dashboard.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `dashboard.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `dashboard.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `dashboard.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `dashboard.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `0` |
| `dashboard.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `dashboard.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `dashboard.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `dashboard.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `dashboard.startupProbe.enabled` | Enable startupProbe | `true` |
| `dashboard.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `dashboard.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `dashboard.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `dashboard.startupProbe.failureThreshold` | Failure threshold for startupProbe | `6` |
| `dashboard.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `dashboard.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `dashboard.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `dashboard.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `dashboard.lifecycleHooks` | Custom lifecycle hooks for Dashboard containers | `{}` |
| `dashboard.command` | Override default container command (useful when using custom images) | `[]` |
| `dashboard.args` | Override default container args (useful when using custom images) | `[]` |
| `dashboard.podLabels` | Extra labels for Dashboard pods | `{}` |
| `dashboard.podAnnotations` | Annotations for Dashboard pods | `{}` |
| `dashboard.podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `dashboard.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `dashboard.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `dashboard.nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set | `""` |
| `dashboard.nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set | `[]` |
| `dashboard.affinity` | Affinity for pod assignment | `{}` |
| `dashboard.nodeSelector` | Node labels for pod assignment | `{}` |
| `dashboard.tolerations` | Tolerations for pod assignment | `[]` |
| `dashboard.priorityClassName` | Priority class name for Dashboard pods | `""` |
| `dashboard.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `dashboard.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
| `dashboard.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `dashboard.hostAliases` | Custom host aliases for Dashboard pods | `[]` |
| `dashboard.extraVolumes` | Optionally specify extra list of additional volumes for Dashboard pods | `[]` |
| `dashboard.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for Dashboard container(s) | `[]` |
| `dashboard.sidecars` | Add additional sidecar containers to the Dashboard pod | `[]` |
| `dashboard.initContainers` | Add additional init containers to the Dashboard pods | `[]` |
| `dashboard.service.ports.http` | Dashboard service HTTP port | `8080` |
| `dashboard.service.annotations` | Additional custom annotations for Dashboard service | `{}` |
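The look-and-feel parameters above can be combined in a values file. A minimal sketch; the theme name and CSS rule are illustrative placeholders, not values defined by the chart:

```yaml
# Sketch: customize the Dashboard appearance
dashboard:
  defaultTheme: dark          # assumes "dark" is a valid theme identifier
  customStyle: |-
    /* hypothetical CSS override */
    .kubeapps-header { background-color: #1b2a4e; }
```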

### AppRepository Controller parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `apprepository.image.registry` | Kubeapps AppRepository Controller image registry | `REGISTRY_NAME` |
| `apprepository.image.repository` | Kubeapps AppRepository Controller image repository | `REPOSITORY_NAME/kubeapps-apprepository-controller` |
| `apprepository.image.digest` | Kubeapps AppRepository Controller image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `apprepository.image.pullPolicy` | Kubeapps AppRepository Controller image pull policy | `IfNotPresent` |
| `apprepository.image.pullSecrets` | Kubeapps AppRepository Controller image pull secrets | `[]` |
| `apprepository.syncImage.registry` | Kubeapps Asset Syncer image registry | `REGISTRY_NAME` |
| `apprepository.syncImage.repository` | Kubeapps Asset Syncer image repository | `REPOSITORY_NAME/kubeapps-asset-syncer` |
| `apprepository.syncImage.digest` | Kubeapps Asset Syncer image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `apprepository.syncImage.pullPolicy` | Kubeapps Asset Syncer image pull policy | `IfNotPresent` |
| `apprepository.syncImage.pullSecrets` | Kubeapps Asset Syncer image pull secrets | `[]` |
| `apprepository.globalReposNamespaceSuffix` | Suffix for the namespace of global repos in the Helm plugin. Defaults to empty for backwards compatibility. Ignored if `kubeappsapis.pluginConfig.helm.packages.v1alpha1.globalPackagingNamespace` is set. | `""` |
| `apprepository.initialRepos` | Initial chart repositories to configure | `[]` |
| `apprepository.customAnnotations` | Custom annotations to be added to each AppRepository-generated CronJob, Job and Pod | `{}` |
| `apprepository.customLabels` | Custom labels to be added to each AppRepository-generated CronJob, Job and Pod | `{}` |
| `apprepository.initialReposProxy.enabled` | Enables the proxy | `false` |
| `apprepository.initialReposProxy.httpProxy` | URL for the http proxy | `""` |
| `apprepository.initialReposProxy.httpsProxy` | URL for the https proxy | `""` |
| `apprepository.initialReposProxy.noProxy` | URL to exclude from using the proxy | `""` |
| `apprepository.crontab` | Default schedule for syncing App repositories (defaults to every 10 minutes) | `""` |
| `apprepository.watchAllNamespaces` | Watch all namespaces to support separate AppRepositories per namespace | `true` |
| `apprepository.extraFlags` | Additional command line flags for AppRepository Controller | `[]` |
| `apprepository.replicaCount` | Number of AppRepository Controller replicas to deploy | `1` |
| `apprepository.updateStrategy.type` | AppRepository Controller deployment strategy type | `RollingUpdate` |
| `apprepository.resources.limits.cpu` | The CPU limits for the AppRepository Controller container | `250m` |
| `apprepository.resources.limits.memory` | The memory limits for the AppRepository Controller container | `128Mi` |
| `apprepository.resources.requests.cpu` | The requested CPU for the AppRepository Controller container | `25m` |
| `apprepository.resources.requests.memory` | The requested memory for the AppRepository Controller container | `32Mi` |
| `apprepository.podSecurityContext.enabled` | Enable AppRepository Controller pods' Security Context | `true` |
| `apprepository.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `apprepository.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `apprepository.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `apprepository.podSecurityContext.fsGroup` | Set AppRepository Controller pod's Security Context fsGroup | `1001` |
| `apprepository.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `apprepository.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `apprepository.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `apprepository.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `apprepository.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `apprepository.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `apprepository.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `apprepository.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `apprepository.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `apprepository.lifecycleHooks` | Custom lifecycle hooks for AppRepository Controller containers | `{}` |
| `apprepository.command` | Override default container command (useful when using custom images) | `[]` |
| `apprepository.args` | Override default container args (useful when using custom images) | `[]` |
| `apprepository.extraEnvVars` | Array with extra environment variables to add to AppRepository Controller pod(s) | `[]` |
| `apprepository.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for AppRepository Controller pod(s) | `""` |
| `apprepository.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for AppRepository Controller pod(s) | `""` |
| `apprepository.extraVolumes` | Optionally specify extra list of additional volumes for the AppRepository Controller pod(s) | `[]` |
| `apprepository.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the AppRepository Controller container(s) | `[]` |
| `apprepository.podLabels` | Extra labels for AppRepository Controller pods | `{}` |
| `apprepository.podAnnotations` | Annotations for AppRepository Controller pods | `{}` |
| `apprepository.podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `apprepository.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `apprepository.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `apprepository.nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set | `""` |
| `apprepository.nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set | `[]` |
| `apprepository.affinity` | Affinity for pod assignment | `{}` |
| `apprepository.nodeSelector` | Node labels for pod assignment | `{}` |
| `apprepository.tolerations` | Tolerations for pod assignment | `[]` |
| `apprepository.priorityClassName` | Priority class name for AppRepository Controller pods | `""` |
| `apprepository.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `apprepository.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
| `apprepository.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `apprepository.hostAliases` | Custom host aliases for AppRepository Controller pods | `[]` |
| `apprepository.sidecars` | Add additional sidecar containers to the AppRepository Controller pod(s) | `[]` |
| `apprepository.initContainers` | Add additional init containers to the AppRepository Controller pod(s) | `[]` |
| `apprepository.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `apprepository.serviceAccount.name` | Name of the service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | `""` |
| `apprepository.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `false` |
| `apprepository.serviceAccount.annotations` | Annotations for service account. Evaluated as a template. Only used if `create` is `true`. | `{}` |
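For example, `apprepository.initialRepos` can register chart repositories at install time. A sketch, assuming each entry takes a `name` and `url` field (check the chart's default `values.yaml` for the exact schema of an entry):

```yaml
# Sketch: register an extra chart repository and slow down the sync schedule
apprepository:
  initialRepos:
    - name: bitnami
      url: https://charts.bitnami.com/bitnami   # example public repository
  crontab: "*/30 * * * *"   # sync every 30 minutes instead of the 10-minute default
```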

### Auth Proxy parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `authProxy.enabled` | Specifies whether Kubeapps should configure OAuth login/logout | `false` |
| `authProxy.image.registry` | OAuth2 Proxy image registry | `REGISTRY_NAME` |
| `authProxy.image.repository` | OAuth2 Proxy image repository | `REPOSITORY_NAME/oauth2-proxy` |
| `authProxy.image.digest` | OAuth2 Proxy image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `authProxy.image.pullPolicy` | OAuth2 Proxy image pull policy | `IfNotPresent` |
| `authProxy.image.pullSecrets` | OAuth2 Proxy image pull secrets | `[]` |
| `authProxy.external` | Use an external Auth Proxy instead of deploying one with this chart | `false` |
| `authProxy.oauthLoginURI` | OAuth Login URI to which the Kubeapps frontend redirects for authn | `/oauth2/start` |
| `authProxy.oauthLogoutURI` | OAuth Logout URI to which the Kubeapps frontend redirects for authn | `/oauth2/sign_out` |
| `authProxy.skipKubeappsLoginPage` | Skip the Kubeapps login page when using OIDC and directly redirect to the IdP | `false` |
| `authProxy.provider` | OAuth provider | `""` |
| `authProxy.clientID` | OAuth Client ID | `""` |
| `authProxy.clientSecret` | OAuth Client secret | `""` |
| `authProxy.cookieSecret` | Secret used by oauth2-proxy to encrypt any credentials | `""` |
| `authProxy.existingOauth2Secret` | Name of an existing secret containing the OAuth client secrets; it should contain the keys `clientID`, `clientSecret`, and `cookieSecret` | `""` |
| `authProxy.cookieRefresh` | Duration after which to refresh the cookie | `2m` |
| `authProxy.scope` | OAuth scope specification | `openid email groups` |
| `authProxy.emailDomain` | Allowed email domains | `*` |
| `authProxy.extraFlags` | Additional command line flags for oauth2-proxy | `[]` |
| `authProxy.lifecycleHooks` | Custom lifecycle hooks for the Auth Proxy container(s) to automate configuration before or after startup | `{}` |
| `authProxy.command` | Override default container command (useful when using custom images) | `[]` |
| `authProxy.args` | Override default container args (useful when using custom images) | `[]` |
| `authProxy.extraEnvVars` | Array with extra environment variables to add to the Auth Proxy container | `[]` |
| `authProxy.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for Auth Proxy container(s) | `""` |
| `authProxy.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for Auth Proxy container(s) | `""` |
| `authProxy.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Auth Proxy container(s) | `[]` |
| `authProxy.containerPorts.proxy` | Auth Proxy HTTP container port | `3000` |
| `authProxy.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `authProxy.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `authProxy.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `authProxy.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `authProxy.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `authProxy.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `authProxy.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `authProxy.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `authProxy.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `authProxy.resources.limits.cpu` | The CPU limits for the OAuth2 Proxy container | `250m` |
| `authProxy.resources.limits.memory` | The memory limits for the OAuth2 Proxy container | `128Mi` |
| `authProxy.resources.requests.cpu` | The requested CPU for the OAuth2 Proxy container | `25m` |
| `authProxy.resources.requests.memory` | The requested memory for the OAuth2 Proxy container | `32Mi` |
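A minimal OIDC sketch using these parameters; the client ID, secrets, and issuer URL are all placeholders that must come from your identity provider (the `--oidc-issuer-url` flag is a standard oauth2-proxy flag passed through `extraFlags`):

```yaml
# Sketch: OIDC login via the bundled oauth2-proxy; all values are placeholders
authProxy:
  enabled: true
  provider: oidc
  clientID: kubeapps-client            # placeholder client ID from your IdP
  clientSecret: REPLACE_ME             # placeholder client secret
  cookieSecret: REPLACE_WITH_RANDOM    # random string used to encrypt cookies
  extraFlags:
    - --oidc-issuer-url=https://idp.example.com/realms/main   # placeholder issuer
```

For production, prefer `authProxy.existingOauth2Secret` so the credentials live in a Kubernetes Secret rather than in the values file.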

### Pinniped Proxy parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| `pinnipedProxy.enabled` | Specifies whether Kubeapps should configure Pinniped Proxy | `false` |
| `pinnipedProxy.image.registry` | Pinniped Proxy image registry | `REGISTRY_NAME` |
| `pinnipedProxy.image.repository` | Pinniped Proxy image repository | `REPOSITORY_NAME/kubeapps-pinniped-proxy` |
| `pinnipedProxy.image.digest` | Pinniped Proxy image digest in the way `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `pinnipedProxy.image.pullPolicy` | Pinniped Proxy image pull policy | `IfNotPresent` |
| `pinnipedProxy.image.pullSecrets` | Pinniped Proxy image pull secrets | `[]` |
| `pinnipedProxy.defaultPinnipedNamespace` | Namespace in which Pinniped Concierge is installed | `pinniped-concierge` |
| `pinnipedProxy.defaultAuthenticatorType` | Authenticator type | `JWTAuthenticator` |
| `pinnipedProxy.defaultAuthenticatorName` | Authenticator name | `jwt-authenticator` |
| `pinnipedProxy.defaultPinnipedAPISuffix` | API suffix | `pinniped.dev` |
| `pinnipedProxy.tls.existingSecret` | TLS secret with which to proxy requests | `""` |
| `pinnipedProxy.tls.caCertificate` | TLS CA cert ConfigMap which clients of Pinniped Proxy should use with TLS requests | `""` |
| `pinnipedProxy.lifecycleHooks` | Custom lifecycle hooks for the Pinniped Proxy container(s) to automate configuration before or after startup | `{}` |
| `pinnipedProxy.command` | Override default container command (useful when using custom images) | `[]` |
| `pinnipedProxy.args` | Override default container args (useful when using custom images) | `[]` |
| `pinnipedProxy.extraEnvVars` | Array with extra environment variables to add to Pinniped Proxy container(s) | `[]` |
| `pinnipedProxy.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for Pinniped Proxy container(s) | `""` |
| `pinnipedProxy.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for Pinniped Proxy container(s) | `""` |
| `pinnipedProxy.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the Pinniped Proxy container(s) | `[]` |
| `pinnipedProxy.containerPorts.pinnipedProxy` | Pinniped Proxy container port | `3333` |
| `pinnipedProxy.containerSecurityContext.enabled` | Enable containers' Security Context | `true` |
| `pinnipedProxy.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `pinnipedProxy.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `pinnipedProxy.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `pinnipedProxy.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `pinnipedProxy.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `pinnipedProxy.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `pinnipedProxy.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `pinnipedProxy.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `pinnipedProxy.resources.limits.cpu` | The CPU limits for the Pinniped Proxy container | `250m` |
| `pinnipedProxy.resources.limits.memory` | The memory limits for the Pinniped Proxy container | `128Mi` |
| `pinnipedProxy.resources.requests.cpu` | The requested CPU for the Pinniped Proxy container | `25m` |
| `pinnipedProxy.resources.requests.memory` | The requested memory for the Pinniped Proxy container | `32Mi` |
| `pinnipedProxy.service.ports.pinnipedProxy` | Pinniped Proxy service port | `3333` |
| `pinnipedProxy.service.annotations` | Additional custom annotations for Pinniped Proxy service | `{}` |

Other Parameters ¶

| Name | Description | Value |
| ---- | ----------- | ----- |
| `clusters` | List of clusters that Kubeapps can target for deployments | `[]` |
| `rbac.create` | Specifies whether RBAC resources should be created | `true` |
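For reference, a minimal sketch of a `clusters` entry for a multicluster setup. The field names (`name`, `apiServiceURL`, `certificateAuthorityData`) follow the Kubeapps multicluster documentation; the hostname and placeholder values below are illustrative only, and additional fields (such as a service token) are described in those docs:

```yaml
# custom-values.yaml (sketch): target an additional cluster besides the one
# hosting Kubeapps. Values shown are placeholders, not working credentials.
clusters:
  - name: default                 # the cluster on which Kubeapps itself runs
  - name: second-cluster          # an additional target cluster (illustrative name)
    apiServiceURL: https://second-cluster.example.com:6443
    certificateAuthorityData: <base64-encoded CA certificate>
```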

Feature flags ¶

| Name | Description | Value |
| ---- | ----------- | ----- |
| `featureFlags.apiOnly.enabled` | Enable ingress for API operations only. Access to "/" will not be possible, so the Dashboard will be unusable. | `false` |
| `featureFlags.apiOnly.grpc.annotations` | Specific annotations for the gRPC ingress in API-only mode | `{}` |
| `featureFlags.operators` | Enable support for Operators in Kubeapps | `false` |
| `featureFlags.schemaEditor.enabled` | Enable a visual editor for customizing the package schemas | `false` |

Database Parameters ¶

| Name | Description | Value |
| ---- | ----------- | ----- |
| `postgresql.enabled` | Deploy a PostgreSQL server to satisfy the application's database requirements | `true` |
| `postgresql.auth.username` | Username for PostgreSQL server | `postgres` |
| `postgresql.auth.postgresPassword` | Password for the `postgres` user | `""` |
| `postgresql.auth.database` | Name for a custom database to create | `assets` |
| `postgresql.auth.existingSecret` | Name of existing secret to use for PostgreSQL credentials | `""` |
| `postgresql.primary.persistence.enabled` | Enable PostgreSQL Primary data persistence using PVC | `false` |
| `postgresql.architecture` | PostgreSQL architecture (`standalone` or `replication`) | `standalone` |
| `postgresql.securityContext.enabled` | Enabled PostgreSQL replicas pods' Security Context | `false` |
| `postgresql.resources.limits` | The resources limits for the PostgreSQL container | `{}` |
| `postgresql.resources.requests.cpu` | The requested CPU for the PostgreSQL container | `250m` |
| `postgresql.resources.requests.memory` | The requested memory for the PostgreSQL container | `256Mi` |

kubeappsapis parameters ¶

| Name | Description | Value |
| ---- | ----------- | ----- |
| `kubeappsapis.enabledPlugins` | Manually override which plugins are enabled for the Kubeapps-APIs service | `[]` |
| `kubeappsapis.pluginConfig.core.packages.v1alpha1.versionsInSummary.major` | Number of major versions to display in the summary | `3` |
| `kubeappsapis.pluginConfig.core.packages.v1alpha1.versionsInSummary.minor` | Number of minor versions to display in the summary | `3` |
| `kubeappsapis.pluginConfig.core.packages.v1alpha1.versionsInSummary.patch` | Number of patch versions to display in the summary | `3` |
| `kubeappsapis.pluginConfig.core.packages.v1alpha1.timeoutSeconds` | Value to wait for Kubernetes commands to complete | `300` |
| `kubeappsapis.pluginConfig.helm.packages.v1alpha1.globalPackagingNamespace` | Custom global packaging namespace. Setting this value overrides the default "kubeapps release namespace + suffix" pattern and creates a new namespace if it does not exist. | `""` |
| `kubeappsapis.pluginConfig.kappController.packages.v1alpha1.defaultUpgradePolicy` | Default upgrade policy for generating version constraints | `none` |
| `kubeappsapis.pluginConfig.kappController.packages.v1alpha1.defaultPrereleasesVersionSelection` | Default policy for allowing prereleases containing one of the identifiers | `nil` |
| `kubeappsapis.pluginConfig.kappController.packages.v1alpha1.defaultAllowDowngrades` | Default policy for allowing applications to be downgraded to previous versions | `false` |
| `kubeappsapis.pluginConfig.kappController.packages.v1alpha1.globalPackagingNamespace` | Default global packaging namespace | `kapp-controller-packaging-global` |
| `kubeappsapis.pluginConfig.flux.packages.v1alpha1.defaultUpgradePolicy` | Default upgrade policy for generating version constraints | `none` |
| `kubeappsapis.pluginConfig.flux.packages.v1alpha1.noCrossNamespaceRefs` | Enable this flag to disallow cross-namespace references, useful when running Flux on multi-tenant clusters | `false` |
| `kubeappsapis.pluginConfig.resources.packages.v1alpha1.trustedNamespaces.headerName` | Optional header name for trusted namespaces | `""` |
| `kubeappsapis.pluginConfig.resources.packages.v1alpha1.trustedNamespaces.headerPattern` | Optional header pattern for trusted namespaces | `""` |
| `kubeappsapis.image.registry` | Kubeapps-APIs image registry | `REGISTRY_NAME` |
| `kubeappsapis.image.repository` | Kubeapps-APIs image repository | `REPOSITORY_NAME/kubeapps-apis` |
| `kubeappsapis.image.digest` | Kubeapps-APIs image digest in the form `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `kubeappsapis.image.pullPolicy` | Kubeapps-APIs image pull policy | `IfNotPresent` |
| `kubeappsapis.image.pullSecrets` | Kubeapps-APIs image pull secrets | `[]` |
| `kubeappsapis.replicaCount` | Number of Kubeapps-APIs replicas to deploy | `2` |
| `kubeappsapis.updateStrategy.type` | KubeappsAPIs deployment strategy type | `RollingUpdate` |
| `kubeappsapis.extraFlags` | Additional command line flags for KubeappsAPIs | `[]` |
| `kubeappsapis.qps` | KubeappsAPIs Kubernetes API client QPS limit | `50.0` |
| `kubeappsapis.burst` | KubeappsAPIs Kubernetes API client Burst limit | `100` |
| `kubeappsapis.terminationGracePeriodSeconds` | The grace period (in seconds) allowed for shutdown after SIGTERM | `300` |
| `kubeappsapis.extraEnvVars` | Array with extra environment variables to add to the KubeappsAPIs container | `[]` |
| `kubeappsapis.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the KubeappsAPIs container | `""` |
| `kubeappsapis.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the KubeappsAPIs container | `""` |
| `kubeappsapis.containerPorts.http` | KubeappsAPIs HTTP container port | `50051` |
| `kubeappsapis.resources.limits.cpu` | The CPU limits for the KubeappsAPIs container | `250m` |
| `kubeappsapis.resources.limits.memory` | The memory limits for the KubeappsAPIs container | `256Mi` |
| `kubeappsapis.resources.requests.cpu` | The requested CPU for the KubeappsAPIs container | `25m` |
| `kubeappsapis.resources.requests.memory` | The requested memory for the KubeappsAPIs container | `32Mi` |
| `kubeappsapis.podSecurityContext.enabled` | Enabled KubeappsAPIs pods' Security Context | `true` |
| `kubeappsapis.podSecurityContext.fsGroupChangePolicy` | Set filesystem group change policy | `Always` |
| `kubeappsapis.podSecurityContext.sysctls` | Set kernel settings using the sysctl interface | `[]` |
| `kubeappsapis.podSecurityContext.supplementalGroups` | Set filesystem extra groups | `[]` |
| `kubeappsapis.podSecurityContext.fsGroup` | Set KubeappsAPIs pod's Security Context fsGroup | `1001` |
| `kubeappsapis.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `kubeappsapis.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `kubeappsapis.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `kubeappsapis.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `kubeappsapis.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `kubeappsapis.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `kubeappsapis.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `kubeappsapis.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `kubeappsapis.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `kubeappsapis.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `kubeappsapis.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `60` |
| `kubeappsapis.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `kubeappsapis.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `kubeappsapis.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `kubeappsapis.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `kubeappsapis.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `kubeappsapis.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `0` |
| `kubeappsapis.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `kubeappsapis.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `kubeappsapis.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `kubeappsapis.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `kubeappsapis.startupProbe.enabled` | Enable startupProbe | `false` |
| `kubeappsapis.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `kubeappsapis.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `kubeappsapis.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `kubeappsapis.startupProbe.failureThreshold` | Failure threshold for startupProbe | `6` |
| `kubeappsapis.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `kubeappsapis.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `kubeappsapis.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `kubeappsapis.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `kubeappsapis.lifecycleHooks` | Custom lifecycle hooks for KubeappsAPIs containers | `{}` |
| `kubeappsapis.command` | Override default container command (useful when using custom images) | `[]` |
| `kubeappsapis.args` | Override default container args (useful when using custom images) | `[]` |
| `kubeappsapis.extraVolumes` | Optionally specify extra list of additional volumes for the KubeappsAPIs pod(s) | `[]` |
| `kubeappsapis.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the KubeappsAPIs container(s) | `[]` |
| `kubeappsapis.podLabels` | Extra labels for KubeappsAPIs pods | `{}` |
| `kubeappsapis.podAnnotations` | Annotations for KubeappsAPIs pods | `{}` |
| `kubeappsapis.podAffinityPreset` | Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `kubeappsapis.podAntiAffinityPreset` | Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `soft` |
| `kubeappsapis.nodeAffinityPreset.type` | Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` | `""` |
| `kubeappsapis.nodeAffinityPreset.key` | Node label key to match. Ignored if `affinity` is set | `""` |
| `kubeappsapis.nodeAffinityPreset.values` | Node label values to match. Ignored if `affinity` is set | `[]` |
| `kubeappsapis.affinity` | Affinity for pod assignment | `{}` |
| `kubeappsapis.nodeSelector` | Node labels for pod assignment | `{}` |
| `kubeappsapis.tolerations` | Tolerations for pod assignment | `[]` |
| `kubeappsapis.priorityClassName` | Priority class name for KubeappsAPIs pods | `""` |
| `kubeappsapis.schedulerName` | Name of the k8s scheduler (other than default) | `""` |
| `kubeappsapis.topologySpreadConstraints` | Topology Spread Constraints for pod assignment | `[]` |
| `kubeappsapis.automountServiceAccountToken` | Mount Service Account token in pod | `true` |
| `kubeappsapis.hostAliases` | Custom host aliases for KubeappsAPIs pods | `[]` |
| `kubeappsapis.sidecars` | Add additional sidecar containers to the KubeappsAPIs pod(s) | `[]` |
| `kubeappsapis.initContainers` | Add additional init containers to the KubeappsAPIs pod(s) | `[]` |
| `kubeappsapis.service.ports.http` | KubeappsAPIs service HTTP port | `8080` |
| `kubeappsapis.service.annotations` | Additional custom annotations for KubeappsAPIs service | `{}` |
| `kubeappsapis.serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `kubeappsapis.serviceAccount.name` | Name of the service account to use. If not set and `create` is `true`, a name is generated using the fullname template. | `""` |
| `kubeappsapis.serviceAccount.automountServiceAccountToken` | Automount service account token for the server service account | `false` |
| `kubeappsapis.serviceAccount.annotations` | Annotations for service account. Evaluated as a template. Only used if `create` is `true`. | `{}` |

OCI Catalog chart configuration ¶

| Name | Description | Value |
| ---- | ----------- | ----- |
| `ociCatalog.enabled` | Enable the OCI catalog gRPC service for cataloging | `false` |
| `ociCatalog.image.registry` | OCI Catalog image registry | `REGISTRY_NAME` |
| `ociCatalog.image.repository` | OCI Catalog image repository | `REPOSITORY_NAME/kubeapps-oci-catalog` |
| `ociCatalog.image.digest` | OCI Catalog image digest in the form `sha256:aa...`. Please note this parameter, if set, will override the tag | `""` |
| `ociCatalog.image.pullPolicy` | OCI Catalog image pull policy | `IfNotPresent` |
| `ociCatalog.image.pullSecrets` | OCI Catalog image pull secrets | `[]` |
| `ociCatalog.image.debug` | Enable image debug mode | `false` |
| `ociCatalog.extraFlags` | Additional command line flags for OCI Catalog | `[]` |
| `ociCatalog.extraEnvVars` | Array with extra environment variables to add to the OCI Catalog container | `[]` |
| `ociCatalog.extraEnvVarsCM` | Name of existing ConfigMap containing extra env vars for the OCI Catalog container | `""` |
| `ociCatalog.extraEnvVarsSecret` | Name of existing Secret containing extra env vars for the OCI Catalog container | `""` |
| `ociCatalog.containerPorts.grpc` | OCI Catalog gRPC container port | `50061` |
| `ociCatalog.resources.limits.cpu` | The CPU limits for the OCI Catalog container | `250m` |
| `ociCatalog.resources.limits.memory` | The memory limits for the OCI Catalog container | `256Mi` |
| `ociCatalog.resources.requests.cpu` | The requested CPU for the OCI Catalog container | `25m` |
| `ociCatalog.resources.requests.memory` | The requested memory for the OCI Catalog container | `32Mi` |
| `ociCatalog.containerSecurityContext.enabled` | Enabled containers' Security Context | `true` |
| `ociCatalog.containerSecurityContext.seLinuxOptions` | Set SELinux options in container | `nil` |
| `ociCatalog.containerSecurityContext.runAsUser` | Set containers' Security Context runAsUser | `1001` |
| `ociCatalog.containerSecurityContext.runAsNonRoot` | Set container's Security Context runAsNonRoot | `true` |
| `ociCatalog.containerSecurityContext.privileged` | Set container's Security Context privileged | `false` |
| `ociCatalog.containerSecurityContext.readOnlyRootFilesystem` | Set container's Security Context readOnlyRootFilesystem | `false` |
| `ociCatalog.containerSecurityContext.allowPrivilegeEscalation` | Set container's Security Context allowPrivilegeEscalation | `false` |
| `ociCatalog.containerSecurityContext.capabilities.drop` | List of capabilities to be dropped | `["ALL"]` |
| `ociCatalog.containerSecurityContext.seccompProfile.type` | Set container's Security Context seccomp profile | `RuntimeDefault` |
| `ociCatalog.livenessProbe.enabled` | Enable livenessProbe | `true` |
| `ociCatalog.livenessProbe.initialDelaySeconds` | Initial delay seconds for livenessProbe | `60` |
| `ociCatalog.livenessProbe.periodSeconds` | Period seconds for livenessProbe | `10` |
| `ociCatalog.livenessProbe.timeoutSeconds` | Timeout seconds for livenessProbe | `5` |
| `ociCatalog.livenessProbe.failureThreshold` | Failure threshold for livenessProbe | `6` |
| `ociCatalog.livenessProbe.successThreshold` | Success threshold for livenessProbe | `1` |
| `ociCatalog.readinessProbe.enabled` | Enable readinessProbe | `true` |
| `ociCatalog.readinessProbe.initialDelaySeconds` | Initial delay seconds for readinessProbe | `0` |
| `ociCatalog.readinessProbe.periodSeconds` | Period seconds for readinessProbe | `10` |
| `ociCatalog.readinessProbe.timeoutSeconds` | Timeout seconds for readinessProbe | `5` |
| `ociCatalog.readinessProbe.failureThreshold` | Failure threshold for readinessProbe | `6` |
| `ociCatalog.readinessProbe.successThreshold` | Success threshold for readinessProbe | `1` |
| `ociCatalog.startupProbe.enabled` | Enable startupProbe | `false` |
| `ociCatalog.startupProbe.initialDelaySeconds` | Initial delay seconds for startupProbe | `0` |
| `ociCatalog.startupProbe.periodSeconds` | Period seconds for startupProbe | `10` |
| `ociCatalog.startupProbe.timeoutSeconds` | Timeout seconds for startupProbe | `5` |
| `ociCatalog.startupProbe.failureThreshold` | Failure threshold for startupProbe | `6` |
| `ociCatalog.startupProbe.successThreshold` | Success threshold for startupProbe | `1` |
| `ociCatalog.customLivenessProbe` | Custom livenessProbe that overrides the default one | `{}` |
| `ociCatalog.customReadinessProbe` | Custom readinessProbe that overrides the default one | `{}` |
| `ociCatalog.customStartupProbe` | Custom startupProbe that overrides the default one | `{}` |
| `ociCatalog.lifecycleHooks` | Custom lifecycle hooks for OCI Catalog containers | `{}` |
| `ociCatalog.command` | Override default container command (useful when using custom images) | `[]` |
| `ociCatalog.args` | Override default container args (useful when using custom images) | `[]` |
| `ociCatalog.extraVolumes` | Optionally specify extra list of additional volumes for the OCI Catalog pod(s) | `[]` |
| `ociCatalog.extraVolumeMounts` | Optionally specify extra list of additional volumeMounts for the OCI Catalog container(s) | `[]` |

Redis® chart configuration ¶

| Name | Description | Value |
| ---- | ----------- | ----- |
| `redis.auth.enabled` | Enable password authentication | `true` |
| `redis.auth.password` | Redis® password | `""` |
| `redis.auth.existingSecret` | The name of an existing secret with Redis® credentials | `""` |
| `redis.architecture` | Redis® architecture (`standalone` or `replication`) | `standalone` |
| `redis.master.extraFlags` | Array with additional command line flags for Redis® master | `["--maxmemory 200mb","--maxmemory-policy allkeys-lru"]` |
| `redis.master.disableCommands` | Array with commands to deactivate on Redis® | `[]` |
| `redis.master.persistence.enabled` | Enable Redis® master data persistence using PVC | `false` |
| `redis.replica.replicaCount` | Number of Redis® replicas to deploy | `1` |
| `redis.replica.extraFlags` | Array with additional command line flags for Redis® replicas | `["--maxmemory 200mb","--maxmemory-policy allkeys-lru"]` |
| `redis.replica.disableCommands` | Array with commands to deactivate on Redis® | `[]` |
| `redis.replica.persistence.enabled` | Enable Redis® replica data persistence using PVC | `false` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

helm install kubeapps --namespace kubeapps \
  --set ingress.enabled=true \
  oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The above command enables an Ingress Rule to expose Kubeapps.

Alternatively, a YAML file that specifies the values for parameters can be provided while installing the chart. For example,

helm install kubeapps --namespace kubeapps -f custom-values.yaml oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Configuration and installation details ¶

Configuring Initial Repositories ¶

By default, Kubeapps will track the Bitnami Application Catalog . To change this default, override the apprepository.initialRepos object present in the values.yaml file with your desired repositories.
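As an illustration, a minimal sketch of overriding the initial repositories in a custom values file (the repository name and URL below are examples, not requirements):

```yaml
# custom-values.yaml (sketch): replace the default initial repositories
apprepository:
  initialRepos:
    - name: bitnami                            # display name of the repository
      url: https://charts.bitnami.com/bitnami  # index URL of the Helm repository
```

Pass the file at install time with `helm install ... -f custom-values.yaml`.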

Enabling Operators ¶

Since v1.9.0 (and by default since v2.0), Kubeapps supports deploying and managing Operators within its dashboard. More information about how to enable and use this feature can be found in this guide .

Exposing Externally ¶

Note: The Kubeapps frontend sets up a proxy to the Kubernetes API service, which means that when exposing the Kubeapps service to a network external to the Kubernetes cluster (perhaps on an internal or public network), the Kubernetes API will also be exposed for authenticated requests from that network. It is highly recommended that you use an OAuth2/OIDC provider with Kubeapps to ensure that your authentication proxy is exposed rather than the Kubeapps frontend. This ensures that only the configured users trusted by your Identity Provider will be able to reach the Kubeapps frontend and therefore the Kubernetes API. Kubernetes service token authentication should be used for demonstration purposes only, never in production environments.

LoadBalancer Service ¶

The simplest way to expose the Kubeapps Dashboard is to assign a LoadBalancer type to the Kubeapps frontend Service. For example, you can use the following parameter: frontend.service.type=LoadBalancer

Wait for your cluster to assign a LoadBalancer IP or Hostname to the kubeapps Service and access it on that address:

kubectl get services --namespace kubeapps --watch
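Once provisioned, the external address can also be read directly from the Service status. A sketch, assuming the default service name and namespace used in this guide (some cloud providers populate `hostname` instead of `ip`):

```shell
# Print the LoadBalancer IP assigned to the Kubeapps frontend Service
kubectl get service kubeapps --namespace kubeapps \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```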
Ingress ¶

This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can use it to serve your application.

To enable ingress integration, set ingress.enabled to true.

Hosts ¶

Most likely you will only want to have one hostname that maps to this Kubeapps installation (use the ingress.hostname parameter to set the hostname), however, it is possible to have more than one host. To facilitate this, the ingress.extraHosts object is an array.
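A sketch of serving the same installation under an additional hostname via ingress.extraHosts (hostnames below are illustrative; the entry fields follow the chart's usual name/path convention):

```yaml
# custom-values.yaml (sketch): one primary hostname plus one extra host
ingress:
  enabled: true
  hostname: kubeapps.example.com        # primary hostname
  extraHosts:
    - name: kubeapps.internal.example.com  # additional hostname
      path: /
```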

Annotations ¶

For annotations, please see this document . Not all annotations are supported by all ingress controllers, but this document does a good job of indicating which annotation is supported by many popular ingress controllers. Annotations can be set using ingress.annotations.

TLS ¶

This chart will facilitate the creation of TLS secrets for use with the ingress controller, however, this is not required. There are four common use cases:

  • Helm generates/manages certificate secrets based on the parameters.
  • The user generates/manages certificates separately.
  • Helm creates self-signed certificates and generates/manages certificate secrets.
  • An additional tool (like cert-manager ) manages the secrets for the application.

In the first two cases, a certificate and a key are needed. We would expect them to look like this:

  • certificate files should look like (and there can be more than one certificate if there is a certificate chain)

    -----BEGIN CERTIFICATE-----
    MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
    ...
    jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7
    -----END CERTIFICATE-----
    
  • keys should look like:

    -----BEGIN RSA PRIVATE KEY-----
    MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4
    ...
    wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc=
    -----END RSA PRIVATE KEY-----
    
  • If you are going to use Helm to manage the certificates based on the parameters, please copy these values into the certificate and key values for a given ingress.secrets entry.

  • In case you are going to manage TLS secrets separately, please know that you must use a TLS secret with name INGRESS_HOSTNAME-tls (where INGRESS_HOSTNAME is a placeholder to be replaced with the hostname you set using the ingress.hostname parameter).

  • To use self-signed certificates created by Helm, set both ingress.tls and ingress.selfSigned to true.

  • If your cluster has a cert-manager add-on to automate the management and issuance of TLS certificates, set ingress.certManager boolean to true to enable the corresponding annotations for cert-manager.
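As an illustration of the first case, a values sketch wiring a certificate and key into an ingress.secrets entry. The hostname is a placeholder and the PEM bodies are elided; the entry fields (name, certificate, key) follow the chart convention described above:

```yaml
# custom-values.yaml (sketch): Helm-managed TLS secret for the ingress
ingress:
  enabled: true
  hostname: kubeapps.example.com
  tls: true
  secrets:
    - name: kubeapps.example.com-tls    # INGRESS_HOSTNAME-tls naming convention
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----
```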

Upgrading Kubeapps ¶

You can upgrade Kubeapps from the Kubeapps web interface. Select the namespace in which Kubeapps is installed (kubeapps if you followed the instructions in this guide) and click on the “Upgrade” button. Select the new version and confirm.

You can also use the Helm CLI to upgrade Kubeapps, first ensure you have updated your local chart repository cache:

helm repo update

Now upgrade Kubeapps:

export RELEASE_NAME=kubeapps
helm upgrade $RELEASE_NAME oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

If you find issues upgrading Kubeapps, check the troubleshooting section.

To 14.0.0 ¶

This major updates the PostgreSQL subchart to its newest major, 13.0.0. Here you can find more information about the changes introduced in that version.

To 13.0.0 ¶

This major updates the Redis® subchart to its newest major, 18.0.0. Here you can find more information about the changes introduced in that version.

NOTE: Due to an error in our release process, Redis®’ chart versions higher or equal than 17.15.4 already use Redis® 7.2 by default.

To 12.0.0 ¶

This major updates the PostgreSQL subchart to its newest major, 12.0.0. Here you can find more information about the changes introduced in that version.

Uninstalling the Chart ¶

To uninstall/delete the kubeapps deployment:

helm uninstall -n kubeapps kubeapps

# Optional: only if there are no more instances of Kubeapps
kubectl delete crd apprepositories.kubeapps.com

The first command removes most of the Kubernetes components associated with the chart and deletes the release. After that, if there are no more instances of Kubeapps in the cluster you can manually delete the apprepositories.kubeapps.com CRD used by Kubeapps that is shared for the entire cluster.

NOTE: Deleting the apprepositories.kubeapps.com CRD will delete the repositories for all installed instances of Kubeapps, breaking any remaining Kubeapps installations in the cluster.

If you have dedicated a namespace only to Kubeapps, you can completely clean the remaining completed/failed jobs or any stale resources by deleting the namespace:

kubectl delete namespace kubeapps

FAQ ¶

How to install Kubeapps for demo purposes? ¶

Install Kubeapps exclusively for demo purposes by simply following the getting started docs.

How to install Kubeapps in production scenarios? ¶

For any user-facing installation, you should configure an OAuth2/OIDC provider to enable secure user authentication with Kubeapps and the cluster. Please also refer to the Access Control documentation to configure fine-grained access control for users.

How to use Kubeapps? ¶

Have a look at the dashboard documentation to learn how to use the Kubeapps dashboard: deploying applications, listing and removing the applications running in your cluster, and adding new repositories.

How to configure Kubeapps with Ingress? ¶

The example below will match the URL http://example.com to the Kubeapps dashboard. For further configuration, please refer to your specific Ingress configuration docs (e.g., NGINX or HAProxy ).

helm install kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps \
  --set ingress.enabled=true \
  --set ingress.hostname=example.com \
  --set ingress.annotations."kubernetes\.io/ingress\.class"=nginx # or your preferred ingress controller

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

If you are using LDAP via Dex with OIDC, or you are getting an error message like upstream sent too big header while reading response header from upstream, it means the cookie size is too big and can't be processed by the Ingress Controller. You can work around this problem by setting the following NGINX ingress annotations (look for similar annotations in your preferred Ingress Controller):

  # rest of the helm install ... command
  --set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-read-timeout"=600
  --set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-buffer-size"=8k
  --set ingress.annotations."nginx\.ingress\.kubernetes\.io/proxy-buffers"=4
Serving Kubeapps in a subpath ¶

You may want to serve Kubeapps under a subpath, for instance http://example.com/subpath; to do so, you have to set the proper Ingress configuration. If you are using the ingress configuration provided by the Kubeapps chart, you will have to set the ingress.hostname and ingress.path parameters:

helm install kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps \
  --set ingress.enabled=true \
  --set ingress.hostname=example.com \
  --set ingress.path=/subpath \
  --set ingress.annotations."kubernetes\.io/ingress\.class"=nginx # or your preferred ingress controller

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Besides, if you are using the OAuth2/OIDC login (more information at the using an OIDC provider documentation ), you will also need to configure the different URLs:

helm install kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps \
  # ... other OIDC and ingress flags
  --set authProxy.oauthLoginURI="/subpath/oauth2/login" \
  --set authProxy.oauthLogoutURI="/subpath/oauth2/logout" \
  --set authProxy.extraFlags="{<other flags>,--proxy-prefix=/subpath/oauth2}"

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Can Kubeapps install apps into more than one cluster? ¶

Yes! Kubeapps 2.0+ supports multicluster environments. Have a look at the Kubeapps dashboard documentation to know more.

Can Kubeapps be installed without Internet connection? ¶

Yes! Follow the offline installation documentation to discover how to perform an installation in an air-gapped scenario.

Does Kubeapps support private repositories? ¶

Of course! Have a look at the private package repositories documentation to learn how to configure a private repository in Kubeapps.

Is there any API documentation? ¶

Yes! But it is not definitive and is still subject to change. Check out the latest API online documentation or download the Kubeapps OpenAPI Specification yaml file from the repository.

Why can’t I configure global private repositories? ¶

You can, but you will need to configure the imagePullSecrets manually.

Kubeapps does not allow you to add imagePullSecrets to an AppRepository that is available to the whole cluster because it would require that Kubeapps copies those secrets to the target namespace when a user deploys an app.

If you create a global AppRepository but the images are on a private registry requiring imagePullSecrets, the best way to configure that is to ensure your Kubernetes nodes are configured with the required imagePullSecrets; this allows all users (of those nodes) to use those images in their deployments without ever requiring access to the secrets.

You could alternatively ensure that the imagePullSecret is available in all namespaces in which you want people to deploy, but this unnecessarily compromises the secret.

Does Kubeapps support Operators? ¶

Yes! You can get started by following the operators documentation .

Slow response when listing namespaces ¶

Kubeapps uses the currently logged-in user's credentials to retrieve the list of all namespaces. If the user does not have permission to list namespaces, the backend will try again with its own service account: it will list all the namespaces and then iterate through each one to check whether the user has permission to get secrets in it. This can lead to a slow response if the number of namespaces on the cluster is large.

To reduce this response time, you can increase the number of checks that Kubeapps performs in parallel (per connection) by setting the values kubeappsapis.qps=<desired_number> and kubeappsapis.burst=<desired_number>.
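A sketch of applying these values to an existing release (the release name, namespace, and numbers below are illustrative):

```shell
# Raise the per-connection parallelism of namespace permission checks
helm upgrade kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps \
  --reuse-values \
  --set kubeappsapis.qps=50 \
  --set kubeappsapis.burst=100
```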

More questions? ¶

Feel free to open an issue if you have any questions!

Troubleshooting ¶

Upgrading to chart version 8.0.0 ¶

This major release renames several values in this chart and adds missing features, in order to align with the rest of the assets in the Bitnami charts repository.

Additionally, it updates both the PostgreSQL and the Redis subcharts to their latest major versions, 11.0.0 and 16.0.0 respectively, where similar changes have been also performed. Check PostgreSQL Upgrading Notes and Redis Upgrading Notes for more information.

The following values have been renamed:

  • frontend.service.port renamed as frontend.service.ports.http.
  • frontend.service.nodePort renamed as frontend.service.nodePorts.http.
  • frontend.containerPort renamed as frontend.containerPorts.http.
  • dashboard.service.port renamed as dashboard.service.ports.http.
  • dashboard.containerPort renamed as dashboard.containerPorts.http.
  • apprepository.service.port renamed as apprepository.service.ports.http.
  • apprepository.containerPort renamed as apprepository.containerPorts.http.
  • kubeops.service.port renamed as kubeops.service.ports.http.
  • kubeops.containerPort renamed as kubeops.containerPorts.http.
  • assetsvc.service.port renamed as assetsvc.service.ports.http.
  • assetsvc.containerPort renamed as assetsvc.containerPorts.http.
  • authProxy.containerPort renamed as authProxy.containerPorts.proxy.
  • authProxy.additionalFlags renamed as authProxy.extraFlags.
  • Pinniped Proxy service no longer uses pinnipedProxy.containerPort. Use pinnipedProxy.service.ports.pinnipedProxy to change the service port.
  • pinnipedProxy.containerPort renamed as pinnipedProxy.containerPorts.pinnipedProxy.
  • postgresql.replication.enabled has been removed. Use postgresql.architecture instead.
  • postgresql.postgresqlDatabase renamed as postgresql.auth.database.
  • postgresql.postgresqlPassword renamed as postgresql.auth.password.
  • postgresql.existingSecret renamed as postgresql.auth.existingSecret.
  • redis.redisPassword renamed as redis.auth.password.
  • redis.existingSecret renamed as redis.auth.existingSecret.

Note also that if you have an existing PostgreSQL secret used by Kubeapps, you will need to rename the key from postgresql-password to postgres-password.
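One way to perform that key rename is to copy the old key's value into the new key. The sketch below assumes the secret is named kubeapps-db in the kubeapps namespace; adjust both to match your installation:

```shell
# Sketch: copy the value of the old `postgresql-password` key into the
# `postgres-password` key expected by the new PostgreSQL subchart.
# Secret name and namespace are assumptions; adapt them to your deployment.
PASSWORD=$(kubectl get secret --namespace kubeapps kubeapps-db \
  -o jsonpath='{.data.postgresql-password}')
kubectl patch secret --namespace kubeapps kubeapps-db --type merge \
  -p "{\"data\":{\"postgres-password\":\"${PASSWORD}\"}}"
```

The value read from `.data` is already base64-encoded, so it can be written back into `.data` unchanged.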

NGINX IPv6 error ¶

When starting the application with the --set enableIPv6=true option, the NGINX server in the kubeapps and kubeapps-internal-dashboard services may fail with the following error:

nginx: [emerg] socket() [::]:8080 failed (97: Address family not supported by protocol)

This usually means that your cluster is not compatible with IPv6. To deactivate it, install Kubeapps with the flag --set enableIPv6=false.
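A minimal install command with IPv6 deactivated, following the same placeholder convention used throughout this guide:

```shell
# Sketch: install Kubeapps with the NGINX IPv6 listener disabled.
helm install kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  --set enableIPv6=false
```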

Forbidden error while installing the Chart ¶

If during installation you run into an error similar to:

Error: release kubeapps failed: clusterroles.rbac.authorization.k8s.io "kubeapps-apprepository-controller" is forbidden: attempt to grant extra privileges: [{[get] [batch] [cronjobs] [] []...

Or:

Error: namespaces "kubeapps" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "kubeapps"

It is possible, though uncommon, that your cluster does not have Role-Based Access Control (RBAC) enabled. To check whether your cluster has RBAC enabled, run the following command:

kubectl api-versions

If the above command does not include entries for rbac.authorization.k8s.io you should perform the chart installation by setting rbac.create=false:

helm install kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --set rbac.create=false

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
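The RBAC check above can also be scripted, so it prints the right hint instead of requiring you to scan the output by eye (a sketch; the pattern is the standard RBAC API group name):

```shell
# Sketch: print an installation hint based on whether the cluster
# exposes the rbac.authorization.k8s.io API group.
if kubectl api-versions | grep -q '^rbac.authorization.k8s.io'; then
  echo "RBAC detected: install with the default rbac.create=true"
else
  echo "RBAC not detected: install with --set rbac.create=false"
fi
```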

Error while upgrading the Chart ¶

It is possible that an error appears when upgrading Kubeapps. It can be caused by a breaking change in the new chart or because the current chart installation is in an inconsistent state. If you find issues upgrading Kubeapps, follow these steps:

Note: These steps assume that you have installed Kubeapps in the namespace kubeapps using the release name kubeapps. If that is not the case, replace the commands with your namespace and/or release name.

Note: If you are upgrading from 2.3.1, see the following section.

Note: If you are upgrading from 1.X to 2.X, see the following section.

  1. (Optional) Backup your personal repositories (if you have any):

    kubectl get apprepository -A -o yaml > <repo name>.yaml
    
  2. Delete Kubeapps:

    helm uninstall kubeapps --namespace kubeapps
    
  3. (Optional) Delete the App Repositories CRD:

    Warning: Do not run this step if you have more than one Kubeapps installation in your cluster.

    kubectl delete crd apprepositories.kubeapps.com
    
  4. (Optional) Clean the Kubeapps namespace:

    Warning: Do not run this step if you have workloads other than Kubeapps in the kubeapps namespace.

    kubectl delete namespace kubeapps
    
  5. Install the latest version of Kubeapps (using any custom modifications you need):

    helm install kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --create-namespace
    

    Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

  6. (Optional) Restore any repositories you backed up in the first step:

    kubectl apply -f <repo name>.yaml
    

After that, you should be able to access the new version of Kubeapps. If the above doesn’t work for you, or you run into any other issues, please open an issue.

Upgrading to chart version 7.0.0 ¶

This release includes no breaking changes in Kubeapps itself (version 2.3.2). However, the chart adopted the standardizations included in the rest of the charts in the Bitnami catalog.

Most of these standardizations simply add new parameters that allow further customization, such as adding custom environment variables, volumes, or sidecar containers. That said, some of them include breaking changes:

  • Chart labels were adapted to follow the Helm charts standard labels.
  • securityContext.* parameters are deprecated in favor of XXX.podSecurityContext.* and XXX.containerSecurityContext.*, where XXX is a placeholder you need to replace with the actual component. For instance, to modify the security context of kubeops, use the kubeops.podSecurityContext and kubeops.containerSecurityContext parameters.
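For instance, one of the renamed parameters could be overridden from the CLI as sketched below (the runAsUser value is purely illustrative):

```shell
# Sketch: set the kubeops container security context via the renamed
# per-component parameters. The user ID shown is an arbitrary example.
helm upgrade kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --reuse-values \
  --set kubeops.containerSecurityContext.runAsUser=1001
```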

Upgrading to 2.3.1 ¶

Kubeapps 2.3.1 (Chart version 6.0.0) introduces some breaking changes. Helm-specific functionality has been removed in order to support other installation methods (like using YAML manifests, kapp, or kustomize). Because of that, some steps are required before upgrading from a previous version:

  1. Kubeapps no longer creates a database secret for you automatically; instead, it relies on the default behavior of the PostgreSQL chart. If you try to upgrade Kubeapps and you originally installed it without setting a password, you will get the following error:

    Error: UPGRADE FAILED: template: kubeapps/templates/NOTES.txt:73:4: executing "kubeapps/templates/NOTES.txt" at <include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $)>: error calling include: template: kubeapps/charts/common/templates/_errors.tpl:18:48: executing "common.errors.upgrade.passwords.empty" at <fail>: error calling fail:
    PASSWORDS ERROR: you must provide your current passwords when upgrade the release
        'postgresql.postgresqlPassword' must not be empty, please add '--set postgresql.postgresqlPassword=$POSTGRESQL_PASSWORD' to the command. To get the current value:
    

    The error gives you generic instructions for retrieving the PostgreSQL password, but if you have installed a Kubeapps version prior to 2.3.1, the name of the secret will differ. Run the following command:

    export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace "kubeapps" kubeapps-db -o jsonpath="{.data.postgresql-password}" | base64 -d)
    

    NOTE: Replace the namespace in the command with the namespace in which you have deployed Kubeapps.

    Make sure that you have stored the password in the variable $POSTGRESQL_PASSWORD before continuing with the next issue.

  2. The chart initialRepos are no longer installed using Helm hooks, which caused these repos not to be handled by Helm after the first installation. Now they are tracked on every update. However, if you do not delete the existing ones, the upgrade will fail with:

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: AppRepository "bitnami" in namespace "kubeapps" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "kubeapps"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kubeapps"

To bypass this issue, you first need to delete all the initialRepos from the chart values (only the bitnami repo by default):

kubectl delete apprepositories.kubeapps.com -n kubeapps bitnami

NOTE: Replace the namespace in the command with the namespace in which you have deployed Kubeapps.

After that, you will be able to upgrade Kubeapps to 2.3.1 using the existing database secret:

WARNING: Make sure that the variable $POSTGRESQL_PASSWORD is properly populated. Setting a wrong (or empty) password will corrupt the release.
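One way to make that check explicit is a small guard that only inspects the shell variable, so it is safe to run anywhere (a hypothetical helper, not part of the chart):

```shell
# Sketch: refuse to proceed when POSTGRESQL_PASSWORD is unset or empty.
check_pg_password() {
  if [ -z "${POSTGRESQL_PASSWORD}" ]; then
    echo "empty"    # do NOT run `helm upgrade` in this state
    return 1
  fi
  echo "ok"
}

# Hypothetical usage: check_pg_password && helm upgrade ...
```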

helm upgrade kubeapps oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps -n kubeapps --set postgresql.postgresqlPassword=$POSTGRESQL_PASSWORD

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Upgrading to 2.0.1 (Chart 5.0.0) ¶

On November 13, 2020, Helm 2 support formally ended. This major version is the result of the changes required to incorporate the different features added in Helm 3 and to be consistent with the Helm project itself regarding the Helm 2 EOL.

What changes were introduced in this major version? ¶
  • Previous versions of this Helm chart used apiVersion: v1 (installable by both Helm 2 and 3); this Helm chart was updated to apiVersion: v2 (installable by Helm 3 only). Here you can find more information about the apiVersion field.
  • Dependency information was moved from the requirements.yaml to the Chart.yaml
  • After running helm dependency update, a Chart.lock file is generated containing the same structure used in the previous requirements.lock
  • The different fields present in the Chart.yaml file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm charts
  • In the case of the PostgreSQL subchart, apart from the changes described in this section, there are other major changes because the master/slave nomenclature was replaced by primary/readReplica. Here you can find more information about the changes introduced.
Considerations when upgrading to this version ¶
  • Upgrading to this version using Helm 2 is not supported, as this version no longer supports Helm 2
  • If you installed the previous version with Helm 2 and want to upgrade to this version with Helm 3, please refer to the official Helm documentation about migrating from Helm 2 to 3
  • If you want to upgrade to this version from a previous one installed with Helm 3, you should not face any issues related to the new apiVersion. However, due to the PostgreSQL major version bump, it is necessary to remove the existing statefulsets:

Note: The command below assumes that Kubeapps has been deployed in the kubeapps namespace using “kubeapps” as the release name; if that is not the case, adapt the command accordingly.

kubectl delete statefulset -n kubeapps kubeapps-postgresql-master kubeapps-postgresql-slave

Upgrading to 2.0 ¶

Kubeapps 2.0 (Chart version 4.0.0) introduces some breaking changes:

  • Helm 2 is no longer supported. If you are still using some Helm 2 charts, migrate them with the available tools. Note that some charts (but not all of them) may need to be migrated to the new Chart specification (v2). If you are facing any issue managing this migration and Kubeapps, please open a new issue!
  • MongoDB® is no longer supported. Since 2.0, the only database supported is PostgreSQL.
  • PostgreSQL chart dependency has been upgraded to a new major version.

Due to the last point, it is necessary to run a command before upgrading to Kubeapps 2.0:

Note: The command below assumes that Kubeapps has been deployed in the kubeapps namespace using “kubeapps” as the release name; if that is not the case, adapt the command accordingly.

kubectl delete statefulset -n kubeapps kubeapps-postgresql-master kubeapps-postgresql-slave

After that, you should be able to upgrade Kubeapps as always and the database will be repopulated.

License ¶

Copyright © 2024 Broadcom. The term “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.