Version: development

Kubernetes

Below are the instructions to install the Aperture Controller on Kubernetes.

Prerequisites

You can perform the installation using either the aperturectl CLI tool or Helm. Install the tool of your choice using the links below:

  1. aperturectl

    Refer to aperturectl install controller to see all the available command-line arguments.

  2. Helm

    1. Once the Helm CLI is installed, add the Aperture Controller Helm chart repository in your environment for install or upgrade:

      helm repo add aperture https://fluxninja.github.io/aperture/
      helm repo update
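
      With the chart repository added, the controller can later be installed through Helm as well. A minimal sketch, assuming the chart is published as aperture/aperture-controller and installed into a dedicated namespace (both the chart name and the namespace are assumptions, adjust them to your setup):

      ```shell
      # Assumption: the controller chart is published as aperture/aperture-controller
      helm upgrade --install controller aperture/aperture-controller \
        --namespace aperture-controller --create-namespace
      ```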

Installation

The Aperture Controller can be installed on Kubernetes using one of the options below:

warning

Upgrading from one of the installation modes below to the other is discouraged and can result in unpredictable behavior.

  1. Install with Operator

    The Aperture Controller can be installed using the Kubernetes Operator available for it. This method requires permissions to create cluster-level resources such as ClusterRole, ClusterRoleBinding, and CustomResourceDefinition.

  2. Namespace-scoped Installation

    The Aperture Controller can also be installed with only namespace-scoped resources.

Exposing etcd and Prometheus services

If the Aperture Controller is installed with the packaged etcd and Prometheus, follow the steps below to expose them outside the Kubernetes cluster so that an Aperture Agent running on a Linux host can access them.

info

Contour is used as the Kubernetes Ingress Controller in the following steps to expose the etcd and Prometheus services outside the Kubernetes cluster using a Load Balancer.

Any other tool that fits your infrastructure can be used instead to expose the etcd and Prometheus services outside the Kubernetes cluster.

  1. Add the Helm chart repository for Contour in your environment:

    helm repo add bitnami https://charts.bitnami.com/bitnami
  2. Install the Contour chart by running the following command:

    helm install aperture bitnami/contour --namespace projectcontour --create-namespace
  3. It might take a few minutes for the Contour Load Balancer IP to become available. You can watch the status by running:

    kubectl get svc aperture-contour-envoy --namespace projectcontour -w
  4. Once EXTERNAL-IP is no longer <pending>, run the following command to get the External IP for the Load Balancer:

    kubectl describe svc aperture-contour-envoy --namespace projectcontour | grep Ingress | awk '{print $3}'
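
    For reference, the grep/awk pipeline above simply prints the third whitespace-separated field of the LoadBalancer Ingress line. A minimal sketch against a simulated fragment of the describe output (the IP is a placeholder):

    ```shell
    # Simulated fragment of `kubectl describe svc` output; 203.0.113.10 is a placeholder IP
    printf 'LoadBalancer Ingress:     203.0.113.10\n' | grep Ingress | awk '{print $3}'
    ```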
  5. Add an entry for the above IP in the cloud provider's DNS configuration. For example, follow steps on Cloud DNS on GKE for Google Kubernetes Engine.
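
    If the DNS change has not propagated yet, the mapping can be tested from the Agent host by adding temporary entries to /etc/hosts; the IP and domain below are placeholders:

    ```
    # /etc/hosts (placeholder IP and domain)
    203.0.113.10  etcd.example.com prometheus.example.com
    ```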

  6. Configure the parameters below to install the Kubernetes Ingress alongside the Aperture Controller: update the values.yaml created during installation and pass it to the install command:

    ingress:
      enabled: true
      domain_name: YOUR_DOMAIN_HERE

    etcd:
      service:
        annotations:
          projectcontour.io/upstream-protocol.h2c: "2379"

    Replace YOUR_DOMAIN_HERE with the actual domain name under which the External IP is exposed.

    aperturectl install controller --version main --values-file values.yaml
  7. It might take a few minutes for the Ingress resource to get the ADDRESS. You can watch the status by running:

    kubectl get ingress controller-ingress -w
  8. Once the ADDRESS matches the External IP, etcd will be accessible at http://etcd.YOUR_DOMAIN_HERE:80 and Prometheus at http://prometheus.YOUR_DOMAIN_HERE:80.
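
    To verify reachability from outside the cluster, you can query the standard health endpoints of etcd (/health) and Prometheus (/-/healthy); this is a sketch assuming the DNS entries above resolve from your host:

    ```shell
    # Both endpoints should report a healthy status once the Ingress is ready
    curl http://etcd.YOUR_DOMAIN_HERE:80/health
    curl http://prometheus.YOUR_DOMAIN_HERE:80/-/healthy
    ```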