Visualize your services' k8s runtime

Port’s Kubernetes integration helps you model and visualize your cluster’s workloads alongside your existing workloads in Port. This guide will help you set up the integration and visualize your services' Kubernetes runtime.

Common use cases​

  • Developers can easily view the health and status of their services' K8s runtime.
  • Platform engineers can create custom views and dashboards for different stakeholders.
  • Platform engineers can set, maintain, and track standards for Kubernetes resources.
  • R&D managers can track data about services' Kubernetes resources, enabling high-level oversight and better decision-making.

Prerequisites​

  • This guide assumes you have a Port account and that you have finished the onboarding process. We will use the Workload blueprint that was created during the onboarding process.
  • You will need an accessible k8s cluster. If you don't have one, here is how to quickly set up a minikube cluster.
  • Helm - required to install Port's Kubernetes exporter.

Set up data model​

To visualize your cluster's workloads in Port, we will first install Port’s Kubernetes exporter, which automatically creates Kubernetes-related blueprints and entities in your portal.

Install Port's Kubernetes exporter​

To install the integration using Helm:

  1. Go to the Kubernetes data source page in your portal.

  2. Select the Real-time and always on method:

  3. A helm command will be displayed, with default values already filled in (e.g. your Port client ID, client secret, etc.).
    Copy the command, replace the placeholders with your values, then run it in your terminal to install the integration.

Selecting a Port API URL by account region

The baseUrl, port_region, port.baseUrl, portBaseUrl, port_base_url and OCEAN__PORT__BASE_URL parameters are used to select which instance of the Port API will be used.

Port exposes two API instances: one for the EU region, and one for the US region.
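As an illustration, the base URL can be chosen with a small shell snippet before running the Helm command. The URLs below are assumptions for illustration; copy the exact values from your portal:

```shell
# Pick the Port API base URL by account region.
# NOTE: the endpoint URLs below are assumptions; confirm the correct
# endpoint for your account in your Port portal before using them.
PORT_REGION="eu"   # set to "us" for the US region

if [ "$PORT_REGION" = "us" ]; then
  PORT_BASE_URL="https://api.us.getport.io"
else
  PORT_BASE_URL="https://api.getport.io"
fi

echo "Using Port API at: $PORT_BASE_URL"
# The value would then be passed to the install command, e.g.:
#   helm upgrade --install ... --set port.baseUrl="$PORT_BASE_URL"
```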

What does the exporter do?​

After installation, the exporter will:

  1. Create blueprints in your Builder (as defined here) that represent Kubernetes resources:



    What is K8sWorkload?

    K8sWorkload is an abstraction of Kubernetes objects that create and manage pods (e.g. Deployment, StatefulSet, DaemonSet).


  2. Create entities in your Software catalog. You will see a new page for each blueprint containing your resources, filled with data from your Kubernetes cluster (according to the default mapping that is defined here):



  3. Create scorecards for the blueprints that represent your K8s resources (as defined here). These scorecards define rules and checks over the data ingested from your K8s cluster, making it easy to check that your K8s resources meet your standards.

  4. Create dashboards that provide you with a visual view of the data ingested from your K8s cluster.

  5. Listen to changes in your Kubernetes cluster and update your entities accordingly.


Set up automatic discovery​

After installing the integration, the relation between the Workload blueprint and the k8s_workload blueprint is created automatically. To ensure that each Workload entity is related to its respective k8s_workload entity, we will configure automatic discovery using labels.

In this guide we will use the following convention:
A k8s_workload with a label in the form of portWorkload: <workload-identifier> will automatically be assigned to the Workload with that identifier.

For example, a K8s deployment with the label portWorkload: myWorkload will be assigned to the Workload with the identifier myWorkload.
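The matching rule itself is just an equality check between the label value and the Workload's identifier. A minimal sketch, with hypothetical values:

```shell
# Hypothetical values for illustration only.
LABEL_VALUE="myWorkload"          # metadata.labels.portWorkload on the Deployment
WORKLOAD_IDENTIFIER="myWorkload"  # identifier of the Workload entity in Port

# The exporter relates the two entities when the values are equal.
if [ "$LABEL_VALUE" = "$WORKLOAD_IDENTIFIER" ]; then
  MATCH="related"
else
  MATCH="not related"
fi
echo "$MATCH"   # related
```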

We will achieve this by adding a mapping definition to the configuration YAML used when installing the exporter. The definition uses jq to perform calculations between properties.

Let's see this in action:

  1. Create a Deployment resource in your cluster with a label matching the identifier of a Workload in your Software catalog.
    You can use the simple example below and change the metadata.labels.portWorkload value to match your desired Workload. Copy it into a file named deployment.yaml, then apply it:

    kubectl apply -f deployment.yaml

    Deployment example:

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: awesomeapp
      labels:
        app: nginx
        portWorkload: AwesomeWorkload
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.14.2
              resources:
                limits:
                  cpu: "200m"
                  memory: "256Mi"
                requests:
                  cpu: "100m"
                  memory: "128Mi"
              ports:
                - containerPort: 80
    ```

  2. To see the new data, we need to update the mapping configuration that the K8s exporter uses to ingest data.
    To edit the mapping, go to your data sources page, find the K8s exporter card, click on it and you will see a YAML editor showing the current configuration.

    Add the following block to the mapping configuration and click Resync:

    ```yaml
    resources:
      # ... other resource mappings installed by the K8s exporter
      - kind: apps/v1/deployments
        selector:
          query: .metadata.namespace | startswith("kube") | not
        port:
          entity:
            mappings:
              - identifier: .metadata.labels.portWorkload
                title: .metadata.name
                blueprint: '"workload"'
                relations:
                  k8s_workload: >-
                    .metadata.name + "-Deployment-" + .metadata.namespace + "-" +
                    env.CLUSTER_NAME
    ```
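The jq expression above composes the relation identifier by concatenating the deployment name, the resource kind, the namespace, and the cluster name. Its effect can be sketched in shell (the values below are illustrative):

```shell
# Illustrative inputs mirroring the jq expression:
#   .metadata.name + "-Deployment-" + .metadata.namespace + "-" + env.CLUSTER_NAME
NAME="awesomeapp"
NAMESPACE="default"
CLUSTER_NAME="my-cluster"   # set in the exporter's environment

K8S_WORKLOAD_ID="${NAME}-Deployment-${NAMESPACE}-${CLUSTER_NAME}"
echo "$K8S_WORKLOAD_ID"   # awesomeapp-Deployment-default-my-cluster
```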

  3. Go to your Software catalog, and click on Workloads. Click on the Workload for which you created the deployment, and you should see the k8s_workload relation filled.



Visualize data from your Kubernetes environment​

We now have a lot of data about our workloads, and some metrics to track their quality. Let's see how we can visualize this information in ways that will benefit the routine of our developers and managers. Let's start by creating a few widgets that will help us keep track of our services' health and availability.

Add an "Unhealthy services" table to your homepage​

In the configuration provided for this guide, a workload is considered Healthy if its defined number of replicas is equal to its available replicas (of course, you can change this definition).
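The health rule is a comparison between the desired and available replica counts. In a real cluster these values would come from kubectl jsonpath queries; they are hard-coded below for illustration:

```shell
# In practice these would come from the cluster, e.g.:
#   DESIRED=$(kubectl get deploy awesomeapp -o jsonpath='{.spec.replicas}')
#   AVAILABLE=$(kubectl get deploy awesomeapp -o jsonpath='{.status.availableReplicas}')
DESIRED=2
AVAILABLE=2

# Healthy when every desired replica is available.
if [ "$DESIRED" -eq "$AVAILABLE" ]; then
  HEALTH="Healthy"
else
  HEALTH="Unhealthy"
fi
echo "$HEALTH"   # Healthy
```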

  1. Go to your homepage, click on the + Widget button in the top right corner, then select Table.

  2. Fill the form out like this, then click Save:



  3. In your new table, click on Filter, then on + Add new filter. Fill out the fields like this:



Now you can keep track of services that need your attention right from your homepage.

The services shown here were not created in this guide; they serve as an example of how this table might look.

Use your scorecards to get a clear overview of your workloads' availability​

In the configuration provided for this guide, the availability metric is defined like this:

  • Bronze: >=1 replica
  • Silver: >=2 replicas
  • Gold: >=3 replicas
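The tiers above amount to a simple threshold check on the replica count; a sketch:

```shell
REPLICAS=2   # available replicas of a workload (illustrative value)

# Map the replica count to the availability tiers defined above.
if [ "$REPLICAS" -ge 3 ]; then
  TIER="Gold"
elif [ "$REPLICAS" -ge 2 ]; then
  TIER="Silver"
elif [ "$REPLICAS" -ge 1 ]; then
  TIER="Bronze"
else
  TIER="No tier"
fi
echo "$TIER"   # Silver
```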

To get an overall picture of our workloads' availability, we can use a table operation.

  1. Go to the Workloads catalog page.

  2. Click on the Group by button, then choose High availability from the dropdown:



  3. Click on any of the metric levels to see the corresponding workloads:



Note that you can also set this as the default view by clicking the Save this view button.

Possible daily routine integrations​

  • Send a Slack message in the R&D channel to let everyone know that a new deployment was created.
  • Notify DevOps engineers when a service's availability drops.
  • Send a weekly/monthly report to R&D managers displaying the health of services' production runtime.

Conclusion​

Kubernetes is a complex environment that requires high-quality observability. Port's Kubernetes integration allows you to easily model and visualize your Kubernetes resources, and integrate them into your daily routine.
Customize your views to display the data that matters to you, grouped or filtered by teams, namespaces, or any other criteria.
With Port, you can seamlessly fit your organization's needs, and create a single source of truth for your Kubernetes resources.

More guides & tutorials will be available soon. In the meantime, feel free to reach out with any questions via our community Slack or GitHub project.