Streamline Prometheus in Kubernetes using Prometheus Operator CRDs

Roberto Javier Yudice Monico
5 min read · Oct 24, 2020

The Prometheus Operator project provides a set of CRDs designed to deploy and manage Prometheus deployments in Kubernetes declaratively, rather than through a Helm chart or ad-hoc YAML files. Not just that: it also lets you manage Prometheus configuration, such as scrape configs, using CRDs. This has great benefits; if you have worked with Prometheus you know that the scrape configs file can get messy as you add more and more to it. It also lets you automate the discovery of Prometheus endpoints in your Pods or Services and scrape them automatically, rather than having to create a new scrape config for every new service that you deploy.

Quick Intro to CRDs

In case you haven’t heard about CRDs: the acronym stands for “Custom Resource Definition”, and it’s a way to extend Kubernetes so that you can define new resource types declaratively through YAML, just as you define Pods, Deployments, Ingresses, etc.
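For example, once a CRD for a hypothetical “Database” kind is registered in the cluster, you can declare instances of it just like any built-in resource (this kind is purely illustrative):

apiVersion: example.com/v1
kind: Database          # hypothetical kind, registered by a CRD
metadata:
  name: my-database
spec:
  engine: postgres
  replicas: 2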

It’s been gaining a lot of traction lately; one of the most promising projects that uses CRDs is Cluster API, which makes cluster creation easier by letting you define clusters in YAML files.

CRDs make streamlining your deployments easier because you can keep all dependencies in one place rather than having to set up configuration in many places.

By using the Prometheus Operator’s CRDs you can streamline monitoring as part of your application deployment.

Getting Started: Setting up the CRDs

There are different ways to set up Prometheus Operator in your cluster; we are going to use the Helm chart because it’s the easiest one in my opinion and it’s more than enough for most use cases. The other options are the kube-prometheus project or applying the YAMLs from the Prometheus Operator repository.

So let’s go ahead and deploy the Helm chart to set up the Prometheus Operator CRDs in our cluster:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install my-prometheus prometheus-community/kube-prometheus-stack -n prometheus-monitoring --create-namespace --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false

The “prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues” value tells Prometheus to also pick up ServiceMonitors that were created outside the Helm chart. If we don’t set it, Prometheus will only pick up ServiceMonitors that carry the label of the Helm release, and that’s not very useful because we are going to create the ServiceMonitors for our application outside the Helm release. More on ServiceMonitors later.

After installing the Helm chart you can run “kubectl get pods -n prometheus-monitoring” and you should get something like this:

Accessing Grafana

Grafana is where we will be able to view all of the metrics that are stored in Prometheus. After executing the Helm chart you should have the following services set up:

kubectl get services -n prometheus-monitoring

To access Grafana we just need to port-forward to the Grafana service:

kubectl port-forward svc/my-prometheus-grafana 8000:80 -n prometheus-monitoring

Now go to “http://localhost:8000”. The username is “admin” and the password is “prom-operator” by default. You can adjust these in the Helm chart values.
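If you want different credentials, the Grafana admin password can be overridden through the grafana subchart values, for example (assuming the same release name and namespace as above):

helm upgrade my-prometheus prometheus-community/kube-prometheus-stack -n prometheus-monitoring --reuse-values --set grafana.adminPassword=<new-password>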

Deploying the Sample Service

Now that we have our CRDs in place, we are going to deploy a sample service that I built in Go. This service exposes Prometheus metrics at the /metrics path.
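The original service isn’t reproduced here, but a minimal equivalent using the official client_golang library looks roughly like this; the metric name sample_app_requests_total is a stand-in, not the one from the original service:

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// A stand-in counter registered with the default registry.
var requests = promauto.NewCounter(prometheus.CounterOpts{
	Name: "sample_app_requests_total",
	Help: "Total number of requests handled by the sample service.",
})

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requests.Inc()
		w.Write([]byte("hello"))
	})
	// Expose the Prometheus metrics endpoint that will later be scraped.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}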

To deploy it, run the following YAMLs:
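First we create the Deployment. The original manifest isn’t reproduced here, so this is a minimal sketch; the names and the image are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-service
  labels:
    app: sample-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-service
  template:
    metadata:
      labels:
        app: sample-service
    spec:
      containers:
        - name: sample-service
          image: <your-registry>/sample-service:latest   # placeholder image
          ports:
            - containerPort: 8080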

Now we will create the Service (Prometheus will only create scrape configs based on Services):
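A minimal Service sketch follows; note the port name, which the ServiceMonitor will match on:

apiVersion: v1
kind: Service
metadata:
  name: sample-service
  labels:
    app: sample-service
spec:
  selector:
    app: sample-service
  ports:
    - name: prometheus   # the ServiceMonitor selects this port by name
      port: 8080
      targetPort: 8080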

Notice that we are naming our port “prometheus”. This is important because in our ServiceMonitor resource we will tell Prometheus to look for all Services that have a port named “prometheus” and gather metrics from them. This is where all the magic comes together: scraping becomes automatic, since every new Service you create with a port named “prometheus” will be picked up without any extra configuration.

Creating a ServiceMonitor Resource to Gather Metrics

Now, with everything running, the next step is to create a ServiceMonitor resource. The ServiceMonitor resource is part of the Prometheus Operator CRDs, which we installed using the Helm chart.

ServiceMonitor resources let you define rules to scrape Prometheus endpoints automatically, instead of having to add a new scrape config whenever you deploy a new service.

We are going to create a ServiceMonitor to scrape the metrics of our Go service. This is the definition:
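The original definition isn’t reproduced here; a sketch along these lines matches the port-name convention described above:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-service
  labels:
    app: sample-service
spec:
  namespaceSelector:
    any: true              # look in every namespace
  selector: {}             # match every Service; only those with the named port get scraped
  endpoints:
    - port: prometheus     # the named port from the Service above
      path: /metrics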

Now we should be able to query the metric that we created in Grafana:
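For example, with the stand-in counter from the Go sketch above, a rate query would look like this:

rate(sample_app_requests_total[5m])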

To view all the scrape configs that have been created in Prometheus as a result of your ServiceMonitor resources, open the Prometheus UI, go into the Status menu and then Service Discovery. You should see something like this:
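The Prometheus UI isn’t exposed publicly by default either; as with Grafana, you can port-forward to it first (the exact service name depends on your release, so check “kubectl get services -n prometheus-monitoring”):

kubectl port-forward svc/<prometheus-service> 9090:9090 -n prometheus-monitoring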

Wrap Up

This is a great way of streamlining your Prometheus monitoring stack, since you no longer have to manage the scrape configs yourself; they are created automatically for you when you deploy a new service.

You can also include your ServiceMonitor as part of the Helm charts that you use to deploy your services to Kubernetes and get all the flexibility that Helm offers in terms of reusability.
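For instance, a ServiceMonitor template inside your application’s chart might look like this minimal sketch (the naming and labels are assumptions about your chart’s conventions):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  endpoints:
    - port: prometheus
      path: /metrics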

Some benefits:

  • You no longer have to maintain a big YAML file full of scrape configs
  • You can pack your monitoring stack within the same Helm chart you use to deploy your applications/services by just adding a ServiceMonitor resource to the chart package.
  • You can standardize your monitoring more easily

In this article we only covered the ServiceMonitor resource, which is just scratching the surface. You can also set up Alertmanager and alerting rules using CRDs, and deploy them together with your microservices.
