Deploy Kafka UI tool

David (Dudu) Zbeda
Aug 11, 2023


Update: added Helm chart support.

Overview

Kafka UI is an open-source tool designed to simplify the management and monitoring of your Kafka clusters through a unified user interface. With Kafka UI, you can:

  • Monitor broker statuses and ensure the health of your clusters.
  • View consumer lists and track message lag for better insights into your data flow.
  • Explore existing topics in your cluster and even read the messages stored within them.

Additionally, Kafka UI functions as a “producer,” allowing you to create new topics and write messages directly into them, providing both convenience and enhanced control over your Kafka ecosystem.

Goals

In this blog, I will demonstrate how to deploy and configure the Kafka UI tool using Docker Compose and Helm charts. The guide will include the necessary configurations for integrating Kafka UI with various types of Kafka clusters, including:

  • Unsecured Kafka Clusters
  • Kafka Clusters Secured with TLS
  • Kafka Clusters Secured with TLS and Kerberos

By the end of this blog, you’ll have a clear understanding of how to deploy Kafka UI and connect it to different Kafka cluster setups for efficient monitoring and management.

Running the solution with Docker Compose

Prerequisites

To get started, ensure you have the following:

  1. Kafka cluster: secured or unsecured.
  2. Linux machine: a machine with network access to your Kafka cluster. For this blog I used Ubuntu 22.04.
  3. Docker Engine and Docker Compose: installed on the Linux machine. I followed DigitalOcean's guide to install Docker Engine and Docker Compose.

For my setup, I used Windows Subsystem for Linux (WSL). If you’re new to WSL, check out my LinkedIn post, which links to a detailed blog explaining how to install WSL and Docker.

Note for WSL Users

When installing Docker on WSL, you might encounter an issue when starting the Docker service, resulting in the following error message:

(Image: error message when starting the Docker service.)

To resolve this problem, follow these steps:

  1. Edit the Docker defaults file: vi /etc/default/docker
  2. Add the following line to the file: DOCKER_OPTS="--iptables=false"
  3. Start the Docker service: service docker start
  4. Verify the service is running: service docker status
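If you prefer to script the workaround, here is a minimal sketch. It writes to a local copy of the file for illustration; on a real WSL box the target is /etc/default/docker and requires root.

```shell
# Append the iptables workaround idempotently.
# Using a local copy here; target /etc/default/docker with sudo on a real box.
conf=./docker.defaults
touch "$conf"
grep -q -- '--iptables=false' "$conf" || echo 'DOCKER_OPTS="--iptables=false"' >> "$conf"
grep 'DOCKER_OPTS' "$conf"
```

After applying the change to the real file, start the service with sudo service docker start.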

Integration with a Non-secure Kafka cluster

To integrate with a non-secure Kafka cluster, you need to create a Docker Compose file. This file consists of two key configuration sections:

  1. Extra Hosts Configuration
    In this section, define the fully qualified domain names (FQDNs) and IP addresses of all brokers in the Kafka cluster.
  • Ensure that the hostnames match the exact broker hostnames.
  • You can verify the broker FQDNs by running the following command on the Kafka broker: hostname -f

2. Environment Variables
This section specifies the parameters required to connect to the Kafka cluster. Integrating with a non-secure Kafka cluster is relatively simple.

  • Typically, non-secure Kafka clusters use port 9092 for communication.
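Before starting the container, it can help to confirm the broker port answers from the Docker host. A small sketch using bash's /dev/tcp; the host and port are this blog's example values, so replace them with your own broker details.

```shell
# Probe the broker port; replace host/port with your own broker details.
host=kafka1-nonesecure
port=9092
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
  echo "broker reachable"
else
  echo "broker not reachable"
fi
```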

docker-compose.yaml

---
version: '3.4'
services:
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    ports:
      - 8080:8080
    extra_hosts:
      - "kafka1-nonesecure:130.1.1.1"
      - "kafka2-nonesecure:130.1.1.2"
      - "kafka3-nonesecure:130.1.1.3"
      - "kafka4-nonesecure:130.1.1.4"
      - "kafka5-nonesecure:130.1.1.5"
      - "kafka6-nonesecure:130.1.1.6"
    environment:
      #### Unsecure kafka cluster configuration ####
      KAFKA_CLUSTERS_0_NAME: unsecure-kafka-cluster
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka1-nonesecure:9092
      # DYNAMIC_CONFIG_ENABLED: 'true'

Additional Tip

In the Docker Compose file, there is an option to remove the comment marker (#) from the DYNAMIC_CONFIG_ENABLED: 'true' line, which allows configuring Kafka UI directly from its GUI. While this can be useful, I recommend keeping the feature disabled to prevent accidental configuration changes, especially by users unfamiliar with the system. Sometimes curiosity can lead to unintended consequences! 😊

Integration with a Secure Kafka Cluster (TLS + Kerberos)

This section is the main reason behind writing this blog. Setting up integration with a Kafka cluster enabled with TLS and Kerberos proved to be a challenging task — it took me two days to fully understand the required configuration.

A Kafka cluster with TLS and Kerberos is deployed in an environment that includes an LDAP server. For this blog, we will use Windows Active Directory as both the LDAP server and the root CA.

Here are the domain details used in this setup:

  • Domain Name: cluster.com
  • Domain User: daviduser

For the integration, we will need a keytab file, a krb5.conf file, and a Truststore file. I will explain each below.

keytab file

If your Kafka cluster is configured with Kerberos, you likely already understand what a keytab file is and have one prepared. For our example, we will use a keytab file for the user daviduser.

High-Level Explanation of Keytab Files

For those who may not be familiar, here’s a simplified explanation:

  • On Windows, when you log in, you supply your username and password to the system. This process generates a valid Kerberos ticket (TGT) by authenticating with the LDAP server, which is typically an Active Directory server.
  • On Linux, there isn't a direct way to supply your domain username and password for Kerberos authentication. Instead, you use a keytab file, which contains encryption keys derived from your credentials. The keytab file allows you to obtain a Kerberos ticket without manually entering your username and password.

How to Obtain a Keytab File

  • The keytab file is generated by your IT administrator from the Active Directory server.
  • Keep in mind that if your password changes, the existing keytab file becomes invalid, and a new keytab file must be generated to reflect the updated credentials.
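Once you have a keytab, it is worth confirming that it actually yields a ticket before wiring it into Kafka UI. A hedged sketch; the principal and file name follow this blog's example, and the kinit/klist tools come from the Kerberos client package (krb5-user on Ubuntu).

```shell
# Verify the keytab can obtain a Kerberos ticket (needs a reachable KDC).
if command -v kinit >/dev/null 2>&1; then
  kinit -kt ./daviduser.keytab daviduser@CLUSTER.COM \
    && klist \
    || echo "kinit failed - check the keytab, principal name, and krb5.conf"
else
  echo "kinit not found - install the Kerberos client tools first"
fi
```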

krb5.conf file

Once you have the keytab file, the next step is to configure the krb5.conf file. This file is essential for obtaining a Kerberos ticket and specifies the KDC/LDAP connection settings required for authentication.

The krb5.conf file includes critical configuration details, such as:

  • LDAP Server IP or Hostname: Specifies the address of the LDAP server (typically an Active Directory server).
  • Encryption Type: Defines the encryption protocol used by the keytab file for secure communication.
  • Kerberos Realm: The domain or realm name associated with your LDAP server (e.g., CLUSTER.COM).
  • KDC (Key Distribution Center): The server responsible for issuing Kerberos tickets.

Example of a krb5.conf file that connects to a single LDAP server:

[libdefaults]
default_realm = CLUSTER.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
allow_weak_crypto = true
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
CLUSTER.COM = {
kdc = 130.1.1.50
admin_server = 130.1.1.50
}
[domain_realm]
.cluster.com = CLUSTER.COM
cluster.com = CLUSTER.COM

Example of a krb5.conf file with multiple LDAP servers:

To manage multiple Kafka clusters, each connected to a different LDAP server, you should use a single krb5.conf file that supports multiple LDAP servers.

In the following example, we define two domains/realms:

  1. CLUSTER.COM with an LDAP/Active Directory server at IP 130.1.1.50.
  2. LAB.COM with an LDAP/Active Directory server at IP 130.1.1.60.

Make sure to use uppercase letters for realm names, as shown in the examples.

[libdefaults]
default_realm = CLUSTER.COM
dns_lookup_kdc = false
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
allow_weak_crypto = true
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
kdc_timeout = 3000
[realms]
CLUSTER.COM = {
kdc = 130.1.1.50
admin_server = 130.1.1.50
}
LAB.COM = {
kdc = 130.1.1.60
admin_server = 130.1.1.60
}
[domain_realm]
.lab.com = LAB.COM
lab.com = LAB.COM
.cluster.com = CLUSTER.COM
cluster.com = CLUSTER.COM

Truststore file

A Truststore is a secure file used by Java to store public certificates that are trusted by the application. In our case, the Truststore file will include the Kafka cluster’s public certificate, ensuring secure communication between Kafka UI and the cluster.

How to generate a Truststore file
The process of generating a Truststore file is straightforward and can be performed on any machine with Java installed. In this example, we use a Linux machine:

  1. Login to your Linux box
  2. Install Java
  3. Obtain the Kafka cluster’s public certificate. In this example, the file is named kafka-public.cer.
  4. Run the following command to import the certificate into a Truststore:
keytool -import -alias kafka-cluster -file kafka-public.cer -keystore my.truststore

5. Set the Truststore Password: During the process, you’ll be prompted to set a password for the Truststore. In our example, the password is zaq1234.

This command generates a file named my.truststore, which contains the Kafka cluster's public certificate.
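The steps above can be run non-interactively as well. The sketch below uses a throwaway self-signed certificate as a stand-in for kafka-public.cer, and the -noprompt and -storepass flags suppress the interactive prompts; on a real setup you would start from step 3 with your cluster's actual certificate.

```shell
# Build a truststore end-to-end with a demo certificate (stand-in for kafka-public.cer).
if command -v keytool >/dev/null 2>&1; then
  # Generate a throwaway self-signed certificate to play the role of the cluster cert.
  keytool -genkeypair -alias demo -keyalg RSA -keysize 2048 -validity 1 \
    -dname "CN=kafka-demo" -keystore demo.jks -storepass zaq1234 -keypass zaq1234
  keytool -exportcert -alias demo -keystore demo.jks -storepass zaq1234 -file kafka-public.cer
  # The actual step from the blog, made non-interactive.
  keytool -import -noprompt -alias kafka-cluster -file kafka-public.cer \
    -keystore my.truststore -storepass zaq1234
  # Confirm the certificate landed in the truststore.
  keytool -list -keystore my.truststore -storepass zaq1234 | grep -i kafka-cluster
else
  echo "keytool not found - install a JDK/JRE first"
fi
```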

Docker Compose file configuration

Once all the required files are ready, the next step is to configure the Docker Compose file and set up the necessary folder structure on the server running Docker Compose.

  1. Copy the krb5.conf file: place the krb5.conf file in your home directory, where the docker-compose.yaml file will be located.
  2. Create a new folder named cluster and copy the Truststore and keytab files into it.

If you want to add another Kafka cluster that connects to a different domain, create a new folder containing the relevant Truststore and keytab files for that cluster, and make sure the krb5.conf file has entries for the additional LDAP server.
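The expected layout next to docker-compose.yaml can be sketched as follows. Placeholder files are created here purely to illustrate; use your real truststore, keytab, and krb5.conf.

```shell
# Lay out the folder structure expected by the compose file below.
touch krb5.conf my.truststore daviduser-keytab   # placeholders for the real files
mkdir -p cluster
mv my.truststore daviduser-keytab cluster/
find . -maxdepth 2 \( -name 'krb5.conf' -o -name 'my.truststore' -o -name 'daviduser-keytab' \) | sort
```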

Create the docker-compose file

Create a Docker Compose file. This file consists of three key configuration sections:

  1. Extra Hosts Configuration
    In this section, define the fully qualified domain names (FQDNs) and IP addresses of all brokers in the Kafka cluster.
  • Ensure that the hostnames match the exact broker hostnames.
  • You can verify the broker FQDNs by running the following command on the Kafka broker: hostname -f

2. Volume Configuration

In this section, specify the volumes to mount the required configuration files into the Docker container.

  • Mount the Truststore, krb5.conf, and keytab files based on the file structure defined earlier.
  • These files are essential for secure communication and authentication with the Kafka cluster.

3. Environment Variables
This section specifies the parameters required to connect to the Kafka cluster.

  • Typically, secure Kafka clusters use port 9093 for communication.
  • Typically, in secure Kafka clusters, the Kafka Kerberos service name is kafka.

docker-compose.yaml

The docker-compose file below includes both the non-secure and the secure (TLS + Kerberos) Kafka clusters.

---
version: '3.4'
services:
  restarter:
    image: docker:cli
    volumes: [ "/var/run/docker.sock:/var/run/docker.sock" ]
    command: [ "/bin/sh", "-c", "while true; do sleep 43200; docker restart kafka-ui; done" ]
    restart: always
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    restart: always
    ports:
      - 80:8080
    extra_hosts:
      - "kafka1-nonesecure:130.1.1.1"
      - "kafka2-nonesecure:130.1.1.2"
      - "kafka3-nonesecure:130.1.1.3"
      - "kafka4-nonesecure:130.1.1.4"
      - "kafka5-nonesecure:130.1.1.5"
      - "kafka6-nonesecure:130.1.1.6"
      - "kafka1.cluster.com:130.1.1.51"
      - "kafka2.cluster.com:130.1.1.52"
      - "kafka3.cluster.com:130.1.1.53"
    environment:
      #### Unsecure kafka
      KAFKA_CLUSTERS_0_NAME: unsecure-kafka-cluster
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka1-nonesecure:9092

      #### Secure kafka connected to cluster.com domain TLS + KERBEROS
      KAFKA_CLUSTERS_1_NAME: secure-Kafka-cluster
      KAFKA_CLUSTERS_1_PROPERTIES_SECURITY_PROTOCOL: SASL_SSL
      KAFKA_CLUSTERS_1_PROPERTIES_SASL_MECHANISM: GSSAPI
      KAFKA_CLUSTERS_1_PROPERTIES_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/etc/kafkaui/cluster/daviduser-keytab" principal="daviduser@CLUSTER.COM";
      KAFKA_CLUSTERS_1_PROPERTIES_SASL_KERBEROS_SERVICE_NAME: "kafka"
      KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: kafka1.cluster.com:9093,kafka2.cluster.com:9093,kafka3.cluster.com:9093
      KAFKA_CLUSTERS_1_SSL_TRUSTSTORELOCATION: /etc/kafkaui/cluster/my.truststore
      KAFKA_CLUSTERS_1_SSL_TRUSTSTOREPASSWORD: zaq1234
      KAFKA_CLUSTERS_1_METRICS_PORT: 8080
      KAFKA_CLUSTERS_1_METRICS_TYPE: PROMETHEUS
      KAFKA_CLUSTERS_1_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION

      #### General configuration
      ## If you wish to enable configuration from the UI, enable this value
      # DYNAMIC_CONFIG_ENABLED: 'true'
    volumes:
      - ./cluster/my.truststore:/etc/kafkaui/cluster/my.truststore:U
      - ./cluster/daviduser-keytab:/etc/kafkaui/cluster/daviduser-keytab:U
      - ./krb5.conf:/etc/krb5.conf:U
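With the files in place, a quick way to catch indentation or typo mistakes before starting anything is to let Compose validate the file. This assumes Docker is installed and docker-compose.yaml is the file above.

```shell
# Validate the compose file without starting any containers.
if command -v docker >/dev/null 2>&1; then
  docker compose -f docker-compose.yaml config --quiet \
    && echo "compose file parses cleanly" \
    || echo "compose file failed validation"
else
  echo "docker not found - skipping validation"
fi
```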

Integration with a Secure Kafka Cluster — TLS only

In this section, I will describe how to integrate with a Kafka cluster that has TLS enabled. The integration for a TLS-only cluster primarily requires a Truststore file.

Truststore file

The Truststore file is generated exactly as described in the Truststore file section of the TLS + Kerberos integration above: import the Kafka cluster's public certificate with keytool into my.truststore and set a Truststore password (zaq1234 in our example).

Create the docker-compose file

Once the required files are ready, the next step is to configure the Docker Compose file and set up the necessary folder structure on the server running Docker Compose.

  1. Create a new folder named cluster-tls and copy the Truststore file (tls.truststore in this example) into it.

Then create the Docker Compose file. This file consists of three key configuration sections:

  1. Extra Hosts Configuration
    In this section, define the fully qualified domain names (FQDNs) and IP addresses of all brokers in the Kafka cluster.
  • Ensure that the hostnames match the exact broker hostnames.
  • You can verify the broker FQDNs by running the following command on the Kafka broker: hostname -f

2. Volume Configuration

In this section, specify the volumes to mount the required configuration files into the Docker container.

  • Mount the Truststore file based on the file structure defined earlier.
  • This file is essential for secure communication with the Kafka cluster.

3. Environment Variables
This section specifies the parameters required to connect to the Kafka cluster.

  • Typically, secure Kafka clusters use port 9093 for communication.

docker-compose.yaml

The following docker-compose file includes the non-secure cluster, the secure Kafka cluster with TLS and Kerberos, and the secure Kafka cluster with TLS only.

---
version: '3.4'
services:
  restarter:
    image: docker:cli
    volumes: [ "/var/run/docker.sock:/var/run/docker.sock" ]
    command: [ "/bin/sh", "-c", "while true; do sleep 43200; docker restart kafka-ui; done" ]
    restart: always
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    restart: always
    ports:
      - 80:8080
    extra_hosts:
      - "kafka1-nonesecure:130.1.1.1"
      - "kafka2-nonesecure:130.1.1.2"
      - "kafka3-nonesecure:130.1.1.3"
      - "kafka4-nonesecure:130.1.1.4"
      - "kafka5-nonesecure:130.1.1.5"
      - "kafka6-nonesecure:130.1.1.6"
      - "kafka1.cluster.com:130.1.1.51"
      - "kafka2.cluster.com:130.1.1.52"
      - "kafka3.cluster.com:130.1.1.53"
      - "kafka1-tls.cluster.com:130.1.1.54"
      - "kafka2-tls.cluster.com:130.1.1.55"
      - "kafka3-tls.cluster.com:130.1.1.56"
    environment:
      #### Unsecure kafka
      KAFKA_CLUSTERS_0_NAME: unsecure-kafka-cluster
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka1-nonesecure:9092

      #### Secure kafka connected to cluster.com domain TLS + KERBEROS
      KAFKA_CLUSTERS_1_NAME: secure-Kafka-cluster
      KAFKA_CLUSTERS_1_PROPERTIES_SECURITY_PROTOCOL: SASL_SSL
      KAFKA_CLUSTERS_1_PROPERTIES_SASL_MECHANISM: GSSAPI
      KAFKA_CLUSTERS_1_PROPERTIES_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/etc/kafkaui/cluster/daviduser-keytab" principal="daviduser@CLUSTER.COM";
      KAFKA_CLUSTERS_1_PROPERTIES_SASL_KERBEROS_SERVICE_NAME: "kafka"
      KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: kafka1.cluster.com:9093,kafka2.cluster.com:9093,kafka3.cluster.com:9093
      KAFKA_CLUSTERS_1_SSL_TRUSTSTORELOCATION: /etc/kafkaui/cluster/my.truststore
      KAFKA_CLUSTERS_1_SSL_TRUSTSTOREPASSWORD: zaq1234
      KAFKA_CLUSTERS_1_METRICS_PORT: 8080
      KAFKA_CLUSTERS_1_METRICS_TYPE: PROMETHEUS
      KAFKA_CLUSTERS_1_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION

      #### Secure kafka connected to cluster.com domain TLS only
      KAFKA_CLUSTERS_2_NAME: tls-Kafka-cluster
      KAFKA_CLUSTERS_2_PROPERTIES_SECURITY_PROTOCOL: SSL
      KAFKA_CLUSTERS_2_BOOTSTRAPSERVERS: kafka1-tls.cluster.com:9093,kafka2-tls.cluster.com:9093,kafka3-tls.cluster.com:9093
      KAFKA_CLUSTERS_2_SSL_TRUSTSTORELOCATION: /etc/kafkaui/cluster-tls/tls.truststore
      KAFKA_CLUSTERS_2_SSL_TRUSTSTOREPASSWORD: zaq1234
      KAFKA_CLUSTERS_2_METRICS_PORT: 8080
      KAFKA_CLUSTERS_2_METRICS_TYPE: PROMETHEUS
      KAFKA_CLUSTERS_2_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION

      #### General configuration
      ## If you wish to enable configuration from the UI, enable this value
      # DYNAMIC_CONFIG_ENABLED: 'true'
    volumes:
      - ./cluster/my.truststore:/etc/kafkaui/cluster/my.truststore:U
      - ./cluster/daviduser-keytab:/etc/kafkaui/cluster/daviduser-keytab:U
      - ./cluster-tls/tls.truststore:/etc/kafkaui/cluster-tls/tls.truststore:U
      - ./krb5.conf:/etc/krb5.conf:U

Execution — Run Kafka UI Tool

Start the container by executing:

docker compose up

Then log in to Kafka UI at http://<your-linux-box>:8080 (use the host port you mapped in your Compose file; the secure examples above publish port 80).
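Once the container is up, you can probe the UI from the shell before opening a browser. Kafka UI is a Spring Boot application, so I am assuming the standard /actuator/health endpoint here; adjust the port to match your compose port mapping.

```shell
# Probe the Kafka UI health endpoint (port 8080 matches the first compose example).
curl -sf http://localhost:8080/actuator/health \
  || echo "Kafka UI is not answering yet - check 'docker compose logs kafka-ui'"
```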

(Screenshots: login page, brokers view, topics view, producing a message to a topic, and reviewing messages in a topic.)

Running solution based on Helm chart

Before following these steps, I strongly recommend reviewing the Docker Compose solution above, where I explained the relevant configuration and requirements in detail. This section assumes you have read it.

In this section, I will explain how to deploy Kafka UI using a Helm chart. The Kafka UI application will connect to an unsecured Kafka cluster and to a Kafka cluster configured with TLS and Kerberos.

Prerequisites

  1. A Kubernetes cluster.
  2. The Kubernetes cluster must be able to resolve the Kafka brokers' FQDNs.
  3. A Linux box with helm and kubectl installed and configured to work with the Kubernetes cluster.

Create and upload the Kubernetes objects

Upload truststore file as a secret

This step is required to connect to a Kafka cluster that is configured with TLS and/or Kerberos.

  1. Create the Truststore file. Details can be found in the Truststore file section earlier in this blog.
  2. Upload the file as a secret by running the following command: kubectl create secret generic truststore --from-file=truststore.jks=path/to/my.truststore

Upload krb5.conf file as a configmap

This step is required to connect to a Kafka cluster that is configured with Kerberos.

  1. Create the krb5.conf file. Details can be found in the krb5.conf file section earlier in this blog.
  2. Upload the file as a ConfigMap by running the following command: kubectl create configmap krb5 --from-file=path/to/krb5.conf

Upload keytab file as a secret

This step is required to connect to a Kafka cluster that is configured with Kerberos.

  1. Create the keytab file. Details can be found in the keytab file section earlier in this blog.
  2. Upload the file as a secret by running the following command: kubectl create secret generic keytab --from-file=daviduser-keytab=path/to/daviduser-keytab

Update helm repository

  1. Log in to the Linux box running helm.
  2. Run the following command to add the Kafka UI repository:
    helm repo add kafka-ui https://provectus.github.io/kafka-ui-charts

Create a customized values.yaml

In this section I will explain how to create a customized values YAML file that connects to the unsecured Kafka cluster and to the Kafka cluster configured with TLS and Kerberos.

  1. Log in to the Linux box running helm.
  2. Create a new file kafka-ui-custom-value.yaml including the following configuration.

Cluster 0 is the unsecured Kafka cluster.

Cluster 1 is the Kafka cluster enabled with TLS and Kerberos.

envs:
  secret: {}
  config:
    KAFKA_CLUSTERS_0_NAME: "unsecure-kafka-cluster"
    KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: "kafka1-nonesecure:9092"
    KAFKA_CLUSTERS_1_NAME: "secure-Kafka-cluster"
    KAFKA_CLUSTERS_1_PROPERTIES_SECURITY_PROTOCOL: "SASL_SSL"
    KAFKA_CLUSTERS_1_PROPERTIES_SASL_MECHANISM: "GSSAPI"
    KAFKA_CLUSTERS_1_PROPERTIES_SASL_JAAS_CONFIG: com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/kafka-ui/keytab/daviduser-keytab" principal="daviduser@CLUSTER.COM";
    KAFKA_CLUSTERS_1_PROPERTIES_SASL_KERBEROS_SERVICE_NAME: "kafka"
    KAFKA_CLUSTERS_1_BOOTSTRAPSERVERS: "kafka1.cluster.com:9093,kafka2.cluster.com:9093,kafka3.cluster.com:9093"
    KAFKA_CLUSTERS_1_SSL_TRUSTSTORELOCATION: "/kafka-ui/truststore/truststore.jks"
    KAFKA_CLUSTERS_1_SSL_TRUSTSTOREPASSWORD: "zaq1234"
    KAFKA_CLUSTERS_1_METRICS_PORT: "8080"
    KAFKA_CLUSTERS_1_METRICS_TYPE: "PROMETHEUS"
    KAFKA_CLUSTERS_1_PROPERTIES_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: '' # DISABLE COMMON NAME VERIFICATION
    AUTH_TYPE: "DISABLED"
    MANAGEMENT_HEALTH_LDAP_ENABLED: "FALSE"
volumeMounts:
  - mountPath: /kafka-ui/truststore
    name: truststore
  - mountPath: /etc/krb5.conf
    name: krb5
    subPath: krb5.conf
  - mountPath: /kafka-ui/keytab/daviduser-keytab
    name: keytab
    subPath: daviduser-keytab
volumes:
  - name: truststore
    secret:
      defaultMode: 420
      secretName: truststore
  - name: krb5
    configMap:
      name: krb5
  - name: keytab
    secret:
      defaultMode: 420
      secretName: keytab

Execute the helm installation

  1. Log in to the Linux box running helm.
  2. Run the following command to install the chart:
    helm install kafka-ui -f ./kafka-ui-custom-value.yaml kafka-ui/kafka-ui

If you liked this blog, don't forget to clap and follow me on both Medium and LinkedIn.

www.linkedin.com/in/davidzbeda

Written by David (Dudu) Zbeda

DevOps | Infrastructure Architect | System Integration | Professional Services | Leading Teams & Training Future Experts | Linkedin: linkedin.com/in/davidzbeda
