[Cloud Native] Deploying ZooKeeper + Kafka on k8s

1. Overview

  • Apache ZooKeeper is a centralized service for maintaining configuration information, naming, and providing distributed synchronization and group services. The ZooKeeper project develops and maintains an open-source server that enables highly reliable distributed coordination. It can also be thought of as a distributed database with a special, tree-shaped structure. Official documentation: zookeeper.apache.org/doc/r3.8.0/ . For an introduction to ZooKeeper, you can also refer to my earlier article on the distributed open-source coordination service ZooKeeper.


  • Kafka was originally developed at LinkedIn. It is a distributed, partitioned, multi-replica messaging system that relies on ZooKeeper for coordination. Official documentation: kafka.apache.org/documentati… For an introduction to Kafka, you can also refer to my earlier article on Kafka fundamentals, installation, and basic operations.


2. Deploying ZooKeeper on k8s


1) Add the Helm repository

Chart package address: artifacthub.io/packages/he…

helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/zookeeper
tar -xf  zookeeper-10.2.1.tgz

2) Modify the configuration

  • Edit zookeeper/values.yaml
image:
  registry: myharbor.com
  repository: bigdata/zookeeper
  tag: 3.8.0-debian-11-r36
...
replicaCount: 3
...
service:
  type: NodePort
  nodePorts:
    # The default NodePort range is 30000-32767
    client: "32181"
    tls: "32182"
...
persistence:
  storageClass: "zookeeper-local-storage"
  size: "10Gi"
  # The directories must be created on the host machines in advance (see the sketch after this list)
  local:
    - name: zookeeper-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
    - name: zookeeper-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
    - name: zookeeper-2
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/zookeeper/data/data1"
...
# Enable Prometheus to access ZooKeeper metrics endpoint
metrics:
  enabled: true
  • Add zookeeper/templates/pv.yaml
{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
    - ReadWriteOnce
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}
  • Add zookeeper/templates/storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
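
The comment in values.yaml above notes that the local paths must already exist on each node, otherwise the PersistentVolumes cannot bind. A minimal preparation sketch, assuming SSH access to the three node hostnames used above:

for node in local-168-182-110 local-168-182-111 local-168-182-112; do
  # Create the ZooKeeper data directory referenced by persistence.local in values.yaml
  ssh $node "mkdir -p /opt/bigdata/servers/zookeeper/data/data1"
done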

3) Install

# Prepare the images first
docker pull docker.io/bitnami/zookeeper:3.8.0-debian-11-r36
docker tag docker.io/bitnami/zookeeper:3.8.0-debian-11-r36 myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36
docker push myharbor.com/bigdata/zookeeper:3.8.0-debian-11-r36
# Install
helm install zookeeper ./zookeeper -n zookeeper --create-namespace

NOTES

NAME: zookeeper
LAST DEPLOYED: Sun Sep 18 18:24:03 2022
NAMESPACE: zookeeper
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: zookeeper
CHART VERSION: 10.2.1
APP VERSION: 3.8.0
** Please be patient while the chart is being deployed **
ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:
    zookeeper.zookeeper.svc.cluster.local
To connect to your ZooKeeper server run the following commands:
    export POD_NAME=$(kubectl get pods --namespace zookeeper -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh
To connect to your ZooKeeper server from outside the cluster execute the following commands:
    export NODE_IP=$(kubectl get nodes --namespace zookeeper -o jsonpath="{.items[0].status.addresses[0].address}")
    export NODE_PORT=$(kubectl get --namespace zookeeper -o jsonpath="{.spec.ports[0].nodePort}" services zookeeper)
    zkCli.sh $NODE_IP:$NODE_PORT
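
Before checking further, it can help to wait until all three replicas report Ready; a minimal check using standard kubectl and the chart labels shown in the NOTES above:

kubectl wait --namespace zookeeper --for=condition=ready pod \
  -l app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper --timeout=300s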

Check the pod status:

kubectl get pods,svc -n zookeeper -owide


4) Test and verify

# Check each zookeeper pod and log in to one
kubectl exec -it zookeeper-0 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-1 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-2 -n zookeeper -- zkServer.sh status
kubectl exec -it zookeeper-0 -n zookeeper -- bash
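
Beyond zkServer.sh status, a quick read/write smoke test can be run with zkCli.sh against the cluster service; a minimal sketch (the /smoke-test znode is just an example path, not part of the chart):

kubectl exec -it zookeeper-0 -n zookeeper -- zkCli.sh -server zookeeper.zookeeper:2181 create /smoke-test "hello"
kubectl exec -it zookeeper-0 -n zookeeper -- zkCli.sh -server zookeeper.zookeeper:2181 get /smoke-test
kubectl exec -it zookeeper-0 -n zookeeper -- zkCli.sh -server zookeeper.zookeeper:2181 delete /smoke-test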


5) Prometheus monitoring

Prometheus: prometheus.k8s.local/targets?sea…


The collected metrics can be inspected with the following commands:

kubectl get --raw http://10.244.0.52:9141/metrics
kubectl get --raw http://10.244.1.101:9141/metrics
kubectl get --raw http://10.244.2.137:9141/metrics
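
The pod IPs above are specific to this environment; they can be listed before substituting them into the commands, for example:

kubectl get pods -n zookeeper -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'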

Grafana: grafana.k8s.local/ (username: admin; retrieve the password with the command below)

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Import the Grafana dashboard for cluster resource monitoring: 10465. Official dashboard download address: grafana.com/grafana/das…


6) Uninstall

helm uninstall zookeeper -n zookeeper
kubectl delete pod -n zookeeper `kubectl get pod -n zookeeper|awk 'NR>1{print $1}'` --force
kubectl patch ns zookeeper -p '{"metadata":{"finalizers":null}}'
kubectl delete ns zookeeper --force

3. Deploying Kafka on k8s

1) Add the Helm repository

Chart package address: artifacthub.io/packages/he…

helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/kafka
tar -xf kafka-18.4.2.tgz

2) Modify the configuration

  • Edit kafka/values.yaml
image:
  registry: myharbor.com
  repository: bigdata/kafka
  tag: 3.2.1-debian-11-r16
...
replicaCount: 3
...
service:
  type: NodePort
  nodePorts:
    client: "30092"
    external: "30094"
...
externalAccess:
  enabled: true
  service:
    type: NodePort
    nodePorts:
      - 30001
      - 30002
      - 30003
    useHostIPs: true
...
persistence:
  storageClass: "kafka-local-storage"
  size: "10Gi"
  # The directories must be created on the host machines in advance (see the sketch after this list)
  local:
    - name: kafka-0
      host: "local-168-182-110"
      path: "/opt/bigdata/servers/kafka/data/data1"
    - name: kafka-1
      host: "local-168-182-111"
      path: "/opt/bigdata/servers/kafka/data/data1"
    - name: kafka-2
      host: "local-168-182-112"
      path: "/opt/bigdata/servers/kafka/data/data1"
...
metrics:
  kafka:
    enabled: true
    image:
      registry: myharbor.com
      repository: bigdata/kafka-exporter
      tag: 1.6.0-debian-11-r8
  jmx:
    enabled: true
    image:
      registry: myharbor.com
      repository: bigdata/jmx-exporter
      tag: 0.17.1-debian-11-r1
    annotations:
      prometheus.io/path: "/metrics"
...
zookeeper:
  enabled: false
...
externalZookeeper:
  servers:
    - zookeeper-0.zookeeper-headless.zookeeper
    - zookeeper-1.zookeeper-headless.zookeeper
    - zookeeper-2.zookeeper-headless.zookeeper
  • Add kafka/templates/pv.yaml
{{- range .Values.persistence.local }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
  labels:
    name: {{ .name }}
spec:
  storageClassName: {{ $.Values.persistence.storageClass }}
  capacity:
    storage: {{ $.Values.persistence.size }}
  accessModes:
    - ReadWriteOnce
  local:
    path: {{ .path }}
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - {{ .host }}
---
{{- end }}
  • Add kafka/templates/storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: {{ .Values.persistence.storageClass }}
provisioner: kubernetes.io/no-provisioner
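
As with ZooKeeper, the local data directories referenced above must already exist on each node, and it is worth confirming that the added templates render before installing. A minimal sketch, assuming the chart was unpacked to ./kafka:

# Create the Kafka data directories referenced by persistence.local in values.yaml
for node in local-168-182-110 local-168-182-111 local-168-182-112; do
  ssh $node "mkdir -p /opt/bigdata/servers/kafka/data/data1"
done
# Render the chart locally and confirm the PV/StorageClass templates appear
helm template kafka ./kafka -n kafka | grep -E 'kind: (PersistentVolume|StorageClass)'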

3) Install

# Prepare the images first
docker pull docker.io/bitnami/kafka:3.2.1-debian-11-r16
docker tag docker.io/bitnami/kafka:3.2.1-debian-11-r16 myharbor.com/bigdata/kafka:3.2.1-debian-11-r16
docker push myharbor.com/bigdata/kafka:3.2.1-debian-11-r16
# kafka-exporter
docker pull docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8
docker tag docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r8 myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8
docker push myharbor.com/bigdata/kafka-exporter:1.6.0-debian-11-r8
# JMX exporter
docker pull docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1
docker tag docker.io/bitnami/jmx-exporter:0.17.1-debian-11-r1 myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1
docker push myharbor.com/bigdata/jmx-exporter:0.17.1-debian-11-r1
# Install
helm install kafka ./kafka -n kafka --create-namespace

NOTES

NAME: kafka
LAST DEPLOYED: Sun Sep 18 20:57:02 2022
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 18.4.2
APP VERSION: 3.2.1
---------------------------------------------------------------------------------------------
 WARNING
    By specifying "serviceType=LoadBalancer" and not configuring the authentication
    you have most likely exposed the Kafka service externally without any
    authentication mechanism.
    For security reasons, we strongly suggest that you switch to "ClusterIP" or
    "NodePort". As alternative, you can also configure the Kafka authentication.
---------------------------------------------------------------------------------------------
** Please be patient while the chart is being deployed **
Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:
    kafka.kafka.svc.cluster.local
Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:
    kafka-0.kafka-headless.kafka.svc.cluster.local:9092
    kafka-1.kafka-headless.kafka.svc.cluster.local:9092
    kafka-2.kafka-headless.kafka.svc.cluster.local:9092
To create a pod that you can use as a Kafka client run the following commands:
    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.2.1-debian-11-r16 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash
    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092,kafka-1.kafka-headless.kafka.svc.cluster.local:9092,kafka-2.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test
    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
To connect to your Kafka server from outside the cluster, follow the instructions below:
    Kafka brokers domain: You can get the external node IP from the Kafka configuration file with the following commands (Check the EXTERNAL listener)
        1. Obtain the pod name:
        kubectl get pods --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka"
        2. Obtain pod configuration:
        kubectl exec -it KAFKA_POD -- cat /opt/bitnami/kafka/config/server.properties | grep advertised.listeners
    Kafka brokers port: You will have a different node port for each Kafka broker. You can get the list of configured node ports using the command below:
        echo "$(kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].spec.ports[0].nodePort}' | tr ' ' '\n')"

Check the pod status:

kubectl get pods,svc -n kafka -owide
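
It is also worth confirming that each broker's PVC has bound to one of the local PVs created by the added template (PVs are cluster-scoped, so only the PVC listing is namespaced):

kubectl get pvc -n kafka
kubectl get pv | grep kafka-local-storage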


4) Test and verify

# Log in to a kafka pod
kubectl exec -it kafka-0 -n kafka -- bash

1. Create a topic (one replica, one partition)

--create: create a new topic
--topic: the name of the topic to create
--bootstrap-server: the Kafka connection address
--config: topic-level configuration overrides; for the parameter list see the Topic-level configuration section of the documentation
--partitions: the number of partitions for the topic, 1 by default
--replication-factor: the replication factor of each partition, 1 by default
kafka-topics.sh --create --topic test001 --bootstrap-server kafka.kafka:9092 --partitions 1 --replication-factor 1
# Describe the topic
kafka-topics.sh --describe --bootstrap-server kafka.kafka:9092  --topic test001


2. List topics

kafka-topics.sh --list --bootstrap-server kafka.kafka:9092

3. Producer/consumer test

[Producer]

kafka-console-producer.sh --broker-list kafka.kafka:9092 --topic test001
{"id":"1","name":"n1","age":"20"}
{"id":"2","name":"n2","age":"21"}
{"id":"3","name":"n3","age":"22"}

[Consumer]

# Consume from the beginning
kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --from-beginning
# Consume starting from a given offset of a specific partition; only one partition is specified here, so repeat the command for (or loop over) the other partitions as needed
kafka-console-consumer.sh --bootstrap-server kafka.kafka:9092 --topic test001 --partition 0 --offset 100 --group test001

4. Check consumer lag

kafka-consumer-groups.sh --bootstrap-server kafka.kafka:9092 --describe --group test001

5. Delete a topic

kafka-topics.sh --delete --topic test001 --bootstrap-server kafka.kafka:9092
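
Since externalAccess was enabled with NodePorts 30001-30003 in values.yaml, producing from outside the cluster can also be tested. A hedged sketch, assuming the Kafka CLI tools are installed on the client machine and the node hostname resolves from there:

# Any broker's NodePort (30001/30002/30003) can be used to bootstrap the client
kafka-console-producer.sh --bootstrap-server local-168-182-110:30001 --topic test001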

5) Prometheus monitoring

Prometheus: prometheus.k8s.local/targets?sea…


The collected metrics can be inspected with the following command:

kubectl get --raw http://10.244.2.165:9308/metrics

Grafana: grafana.k8s.local/ (username: admin; retrieve the password with the command below)

kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Import the Grafana dashboard for cluster resource monitoring: 11962. Official dashboard download address: grafana.com/grafana/das…

6) Uninstall

helm uninstall kafka -n kafka
kubectl delete pod -n kafka `kubectl get pod -n kafka|awk 'NR>1{print $1}'` --force
kubectl patch ns kafka  -p '{"metadata":{"finalizers":null}}'
kubectl delete ns kafka  --force

That wraps up deploying ZooKeeper + Kafka on k8s. If you have any questions, feel free to leave me a message; I will keep sharing more content on cloud native and big data topics.