
How to Use EFK in Kubernetes

This article covers how to use EFK (Elasticsearch, Fluentd, Kibana) in Kubernetes. It is meant as a practical reference, so feel free to follow along.


I: Preface

1. When installing the Kubernetes cluster we already downloaded the archive https://dl.k8s.io/v1.8.5/kubernetes-client-linux-amd64.tar.gz. After extracting it, the cluster/addons directory contains the YAML files for the various add-ons; in most cases only minor changes are needed before they can be used.
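A minimal sketch of unpacking the archive and locating the add-on manifests (assuming the archive unpacks into a kubernetes/ directory containing cluster/addons; fluentd-elasticsearch is the subdirectory where the EFK manifests normally live):

tar -xzf kubernetes-client-linux-amd64.tar.gz
ls kubernetes/cluster/addons/fluentd-elasticsearch/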

2. Setting up a Kubernetes cluster involves downloading many images. One option is to buy an Alibaba Cloud ECS instance located in Hong Kong, download the images there, export them with docker save -o, and then either import them on the nodes with docker load or push them to a personal image registry.
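For example, the Elasticsearch image used below could be exported on the ECS host and imported on a cluster node roughly like this (a sketch; the tar file name is arbitrary):

# on the host that can reach gcr.io
docker pull gcr.io/google_containers/elasticsearch:v2.4.1-2
docker save -o elasticsearch-v2.4.1-2.tar gcr.io/google_containers/elasticsearch:v2.4.1-2

# on each Kubernetes node, after copying the tar file over
docker load -i elasticsearch-v2.4.1-2.tar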

3. Starting with Kubernetes 1.8, the EFK add-on deploys elasticsearch-logging as a StatefulSet, but a bug keeps the elasticsearch-logging-0 pod from ever being created successfully. It is therefore recommended to stick with the pre-1.8 manifests, which use a ReplicationController.

4. For the EFK installation to succeed, kube-dns must be installed first; this was covered in an earlier article.
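Whether kube-dns is up can be checked, for example, like this (a sketch; k8s-app=kube-dns is the label used by the standard kube-dns add-on):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns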

5. The Elasticsearch and Kibana versions used in the EFK installation must be compatible. The images used here are listed below (a retagging sketch follows the list):
gcr.io/google_containers/elasticsearch:v2.4.1-2

gcr.io/google_containers/fluentd-elasticsearch:1.22

gcr.io/google_containers/kibana:v4.6.1-1
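If a personal registry is used instead of docker save/load, the images can be retagged and pushed roughly as follows (a sketch; registry.example.com/library is a placeholder for your own registry, and the image references in the manifests below would then have to be changed accordingly):

docker pull gcr.io/google_containers/fluentd-elasticsearch:1.22
docker tag gcr.io/google_containers/fluentd-elasticsearch:1.22 registry.example.com/library/fluentd-elasticsearch:1.22
docker push registry.example.com/library/fluentd-elasticsearch:1.22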


II: YAML Files

efk-rbac.yaml


apiVersion: v1
kind: ServiceAccount
metadata:
  name: efk
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: efk
subjects:
  - kind: ServiceAccount
    name: efk
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

es-controller.yaml


apiVersion: v1
kind: ReplicationController
metadata:
  name: elasticsearch-logging-v1
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 2
  selector:
    k8s-app: elasticsearch-logging
    version: v1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: efk
      containers:
      - image: gcr.io/google_containers/elasticsearch:v2.4.1-2
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-persistent-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: es-persistent-storage
        emptyDir: {}

es-service.yaml


apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging

fluentd-es-ds.yaml


apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: efk
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.22
        command:
          - '/bin/sh'
          - '-c'
          - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: "node.alpha.kubernetes.io/ismaster"
        effect: "NoSchedule"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

kibana-controller.yaml  One point needs special mention here: the value of KIBANA_BASE_URL must be set to an empty string, because the default value causes problems when accessing Kibana (a quick way to verify the setting on a running cluster is sketched after the manifest).


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      serviceAccountName: efk
      containers:
      - name: kibana-logging
        image: gcr.io/google_containers/kibana:v4.6.1-1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        env:
          - name: "ELASTICSEARCH_URL"
            value: "http://elasticsearch-logging:9200"
          - name: "KIBANA_BASE_URL"
            value: ""
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
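Once the Deployment has been created, the empty KIBANA_BASE_URL value can be double-checked with jsonpath, for example (a sketch):

kubectl -n kube-system get deployment kibana-logging -o jsonpath='{.spec.template.spec.containers[0].env}'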

kibana-service.yaml


apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging


III: Startup and Verification

1. Create the resources:
kubectl create -f .
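Assuming the six manifests above are in the current directory, they can also be created one by one (a sketch):

kubectl create -f efk-rbac.yaml
kubectl create -f es-controller.yaml
kubectl create -f es-service.yaml
kubectl create -f fluentd-es-ds.yaml
kubectl create -f kibana-controller.yaml
kubectl create -f kibana-service.yaml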

2. Use kubectl logs -f to check the logs of the relevant pods and confirm that they started correctly. Note that the kibana-logging-* pod takes a while to start.
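For example, the pods can be listed and a pod's log followed like this (a sketch; actual pod names differ per cluster, <kibana-pod-name> is a placeholder):

kubectl get pods -n kube-system | grep -E 'elasticsearch-logging|fluentd-es|kibana-logging'
kubectl logs -f -n kube-system <kibana-pod-name>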

3. Verify Elasticsearch (a proxy can be created with kubectl proxy; a sketch follows the output below).
http://IP:PORT/_cat/nodes?v


host      ip        heap.percent ram.percent load node.role master name
10.1.88.4 10.1.88.4            9          87 0.45 d         m      elasticsearch-logging-v1-hnfv2
10.1.67.4 10.1.67.4            6          91 0.03 d         *      elasticsearch-logging-v1-zmtdl
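The IP:PORT placeholders can be reached, for example, by starting kubectl proxy and going through the API server's service proxy path (a sketch; 8001 is kubectl proxy's default port, and the path assumes the elasticsearch-logging Service defined above):

kubectl proxy --port=8001 &
curl 'http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/nodes?v'
curl 'http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cat/indices?v'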

http://IP:PORT/_cat/indices?v


health status index               pri rep docs.count docs.deleted store.size pri.store.size
green  open   logstash-2018.04.07   5   1        515            0      1.1mb        584.4kb
green  open   .kibana               1   1          2            0     22.2kb          9.7kb
green  open   logstash-2018.04.06   5   1      15364            0      7.3mb          3.6mb

4. Verify Kibana
http://IP:PORT/app/kibana#/discover?_g
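One simple way to reach the Kibana UI is to port-forward to the kibana-logging pod and open the URL locally (a sketch; <kibana-pod-name> is a placeholder for the actual pod name):

kubectl port-forward -n kube-system <kibana-pod-name> 5601:5601
# then open http://localhost:5601/app/kibana in a browser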

IV: Notes
To set up EFK successfully, pay attention to the following points:
1. Make sure kube-dns has been installed successfully.
2. In this setup, elasticsearch-logging is deployed as a ReplicationController.
3. The Elasticsearch and Kibana versions must be compatible.
4. Set the value of KIBANA_BASE_URL to an empty string ("").

Thank you for reading! That wraps up this article on how to use EFK in Kubernetes. Hopefully the content above is of some help; if you found it useful, feel free to share it with others.

