
Background

I am a frontend developer and have been working since 2015. I have found that most companies pay little attention to frontend tracking; even the teams that do it reasonably well usually just report product-metric data or frontend error data to the backend through an API, which makes it hard for frontend developers to look up their own business logs. After studying the topic for a while, I decided to build my own frontend logging platform, and I wrote this article to consolidate what I learned.

Prerequisites

  • Start the Docker containers with docker-compose: docker-compose up -d
  • Stop the Docker containers with docker-compose: docker-compose down
  • List the Docker containers with docker-compose: docker-compose ps
  • Enter a container: docker container exec -it <container name> bash
  • View a container's logs: docker container logs <container name>
  • Create a Kafka consumer: kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic quickstart-events --from-beginning
  • Create a Kafka producer: kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092

Data flow

graph LR
FrontendData[Frontend data] --> NodeService[Node.js service] --> LocalFile[Local log file] --> filebeat --> kafka --> logstash --> es --> MySQL[MySQL aggregation] --> dashboard[Dashboard]

Starting the project

Create an empty folder

mkdir elk && cd elk

Create docker-compose.yml with the following content

version: "2.2"
services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt http://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" http://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt http://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt http://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt http://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  kibana:
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=http://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  head:
    image: mobz/elasticsearch-head:5
    ports:
      - 9100:9100
  logstash:
    image: logstash:${ELASTIC_STACK_VERSION}
    ports:
      - 5300:5000
    volumes: 
      - ./logstash/pipeline/:/usr/share/logstash/pipeline
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9093:9093'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local

Create the .env environment variable file

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=abcd1234
# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=abcd1234
# Version of Elastic products
STACK_VERSION=8.0.1
# Set the cluster name
CLUSTER_NAME=docker-cluster
# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial
# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200
# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80
# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824
# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject
ELASTIC_STACK_VERSION=8.4.3

Create the Elasticsearch configuration file

The relative path is ./config/elasticsearch.yml, with the following content. It enables CORS so that elasticsearch-head can connect to the cluster.

cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
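
Not part of the original steps, but once es01 is up you can quickly confirm that CORS is in effect by sending a request with an Origin header and checking for the Access-Control-Allow-Origin response header:

curl -i -H "Origin: http://localhost:9100" http://localhost:9200
# the response headers should include: Access-Control-Allow-Origin: *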

Create the Logstash configuration files

  • The configuration file
mkdir -p logstash/config && cd logstash/config

Create the logstash.yml file with the following content

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://es01:9200" ]
xpack.monitoring.enabled: true
  • Go back to the logstash directory under the elk root directory
  • The Logstash pipeline file
mkdir -p pipeline && cd pipeline

Create the logstash.conf file with the following content; it connects to Kafka and writes the Kafka messages to Elasticsearch

input {
    kafka {
        id => "my_plugin_id"
        bootstrap_servers =>["kafka:9092"]
        topics => ["my-topic"]
        group_id => "filebeat"
        auto_offset_reset => "earliest"
        type => "pengclikafka"
        ssl_endpoint_identification_algorithm => ""
    }
}
output {
    elasticsearch {
        hosts => ["es01:9200"]
        index => "logstash-system-localhost-%{+YYYY.MM.dd}"
    }
}
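
Optionally (not in the original walkthrough), you can syntax-check this pipeline file inside the Logstash container before relying on it; --config.test_and_exit parses the configuration and exits:

logstash -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit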

Start docker-compose

Run the following in the directory that contains docker-compose.yml

docker-compose up -d

Check that the containers are running

Wait a couple of minutes, then run

docker-compose ps

If the output looks like the following, the containers are up. (The es nodes may report unhealthy here: their health check greps for a "missing authentication credentials" message that never appears once xpack.security.enabled is false, even though the nodes themselves are fine.)

elk-es01-1     "/bin/tini -- /usr/l…"  es01        running (unhealthy)  0.0.0.0:9200->9200/tcp, 9300/tcp
elk-es02-1     "/bin/tini -- /usr/l…"  es02        running (unhealthy)  9200/tcp, 9300/tcp
elk-es03-1     "/bin/tini -- /usr/l…"  es03        running (unhealthy)  9200/tcp, 9300/tcp
elk-head-1     "/bin/sh -c 'grunt s…"  head        running        0.0.0.0:9100->9100/tcp
elk-kafka-1     "/opt/bitnami/script…"  kafka        running        9092/tcp, 0.0.0.0:9093->9093/tcp
elk-kibana-1    "/bin/tini -- /usr/l…"  kibana       running (healthy)   0.0.0.0:5601->5601/tcp
elk-logstash-1   "/usr/local/bin/dock…"  logstash      running
elk-setup-1     "/bin/tini -- /usr/l…"  setup        running (healthy)   9200/tcp, 9300/tcp
elk-zookeeper-1   "/opt/bitnami/script…"  zookeeper      running        2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp

If a container fails

If a container fails to start, you can check its logs with the command below and search the error message.

docker container logs 容器id

Access elasticsearch-head

Open elasticsearch-head at http://localhost:9100/ and connect it to the ES cluster.

Pipeline walkthrough

The Node.js service

The frontend calls a Node.js API, and the service writes the reported data to a local file (a minimal browser-side reporting sketch is shown at the end of this section).

  • Log files are stored as /year/month/day/hour/minute.log
const express = require('express')
const fs = require('fs')
const path = require('path')
const os = require('os')
const app = express()
// Get the current date parts
const getDate = () => {
  const date = new Date();
  const year = date.getFullYear().toString()
  const month = (date.getMonth() + 1).toString();
  const day = date.getDate().toString();
  const hour = date.getHours().toString();
  const minutes = date.getMinutes().toString();
  return {
    date, year, month, day, hour, minutes
  }
}
// Build the log directory path (year/month/day/hour) and the file name (minute)
const getAbsolutePath = () => {
  const { year, month, day, hour, minutes } = getDate()
  const absolutePath = path.join(year, month, day, hour)
  return [absolutePath, minutes]
}
// Create the directory (and any missing parents) if it does not exist
const checkAndMdkirPath = (dirpath, mode = 0o777) => {
  try {
    if (!fs.existsSync(dirpath)) {
      fs.mkdirSync(dirpath, { recursive: true, mode })
    }
    return true
  } catch (err) {
    console.error('failed to create log directory', err)
    return false
  }
}
// Build a mock log entry
const getLogs = () => {
  const date = new Date();
  const message = 'test message'
  return JSON.stringify({ date, message })
}
// Append the log entry to the current minute's file
const fileLogs = () => {
  const [absolutePath, filepath] = getAbsolutePath()
  const mkdirsuccess = checkAndMdkirPath(absolutePath)
  if (!mkdirsuccess) return
  const logs = getLogs()
  fs.appendFile(`${absolutePath}/${filepath}.log`, logs + os.EOL, (err) => {
    if (err) throw err;
    console.log('The file has been saved!');
  });
  return logs
}
// Visiting localhost:3000 in the browser writes a log entry locally
app.get('/', function (req, res) {
  const logs = fileLogs()
  res.send(logs)
})
// Listen on port 3000
app.listen(3000)

Start the Node.js service with node ./index.js and visit localhost:3000; the browser returns the log entry that was just written.

A new year/month/day/hour directory appears under the service directory, and the file is named after the current minute with a .log extension.
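
As promised above, here is a minimal sketch of how a page might report data to this service. It assumes a hypothetical POST /log endpoint that accepts JSON (the server above only exposes GET /, so such an endpoint would still need to be added); navigator.sendBeacon is used so reports survive page unloads, with fetch as a fallback:

// Minimal browser-side reporter (sketch; /log is a hypothetical endpoint)
const ENDPOINT = 'http://localhost:3000/log'

function report(event) {
  const payload = JSON.stringify({ ...event, ts: Date.now(), url: location.href })
  // sendBeacon queues the request even while the page is unloading
  if (navigator.sendBeacon && navigator.sendBeacon(ENDPOINT, payload)) return
  // Fallback when sendBeacon is unavailable or refuses the payload
  fetch(ENDPOINT, { method: 'POST', body: payload, keepalive: true }).catch(() => {})
}

// Example: report uncaught errors
window.addEventListener('error', (e) => {
  report({ type: 'js_error', message: e.message, stack: e.error && e.error.stack })
})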

filebeat

Filebeat watches the log files produced by the Node.js service in the previous step and picks up their contents. Why not use Logstash here? Because Filebeat is lighter than Logstash and uses less memory; as long as you do not need to transform the data (Logstash has the advantage when it comes to parsing and filtering), Filebeat is the better fit.

# Download Filebeat for macOS; for other platforms, see the Filebeat downloads page
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.4.3-darwin-x86_64.tar.gz
tar xzvf filebeat-8.4.3-darwin-x86_64.tar.gz
graph LR
LocalFiles[Local log files] --> Filebeat[Filebeat picks up the logs]

Extract the archive, cd into the extracted folder, and change filebeat.yml to the following content, which reads the log files under the Node.js project and prints them to the console.

filebeat.inputs:
- type: filestream
  paths:
    - /absolute/path/to/the/nodejs/project/**/*.log
output.console:
  pretty: true
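
Optionally (my own habit, not part of the original steps), you can validate the configuration before starting Filebeat:

./filebeat test config -c filebeat.yml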

Run the command

./filebeat -e -c filebeat.yml

The output looks like this:

The message field contains the user log we mocked. Now press Ctrl + C to stop the Filebeat process and run the command above again: nothing is printed this time, because Filebeat has already consumed the existing log data. To re-emit the earlier content, delete Filebeat's registry with rm -rf data/registry/filebeat. The registry data looks like this:

{"op":"set","id":1}
{"k":"filestream::.global::native::19630011-16777223","v":{"cursor":null,"meta":{"source":"/Users/xxx/tools/fe-log-server/2022/10/14/17/27.log","identifier_name":"native"},"ttl":0,"updated":[281470681743360,18446744011573954816]}}
{"op":"set","id":2}
{"k":"filestream::.global::native::19630011-16777223","v":{"updated":[2061957913080,1665972877],"cursor":null,"meta":{"source":"/Users/xxx/tools/fe-log-server/2022/10/14/17/27.log","identifier_name":"native"},"ttl":1800000000000}}
{"op":"set","id":3}
{"k":"filestream::.global::native::19630011-16777223","v":{"updated":[2061958151080,1665972877],"cursor":{"offset":61},"meta":{"identifier_name":"native","source":"/Users/xxx/tools/fe-log-server/2022/10/14/17/27.log"},"ttl":1800000000000}}

Here offset records how far Filebeat has read into the file; you can also edit this field by hand and restart Filebeat to observe the effect.

kafka

Think about what happens when the log volume gets very large: writing logs straight into ES can easily cause backlogs and data loss. To solve this we introduce a message broker, Kafka. If you started docker-compose as described above, Kafka is already running, so let's test it.

  • First, enter the Kafka container
  1. Find the Kafka container name with docker-compose ps; suppose it is elk-kafka-1
  2. Run the command below to enter the Kafka container
docker container exec -it elk-kafka-1 bash
  • Create a producer
kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
  • Create a consumer
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic quickstart-events --from-beginning

You can then type anything into the Kafka producer terminal, and the consumer terminal will print the corresponding messages.


  • Create a topic
kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 \
  --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
  • List the topics
kafka-topics.sh --list --bootstrap-server localhost:9092

For more detailed operations, see the article 《真的，Kafka 入门一篇文章就够了》.

Shipping logs from Filebeat to Kafka

  • Delete Filebeat's offset registry
rm -rf data/registry/filebeat
  • Change Filebeat's filebeat.yml to the following
filebeat.inputs:
- type: filestream
  paths: 
    - '/Users/pengcli/tools/fe-log-server/**/*.log'
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["localhost:9093"]
  # message topic selection + partitioning
  topic: 'my-topic'
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  • Restart Filebeat
./filebeat -e -c filebeat.yml
  • Enter the Kafka container and start a consumer
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
  • Visit localhost:3000 to hit the Node.js service: Node.js first writes the log data to a local file, then Filebeat, watching that file for changes (in production you can use Filebeat's container input instead; see the sketch after this list), ships the data to Kafka. The consumer prints the corresponding log messages.
  • Data flow
graph LR
FrontendData[Frontend data] --> NodeService[Node.js service] --> LocalFile[Local log file] --> filebeat --> kafka
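
A minimal sketch of the container-based input mentioned above, assuming the Node.js service runs as a Docker container and logs to stdout; the log path and the add_docker_metadata processor are assumptions on my part, not part of the original setup:

filebeat.inputs:
- type: container
  paths:
    - /var/lib/docker/containers/*/*.log
processors:
  - add_docker_metadata: ~
output.kafka:
  hosts: ["kafka:9092"]
  topic: 'my-topic'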

Logstash reads the Kafka data and writes it to ES

Logstash is used much like Filebeat, except that it has an extra filter stage in the middle, which can clean and transform the data. This project does not use any filtering yet, so Filebeat could in fact replace Logstash here; I chose Logstash mainly to learn it. It consumes the Kafka data and finally writes it to ES.
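
As an illustration of that filter stage (not used in this project), a json filter could parse the JSON string that Filebeat puts into the message field into structured fields; a small sketch:

filter {
    json {
        source => "message"
        target => "log"
    }
}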

  • Enter the Logstash container; check the container name first, then run
docker container exec -it elk-logstash-1 bash
  • Consume the Kafka data
logstash -f /usr/share/logstash/pipeline/logstash.conf

Here /usr/share/logstash/pipeline/logstash.conf is mounted via the volumes of the logstash service in docker-compose; the relevant mapping is

  logstash:
    ...
    volumes: 
      - ./logstash/pipeline/:/usr/share/logstash/pipeline
      - xxx

We already created logstash.conf earlier under the project root, at ./logstash/pipeline/logstash.conf, with the following content

input {
    kafka {
        id => "my_plugin_id"
        bootstrap_servers =>["kafka:9092"]
        topics => ["my-topic"]
        group_id => "filebeat"
        auto_offset_reset => "earliest"
        type => "pengclikafka"
        ssl_endpoint_identification_algorithm => ""
    }
}
output {
    elasticsearch {
        hosts => ["es01:9200"]
        index => "logstash-system-localhost-%{+YYYY.MM.dd}"
    }
}

In short, it reads the Kafka data from the beginning, using my-topic as the topic, and writes it to ES under the index logstash-system-localhost-%{+YYYY.MM.dd}, where %{+YYYY.MM.dd} is a Logstash date pattern interpolated from the event timestamp.

Note that when the services are started through docker-compose, bootstrap_servers is kafka:9092, where kafka is the service name defined in the docker-compose file.

  • If you see the error Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting., run the command below to delete the .lock file
rm -r ./data/.lock

Displaying the data in Kibana

OK, so far we have successfully written data through the Node.js service into the ES cluster. Now we need to configure a Kibana data view: open http://localhost:5601/app/management/kibana/dataViews

Click the create data view button in the top right corner to open the edit page.

Once the data view is created, you can browse the data in Kibana Discover.

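
If you prefer to script this step rather than click through the UI, Kibana 8.x also exposes a data views API; the call below is a sketch from memory and worth double-checking against the Kibana docs for your version:

curl -X POST -u elastic:abcd1234 -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  http://localhost:5601/api/data_views/data_view \
  -d '{"data_view": {"title": "logstash-system-localhost-*"}}'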

Querying ES data from Node.js

So far we can successfully write data into the ES cluster and display it; at this point we have effectively built a frontend logging platform. If we also want a frontend monitoring platform that shows performance and error monitoring, we need to build a frontend dashboard and have Node.js read data from the ES cluster. Here is how:

const { Client } = require('@elastic/elasticsearch')
const client = new Client({
  node: 'http://localhost:9200',
})
const essearch = async (req, res, next) => {
  try {
    const result = await client.search({
      index: 'logstash-system-localhost-2022.10.13',
      body: {
        "query": {
          "match_all": {}
        }
      }
      ,
    })
    res.json(result)
  } catch (err) {
    res.json(err)
  }
}
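
essearch uses the standard Express (req, res, next) handler signature, so — as a small usage sketch, with a route path of my own choosing — it can be mounted directly on the app from the Node.js service above:

// Expose the ES query as an HTTP endpoint (route path is illustrative)
app.get('/api/logs', essearch)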

If you want to verify that the query part is correct, you can open the local Dev Tools console at http://localhost:5601/app/dev_tools#/console

Other Kibana query syntax is beyond the scope of this article; see 《Elasticsearch Query DSL查询入门》. You can also write a timer that fetches the ES data every minute and persists it into MySQL; the details are not covered here, but a rough sketch follows.
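
A minimal sketch of such a timer, reusing the index naming from above; the MySQL persistence is left as a comment because the table schema is up to you:

const { Client } = require('@elastic/elasticsearch')

const client = new Client({ node: 'http://localhost:9200' })

// Build today's index name, e.g. logstash-system-localhost-2022.10.14
const todayIndex = () => {
  const d = new Date()
  const pad = (n) => String(n).padStart(2, '0')
  return `logstash-system-localhost-${d.getFullYear()}.${pad(d.getMonth() + 1)}.${pad(d.getDate())}`
}

// Every minute, count the documents ingested during the last minute
setInterval(async () => {
  try {
    const result = await client.count({
      index: todayIndex(),
      body: { query: { range: { '@timestamp': { gte: 'now-1m' } } } },
    })
    console.log('logs in the last minute:', result.count)
    // TODO: persist result.count into MySQL here (e.g. with the mysql2 driver)
  } catch (err) {
    console.error('aggregation failed', err)
  }
}, 60 * 1000)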

Final words

That is the whole process of building a frontend logging platform, or a frontend monitoring platform. In a production environment I actually think it gets simpler: tools like the ELK stack would be packaged into Kubernetes, mostly from ready-made charts where we only need to write some configuration, and Node.js would write the logs into the container log files; the rest of the pipeline stays the same.

Reference links

  • Elasticsearch Query DSL查询入门
  • 今天聊:60 天急速自研-前端埋点监控盯梢体系
  • 一篇讲透自研的前端错误监控

Reference book

陈辰's book 《从零开始搭建前端监控平台》