Background
Terraform
Terraform [1] is an open-source infrastructure automation and orchestration tool from HashiCorp [2]. It manages infrastructure changes following the IaC (Infrastructure as Code) approach, is supported by public cloud vendors such as Amazon Web Services, GCP, and Azure, and has a wide range of community-contributed providers. It has become one of the most popular "infrastructure as code" practices. Terraform has the following advantages:
Multi-cloud deployment: Terraform suits multi-cloud scenarios, deploying similar infrastructure to Alibaba Cloud, other cloud providers, or on-premises data centers. Developers can use the same tool and similar configuration files to manage resources across different cloud providers.
Automated infrastructure management: Terraform supports reusable modules, which reduces deployment and management errors caused by human factors.
Infrastructure as code: Resources are managed and maintained as code, and infrastructure state can be saved, so users can track changes made to different components of the system and share these configurations with others.
ShardingSphere-Proxy
Apache ShardingSphere is a distributed database ecosystem that can transform any database into a distributed database and enhance the original database with capabilities such as data sharding, elastic scaling, and encryption.
Its design philosophy is Database Plus, which aims to build a standard and an ecosystem on top of heterogeneous databases. It focuses on fully using the compute and storage capabilities of existing databases rather than building a brand-new database. Standing above the database layer, it cares more about collaboration between databases than about the databases themselves.
ShardingSphere-Proxy is positioned as a transparent database proxy. In principle it supports any client that speaks the MySQL, PostgreSQL, or openGauss protocol, which makes it friendlier to heterogeneous languages and to operations scenarios. It is non-invasive to application code: users only need to change the database connection string to get features such as data sharding and read/write splitting. As part of the data infrastructure, its own high availability is therefore critical.
运用 Terraform 布置
We hope you will deploy and manage ShardingSphere-Proxy clusters the IaC way and enjoy the benefits IaC brings. With that in mind, we plan to use Terraform to create a highly available, multi-availability-zone ShardingSphere-Proxy cluster. Before writing any Terraform configuration, we first need to understand the basic architecture of a ShardingSphere-Proxy cluster:
We use ZooKeeper as the Governance Center. As you can see, ShardingSphere-Proxy itself is a stateless application, so in practice it is enough to put a load balancer in front of it and let the load balancer distribute traffic elastically across the instances. To ensure high availability of both the ZooKeeper cluster and the ShardingSphere-Proxy cluster, we will build them with the following architecture:
ZooKeeper cluster
Defining input variables
To make the configuration reusable, we define a series of variables:
variable "cluster_size" {
  type        = number
  description = "The cluster size; must match the number of availability zones"
}

variable "key_name" {
  type        = string
  description = "The SSH key pair for remote connections"
}

variable "instance_type" {
  type        = string
  description = "The EC2 instance type"
}

variable "vpc_id" {
  type        = string
  description = "The ID of the VPC"
}

variable "subnet_ids" {
  type        = list(string)
  description = "List of subnets in your VPC, sorted by availability zone"
}

variable "security_groups" {
  type        = list(string)
  default     = []
  description = "List of security groups; they must allow access to ports 2181, 2888, and 3888"
}

variable "hosted_zone_name" {
  type        = string
  default     = "shardingsphere.org"
  description = "The name of the private hosted zone"
}

variable "tags" {
  type        = map(any)
  description = "A map of tags for the zk instances; the default tag is Name=zk-$${count.idx}"
  default     = {}
}

variable "zk_version" {
  type        = string
  description = "The ZooKeeper version"
  default     = "3.7.1"
}

variable "zk_config" {
  default = {
    client_port = 2181
    zk_heap     = 1024
  }
  description = "The default config of the ZooKeeper servers"
}
These variables can also be changed when installing the ShardingSphere-Proxy cluster below.
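For instance, a `terraform.tfvars` for a three-node cluster spread over three availability zones might look like the following; every ID and value below is an illustrative placeholder, not part of the module:

```hcl
# terraform.tfvars -- example values only; substitute your own IDs.
cluster_size     = 3
key_name         = "my-keypair"                                # an existing EC2 key pair
instance_type    = "t3.medium"
vpc_id           = "vpc-0123456789abcdef0"
subnet_ids       = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]  # one per AZ
security_groups  = ["sg-0123456789abcdef0"]                    # must open 2181/2888/3888
hosted_zone_name = "shardingsphere.org"
zk_version       = "3.7.1"
```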
Configuring the ZooKeeper cluster
For the ZooKeeper service instances we use AWS's native amzn2-ami-hvm image. We deploy the ZooKeeper servers with the count meta-argument, which tells Terraform to create var.cluster_size nodes for the ZooKeeper cluster.
When creating the ZooKeeper instances, we use the ignore_changes argument to ignore manual tag changes, so that the instances are not recreated the next time Terraform runs.
We use cloud-init to initialize the ZooKeeper configuration; see [3] for details. We also create a domain name for each ZooKeeper server so that applications only need the domain name, avoiding problems caused by IP address changes when a ZooKeeper server restarts.
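The actual cloud-init template is the one in [3]. Purely as an illustration of how the variables passed to templatefile below (nodes, domain, index, client_port) could be consumed, a heavily simplified sketch of such a template might render zoo.cfg and the myid file like this; every path and detail here is an assumption, not the file the module actually uses:

```yaml
#cloud-config
# Simplified sketch of a ZooKeeper cloud-init template (not the real file from [3]).
write_files:
  - path: /opt/zookeeper/conf/zoo.cfg
    content: |
      tickTime=2000
      initLimit=10
      syncLimit=5
      dataDir=/opt/zookeeper/data
      clientPort=${client_port}
      %{ for n in nodes ~}
      server.${n}=zk-${n}.${domain}:2888:3888
      %{ endfor ~}
  - path: /opt/zookeeper/data/myid
    content: "${index}"
```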
data "aws_ami" "base" {
  owners = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-ebs"]
  }

  most_recent = true
}

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_network_interface" "zk" {
  count           = var.cluster_size
  subnet_id       = element(var.subnet_ids, count.index)
  security_groups = var.security_groups
}

resource "aws_instance" "zk" {
  count         = var.cluster_size
  ami           = data.aws_ami.base.id
  instance_type = var.instance_type
  key_name      = var.key_name

  network_interface {
    delete_on_termination = false
    device_index          = 0
    network_interface_id  = element(aws_network_interface.zk.*.id, count.index)
  }

  tags = merge(
    var.tags,
    {
      Name = "zk-${count.index}"
    }
  )

  user_data = base64encode(templatefile("${path.module}/cloud-init.yml", {
    version     = var.zk_version
    nodes       = range(1, var.cluster_size + 1)
    domain      = var.hosted_zone_name
    index       = count.index + 1
    client_port = var.zk_config["client_port"]
    zk_heap     = var.zk_config["zk_heap"]
  }))

  lifecycle {
    ignore_changes = [
      # Ignore changes to tags.
      tags
    ]
  }
}

data "aws_route53_zone" "zone" {
  name         = "${var.hosted_zone_name}."
  private_zone = true
}

resource "aws_route53_record" "zk" {
  count   = var.cluster_size
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = "zk-${count.index + 1}"
  type    = "A"
  ttl     = 60
  records = element(aws_network_interface.zk.*.private_ips, count.index)
}
Defining outputs
After terraform apply runs successfully, the IP addresses of the ZooKeeper instances and their corresponding domain names are printed.
output "zk_node_private_ip" {
  value       = aws_instance.zk.*.private_ip
  description = "The private ips of zookeeper instances"
}

output "zk_node_domain" {
  value       = [for v in aws_route53_record.zk.*.name : format("%s.%s", v, var.hosted_zone_name)]
  description = "The private domain names of zookeeper instances for use by ShardingSphere Proxy"
}
ShardingSphere-Proxy cluster
Defining input variables
Again, we define input variables to keep the configuration reusable.
variable "cluster_size" {
  type        = number
  description = "The cluster size; must match the number of availability zones"
}

variable "shardingsphere_proxy_version" {
  type        = string
  description = "The ShardingSphere-Proxy version"
}

variable "shardingsphere_proxy_asg_desired_capacity" {
  type        = string
  default     = "3"
  description = "The desired capacity is the initial capacity of the Auto Scaling group at the time of its creation and the capacity it attempts to maintain. See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-group.html#cfn-as-group-desiredcapacitytype. The default value is 3"
}

variable "shardingsphere_proxy_asg_max_size" {
  type        = string
  default     = "6"
  description = "The maximum size of the ShardingSphere-Proxy Auto Scaling group. The default value is 6"
}

variable "shardingsphere_proxy_asg_healthcheck_grace_period" {
  type        = number
  default     = 120
  description = "The amount of time, in seconds, that Amazon EC2 Auto Scaling waits before checking the health status of an EC2 instance that has come into service and marking it unhealthy due to a failed health check. See https://docs.aws.amazon.com/autoscaling/ec2/userguide/health-check-grace-period.html"
}

variable "image_id" {
  type        = string
  description = "The AMI ID"
}

variable "key_name" {
  type        = string
  description = "The SSH key pair for remote connections"
}

variable "instance_type" {
  type        = string
  description = "The EC2 instance type"
}

variable "vpc_id" {
  type        = string
  description = "The ID of your VPC"
}

variable "subnet_ids" {
  type        = list(string)
  description = "List of subnets in your VPC, sorted by availability zone"
}

variable "security_groups" {
  type        = list(string)
  default     = []
  description = "List of security group IDs"
}

variable "lb_listener_port" {
  type        = string
  description = "The load balancer listener port"
}

variable "hosted_zone_name" {
  type        = string
  default     = "shardingsphere.org"
  description = "The name of the private hosted zone"
}

variable "zk_servers" {
  type        = list(string)
  description = "The ZooKeeper servers"
}
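With both sets of variables in place, a root module can wire the two pieces together. The sketch below assumes the ZooKeeper and proxy configurations live in ./zk and ./proxy; the module paths and all concrete values are assumptions for illustration:

```hcl
# Root module: feed the ZooKeeper domain names into the proxy module.
module "zk" {
  source        = "./zk" # assumed path
  cluster_size  = 3
  key_name      = "my-keypair"
  instance_type = "t3.medium"
  vpc_id        = "vpc-0123456789abcdef0"
  subnet_ids    = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]
}

module "proxy" {
  source                       = "./proxy" # assumed path
  cluster_size                 = 3
  shardingsphere_proxy_version = "5.3.1"
  image_id                     = "ami-0123456789abcdef0"
  key_name                     = "my-keypair"
  instance_type                = "t3.large"
  vpc_id                       = "vpc-0123456789abcdef0"
  subnet_ids                   = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]
  lb_listener_port             = "3307"
  zk_servers                   = module.zk.zk_node_domain # output defined earlier
}
```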
Configuring the Auto Scaling Group
We will create an Auto Scaling Group to manage the ShardingSphere-Proxy instances. The health check type of the Auto Scaling Group is set to "ELB", so that once the load balancer fails a health check against an instance, the Auto Scaling Group can promptly remove the bad node.
When creating the Auto Scaling Group, changes to load_balancers and target_group_arns are ignored. We again use cloud-init to configure the ShardingSphere-Proxy instances; see [4] for details.
resource "aws_launch_template" "ss" {
  name                                 = "shardingsphere-proxy-launch-template"
  image_id                             = var.image_id
  instance_initiated_shutdown_behavior = "terminate"
  instance_type                        = var.instance_type
  key_name                             = var.key_name

  iam_instance_profile {
    name = aws_iam_instance_profile.ss.name
  }

  user_data = base64encode(templatefile("${path.module}/cloud-init.yml", {
    version       = var.shardingsphere_proxy_version
    version_elems = split(".", var.shardingsphere_proxy_version)
    zk_servers    = join(",", var.zk_servers)
  }))

  metadata_options {
    http_endpoint               = "enabled"
    http_tokens                 = "required"
    http_put_response_hop_limit = 1
    instance_metadata_tags      = "enabled"
  }

  monitoring {
    enabled = true
  }

  vpc_security_group_ids = var.security_groups

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "shardingsphere-proxy"
    }
  }
}

resource "aws_autoscaling_group" "ss" {
  name                      = "shardingsphere-proxy-asg"
  availability_zones        = data.aws_availability_zones.available.names
  desired_capacity          = var.shardingsphere_proxy_asg_desired_capacity
  min_size                  = 1
  max_size                  = var.shardingsphere_proxy_asg_max_size
  health_check_grace_period = var.shardingsphere_proxy_asg_healthcheck_grace_period
  health_check_type         = "ELB"

  launch_template {
    id      = aws_launch_template.ss.id
    version = "$Latest"
  }

  lifecycle {
    ignore_changes = [load_balancers, target_group_arns]
  }
}
Configuring the load balancer
The Auto Scaling Group created in the previous step is attached to the load balancer; traffic passing through the load balancer is automatically routed to the ShardingSphere-Proxy instances that the Auto Scaling Group creates.
resource "aws_lb_target_group" "ss_tg" {
  name               = "shardingsphere-proxy-lb-tg"
  port               = var.lb_listener_port
  protocol           = "TCP"
  vpc_id             = var.vpc_id
  preserve_client_ip = false

  health_check {
    protocol            = "TCP"
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  tags = {
    Name = "shardingsphere-proxy"
  }
}

resource "aws_autoscaling_attachment" "asg_attachment_lb" {
  autoscaling_group_name = aws_autoscaling_group.ss.id
  lb_target_group_arn    = aws_lb_target_group.ss_tg.arn
}

resource "aws_lb_listener" "ss" {
  load_balancer_arn = aws_lb.ss.arn
  port              = var.lb_listener_port
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ss_tg.arn
  }

  tags = {
    Name = "shardingsphere-proxy"
  }
}
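The listener above references aws_lb.ss, which does not appear in this excerpt. A minimal internal Network Load Balancer consistent with those references might look like the following sketch; the name and subnet wiring are assumptions:

```hcl
# Assumed definition of the NLB referenced as aws_lb.ss above.
resource "aws_lb" "ss" {
  name                             = "shardingsphere-proxy-lb"
  internal                         = true      # private, reached via the Route 53 alias below
  load_balancer_type               = "network" # the listener and target group use TCP
  subnets                          = var.subnet_ids
  enable_cross_zone_load_balancing = true
}
```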
Configuring the domain name
We will create an internal domain name, proxy.shardingsphere.org by default, which resolves internally to the load balancer created in the previous step.
data "aws_route53_zone" "zone" {
  name         = "${var.hosted_zone_name}."
  private_zone = true
}

resource "aws_route53_record" "ss" {
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = "proxy"
  type    = "A"

  alias {
    name                   = aws_lb.ss.dns_name
    zone_id                = aws_lb.ss.zone_id
    evaluate_target_health = true
  }
}
Configuring CloudWatch
We use STS to create a role with CloudWatch permissions. The role is attached to the ShardingSphere-Proxy instances created by the Auto Scaling Group, and their runtime logs are collected into CloudWatch by the CloudWatch Agent.
By default a log group named shardingsphere-proxy.log is created; see [5] for the detailed CloudWatch configuration.
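The full agent configuration is the one in [5]. As an illustration only, a CloudWatch Agent logs section that ships the proxy's log file into the shardingsphere-proxy.log log group could look like this; the on-disk log path is an assumption:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/opt/shardingsphere-proxy/logs/stdout.log",
            "log_group_name": "shardingsphere-proxy.log",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```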
resource "aws_iam_role" "sts" {
  name               = "shardingsphere-proxy-sts-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "ss" {
  name   = "shardingsphere-proxy-policy"
  role   = aws_iam_role.sts.id
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cloudwatch:PutMetricData",
        "ec2:DescribeTags",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:CreateLogStream",
        "logs:CreateLogGroup"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
EOF
}

resource "aws_iam_instance_profile" "ss" {
  name = "shardingsphere-proxy-instance-profile"
  role = aws_iam_role.sts.name
}
Deployment
With all the Terraform configuration written, you can now deploy the ShardingSphere-Proxy cluster. Before actually deploying, it is recommended to run the following command to check that the configuration will do what you expect:
terraform plan
Once you have confirmed the plan, run the following command to apply it:
terraform apply
The complete code can be found at [6]. For more details, see our website [7].
Testing
The goal of testing is to show that the cluster we created is usable. We use a simple case: use DistSQL to add two data sources and create a simple sharding rule, then insert data and check that queries return correct results.
By default we create the internal domain name proxy.shardingsphere.org, and the username and password of the ShardingSphere-Proxy cluster are both root.
Note: DistSQL (Distributed SQL) is ShardingSphere's own operational language. It is used in exactly the same way as standard SQL and provides SQL-level operational capabilities for ShardingSphere's incremental features; see [8] for details.
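Connected as root through a MySQL client, the test case described above can be sketched roughly as follows. DistSQL syntax varies across ShardingSphere versions, and the data source hosts, table names, and rule parameters here are all placeholders, not the exact statements used in the original test:

```sql
-- Register two (placeholder) backend data sources.
ADD RESOURCE ds_0 (HOST="db0.example.com", PORT=3306, DB="demo_ds_0", USER="root", PASSWORD="root");
ADD RESOURCE ds_1 (HOST="db1.example.com", PORT=3306, DB="demo_ds_1", USER="root", PASSWORD="root");

-- A simple sharding rule: spread t_order across both sources by hash of order_id.
CREATE SHARDING TABLE RULE t_order (
  RESOURCES(ds_0, ds_1),
  SHARDING_COLUMN=order_id,
  TYPE(NAME=hash_mod, PROPERTIES("sharding-count"=4))
);

-- Insert and read back to verify that routing works.
CREATE TABLE t_order (order_id BIGINT PRIMARY KEY, user_id INT);
INSERT INTO t_order (order_id, user_id) VALUES (1, 1), (2, 2);
SELECT * FROM t_order ORDER BY order_id;
```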
Summary
Terraform is an effective tool for implementing IaC, and using it to iterate on ShardingSphere-Proxy clusters is very useful. We hope this article helps anyone interested in ShardingSphere and Terraform.
References
[1] www.terraform.io/
[2] www.hashicorp.com/
[3] raw.githubusercontent.com/apache/shar…
[4] raw.githubusercontent.com/apache/shar…
[5] raw.githubusercontent.com/apache/shar…
[6] github.com/apache/shar…
[7] shardingsphere.apache.org/oncloud/cur…
[8] shardingsphere.apache.org/document/cu…
Read the original article: dev.amazoncloud.cn/column/arti…