Backing up the Zabbix database and partitioning its tables
The Zabbix server in our test environment runs on fairly low-spec hardware and regularly hits performance bottlenecks (mainly the database and disk I/O), which pushed me to find ways to ease the pain.
The main culprit was the old backup script, which did a full dump of the Zabbix database using MySQL's bundled mysqldump tool. Once the data volume grew, a full backup took so long that it held read locks on the database, Zabbix concluded MySQL had died, and a pile of alerts went out.
It turned out that most of the growth came from a handful of huge data-storage tables in the Zabbix database. By skipping those tables during backup, the backup time drops dramatically (before, a full backup took around ten minutes; skipping the big tables, it finishes in roughly a second).
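The skip-the-big-tables idea boils down to filtering the output of `show tables` before dumping each remaining table on its own. Here is a minimal sketch of just that filtering step, with a hard-coded table list standing in for live `show tables` output (the egrep pattern mirrors the one in the script below):

```shell
# Stand-in for the output of `show tables` on a zabbix database
all_tables="history
history_uint
history_text
hosts
items
trends
trends_uint"

# Drop the large history/trends tables; what remains is the small
# configuration data that is worth dumping table by table
config_tables=$(echo "$all_tables" | egrep -v "(history*|trends*)")
echo "$config_tables"   # prints "hosts" and "items"
```

In the real script the same pattern also filters out `acknowledges`, `alerts`, `auditlog`, `events`, and `service_alarms`.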
Below is a script written by itnihao specifically for backing up and restoring the Zabbix database:
```bash
#!/bin/bash
# author: itnihao
RED='\e[1;31m'    # red
GREEN='\e[1;32m'  # green
BLUE='\e[1;34m'   # blue
PURPLE='\e[1;35m' # purple
NC='\e[0m'        # no color
source /etc/bashrc
source /etc/profile
mysql_user=zabbix
mysql_password=zabbix
mysql_host=localhost
mysql_port=3306
mysql_dump_path=/opt/backup
mysql_bin_path=/opt/software/mysql/bin/mysql
mysql_dump_bin_path=/opt/software/mysql/bin/mysqldump
mysql_database_name=zabbix
date=$(date '+%Y%m%d')

mysqldump () {
    [ -d ${mysql_dump_path} ] || mkdir -p ${mysql_dump_path}
    cd ${mysql_dump_path}
    [ -d logs ] || mkdir logs
    [ -d ${date} ] || mkdir ${date}
    cd ${date}
    # Skip the large history/trends tables so only configuration data is dumped.
    # "Tables_in_zabbix" filters out the header line of `show tables`.
    table_name_all=$(${mysql_bin_path} -u${mysql_user} -p${mysql_password} -h${mysql_host} ${mysql_database_name} -e "show tables" | egrep -v "(Tables_in_zabbix|history*|trends*|acknowledges|alerts|auditlog|events|service_alarms)")
    for table_name in ${table_name_all}
    do
        # Note: -P (uppercase) is the port option; lowercase -p is the password.
        ${mysql_dump_bin_path} --opt -u${mysql_user} -p${mysql_password} -P${mysql_port} -h${mysql_host} ${mysql_database_name} ${table_name} > ${table_name}.sql
        sleep 0.01
    done
    if [ "$?" == 0 ];then
        echo "${date}: backup zabbix succeed" >> ${mysql_dump_path}/logs/zabbixmysqldump.log
    else
        echo "${date}: backup zabbix not succeed" >> ${mysql_dump_path}/logs/zabbixmysqldump.log
    fi
    cd ${mysql_dump_path}/
    # Keep only the last five days of backups.
    rm -rf $(date +%Y%m%d --date='5 days ago')
    exit 0
}

mysqlimport () {
    cd ${mysql_dump_path}
    date=$(ls ${mysql_dump_path} | egrep "\b^[0-9]+$\b")
    echo -e "${GREEN}${date}"
    echo -e "${BLUE}what date do you want to import, please input date:${NC}"
    read select_date
    if [ -d "${select_date}" ];then
        echo -e "you selected ${GREEN}${select_date}${NC}, do you want to continue? if so, input ${RED}(yes|y|Y)${NC}, otherwise exit"
        read input
        if [[ "${input}" =~ ^(yes|y|Y)$ ]];then
            echo "now importing sql.......
please wait......."
        else
            exit 1
        fi
        cd ${select_date}
        for per_table_sql in $(ls *.sql)
        do
            ${mysql_bin_path} -u${mysql_user} -p${mysql_password} -h${mysql_host} ${mysql_database_name} < ${per_table_sql}
            echo -e "import ${per_table_sql} ${PURPLE}........................${NC}"
        done
        echo "finished importing sql, please check the zabbix database"
    else
        echo "the ${select_date} dir does not exist"
    fi
}

case "$1" in
    mysqldump)
        mysqldump
        ;;
    mysqlimport)
        mysqlimport
        ;;
    *)
        echo "usage: $0 {mysqldump|mysqlimport}"
        ;;
esac
```
The original script lives here: https://github.com/itnihao/zabbix-book/blob/master/03-chapter/zabbix_mysqldump_per_table_v2.sh
What you see above is my modified version, adapted to my own backup needs; feel free to adapt it to yours. The effect is what I described earlier: a full backup used to be around 4 GB of data, while backing up only the configuration tables comes to less than 10 MB, which saves a great deal of time and space.
The trade-off is that the history data itself is no longer backed up. I'm considering Percona XtraBackup for incremental backups of that data; I haven't set it up yet and will get to it in the next few days.
With backups sorted out, the next step is partitioning the tables with large data volumes. I followed an article on the Zabbix wiki, https://www.zabbix.org/wiki/docs/howto/mysql_partition — have a look if you're interested; what follows is my consolidated, slightly more convenient version.
Partitioning physically splits a large table into multiple files while it logically remains a single table, transparent to the application. Splitting one big table into many small ones also makes queries faster, and old partitions can be dropped at any time to purge expired data. This approach suits tables with large data volumes but relatively few queries; for large tables that are also queried heavily, sharding across databases and tables is the better choice.
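To make the idea concrete, here is a toy example (a made-up table, not part of the Zabbix schema) of a table range-partitioned by its unix-time clock column, the same scheme the stored procedures in this post automate:

```sql
-- Hypothetical demo table, partitioned by day on the clock column
create table history_demo (
    itemid bigint unsigned not null,
    clock  int unsigned    not null,
    value  double
)
partition by range (clock) (
    partition p201403280000 values less than (unix_timestamp('2014-03-29 00:00:00')),
    partition p201403290000 values less than (unix_timestamp('2014-03-30 00:00:00'))
);

-- Purging a whole day of data is then a cheap metadata operation
-- instead of a huge DELETE:
alter table history_demo drop partition p201403280000;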
Enough talk; let's get to work.
First, log in to the database (I'll skip demonstrating that).
Then switch to the zabbix database and alter the structure of two tables:
```sql
use zabbix;

alter table history_text drop primary key, add index (id), drop index history_text_2, add index history_text_2 (itemid, id);
alter table history_log drop primary key, add index (id), drop index history_log_2, add index history_log_2 (itemid, id);
```
With that done, create the four stored procedures from the official article:
```sql
delimiter $$
create procedure `partition_create`(schemaname varchar(64), tablename varchar(64), partitionname varchar(64), clock int)
begin
    /*
       schemaname = the db schema in which to make changes
       tablename = the table with partitions to potentially delete
       partitionname = the name of the partition to create
    */
    /* verify that the partition does not already exist */
    declare retrows int;
    select count(1) into retrows
    from information_schema.partitions
    where table_schema = schemaname and table_name = tablename and partition_description >= clock;

    if retrows = 0 then
        /*
           1. print a message indicating that a partition was created.
           2. create the sql to create the partition.
           3. execute the sql from #2.
        */
        select concat( "partition_create(", schemaname, ",", tablename, ",", partitionname, ",", clock, ")" ) as msg;
        set @sql = concat( 'alter table ', schemaname, '.', tablename, ' add partition (partition ', partitionname, ' values less than (', clock, '));' );
        prepare stmt from @sql;
        execute stmt;
        deallocate prepare stmt;
    end if;
end$$
delimiter ;
```
```sql
delimiter $$
create procedure `partition_drop`(schemaname varchar(64), tablename varchar(64), delete_below_partition_date bigint)
begin
    /*
       schemaname = the db schema in which to make changes
       tablename = the table with partitions to potentially delete
       delete_below_partition_date = delete any partitions with names that are dates older than this one (yyyy-mm-dd)
    */
    declare done int default false;
    declare drop_part_name varchar(16);

    /*
       get a list of all the partitions that are older than the date
       in delete_below_partition_date. all partitions are prefixed with
       a "p", so use substring to get rid of that character.
    */
    declare mycursor cursor for
        select partition_name
        from information_schema.partitions
        where table_schema = schemaname and table_name = tablename
          and cast(substring(partition_name from 2) as unsigned) < delete_below_partition_date;
    declare continue handler for not found set done = true;

    /*
       create the basics for when we need to drop the partition. also, create
       @drop_partitions to hold a comma-delimited list of all partitions that
       should be deleted.
    */
    set @alter_header = concat("alter table ", schemaname, ".", tablename, " drop partition ");
    set @drop_partitions = "";

    /* start looping through all the partitions that are too old. */
    open mycursor;
    read_loop: loop
        fetch mycursor into drop_part_name;
        if done then
            leave read_loop;
        end if;
        set @drop_partitions = if(@drop_partitions = "", drop_part_name, concat(@drop_partitions, ",", drop_part_name));
    end loop;

    if @drop_partitions != "" then
        /*
           1. build the sql to drop all the necessary partitions.
           2. run the sql to drop the partitions.
           3. print out the table partitions that were deleted.
        */
        set @full_sql = concat(@alter_header, @drop_partitions, ";");
        prepare stmt from @full_sql;
        execute stmt;
        deallocate prepare stmt;

        select concat(schemaname, ".", tablename) as `table`, @drop_partitions as `partitions_deleted`;
    else
        /*
           no partitions are being deleted, so print out "n/a" (not applicable)
           to indicate that no changes were made.
        */
        select concat(schemaname, ".", tablename) as `table`, "n/a" as `partitions_deleted`;
    end if;
end$$
delimiter ;
```
```sql
delimiter $$
create procedure `partition_maintenance`(schema_name varchar(32), table_name varchar(32), keep_data_days int, hourly_interval int, create_next_intervals int)
begin
    declare older_than_partition_date varchar(16);
    declare partition_name varchar(16);
    declare less_than_timestamp int;
    declare cur_time int;

    call partition_verify(schema_name, table_name, hourly_interval);
    set cur_time = unix_timestamp(date_format(now(), '%Y-%m-%d 00:00:00'));

    set @__interval = 1;
    create_loop: loop
        if @__interval > create_next_intervals then
            leave create_loop;
        end if;
        set less_than_timestamp = cur_time + (hourly_interval * @__interval * 3600);
        set partition_name = from_unixtime(cur_time + hourly_interval * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
        call partition_create(schema_name, table_name, partition_name, less_than_timestamp);
        set @__interval=@__interval+1;
    end loop;

    set older_than_partition_date = date_format(date_sub(now(), interval keep_data_days day), '%Y%m%d0000');
    call partition_drop(schema_name, table_name, older_than_partition_date);
end$$
delimiter ;
```
```sql
delimiter $$
create procedure `partition_verify`(schemaname varchar(64), tablename varchar(64), hourlyinterval int(11))
begin
    declare partition_name varchar(16);
    declare retrows int(11);
    declare future_timestamp timestamp;

    /*
     * check if any partitions exist for the given schemaname.tablename.
     */
    select count(1) into retrows
    from information_schema.partitions
    where table_schema = schemaname and table_name = tablename and partition_name is null;

    /*
     * if partitions do not exist, go ahead and partition the table
     */
    if retrows = 1 then
        /*
         * take the current date at 00:00:00 and add hourlyinterval to it. this is the timestamp below which we will store values.
         * we begin partitioning based on the beginning of a day. this is because we don't want to generate a random partition
         * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
         * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
         */
        set future_timestamp = timestampadd(hour, hourlyinterval, concat(curdate(), " ", '00:00:00'));
        set partition_name = date_format(curdate(), 'p%Y%m%d%H00');

        -- create the partitioning query
        set @__partition_sql = concat("alter table ", schemaname, ".", tablename, " partition by range(`clock`)");
        set @__partition_sql = concat(@__partition_sql, "(partition ", partition_name, " values less than (", unix_timestamp(future_timestamp), "));");

        -- run the partitioning query
        prepare stmt from @__partition_sql;
        execute stmt;
        deallocate prepare stmt;
    end if;
end$$
delimiter ;
```
Once these four stored procedures are in place, you can partition any table you like with:

```sql
call partition_maintenance('<zabbix_db_name>', '<table_name>', <days_to_keep_data>, <hourly_interval>, <num_future_intervals_to_create>);
```

Let me explain the parameters.
For example:

```sql
call partition_maintenance('zabbix', 'history_uint', 31, 24, 14);
```
zabbix_db_name: the database name
table_name: the table name
days_to_keep_data: how many days of data to keep
hourly_interval: how often (in hours) to create a new partition
num_future_intervals_to_create: how many partitions to create in this run
So this example keeps at most 31 days of data in the history_uint table, creates one partition every 24 hours, and creates 14 partitions in this run.
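Both the partition names and the drop cutoff come from plain date formatting (p%Y%m%d%H00 for names, %Y%m%d0000 for the cutoff). A quick sanity check of that math, using GNU date with a fixed example date in place of MySQL's date_format:

```shell
# Name of the daily partition that starts on 2014-03-28 (pattern p%Y%m%d%H00)
partition_name=$(date -d "2014-03-28" +p%Y%m%d%H00)
echo "$partition_name"   # p201403280000

# With days_to_keep_data=31, partitions whose numeric names fall below
# this cutoff are dropped by partition_drop
cutoff=$(date -d "2014-03-28 - 31 days" +%Y%m%d0000)
echo "$cutoff"           # 201402250000
```

Because the names are zero-padded timestamps, comparing them numerically (as partition_drop does after stripping the leading "p") is the same as comparing dates.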
You can save the four stored procedures above into a single file and load it into the database (I'll put the file in the attachments later). The command is: mysql -uzabbix -pzabbix zabbix < partition_call.sql
You can also wrap all the call statements into one file, with the following contents:
```sql
delimiter $$
create procedure `partition_maintenance_all`(schema_name varchar(32))
begin
    call partition_maintenance(schema_name, 'history', 31, 24, 14);
    call partition_maintenance(schema_name, 'history_log', 31, 24, 14);
    call partition_maintenance(schema_name, 'history_str', 31, 24, 14);
    call partition_maintenance(schema_name, 'history_text', 31, 24, 14);
    call partition_maintenance(schema_name, 'history_uint', 31, 24, 14);
    call partition_maintenance(schema_name, 'trends', 180, 24, 14);
    call partition_maintenance(schema_name, 'trends_uint', 180, 24, 14);
end$$
delimiter ;
```
Load this file into the database as well: mysql -uzabbix -pzabbix zabbix < partition_all.sql
At this point you can partition the tables with a single command:
```
mysql -uzabbix -pzabbix zabbix -e "call partition_maintenance_all('zabbix');"
+----------------+--------------------+
| table          | partitions_deleted |
+----------------+--------------------+
| zabbix.history | n/a                |
+----------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_log | n/a                |
+--------------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.history_str | n/a                |
+--------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_text | n/a                |
+---------------------+--------------------+
+---------------------+--------------------+
| table               | partitions_deleted |
+---------------------+--------------------+
| zabbix.history_uint | n/a                |
+---------------------+--------------------+
+---------------+--------------------+
| table         | partitions_deleted |
+---------------+--------------------+
| zabbix.trends | n/a                |
+---------------+--------------------+
+--------------------+--------------------+
| table              | partitions_deleted |
+--------------------+--------------------+
| zabbix.trends_uint | n/a                |
+--------------------+--------------------+
```
Output like the above confirms that all seven tables were partitioned; you can also see the newly created partition files in MySQL's data directory. (Note: it's best to empty the history_uint table before running this command, otherwise the conversion will take a very long time because of that table's size. To empty it: truncate table history_uint;)
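Besides looking at the data directory, you can inspect the partition layout from SQL via information_schema, which is the same catalog the stored procedures query internally:

```sql
select partition_name, partition_description
from information_schema.partitions
where table_schema = 'zabbix'
  and table_name = 'history_uint'
order by partition_description;
```

partition_description holds the `values less than` clock boundary of each partition.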
And that's the partitioning done.
Now put that command into cron:
```
crontab -l | tail -1
01 01 * * * /opt/software/mysql/bin/mysql -uzabbix -pzabbix zabbix -e "call partition_maintenance_all('zabbix');"
```
This runs once a day at 01:01 at night. The backup script from earlier also needs a cron entry, running the backup every day at 00:01:
```
crontab -l | tail -2 | head -1
01 00 * * * /usr/local/scripts/zabbix_mysqldump_per_table_v2.sh mysqldump
```
And with that, we're done. Try the Zabbix web UI afterwards; doesn't it feel faster than before?