
MySQL MHA /usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln301] install_driver(mysql) failed: Attempt to reload DBD/mysql.pm aborted

程序员文章站 2024-01-29 13:43:40

I grabbed three spare test machines at the office to set up MHA, and the problem below cost me three days. I had never hit it before. The OS versions were identical on all three boxes (though they were probably provisioned by different people), and I knew it was a CPAN problem, but I just could not crack it. Frustrating...
[root@test247 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Wed Dec  4 11:53:59 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Dec  4 11:53:59 2019 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Dec  4 11:53:59 2019 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Dec  4 11:53:59 2019 - [info] MHA::MasterMonitor version 0.56.
Wed Dec  4 11:54:00 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln301] install_driver(mysql) failed: Attempt to reload DBD/mysql.pm aborted.
Compilation failed in require at (eval 26) line 3.

 at /usr/share/perl5/vendor_perl/MHA/DBHelper.pm line 205
 at /usr/share/perl5/vendor_perl/MHA/Server.pm line 166
Wed Dec  4 11:54:00 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln301] install_driver(mysql) failed: Attempt to reload DBD/mysql.pm aborted.
Compilation failed in require at (eval 26) line 3.

 at /usr/share/perl5/vendor_perl/MHA/DBHelper.pm line 205
 at /usr/share/perl5/vendor_perl/MHA/Server.pm line 166
Wed Dec  4 11:54:00 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln301] install_driver(mysql) failed: Attempt to reload DBD/mysql.pm aborted.
Compilation failed in require at (eval 26) line 3.

 at /usr/share/perl5/vendor_perl/MHA/DBHelper.pm line 205
 at /usr/share/perl5/vendor_perl/MHA/Server.pm line 166
Wed Dec  4 11:54:01 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln309] Got fatal error, stopping operations
Wed Dec  4 11:54:01 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm line 326
Wed Dec  4 11:54:01 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Wed Dec  4 11:54:01 2019 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!


I followed the post below, but the error persisted:
https://www.cnblogs.com/fangyuan303687320/p/9475279.html

yum install -y cpan
cpan -d DBI
# answer "yes" at the prompts (run as root/sudo)
-- the step above ran for several minutes and asked for "yes" many times
cpan DBD::mysql
The following error still appeared:

-- The error
DVEEDEN/DBD-mysql-4.050.tar.gz
/usr/bin/make -- OK
Running make test
PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'blib/lib', 'blib/arch')" t/*.t
t/00base.t .............................. 1/6
#   Failed test 'use DBD::mysql;'
#   at t/00base.t line 15.
#     Tried to use 'DBD::mysql'.
#     Error:  Can't load '/root/.cpan/build/DBD-mysql-4.050-d0wgof/blib/arch/auto/DBD/mysql/mysql.so' for module DBD::mysql: libmysqlclient.so.20: cannot open shared object file: No such file or directory at /usr/lib64/perl5/DynaLoader.pm line 200.
#  at t/00base.t line 15
# Compilation failed in require at t/00base.t line 15.
# BEGIN failed--compilation aborted at t/00base.t line 15.
Bailout called.  Further testing stopped:  unable to load DBD::mysql
FAILED--Further testing stopped: unable to load DBD::mysql
make: *** [test_dynamic] Error 255
  DVEEDEN/DBD-mysql-4.050.tar.gz
  /usr/bin/make test -- NOT OK
//hint// to see the cpan-testers results for installing this module, try:
  reports DVEEDEN/DBD-mysql-4.050.tar.gz
Running make install
  make test had returned bad status, won't install without force
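The decisive line in that output is `libmysqlclient.so.20: cannot open shared object file`: the freshly built DBD::mysql shared object links against the MySQL 5.7 client library, but the dynamic linker cannot find it at test time. A quick way to see whether (and where) the library exists on the box — a diagnostic sketch, not part of the original steps:

```shell
# Is the MySQL client library registered with the dynamic linker?
ldconfig -p 2>/dev/null | grep libmysqlclient || echo "libmysqlclient is not in the linker cache"

# Look for copies under the usual install locations (e.g. a tarball-installed MySQL)
find /usr/lib64 /usr/lib /usr/local -name 'libmysqlclient.so*' 2>/dev/null || true
```

If a copy exists outside the standard paths, dropping its directory into a file under /etc/ld.so.conf.d/ and running `ldconfig` is usually enough; if it is missing entirely, installing the MySQL client library package that matches your server version supplies it.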

In the end I found that the following had to be run first (answer "yes" a few times):
[root@test247 ~]# cpan
cpan[1]> force install GD
...
Files found in blib/arch: installing files in blib/lib into architecture dependent library tree
Installing /usr/local/lib64/perl5/auto/GD/GD.so
Installing /usr/local/lib64/perl5/auto/GD/GD.bs
Installing /usr/local/lib64/perl5/GD.pm
Installing /usr/local/lib64/perl5/auto/GD/autosplit.ix
Installing /usr/local/lib64/perl5/GD/Polygon.pm
Installing /usr/local/lib64/perl5/GD/Simple.pm
Installing /usr/local/lib64/perl5/GD/Group.pm
Installing /usr/local/lib64/perl5/GD/Image.pm
Installing /usr/local/lib64/perl5/GD/Polyline.pm
Installing /usr/local/share/man/man1/bdf2gdfont.pl.1
Installing /usr/local/share/man/man3/GD.3pm
Installing /usr/local/share/man/man3/GD::Polygon.3pm
Installing /usr/local/share/man/man3/GD::Group.3pm
Installing /usr/local/share/man/man3/GD::Simple.3pm
Installing /usr/local/share/man/man3/GD::Polyline.3pm
Installing /usr/local/share/man/man3/GD::Image.3pm
Installing /usr/local/bin/bdf2gdfont.pl
Appending installation info to /usr/lib64/perl5/perllocal.pod
  RURBAN/GD-2.71.tar.gz
  /usr/bin/make install -- OK

cpan[2]> exit

[root@test247 ~]# cpan DBD::mysql
CPAN: Storable loaded ok (v2.20)
Going to read '/root/.cpan/Metadata'
  Database was generated on Wed, 04 Dec 2019 02:29:02 GMT
Running install for module 'DBD::mysql'
CPAN: YAML loaded ok (v0.70)
Running make for D/DV/DVEEDEN/DBD-mysql-4.050.tar.gz
CPAN: Digest::SHA loaded ok (v5.47)
Checksum for /root/.cpan/sources/authors/id/D/DV/DVEEDEN/DBD-mysql-4.050.tar.gz ok
...
For 'make test' to run properly, you must ensure that the
database user 'root' can connect to your MySQL server
and has the proper privileges that these tests require such
as 'DROP TABLE', 'CREATE TABLE', 'DROP PROCEDURE', 'CREATE PROCEDURE'
as well as others.

mysql> GRANT ALL PRIVILEGES ON test.* TO 'root'@'localhost' IDENTIFIED BY 's3kr1t';

You can also optionally set the user to run 'make test' with:

perl Makefile.PL --testuser=username
...
Prepending /root/.cpan/build/DBD-mysql-4.050-ltzcxu/blib/arch /root/.cpan/build/DBD-mysql-4.050-ltzcxu/blib/lib to PERL5LIB for 'install'
Files found in blib/arch: installing files in blib/lib into architecture dependent library tree
Installing /usr/local/lib64/perl5/auto/DBD/mysql/mysql.so
Installing /usr/local/lib64/perl5/auto/DBD/mysql/mysql.bs
Installing /usr/local/lib64/perl5/Bundle/DBD/mysql.pm
Installing /usr/local/lib64/perl5/DBD/mysql.pm
Installing /usr/local/lib64/perl5/DBD/mysql/GetInfo.pm
Installing /usr/local/lib64/perl5/DBD/mysql/INSTALL.pod
Installing /usr/local/share/man/man3/DBD::mysql.3pm
Installing /usr/local/share/man/man3/Bundle::DBD::mysql.3pm
Installing /usr/local/share/man/man3/DBD::mysql::INSTALL.3pm
Appending installation info to /usr/lib64/perl5/perllocal.pod
  DVEEDEN/DBD-mysql-4.050.tar.gz
  /usr/bin/make install -- OK
[root@test247 ~]#
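Before re-running masterha_check_repl it is worth confirming that the freshly installed driver actually loads. A couple of one-line sanity checks (the printed version numbers will vary with what is installed):

```shell
# Each command prints the module version if it loads, or reports the loader error otherwise
perl -MDBI -e 'print "DBI $DBI::VERSION\n"' || echo "DBI failed to load"
perl -MDBD::mysql -e 'print "DBD::mysql $DBD::mysql::VERSION\n"' || echo "DBD::mysql failed to load"
```

If the second command still dies with the libmysqlclient error, the MHA check will fail the same way, so there is no point rerunning it yet.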


Running the check again now succeeds:
[root@test247 ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Wed Dec  4 12:20:09 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Wed Dec  4 12:20:09 2019 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Wed Dec  4 12:20:09 2019 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Wed Dec  4 12:20:09 2019 - [info] MHA::MasterMonitor version 0.56.
Wed Dec  4 12:20:10 2019 - [info] GTID failover mode = 0
Wed Dec  4 12:20:10 2019 - [info] Dead Servers:
Wed Dec  4 12:20:10 2019 - [info] Alive Servers:
Wed Dec  4 12:20:10 2019 - [info]   192.168.5.247(192.168.5.247:3306)
Wed Dec  4 12:20:10 2019 - [info]   192.168.5.93(192.168.5.93:3306)
Wed Dec  4 12:20:10 2019 - [info]   192.168.5.94(192.168.5.94:3306)
Wed Dec  4 12:20:10 2019 - [info] Alive Slaves:
Wed Dec  4 12:20:10 2019 - [info]   192.168.5.93(192.168.5.93:3306)  Version=5.7.21-log (oldest major version between slaves) log-bin:enabled
Wed Dec  4 12:20:10 2019 - [info]     Replicating from 192.168.5.247(192.168.5.247:3306)
Wed Dec  4 12:20:10 2019 - [info]     Primary candidate for the new Master (candidate_master is set)
Wed Dec  4 12:20:10 2019 - [info]   192.168.5.94(192.168.5.94:3306)  Version=5.7.21-log (oldest major version between slaves) log-bin:enabled
Wed Dec  4 12:20:10 2019 - [info]     Replicating from 192.168.5.247(192.168.5.247:3306)
Wed Dec  4 12:20:10 2019 - [info] Current Alive Master: 192.168.5.247(192.168.5.247:3306)
Wed Dec  4 12:20:10 2019 - [info] Checking slave configurations..
Wed Dec  4 12:20:10 2019 - [info] Checking replication filtering settings..
Wed Dec  4 12:20:10 2019 - [info]  binlog_do_db= , binlog_ignore_db=
Wed Dec  4 12:20:10 2019 - [info]  Replication filtering check ok.
Wed Dec  4 12:20:10 2019 - [info] GTID (with auto-pos) is not supported
Wed Dec  4 12:20:10 2019 - [info] Starting SSH connection tests..
Wed Dec  4 12:20:12 2019 - [info] All SSH connection tests passed successfully.
Wed Dec  4 12:20:12 2019 - [info] Checking MHA Node version..
Wed Dec  4 12:20:13 2019 - [info]  Version check ok.
Wed Dec  4 12:20:13 2019 - [info] Checking SSH publickey authentication settings on the current master..
Wed Dec  4 12:20:13 2019 - [info] HealthCheck: SSH to 192.168.5.247 is reachable.
Wed Dec  4 12:20:13 2019 - [info] Master MHA Node version is 0.56.
Wed Dec  4 12:20:13 2019 - [info] Checking recovery script configurations on 192.168.5.247(192.168.5.247:3306)..
Wed Dec  4 12:20:13 2019 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql/logs/bin-log --output_file=/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000008
Wed Dec  4 12:20:13 2019 - [info]   Connecting to root@192.168.5.247(192.168.5.247:22)..
  Creating /tmp if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /data/mysql/logs/bin-log, up to mysql-bin.000008
Wed Dec  4 12:20:13 2019 - [info] Binlog setting check done.
Wed Dec  4 12:20:13 2019 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Wed Dec  4 12:20:13 2019 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='monitor' --slave_host=192.168.5.93 --slave_ip=192.168.5.93 --slave_port=3306 --workdir=/tmp --target_version=5.7.21-log --manager_version=0.56 --relay_log_info=/data/mysql/relay-log.info --relay_dir=/data/mysql/data/ --slave_pass=xxx
Wed Dec  4 12:20:13 2019 - [info]   Connecting to root@192.168.5.93(192.168.5.93:22)..
  Checking slave recovery environment settings..
    Opening /data/mysql/relay-log.info ... ok.
    Relay log found at /data/mysql/logs/relay-log, up to relay-bin.000015
    Temporary relay log file is /data/mysql/logs/relay-log/relay-bin.000015
    Testing mysql connection and privileges..mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Dec  4 12:20:13 2019 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='monitor' --slave_host=192.168.5.94 --slave_ip=192.168.5.94 --slave_port=3306 --workdir=/tmp --target_version=5.7.21-log --manager_version=0.56 --relay_log_info=/data/mysql/relay-log.info --relay_dir=/data/mysql/data/ --slave_pass=xxx
Wed Dec  4 12:20:13 2019 - [info]   Connecting to root@192.168.5.94(192.168.5.94:22)..
  Checking slave recovery environment settings..
    Opening /data/mysql/relay-log.info ... ok.
    Relay log found at /data/mysql/logs/relay-log, up to relay-bin.000015
    Temporary relay log file is /data/mysql/logs/relay-log/relay-bin.000015
    Testing mysql connection and privileges..mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Wed Dec  4 12:20:14 2019 - [info] Slaves settings check done.
Wed Dec  4 12:20:14 2019 - [info]
192.168.5.247(192.168.5.247:3306) (current master)
 +--192.168.5.93(192.168.5.93:3306)
 +--192.168.5.94(192.168.5.94:3306)

Wed Dec  4 12:20:14 2019 - [info] Checking replication health on 192.168.5.93..
Wed Dec  4 12:20:14 2019 - [info]  ok.
Wed Dec  4 12:20:14 2019 - [info] Checking replication health on 192.168.5.94..
Wed Dec  4 12:20:14 2019 - [info]  ok.
Wed Dec  4 12:20:14 2019 - [info] Checking master_ip_failover_script status:
Wed Dec  4 12:20:14 2019 - [info]   /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.5.247 --orig_master_ip=192.168.5.247 --orig_master_port=3306
Wed Dec  4 12:20:14 2019 - [info]  OK.
Wed Dec  4 12:20:14 2019 - [warning] shutdown_script is not defined.
Wed Dec  4 12:20:14 2019 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
[root@test247 ~]#