OB Ops | The Mystery of a Stuck OceanBase Backup Cleanup Task


1. Problem Background

The customer's OceanBase cluster backup policy is a daily full backup, retaining the most recent 4 days of backup files. On inspection, as many as 13 backups were actually being retained, driving disk usage up.

2. Troubleshooting

Normally a backup file cleanup task runs on a schedule. Since the number of retained backups exceeded the configured limit, our first guess was that the cleanup task had gone wrong.

1. The Backup File Cleanup Task

First, the formula for the automatic cleanup interval:

auto-clean interval = min(recovery_window/2, 1d)

So under normal conditions, a backup cleanup task is launched once a day.

Note: for versions earlier than oceanbase-3.1.2-20211230114204 (exclusive), the auto-clean interval is simply recovery_window/2.
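As a minimal sketch of that formula (assuming the interval is computed in seconds, with the 4-day `recovery_window` from the backup policy above):

```shell
# Sketch: auto-clean interval = min(recovery_window/2, 1 day), in seconds.
# recovery_window assumes the 4-day retention described in the backup policy.
recovery_window=$((4 * 86400))
one_day=86400
half=$(( recovery_window / 2 ))
if [ "$half" -lt "$one_day" ]; then interval=$half; else interval=$one_day; fi
echo "interval=${interval}s"   # 4d/2 = 2d > 1d, so the 1-day cap applies: 86400s
```

With a 4-day window, `recovery_window/2` is 2 days, so the 1-day cap always wins and cleanup fires daily, matching the statement above.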

The backup file cleanup task was stuck!

The diagnostics were collected on the 22nd.

A cleanup task launched on the 19th had still not finished by the 22nd, so we judged the cleanup task to be stuck.

MySQL [oceanbase]> select * from __all_tenant_backup_clean_info;
+----------------------------+----------------------------+-----------+--------+----------------------------+----------------------------+-------------+------------------------+--------+------------------+-----------+---------+------------------+--------+---------+
| gmt_create                 | gmt_modified               | tenant_id | job_id | start_time                 | end_time                   | incarnation | type                   | status | parameter        | error_msg | comment | clog_gc_snapshot | result | copy_id |
+----------------------------+----------------------------+-----------+--------+----------------------------+----------------------------+-------------+------------------------+--------+------------------+-----------+---------+------------------+--------+---------+
| 2021-03-19 14:18:50.841475 | 2024-10-19 00:28:50.021733 |         1 |   1187 | 2024-10-19 00:28:49.998736 | 2024-10-19 00:27:09.663132 |           1 | DELETE OBSOLETE BACKUP | DOING  | 1728923329993025 |           |         | 1728845993342237 |      0 |       0 |
| 2024-10-19 00:28:50.030195 | 2024-10-19 00:28:50.030195 |      1001 |   1187 | 2024-10-19 00:28:49.998736 | 2024-10-19 00:27:09.663132 |           1 | DELETE OBSOLETE BACKUP | DOING  | 1728923329993025 |           |         | 1728845993342237 |      0 |       0 |
+----------------------------+----------------------------+-----------+--------+----------------------------+----------------------------+-------------+------------------------+--------+------------------+-----------+---------+------------------+--------+---------+
2 rows in set (0.01 sec)
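That "stuck" judgment can be mechanized: compare `start_time` against the current time and treat any `DOING` job older than the daily schedule as stuck. A rough sketch, with the values hard-coded from the query result above (GNU `date` assumed):

```shell
# Sketch: flag a DOING cleanup job as stuck if it started more than a day ago.
# status/start_time are copied from __all_tenant_backup_clean_info above;
# "now" reflects the collection date (the 22nd) mentioned in the article.
status="DOING"
start_time="2024-10-19 00:28:49"
now="2024-10-22 00:00:00"
elapsed_days=$(( ( $(date -d "$now" +%s) - $(date -d "$start_time" +%s) ) / 86400 ))
if [ "$status" = "DOING" ] && [ "$elapsed_days" -ge 1 ]; then
  verdict="stuck for ${elapsed_days}+ days"
else
  verdict="ok"
fi
echo "$verdict"
```

Since cleanup is scheduled daily, any `DOING` record older than about a day deserves attention.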

Historical events related to backup cleanup in this cluster:

MySQL [oceanbase]> select * from __all_rootservice_event_history where module='backup_clean';
+----------------------------+--------------+------------+-----------+--------+---------------+--------+---------+--------+----------+-------------------------------+-------+--------+-------+--------+------------+----------------+-------------+
| gmt_create                 | module       | event      | name1     | value1 | name2         | value2 | name3   | value3 | name4    | value4                        | name5 | value5 | name6 | value6 | extra_info | rs_svr_ip      | rs_svr_port |
+----------------------------+--------------+------------+-----------+--------+---------------+--------+---------+--------+----------+-------------------------------+-------+--------+-------+--------+------------+----------------+-------------+
| 2024-04-16 18:55:49.799602 | backup_clean | backup_set | tenant_id | 1      | backup_set_id | 1099   | copy_id | 0      | trace_id | YB42B6B49660-0005E8F1C2B2E2CD |       |        |       |        |            | 182.180.150.96 |        2882 |
-----------
(too much output; some rows omitted)
-----------
| 2024-10-19 00:28:50.010504 | backup_clean | backup_set | tenant_id | 1      | backup_set_id | 1288   | copy_id | 0      | trace_id | YB42B6B49660-0005E8F1C2B2E845 |       |        |       |        |            | 182.180.150.96 |        2882 |
| 2024-10-19 00:28:50.010521 | backup_clean | backup_set | tenant_id | 1      | backup_set_id | 1289   | copy_id | 0      | trace_id | YB42B6B49660-0005E8F1C2B2E845 |       |        |       |        |            | 182.180.150.96 |        2882 |
+----------------------------+--------------+------------+-----------+--------+---------------+--------+---------+--------+----------+-------------------------------+-------+--------+-------+--------+------------+----------------+-------------+
191 rows in set (29.60 sec)

2. Examining the Logs

2.1 The Backup Cleanup Scheduling Task Fails

[root@hwc-ocp RS_cl_bigdata]# grep "ob_backup_data_clean" rootservice.log.20241022040332 | grep WARN
rootservice.log.20241022040332:[2024-10-22 03:48:55.440734] INFO  [RS] ob_backup_data_clean_scheduler.cpp:71 [101273][788][YB42B6B49660-0005E8CB48709532] [lt=7] [dc=0] backup data scheduler init(arg={type:12, value:1729194535439601, copy_id:0})
rootservice.log.20241022040332:[2024-10-22 03:48:55.445090] WARN  [RS] schedule_sys_tenant_backup_data_clean (ob_backup_data_clean_scheduler.cpp:352) [101273][788][YB42B6B49660-0005E8CB48709532] [lt=6] [dc=0] sys tenant clean info status is not stop, can do scheduler(ret=-9044, clean_info={tenant_id:1, job_id:1187, start_time:1729268929998736, end_time:1729268829663132, incarnation:1, copy_id:0, type:1, status:2, expired_time:1728923329993025, backup_set_id:0, error_msg:"", comment:"", clog_gc_snapshot:1728845993342237, result:0, backup_piece_id:0, backup_round_id:0})
rootservice.log.20241022040332:[2024-10-22 03:48:55.445176] WARN  [RS] schedule_backup_data_clean (ob_backup_data_clean_scheduler.cpp:275) [101273][788][YB42B6B49660-0005E8CB48709532] [lt=12] [dc=0] failed to schedule sys tenant backup(ret=-9044)
rootservice.log.20241022040332:[2024-10-22 03:48:55.445187] WARN  [RS] start_schedule_backup_data_clean (ob_backup_data_clean_scheduler.cpp:89) [101273][788][YB42B6B49660-0005E8CB48709532] [lt=8] [dc=0] failed to schedule backup data clean(ret=-9044, tenant_ids=[1, 1003, 1004])
rootservice.log.20241022040332:[2024-10-22 03:53:55.447033] INFO  [RS] ob_backup_data_clean_scheduler.cpp:71 [101273][788][YB42B6B49660-0005E8CB48709532] [lt=15] [dc=0] backup data scheduler init(arg={type:12, value:1729194835445853, copy_id:0})

Key message: sys tenant clean info status is not stop, can do scheduler

From the log we can infer that the sys tenant's cleanup task status is not stop, so the cleanup scheduling task fails.

This echoes the __all_tenant_backup_clean_info query result above, confirming that the cleanup task is abnormal.

Why scheduling fails: the previous cleanup task has been sitting in DOING status the whole time.

2.2 The Path Cannot Be Read

Searching the logs by the cleanup task's trace_id:

[root@hwc-ocp RS_cl_bigdata]# grep "YB42B6B49660-0005E8F1C2B2E845" rootservice.log.20241022001100 |grep WARN
[2024-10-21 23:37:26.194250] WARN  [ARCHIVE] get_file_id_range_ (ob_archive_log_file_store.cpp:501) [101266][774][YB42B6B49660-0005E8F1C2B2E845] [lt=9] [dc=0] failed get_file_range(uri=file:///obbak_bigdata/cl_bigdata_new/cl_bigdata/2/incarnation_1/1003/clog/13/index/1102810163026406/13, ret=-4018, pg_key={tid:1102810163026406, partition_id:13, part_cnt:0})
[2024-10-21 23:37:26.194257] WARN  [ARCHIVE] get_index_file_id_range (ob_archive_log_file_store.cpp:281) [101266][774][YB42B6B49660-0005E8F1C2B2E845] [lt=5] [dc=0] failed to get_index_file_id_range(ret=-4018, pg_key={tid:1102810163026406, partition_id:13, part_cnt:0})
[2024-10-21 23:37:26.194264] WARN  [ARCHIVE] locate_file_by_log_ts_for_clear (ob_archive_log_file_store.cpp:252) [101266][774][YB42B6B49660-0005E8F1C2B2E845] [lt=5] [dc=0] get_index_file_id_range fail(ret=-4018, pg_key={tid:1102810163026406, partition_id:13, part_cnt:0})
[2024-10-21 23:37:26.217948] WARN  [ARCHIVE] get_file_id_range_ (ob_archive_log_file_store.cpp:501) [101266][774][YB42B6B49660-0005E8F1C2B2E845] [lt=8] [dc=0] failed get_file_range(uri=file:///obbak_bigdata/cl_bigdata_new/cl_bigdata/2/incarnation_1/1003/clog/13/index/1102810163026406/216, ret=-4018, pg_key={tid:1102810163026406, partition_id:216, part_cnt:0})

Key message: failed get_file_range

2.3 Abnormal IO Calls

Continuing the investigation with the same trace_id, we found:

[root@hwc-ocp RS_cl_bigdata]# grep "access storage" rootservice.log.20241022040332 | grep "YB42B6B49660-0005E8F1C2B2E845"
rootservice.log.20241022040332:[2024-10-22 04:02:23.121397] WARN  [STORAGE] print_access_storage_log (ob_storage.cpp:39) [101266][774][YB42B6B49660-0005E8F1C2B2E845] [lt=10] [dc=0] access storage op=list_files uri=file:///obbak_bigdata/cl_bigdata_new/cl_bigdata/2/incarnation_1/1003/clog/13/index/1102810162740407/88 size=0 Byte cost_ts=2129326 us speed=0.00 MB/s
rootservice.log.20241022040332:[2024-10-22 04:02:24.660028] WARN  [STORAGE] print_access_storage_log (ob_storage.cpp:39) [101266][774][YB42B6B49660-0005E8F1C2B2E845] [lt=9] [dc=0] access storage op=list_files uri=file:///obbak_bigdata/cl_bigdata_new/cl_bigdata/2/incarnation_1/1003/clog/13/index/1102810162744871/4 size=0 Byte cost_ts=1537551 us speed=0.00 MB/s
rootservice.log.20241022040332:[2024-10-22 04:02:33.558367] WARN  [STORAGE] print_access_storage_log (ob_storage.cpp:39) [101266][774][YB42B6B49660-0005E8F1C2B2E845] [lt=13] [dc=0] access storage op=storage reader uri=file:///obbak_bigdata/cl_bigdata_new/cl_bigdata/2/incarnation_1/1003/clog/13/index/1102810162738281/141/2 size=144 Byte cost_ts=1243114 us speed=0.00 MB/s
rootservice.log.20241022040332:[2024-10-22 04:02:38.908843] WARN  [STORAGE] print_access_storage_log (ob_storage.cpp:39) [101263][768][YB42B6B49660-0005E8CB48109532] [lt=9] [dc=0] access storage op=get_file_length uri=file:///obbak_bigdata/cl_bigdata_new/cl_bigdata/2/incarnation_1/cluster_clog_backup_info size=0 Byte cost_ts=1176124 us speed=0.00 MB/s

Key messages: access storage, speed=0.00 MB/s
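These warnings are only printed when a storage call is slow, and the `cost_ts` field (in microseconds) makes the slowness measurable. A small sketch that extracts `cost_ts` from one of the lines above and flags anything over one second (the sample line is abbreviated from the logs):

```shell
# Sketch: extract cost_ts (microseconds) from an "access storage" warning and
# flag IO calls slower than 1 second. The sample line is abbreviated from the
# rootservice.log output above.
line='access storage op=list_files uri=file:///obbak_bigdata/... size=0 Byte cost_ts=2129326 us speed=0.00 MB/s'
cost_us=$(printf '%s\n' "$line" | sed -n 's/.*cost_ts=\([0-9]*\) us.*/\1/p')
if [ "$cost_us" -gt 1000000 ]; then
  echo "slow IO: ${cost_us} us"
fi
```

Here a single `list_files` call took over 2 seconds on 0 bytes of data, which points squarely at the storage backend rather than at OceanBase itself.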

3. Checking NFS

[admin@x-shhp-oceanbase-db-p02:~]$ mount |grep nfs
2.4.8.124:/obbak_bigdata on /obbak_bigdata type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=2.4.8.124,mountvers=3,mountport=2050,mountproto=tcp,local_lock=none,addr=2.4.8.124)

The NFS version in production is 3.x, and NFS v3 is a non-standard deployment for OceanBase[1].

Conclusion: production NFS was not configured strictly per the official documentation requirements, which can cause many kinds of anomalies, including this backup cleanup problem.
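The `vers=` mount option makes this check scriptable. A sketch that parses the mount output above and flags anything below v4 (the mount line is abbreviated here):

```shell
# Sketch: parse the NFS protocol version out of a mount(8) line and flag v3.
# The line is copied (abbreviated) from the mount output above.
mnt='2.4.8.124:/obbak_bigdata on /obbak_bigdata type nfs (rw,relatime,vers=3,hard,proto=tcp)'
vers=$(printf '%s\n' "$mnt" | sed -n 's/.*[(,]vers=\([0-9]*\).*/\1/p')
if [ "$vers" -lt 4 ]; then
  echo "non-compliant: NFS v${vers}, official docs require v4.x"
fi
```

A check like this could run as a pre-deployment or health-check step so the misconfiguration is caught before backups start piling up.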

3. Recommendations

Configure NFS strictly per the official documentation and change the NFS version to 4.x. After this change, the stuck backup cleanup problem was resolved.
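For reference, an /etc/fstab entry along these lines would pin the mount to v4; the options shown are a sketch only, and the authoritative option list is in the OceanBase NFS deployment doc[1]:

```shell
# /etc/fstab sketch (illustrative options, not the authoritative list):
# 2.4.8.124:/obbak_bigdata  /obbak_bigdata  nfs  vers=4.0,rw,hard,timeo=600,retrans=2  0 0
```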

References

[1] OceanBase NFS deployment recommendations: https://www.oceanbase.com/docs/enterprise-oceanbase-database-cn-10000000000360550

Author: He Wenchao, a member of the DBA team in the South China delivery services department of ActionSky (爱可生). Mainly responsible for MySQL troubleshooting, MySQL high-availability architecture migration, and OceanBase technical support. Enjoys football and badminton.

Editor: Wu Xiaoyan. Source: ActionSky Open Source Community (爱可生开源社区).