ClickHouse Backup and Restore: How to Back Up and Restore ClickHouse Running in Docker

2024/10/18 6:08:20

Summary:
A ClickHouse instance running in Docker cannot be backed up with clickhouse-backup, because against a Dockerized server clickhouse-backup only captures the metadata.
A ClickHouse instance running in Docker can be backed up and restored by connecting to the server with clickhouse-client over TCP port 9000 (or over HTTP port 8123) and running the BACKUP / RESTORE commands.

[dailachdbud005 ~]# docker ps
CONTAINER ID   IMAGE                                             COMMAND                  CREATED         STATUS      PORTS                                                                                                NAMES  
197a50985f73   clickhouse/clickhouse-server:22.9.3               "/entrypoint.sh"         16 months ago   Up 9 days   0.0.0.0:8123->8123/tcp, :::8123->8123/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9009/tcp       root-clickhouse-1
[dailachdbud005 ~]# ps -ef|grep click|grep -v grep
101        2330   2260  6 Jun02 ?        14:52:20 /usr/bin/clickhouse-server --config-file=/etc/clickhouse-server/config.xml
[dailachdbud005 ~]# clickhouse-client -h dailachdbud005 -u onescore --password '123456' --port 9000
197a50985f73 :) select name,path,type from system.disks;
┌─name────┬─path─────────────────┬─type──┐
1. │ backups │ /backups/            │ local │
2. │ default │ /var/lib/clickhouse/ │ local │
└─────────┴──────────────────────┴───────┘

The /backups disk maps to the host directory /var/lib/docker/volumes/root_clickhouse_backup/_data:

[dailachdbud005 ~]# docker inspect 197a50985f73 | grep backups -B 3 -A 3
            {
                "Type": "volume",
                "Source": "root_clickhouse_backup",
                "Target": "/backups",
                "VolumeOptions": {}
            }
        ],
--
            "Type": "volume",
            "Name": "root_clickhouse_backup",
            "Source": "/var/lib/docker/volumes/root_clickhouse_backup/_data",
            "Destination": "/backups",
            "Driver": "local",
            "Mode": "z",
            "RW": true,
--
            "Cmd": null,
            "Image": "clickhouse/clickhouse-server:22.9.3",
            "Volumes": {
                "/backups": {},
                "/etc/clickhouse-server/config.d": {},
                "/etc/clickhouse-server/users.d": {},
                "/var/lib/clickhouse": {}

clickhouse-backup
Tested in practice: it only backed up the metadata. When clickhouse-backup connects to the Dockerized ClickHouse as a remote server, it can only capture the metadata, not the table data.

[odsonescoredev3 /]# cat /etc/clickhouse-backup/config.yml
general:
  remote_storage: none
  max_file_size: 1099511627776
  disable_progress_bar: false
  backups_to_keep_local: 3
  backups_to_keep_remote: 15
  log_level: info
  allow_empty_backups: false
clickhouse:
  username: onescore
  password: "123456"
  host: localhost
  port: 9000
  disk_mapping: {}
  skip_tables:
    - system.*
    - default.*
    - information_schema.*
    - INFORMATION_SCHEMA.*
  timeout: 5m
  freeze_by_part: false
  secure: false
  skip_verify: false
  sync_replicated_tables: true
  skip_sync_replica_timeouts: true
  log_sql_queries: false
sftp:
  address: "127.0.0.1"
  username: "root"
  password: "D123"
  port: 22
  key: ""
  path: "/mnt/datadomaindir/clickhouse_backup/Dev/ODS1SCHFBDMDEV"
  concurrency: 1
  compression_format: none
  debug: false
[odsonescoredev3 /]# df -h
Filesystem                                                                                          Size  Used Avail Use% Mounted on
devtmpfs                                                                                            7.8G     0  7.8G   0% /dev
tmpfs                                                                                               7.8G   22M  7.8G   1% /dev/shm
tmpfs                                                                                               7.8G  755M  7.1G  10% /run
tmpfs                                                                                               7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos-root                                                                             253G   82G  171G  33% /
/dev/sda1                                                                                           3.8G  233M  3.5G   7% /boot
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/d3d9d9a71c2c9553ba3646b987d7b4e30363c2288e8098ace3d8e35aabfd2cd5/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/f2bc5bec233a9f6341729d53db0473ed605a72a80ad6b8bd7ffb54b004c81803/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/8f4af2f381fc2ed01f9ec3cf7d0c5637d1016f7b0d91cd0f6a84d6e8fb01d40e/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/b78e207587d76ebb9fd3969bfb120faf257572d241948e5d5a1632c3efc04349/merged
overlay                                                                                             253G   82G  171G  33% /var/lib/docker/overlay2/b9a729c56ab4c25055feb2d19f1dedeb6921cfa37e7194d5da68b4aaab7230a4/merged
tmpfs                                                                                               1.6G     0  1.6G   0% /run/user/0
[odsonescoredev3 /]# ll /var/lib/clickhouse
total 8
drwxr-x--- 2 101 101 116 Jul 12  2022 access
drwxr-x--- 3 101 101  22 May 13 23:19 backup
drwxr-x--- 4 101 101  35 Jul 12  2022 data
drwxr-x--- 2 101 101   6 Jul 12  2022 dictionaries_lib
drwxr-x--- 2 101 101   6 Jul 12  2022 flags
drwxr-xr-x 2 101 101   6 Jul 12  2022 format_schemas
drwxr-x--- 4 101 101 184 Jul 12  2022 metadata
drwxr-x--- 2 101 101   6 Jul 12  2022 metadata_dropped
drwxr-x--- 2 101 101  41 Jul 12  2022 preprocessed_configs
-rw-r----- 1 101 101  55 Jul 12  2022 status
drwxr-x--- 8 101 101  72 Jul 12  2022 store
drwxr-xr-x 2 101 101   6 Jul 12  2022 tmp
drwxr-x--- 2 101 101   6 Jul 12  2022 user_defined
drwxr-xr-x 2 101 101   6 Jul 12  2022 user_files
drwxr-x--- 2 101 101   6 Jul 12  2022 user_scripts
-rw-r----- 1 101 101  36 Jul 12  2022 uuid
[odsonescoredev3 /]# ll /var/lib/clickhouse/backup/20240514
total 52
drwxr-x--- 9 101 101   100 May 13 23:19 metadata
-rw-r----- 1 101 101 52532 May 13 23:19 metadata.json
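The listing above contains only metadata/ and metadata.json, i.e. a schema-only backup. As a hedged rule of thumb (the exact directory name depends on the tool version), a backup that includes data carries a data directory next to the metadata: "shadow" in the clickhouse-backup layouts I have seen, "data" inside BACKUP TO Disk archives. A small check:

```shell
# Hedged sanity check: treat a backup directory as containing data only if a
# data directory ("shadow" or "data", depending on the tool) sits next to
# the metadata. Directory names here are assumptions, not a spec.
backup_has_data() {
  [ -d "$1/shadow" ] || [ -d "$1/data" ]
}

# Usage:
#   backup_has_data /var/lib/clickhouse/backup/20240514 || echo "metadata only"
```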

Backing up and restoring with the clickhouse-client BACKUP and RESTORE commands

1. HTTP mode

[dailachdbud005 ~]# cat /root/backup_clickhouse.sh
#!/bin/bash
export CURRENT_TIME=$(date -u +"%Y%m%d%H%M%S")
curl 'http://localhost:8123/?add_http_cors_header=1&default_format=JSONCompact&max_result_rows=1000&max_result_bytes=10000000&result_overflow_mode=break' \
  -H 'Accept: */*' \
  -H 'Accept-Language: en-US,en;q=0.9' \
  -H 'Authorization: Basic b25lmU6T2RzMjAyMg==' \
  -H 'Connection: keep-alive' \
  -H 'Content-Type: text/plain;charset=UTF-8' \
  --data $'BACKUP DATABASE etl TO Disk(\'backups\', \'etl_latest.zip\') \n\n\n' \
  --compressed \
  --insecure
sleep 1; mv /var/lib/docker/volumes/root_clickhouse_backup/_data/etl_latest.zip /var/lib/docker/volumes/root_clickhouse_backup/_data/"$CURRENT_TIME"_etl.zip &
sleep 1; mv /var/lib/docker/volumes/root_clickhouse_backup/_data/"$CURRENT_TIME"_etl.zip /mnt/clickhouse_backup/ &
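The Authorization header in the script is plain HTTP Basic auth: the base64 encoding of "user:password". To build the value for your own credentials (onescore/123456 below are the sample credentials used elsewhere in this article, not the ones encoded in the script):

```shell
# Build an HTTP Basic auth header value from user:password.
# onescore / 123456 are this article's sample credentials; substitute your own.
user=onescore
password=123456
auth="Basic $(printf '%s' "$user:$password" | base64)"
echo "$auth"   # -> Basic b25lc2NvcmU6MTIzNDU2
```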

2. TCP mode
Check the backup path:

[dailachdbud005 ~]# clickhouse-client -h dailachdbud005 -u onescore --password '123456' --port 9000
197a50985f73 :) select name,path,type from system.disks;
┌─name────┬─path─────────────────┬─type──┐
1. │ backups │ /backups/            │ local │
2. │ default │ /var/lib/clickhouse/ │ local │
└─────────┴──────────────────────┴───────┘

Backup shell script:

[dailachdbud005 ~]# cat /root/script/clickhouse-client_backup.sh
#!/bin/bash
current_date=`date +%Y%m%d`
clickhouse-client -h dailachdbud005 -u onescore --password '123456' --port 9000 -q "backup database import to Disk('backups','import.zip');"
clickhouse-client -h dailachdbud005 -u onescore --password '123456' --port 9000 -q "backup database prime to Disk('backups','prime.zip');"
mv /var/lib/docker/volumes/root_clickhouse_backup/_data/import.zip /var/lib/docker/volumes/root_clickhouse_backup/_data/import_$current_date.zip
mv /var/lib/docker/volumes/root_clickhouse_backup/_data/prime.zip /var/lib/docker/volumes/root_clickhouse_backup/_data/prime_$current_date.zip
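BACKUP always writes to the fixed file name given in the statement, so both scripts stamp the archive with the run date afterwards. The rename step in isolation, using a throwaway directory in place of the Docker volume path:

```shell
#!/bin/bash
# The rename step from the scripts above, in isolation: BACKUP produces a
# fixed file name (import.zip), which is then stamped with the run date.
# A temporary directory stands in for the Docker volume path.
backup_dir=$(mktemp -d)
current_date=$(date +%Y%m%d)
touch "$backup_dir/import.zip"    # stand-in for the BACKUP output
mv "$backup_dir/import.zip" "$backup_dir/import_$current_date.zip"
ls "$backup_dir"
```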

Check whether the backup succeeded:

[dailachdbud005 ~]# clickhouse-client -h dailachdbud005 -u onescore --password '123456' --port 9000
SELECT id,name,status,error,start_time,end_time FROM system.backups where end_time>'2024-06-12' and status='BACKUP_CREATED';
┌─id───────────────────────────────────┬─name───────────────────────────┬─status─────────┬─error─┬──────────start_time─┬────────────end_time─┐
1. │ b70432ab-5a1f-48fd-b050-863240bff3c5 │ Disk('backups', 'shared.zip')  │ BACKUP_CREATED │       │ 2024-06-12 04:43:24 │ 2024-06-12 04:44:39 │
2. │ d4b9c0a3-d295-412a-9062-12f05b5ae091 │ Disk('backups', 'reports.zip') │ BACKUP_CREATED │       │ 2024-06-12 04:43:24 │ 2024-06-12 04:43:24 │
3. │ c6ea3507-1532-467f-9325-e4a60a93a869 │ Disk('backups', 'remote.zip')  │ BACKUP_CREATED │       │ 2024-06-12 04:43:24 │ 2024-06-12 04:43:24 │
4. │ 4e5878c0-1893-41dd-8a4e-74f214ea010b │ Disk('backups', 'prime.zip')   │ BACKUP_CREATED │       │ 2024-06-12 04:43:23 │ 2024-06-12 04:43:24 │
5. │ 8a3a272a-7323-4b87-96f8-63494ebd54c8 │ Disk('backups', 'import.zip')  │ BACKUP_CREATED │       │ 2024-06-12 04:43:21 │ 2024-06-12 04:43:23 │
└──────────────────────────────────────┴────────────────────────────────┴────────────────┴───────┴─────────────────────┴─────────────────────┘

Perform the restore:

197a50985f73 :) show databases;
┌─name───────────────┐
 1. │ INFORMATION_SCHEMA │
 2. │ cache              │
 3. │ default            │
 4. │ import             │
 5. │ information_schema │
 6. │ prime              │
 7. │ remote             │
 8. │ reports            │
 9. │ shared             │
10. │ system             │
└────────────────────┘
197a50985f73 :) use remote;
197a50985f73 :) show tables;
┌─name────────────────────────────────────────────┐
1. │ 1712047046_255_dae3b6de0a774198a9e17c76d4d6f08e │
2. │ 1712048388_255_53c73797082740b69f90d8f19676ed15 │
└─────────────────────────────────────────────────┘
197a50985f73 :) select count(*) from 1712047046_255_dae3b6de0a774198a9e17c76d4d6f08e;
┌─count()─┐
1. │ 136 │
└─────────┘
197a50985f73 :) drop database reports;
197a50985f73 :) drop database prime;
197a50985f73 :) drop database remote;
197a50985f73 :) show databases;
┌─name───────────────┐
1. │ INFORMATION_SCHEMA │
2. │ cache              │
3. │ default            │
4. │ import             │
5. │ information_schema │
6. │ shared             │
7. │ system             │
└────────────────────┘
197a50985f73 :) restore database reports from Disk('backups','reports_20240612.zip');
┌─id───────────────────────────────────┬─status───┐
1. │ 7cac908e-3aa7-4fd5-8511-91c76b67f071 │ RESTORED │
└──────────────────────────────────────┴──────────┘
197a50985f73 :) restore database prime from Disk('backups','prime_20240612.zip');
┌─id───────────────────────────────────┬─status───┐
1. │ 0ecd29a1-6e1c-404d-be4e-282294627b90 │ RESTORED │
└──────────────────────────────────────┴──────────┘
197a50985f73 :) restore database remote from Disk('backups','remote_20240612.zip');
┌─id───────────────────────────────────┬─status───┐
1. │ f69a23d3-df69-4969-bd93-9c21a6166e28 │ RESTORED │
└──────────────────────────────────────┴──────────┘
197a50985f73 :) show databases;
┌─name───────────────┐
 1. │ INFORMATION_SCHEMA │
 2. │ cache              │
 3. │ default            │
 4. │ import             │
 5. │ information_schema │
 6. │ prime              │
 7. │ remote             │
 8. │ reports            │
 9. │ shared             │
10. │ system             │
└────────────────────┘
197a50985f73 :) use remote;
197a50985f73 :) show tables;
┌─name────────────────────────────────────────────┐
1. │ 1712047046_255_dae3b6de0a774198a9e17c76d4d6f08e │
2. │ 1712048388_255_53c73797082740b69f90d8f19676ed15 │
└─────────────────────────────────────────────────┘
197a50985f73 :) select count(*) from 1712047046_255_dae3b6de0a774198a9e17c76d4d6f08e;
┌─count()─┐
1. │ 136 │
└─────────┘
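The three RESTORE statements above follow one pattern, so they can be generated instead of typed by hand. A hedged sketch: `gen_restore_sql` is a hypothetical helper that builds the statements from the date-stamp naming convention used by the backup scripts (db_YYYYMMDD.zip); pipe its output to clickhouse-client to actually run them.

```shell
# Hypothetical helper: emit RESTORE statements for several databases from one
# dated backup set, matching the <db>_<YYYYMMDD>.zip naming used above.
gen_restore_sql() {
  local date_tag=$1; shift
  local db
  for db in "$@"; do
    printf "RESTORE DATABASE %s FROM Disk('backups','%s_%s.zip');\n" "$db" "$db" "$date_tag"
  done
}

gen_restore_sql 20240612 reports prime remote
# To execute (assumes the same connection settings as the scripts above):
# gen_restore_sql 20240612 reports prime remote | \
#   clickhouse-client -h dailachdbud005 -u onescore --password '123456' --port 9000 -n
```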

