Multipart Upload of Very Large Files to MinIO (Purely Server-Side)


Contents

  • 1. Quick MinIO Setup
    • 1.1 Pull the Docker Image
    • 1.2 Start the Docker Container
  • 2. Multipart Upload of Large Files to MinIO
    • 2.1 Add Dependencies
    • 2.2 Implement the MinioClient
    • 2.3 Implement the Multipart Upload
      • 2.3.0 Initialize the MinioClient
      • 2.3.1 Prepare the Multipart Upload
      • 2.3.2 Split and Upload
        • 2.3.2.1 Set the Part Size
        • 2.3.2.2 Split the File
      • 2.3.3 Merge the Parts
  • 3. Testing
    • 3.1 Complete Test Code
    • 3.2 Run Log and Results

MinIO_1">一、MinIO快速搭建

This section briefly walks through standing up MinIO with Docker.

1.1 Pull the Docker Image

First, try pulling the image directly:

docker pull minio/minio

If the pull fails, try switching to Docker registry mirrors:

echo '{"registry-mirrors": ["https://4xxwxhl6.mirror.aliyuncs.com","https://mirror.iscas.ac.cn","https://docker.rainbond.cc","https://docker.nju.edu.cn","https://6kx4zyno.mirror.aliyuncs.com","https://mirror.baidubce.com","https://docker.m.daocloud.io","https://dockerproxy.com"]
}' | sudo tee /etc/docker/daemon.json > /dev/null

Then restart the Docker service so the new configuration takes effect:

sudo systemctl restart docker

Finally, pull the image again and it should succeed.
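To confirm the image is now available locally, a quick check with the standard Docker CLI (nothing assumed beyond the image name above):

docker image ls minio/minio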

1.2 Start the Docker Container

First, create the configuration and data directories:

mkdir -p /opt/minio/config
mkdir -p /opt/minio/data

Then start the container:

docker run -p 9000:9000 -p 9001:9001 --net=host --name minio -d --restart=always -e "MINIO_ACCESS_KEY=minio" -e "MINIO_SECRET_KEY=minio123" -v /opt/minio/data:/data -v /opt/minio/config:/root/.minio minio/minio server /data --console-address ":9001" --address ":9000"

Note that with --net=host the container shares the host's network stack and the -p port mappings are effectively ignored, so keep one or the other depending on your environment.

Finally, open the MinIO console at http://192.168.2.195:9001 and do some basic configuration of buckets, users, user groups, and so on. For example, create a new user minioUser with password minioUser123.
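If you would rather script this step than click through the console, here is a minimal sketch using MinIO's mc client. The alias name local and the readwrite policy are my choices, not something the article configures; older mc releases spell the last admin command as mc admin policy set local readwrite user=minioUser.

# Register the server under an alias, then create the user and test bucket
mc alias set local http://192.168.2.195:9000 minio minio123
mc admin user add local minioUser minioUser123
mc admin policy attach local readwrite --user minioUser
mc mb local/test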

MinIO_39">二、分片上传大文件到MinIO

2.1 Add Dependencies

Note that minio 8.3.3 requires okhttp 4.8.1 or newer.

// minio 8.3.3 requires okhttp >= 4.8.1
implementation 'io.minio:minio:8.3.3'
implementation 'com.squareup.okhttp3:okhttp:4.12.0'
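If the project builds with Maven instead of Gradle, the equivalent coordinates would be:

<dependency>
    <groupId>io.minio</groupId>
    <artifactId>minio</artifactId>
    <version>8.3.3</version>
</dependency>
<dependency>
    <groupId>com.squareup.okhttp3</groupId>
    <artifactId>okhttp</artifactId>
    <version>4.12.0</version>
</dependency>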

2.2 Implement the MinioClient

Following the S3 documentation at https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpu-process, a multipart upload of a large file breaks down into three steps:

  1. initMultiPartUpload — create a multipart upload task and obtain an uploadId
  2. uploadMultiPart — upload the parts one by one
  3. mergeMultipartUpload — merge the parts into the final object

The default MinioClient keeps the underlying methods protected, so we subclass it to expose the ones we need.

package com.szh.minio;

import com.google.common.collect.Multimap;
import io.minio.*;
import io.minio.errors.*;
import io.minio.messages.Part;

import java.io.IOException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;

public class CustomMinioClient extends MinioClient {

    /** Wrap an existing client via the protected copy constructor. */
    public CustomMinioClient(MinioClient client) {
        super(client);
    }

    /** Initialize a multipart upload, i.e. obtain the uploadId. */
    public String initMultiPartUpload(String bucket, String region, String object, Multimap<String, String> headers, Multimap<String, String> extraQueryParams)
            throws IOException, InvalidKeyException, NoSuchAlgorithmException, InsufficientDataException, ServerException, InternalException, XmlParserException, InvalidResponseException, ErrorResponseException {
        CreateMultipartUploadResponse response = this.createMultipartUpload(bucket, region, object, headers, extraQueryParams);
        return response.result().uploadId();
    }

    /** Upload a single part. */
    public UploadPartResponse uploadMultiPart(String bucket, String region, String object, Object data, long length, String uploadId, int partNumber, Multimap<String, String> headers, Multimap<String, String> extraQueryParams)
            throws IOException, InvalidKeyException, NoSuchAlgorithmException, InsufficientDataException, ServerException, InternalException, XmlParserException, InvalidResponseException, ErrorResponseException {
        return this.uploadPart(bucket, region, object, data, length, uploadId, partNumber, headers, extraQueryParams);
    }

    /** Merge the uploaded parts. */
    public ObjectWriteResponse mergeMultipartUpload(String bucketName, String region, String objectName, String uploadId, Part[] parts, Multimap<String, String> extraHeaders, Multimap<String, String> extraQueryParams)
            throws IOException, InvalidKeyException, NoSuchAlgorithmException, InsufficientDataException, ServerException, InternalException, XmlParserException, InvalidResponseException, ErrorResponseException {
        return this.completeMultipartUpload(bucketName, region, objectName, uploadId, parts, extraHeaders, extraQueryParams);
    }

    /** Abort a multipart upload. */
    public void cancelMultipartUpload(String bucketName, String region, String objectName, String uploadId, Multimap<String, String> extraHeaders, Multimap<String, String> extraQueryParams)
            throws ServerException, InsufficientDataException, ErrorResponseException, NoSuchAlgorithmException, IOException, InvalidKeyException, XmlParserException, InvalidResponseException, InternalException {
        this.abortMultipartUpload(bucketName, region, objectName, uploadId, extraHeaders, extraQueryParams);
    }

    /** List the parts uploaded so far for a given uploadId. */
    public ListPartsResponse listMultipart(String bucketName, String region, String objectName, Integer maxParts, Integer partNumberMarker, String uploadId, Multimap<String, String> extraHeaders, Multimap<String, String> extraQueryParams)
            throws NoSuchAlgorithmException, InsufficientDataException, IOException, InvalidKeyException, ServerException, XmlParserException, ErrorResponseException, InternalException, InvalidResponseException {
        return this.listParts(bucketName, region, objectName, maxParts, partNumberMarker, uploadId, extraHeaders, extraQueryParams);
    }
}

2.3 Implement the Multipart Upload

2.3.0 Initialize the MinioClient

Connect to MinIO and make sure the target bucket exists.

static CustomMinioClient minioClient = new CustomMinioClient(MinioClient.builder()
        .endpoint("http://192.168.2.195:9000")
        .credentials("minioUser", "minioUser123")
        .build());
// Test bucket
static String bucketName = "test";
static {
    try {
        boolean found = minioClient.bucketExists(BucketExistsArgs.builder().bucket(bucketName).build());
        if (!found) {
            minioClient.makeBucket(MakeBucketArgs.builder().bucket(bucketName).build());
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

2.3.1 Prepare the Multipart Upload

Create a multipart upload task for the large file.

String contentType = "application/octet-stream";
HashMultimap<String, String> headers = HashMultimap.create();
headers.put("Content-Type", contentType);
String uploadId = minioClient.initMultiPartUpload(bucketName, null, file.getName(), headers, null);
System.out.println("uploadId: " + uploadId);

2.3.2 Split and Upload

This article does both the splitting and the uploading purely on the server side. In a real project, the recommended approach is instead for the backend to call MinIO's getPresignedObjectUrl to generate a presigned upload URL for each part, and for the frontend to upload the parts directly to MinIO, which spares the backend the network I/O.

📢 For the latter approach, see the companion article "MinIO Multipart Upload of Very Large Files (Not Purely Server-Side)".

2.3.2.1 Set the Part Size

On the one hand, each part must be at least 5 MB (the last part is exempt). If any other part is smaller, MinIO/S3 fails the merge with: code = EntityTooSmall, message = Your proposed upload is smaller than the minimum allowed object size.

On the other hand, when tuning the part size, keep in mind that MinIO/S3 only accepts part numbers in the range [1, 10000], so the part size also caps the maximum file size. A sketch of reconciling the two limits follows.
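A minimal sketch of deriving a part size that satisfies both constraints; the helper name choosePartSize and its placement are mine, not part of the original code:

// Hypothetical helper: pick a part size >= 5 MB that keeps the part count <= 10000.
static long choosePartSize(long totalLength) {
    final long MIN_PART_SIZE = 5L * 1024 * 1024; // S3/MinIO minimum for all but the last part
    final long MAX_PARTS = 10_000;               // S3/MinIO maximum number of parts
    // Smallest part size that fits the file into at most MAX_PARTS parts, rounded up
    long required = (totalLength + MAX_PARTS - 1) / MAX_PARTS;
    return Math.max(MIN_PART_SIZE, required);
}

For the 3.3 GB test file below this returns the 5 MB floor; the 10000-part ceiling only starts to matter above roughly 48 GiB (10000 × 5 MB).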

2.3.2.2 Split the File

On the one hand, to keep the splitting fast, a thread pool uploads parts concurrently while RandomAccessFile provides random access into the file, so each worker can read its own slice independently at its own offset. The concurrency level and the part size can be capped to keep the parallel splitting from running out of memory.

On the other hand, the final merge may only run after every part has been uploaded, so a CountDownLatch blocks the main thread until all uploads have finished. If a part fails permanently, the upload should also be aborted, as sketched below.
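One thing the full test code below does not handle is failure cleanup: an abandoned uploadId leaves orphaned parts on the server. Here is a minimal sketch using the cancelMultipartUpload wrapper from CustomMinioClient above; the helper name abortUpload and its error-handling shape are my assumptions:

// Hypothetical cleanup helper: abort the whole multipart upload so the
// already-uploaded parts do not linger on the server.
static void abortUpload(CustomMinioClient client, String bucket, String object, String uploadId) {
    try {
        client.cancelMultipartUpload(bucket, null, object, uploadId, null, null);
    } catch (Exception e) {
        // Abort is best-effort cleanup; just log the failure.
        e.printStackTrace();
    }
}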

2.3.3 Merge the Parts

Merge all of the uploaded parts.

Part[] parts = new Part[(int) chunkCount];
// List the uploaded parts. S3 allows at most 10000 parts, numbered from 1
ListPartsResponse partResult = minioClient.listMultipart(bucketName, null, file.getName(), 10000, 0, uploadId, null, null);
int partNumber = 1;
for (Part part : partResult.result().partList()) {
    parts[partNumber - 1] = new Part(partNumber, part.etag());
    partNumber++;
}
ObjectWriteResponse objectWriteResponse = minioClient.mergeMultipartUpload(bucketName, null, file.getName(), uploadId, parts, null, null);

3. Testing

3.1 Complete Test Code

package com.szh.minio;

import com.google.common.collect.HashMultimap;
import io.minio.*;
import io.minio.messages.Part;
import lombok.Getter;
import lombok.Setter;

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@Setter
@Getter
public class MinioMain {

    static CustomMinioClient minioClient = new CustomMinioClient(MinioClient.builder()
            .endpoint("http://192.168.2.195:9000")
            .credentials("minioUser", "minioUser123")
            .build());

    // Test bucket
    static String bucketName = "test";

    static {
        try {
            boolean found = minioClient.bucketExists(BucketExistsArgs.builder().bucket(bucketName).build());
            if (!found) {
                minioClient.makeBucket(MakeBucketArgs.builder().bucket(bucketName).build());
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // The large file to upload in parts
    static String filePath = "C:\\tmp\\psi_result.csv";
    static File file = new File(filePath);

    // Part size: 5 MB. If any part other than the last is smaller, MinIO/S3 fails the merge with:
    // code = EntityTooSmall, message = Your proposed upload is smaller than the minimum allowed object size.
    static final long CHUNK_SIZE = 5 * 1024 * 1024;

    // This instance's part number; MinIO/S3 allows part numbers in [1, 10000]
    // https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html#mpu-process
    private int chunkIndex;

    // Signals the moment when every part has been uploaded, so the merge can start
    private static CountDownLatch countDownLatch;

    public static void main(String[] args) throws Exception {
        // Step 1: initialize the multipart upload
        String contentType = "application/octet-stream";
        HashMultimap<String, String> headers = HashMultimap.create();
        headers.put("Content-Type", contentType);
        String uploadId = minioClient.initMultiPartUpload(bucketName, null, file.getName(), headers, null);
        System.out.println("uploadId: " + uploadId);

        // Step 2: split the file and upload the parts
        // Note: in a real project the backend can instead call getPresignedObjectUrl to generate a presigned
        // upload URL per part, and the frontend uploads straight to MinIO, sparing the backend the network I/O
        long totalLength = file.length();
        System.out.println("totalLength: " + totalLength + " Byte");
        // Number of parts, rounded up
        long chunkCount = (totalLength + CHUNK_SIZE - 1) / CHUNK_SIZE;
        System.out.println("chunkCount: " + chunkCount);
        countDownLatch = new CountDownLatch((int) chunkCount);
        // Upload parts concurrently on 5 core threads
        ExecutorService fixedThreadPool = Executors.newFixedThreadPool(5);
        for (long i = 0; i < chunkCount; i++) {
            long position = i * CHUNK_SIZE;
            int bytesRead = (int) Math.min(CHUNK_SIZE, totalLength - position);
            MinioMain minioMain = new MinioMain();
            // S3 part numbers start at 1
            minioMain.setChunkIndex((int) i + 1);
            fixedThreadPool.submit(() -> {
                try {
                    // Upload one part
                    minioMain.processChunk(filePath, position, bytesRead, uploadId);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
        }
        countDownLatch.await();
        fixedThreadPool.shutdownNow();

        // Step 3: merge the parts
        System.out.println("ready to merge <" + file.getName() + " - " + uploadId + " - " + bucketName + ">");
        Part[] parts = new Part[(int) chunkCount];
        // List the uploaded parts. S3 allows at most 10000 parts, numbered from 1
        ListPartsResponse partResult = minioClient.listMultipart(bucketName, null, file.getName(), 10000, 0, uploadId, null, null);
        int partNumber = 1;
        for (Part part : partResult.result().partList()) {
            parts[partNumber - 1] = new Part(partNumber, part.etag());
            partNumber++;
        }
        ObjectWriteResponse objectWriteResponse = minioClient.mergeMultipartUpload(bucketName, null, file.getName(), uploadId, parts, null, null);
        System.out.println("mergeMultipartUpload resp etag: " + objectWriteResponse.etag());
        StatObjectResponse statObjectResponse = minioClient.statObject(StatObjectArgs.builder().bucket(bucketName).object(file.getName()).build());
        System.out.println("etag: " + statObjectResponse.etag() + " size: " + statObjectResponse.size() + " lastModified: " + statObjectResponse.lastModified());
    }

    private void processChunk(String filePath, long position, int bytesRead, String uploadId) {
        // Bounding the pool size and part size keeps this buffer allocation from causing OOM
        byte[] buffer = new byte[bytesRead];
        RandomAccessFile raf = null;
        try {
            int chunkIndex = this.getChunkIndex();
            raf = new RandomAccessFile(filePath, "r");
            // Seek to this part's offset
            raf.seek(position);
            // Read exactly bytesRead bytes as this part's payload
            raf.readFully(buffer);
            String contentType = "application/octet-stream";
            HashMultimap<String, String> headers = HashMultimap.create();
            headers.put("Content-Type", contentType);
            UploadPartResponse uploadPartResponse = minioClient.uploadMultiPart(bucketName, null, file.getName(), buffer, bytesRead, uploadId, chunkIndex, headers, null);
            System.out.println("chunk[" + chunkIndex + "] buffer size: [" + buffer.length + " Byte] upload etag: [" + uploadPartResponse.etag() + "]");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (raf != null) {
                try {
                    raf.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            countDownLatch.countDown();
        }
    }
}

3.2 Run Log and Results

The run log is as follows:

uploadId: MzFiMWRmZjctMDg0Yy00YzMyLTk5NTYtMjRkZGZiMDZlYjJhLmUwZmFkNzFiLWEwZTctNDU1Yi04ZWFjLWFhODQyZjBiMmIyOXgxNzI3MzQwMjUzMTA2Njc5MTEz
totalLength: 3576974860 Byte
chunkCount: 683
chunk[1] buffer size: [5242880 Byte] upload etag: [97096e510d1dcda56646608345de08ea]
chunk[3] buffer size: [5242880 Byte] upload etag: [d8102f80f10eb79f600cdf2d378ae8fe]
chunk[4] buffer size: [5242880 Byte] upload etag: [b74f9b8fa2025580b4fc00449c66e271]
chunk[5] buffer size: [5242880 Byte] upload etag: [e77603ee49cc3f7d229f124ecd9a3f38]
chunk[2] buffer size: [5242880 Byte] upload etag: [b148b311ccd2b3fcd4777d56a8758c3d]
chunk[6] buffer size: [5242880 Byte] upload etag: [94abe5a7a2117b612d9805029398cfd9]
chunk[7] buffer size: [5242880 Byte] upload etag: [433b52aed0d1b1486df07a2259932a83]
chunk[8] buffer size: [5242880 Byte] upload etag: [2c242bd205f9b3c4546454fe2d0abef4]
...
chunk[679] buffer size: [5242880 Byte] upload etag: [8492b0573cc74ec55cb6d2a86aee0f69]
chunk[678] buffer size: [5242880 Byte] upload etag: [4aa5c01b4f7aea95952ec62d71ee9996]
chunk[681] buffer size: [5242880 Byte] upload etag: [ac0b739044bfd2644fc8da97fc03a1a9]
chunk[680] buffer size: [5242880 Byte] upload etag: [d95ee210ac774b3ca26e091941c66e20]
chunk[682] buffer size: [5242880 Byte] upload etag: [75e78df64c1fad0839ba8a1583cd93ec]
chunk[683] buffer size: [1330700 Byte] upload etag: [2f30c8d65e23d266c7f10f051854bc6a]
ready to merge <psi_result.csv - MzFiMWRmZjctMDg0Yy00YzMyLTk5NTYtMjRkZGZiMDZlYjJhLmUwZmFkNzFiLWEwZTctNDU1Yi04ZWFjLWFhODQyZjBiMmIyOXgxNzI3MzQwMjUzMTA2Njc5MTEz - test>
mergeMultipartUpload resp etag: "ff6ebd330b3cb224ade84463dd14df82-683"
etag: ff6ebd330b3cb224ade84463dd14df82-683 size: 3576974860 lastModified: 2024-09-26T09:09Z

The merged ETag ends in -683: for multipart objects, MinIO/S3 appends the part count to the ETag. The MinIO console after the upload:

[image: MinioConsole]


