SeaweedFS (master/volume/filer) `docker run` parameter reference

ops / 2025-02-13 20:32:29

Table of contents

  • Obtained by running inside the container
    • weed -h
    • weed server -h
    • weed volume -h
    • Key points
    • A quick test: `-volume.minFreeSpace` is aggressive; setting it to 10 (10%) leaves the system only 10% free and claims the rest up front
    • Trying only `-volume.max` to cap the number of volumes (each volume seems to be about 1 GB)

Obtained by running inside the container

weed -h

/data # weed
SeaweedFS: store billions of files and serve them fast!

Usage:
	weed command [arguments]

The commands are:
	autocomplete install autocomplete
	autocomplete.uninstall uninstall autocomplete
	backup      incrementally backup a volume to local folder
	benchmark   benchmark by writing millions of files and reading them out
	compact     run weed tool compact on volume file
	download    download files by file id
	export      list or export files from one volume data file
	filer       start a file server that points to a master server, or a list of master servers
	filer.backup resume-able continuously replicate files from a SeaweedFS cluster to another location defined in replication.toml
	filer.cat   copy one file to local
	filer.copy  copy one or a list of files to a filer folder
	filer.meta.backup continuously backup filer meta data changes to another filer store specified in a backup_filer.toml
	filer.meta.tail see continuous changes on a filer
	filer.remote.gateway resumable continuously write back bucket creation, deletion, and other local updates to remote object store
	filer.remote.sync resumable continuously write back updates to remote storage
	filer.replicate replicate file changes to another destination
	filer.sync  resumable continuous synchronization between two active-active or active-passive SeaweedFS clusters
	fix         run weed tool fix on files or whole folders to recreate index file(s) if corrupted
	fuse        Allow use weed with linux's mount command
	iam         start a iam API compatible server
	master      start a master server
	master.follower start a master follower
	mount       mount weed filer to a directory as file system in userspace(FUSE)
	mq.broker   <WIP> start a message queue broker
	s3          start a s3 API compatible server that is backed by a filer
	scaffold    generate basic configuration files
	server      start a master server, a volume server, and optionally a filer and a S3 gateway
	shell       run interactive administrative commands
	update      get latest or specific version from https://github.com/seaweedfs/seaweedfs
	upload      upload one or a list of files
	version     print SeaweedFS version
	volume      start a volume server
	webdav      start a webdav server that is backed by a filer

Use "weed help [command]" for more information about a command.

For Logging, use "weed [logging_options] [command]". The logging options are:
  -alsologtostderr
      log to standard error as well as files (default true)
  -config_dir value
      directory with toml configuration files
  -log_backtrace_at value
      when logging hits line file:N, emit a stack trace
  -logdir string
      If non-empty, write log files in this directory
  -logtostderr
      log to standard error instead of files
  -options string
      a file of command line options, each line in optionName=optionValue format
  -stderrthreshold value
      logs at or above this threshold go to stderr
  -v value
      log levels [0|1|2|3|4], default to 0
  -vmodule value
      comma-separated list of pattern=N settings for file-filtered logging
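The global `-options` flag above takes a file with one optionName=optionValue entry per line. A minimal sketch of that file format (the path and the option values here are illustrative, not taken from the original post):

```shell
# Illustrative sketch: write an options file in the one-option-per-line
# format that "weed -options <file>" expects. Path and values are examples.
cat > /tmp/weed.opts <<'EOF'
v=2
logtostderr=true
EOF
# The command would then be launched as (not run here; assumes weed is on PATH):
#   weed -options=/tmp/weed.opts master
cat /tmp/weed.opts
```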

weed server -h

/data # weed server -h
Example: weed server -dir=/tmp -volume.max=5 -ip=server_name
Default Usage:
  -cpuprofile string
      cpu profile output file
  -dataCenter string
      current volume server's data center name
  -debug
      serves runtime profiling data, e.g., http://localhost:6060/debug/pprof/goroutine?debug=2
  -debug.port int
      http port for debugging (default 6060)
  -dir string
      directories to store data files. dir[,dir]... (default "/tmp")
  -disableHttp
      disable http requests, only gRPC operations are allowed.
  -filer
      whether to start filer
  -filer.collection string
      all data will be stored in this collection
  -filer.concurrentUploadLimitMB int
      limit total concurrent upload size (default 64)
  -filer.defaultReplicaPlacement string
      default replication type. If not specified, use master setting.
  -filer.dirListLimit int
      limit sub dir listing size (default 1000)
  -filer.disableDirListing
      turn off directory listing
  -filer.disk string
      [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -filer.downloadMaxMBps int
      download max speed for each download request, in MB per second
  -filer.encryptVolumeData
      encrypt data on volume servers
  -filer.filerGroup string
      share metadata with other filers in the same filerGroup
  -filer.localSocket string
      default to /tmp/seaweedfs-filer-<port>.sock
  -filer.maxMB int
      split files larger than the limit (default 4)
  -filer.port int
      filer server http listen port (default 8888)
  -filer.port.grpc int
      filer server grpc listen port
  -filer.port.public int
      filer server public http listen port
  -filer.saveToFilerLimit int
      Small files smaller than this limit can be cached in filer store.
  -filer.ui.deleteDir
      enable filer UI show delete directory button (default true)
  -iam
      whether to start IAM service
  -iam.port int
      iam server http listen port (default 8111)
  -idleTimeout int
      connection idle seconds (default 30)
  -ip string
      ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string
      ip address to bind to. If empty, default to same as -ip option.
  -master
      whether to start master server (default true)
  -master.defaultReplication string
      Default replication type if not specified.
  -master.dir string
      data directory to store meta data, default to same as -dir specified
  -master.electionTimeout duration
      election timeout of master servers (default 10s)
  -master.garbageThreshold float
      threshold to vacuum and reclaim spaces (default 0.3)
  -master.heartbeatInterval duration
      heartbeat interval of master servers, and will be randomly multiplied by [1, 1.25) (default 300ms)
  -master.metrics.address string
      Prometheus gateway address
  -master.metrics.intervalSeconds int
      Prometheus push interval in seconds (default 15)
  -master.peers string
      all master nodes in comma separated ip:masterPort list
  -master.port int
      master server http listen port (default 9333)
  -master.port.grpc int
      master server grpc listen port
  -master.raftHashicorp
      use hashicorp raft
  -master.resumeState
      resume previous state on start master server
  -master.volumePreallocate
      Preallocate disk space for volumes.
  -master.volumeSizeLimitMB uint
      Master stops directing writes to oversized volumes. (default 30000)
  -memprofile string
      memory profile output file
  -metricsPort int
      Prometheus metrics listen port
  -mq.broker
      whether to start message queue broker
  -mq.broker.port int
      message queue broker gRPC listen port (default 17777)
  -options string
      a file of command line options, each line in optionName=optionValue format
  -rack string
      current volume server's rack name
  -s3
      whether to start S3 gateway
  -s3.allowDeleteBucketNotEmpty
      allow recursive deleting all entries along with bucket (default true)
  -s3.allowEmptyFolder
      allow empty folders (default true)
  -s3.auditLogConfig string
      path to the audit log config file
  -s3.cert.file string
      path to the TLS certificate file
  -s3.config string
      path to the config file
  -s3.domainName string
      suffix of the host name in comma separated list, {bucket}.{domainName}
  -s3.key.file string
      path to the TLS private key file
  -s3.port int
      s3 server http listen port (default 8333)
  -s3.port.grpc int
      s3 server grpc listen port
  -volume
      whether to start volume server (default true)
  -volume.compactionMBps int
      limit compaction speed in mega bytes per second
  -volume.concurrentDownloadLimitMB int
      limit total concurrent download size (default 64)
  -volume.concurrentUploadLimitMB int
      limit total concurrent upload size (default 64)
  -volume.dir.idx string
      directory to store .idx files
  -volume.disk string
      [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -volume.fileSizeLimitMB int
      limit file size to avoid out of memory (default 256)
  -volume.hasSlowRead
      <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
  -volume.images.fix.orientation
      Adjust jpg orientation when uploading.
  -volume.index string
      Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
  -volume.index.leveldbTimeout int
      alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
  -volume.inflightUploadDataTimeout duration
      inflight upload data wait timeout of volume servers (default 1m0s)
  -volume.max string
      maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
  -volume.minFreeSpace string
      min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
  -volume.minFreeSpacePercent string
      minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
  -volume.port int
      volume server http listen port (default 8080)
  -volume.port.grpc int
      volume server grpc listen port
  -volume.port.public int
      volume server public port
  -volume.pprof
      enable pprof http handlers. precludes --memprofile and --cpuprofile
  -volume.preStopSeconds int
      number of seconds between stop send heartbeats and stop volume server (default 10)
  -volume.publicUrl string
      publicly accessible address
  -volume.readBufferSizeMB int
      <experimental> larger values can optimize query performance but will increase some memory usage, Use with hasSlowRead normally (default 4)
  -volume.readMode string
      [local|proxy|redirect] how to deal with non-local volume: 'not found|read in remote node|redirect volume location'. (default "proxy")
  -webdav
      whether to start WebDAV gateway
  -webdav.cacheCapacityMB int
      local cache capacity in MB
  -webdav.cacheDir string
      local cache directory for file chunks (default "/tmp")
  -webdav.cert.file string
      path to the TLS certificate file
  -webdav.collection string
      collection to create the files
  -webdav.disk string
      [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -webdav.filer.path string
      use this remote path from filer server (default "/")
  -webdav.key.file string
      path to the TLS private key file
  -webdav.port int
      webdav server http listen port (default 7333)
  -webdav.replication string
      replication to create the files
  -whiteList string
      comma separated Ip addresses having write permission. No limit if empty.
Description:
  start both a volume server to provide storage spaces
  and a master server to provide volume=>location mapping service and sequence number of file ids

  This is provided as a convenient way to start both volume server and master server.
  The servers acts exactly the same as starting them separately.
  So other volume servers can connect to this master server also.

  Optionally, a filer server can be started.
  Also optionally, a S3 gateway can be started.
/data #
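One detail worth calling out from the flags above: with `-volume.max=0`, the limit is auto-configured as free disk space divided by the volume size. A back-of-the-envelope sketch of that arithmetic with made-up numbers (the real calculation lives inside SeaweedFS; this only illustrates the stated rule):

```shell
# Hypothetical numbers: 120 GB free on the -dir filesystem, and the default
# -master.volumeSizeLimitMB of 30000 (i.e. 30 GB per volume).
free_mb=120000
volume_size_limit_mb=30000
max_volumes=$(( free_mb / volume_size_limit_mb ))
echo "auto-configured -volume.max: $max_volumes"   # -> auto-configured -volume.max: 4
```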

weed volume -h

/data # weed volume -h
Example: weed volume -port=8080 -dir=/tmp -max=5 -ip=server_name -mserver=localhost:9333
Default Usage:
  -compactionMBps int
      limit background compaction or copying speed in mega bytes per second
  -concurrentDownloadLimitMB int
      limit total concurrent download size (default 256)
  -concurrentUploadLimitMB int
      limit total concurrent upload size (default 256)
  -cpuprofile string
      cpu profile output file
  -dataCenter string
      current volume server's data center name
  -dir string
      directories to store data files. dir[,dir]... (default "/tmp")
  -dir.idx string
      directory to store .idx files
  -disk string
      [hdd|ssd|<tag>] hard drive or solid state drive or any tag
  -fileSizeLimitMB int
      limit file size to avoid out of memory (default 256)
  -hasSlowRead
      <experimental> if true, this prevents slow reads from blocking other requests, but large file read P99 latency will increase. (default true)
  -idleTimeout int
      connection idle seconds (default 30)
  -images.fix.orientation
      Adjust jpg orientation when uploading.
  -index string
      Choose [memory|leveldb|leveldbMedium|leveldbLarge] mode for memory~performance balance. (default "memory")
  -index.leveldbTimeout int
      alive time for leveldb (default to 0). If leveldb of volume is not accessed in ldbTimeout hours, it will be off loaded to reduce opened files and memory consumption.
  -inflightUploadDataTimeout duration
      inflight upload data wait timeout of volume servers (default 1m0s)
  -ip string
      ip or server name, also used as identifier (default "172.17.0.6")
  -ip.bind string
      ip address to bind to. If empty, default to same as -ip option.
  -max string
      maximum numbers of volumes, count[,count]... If set to zero, the limit will be auto configured as free disk space divided by volume size. (default "8")
  -memprofile string
      memory profile output file
  -metricsPort int
      Prometheus metrics listen port
  -minFreeSpace string
      min free disk space (value<=100 as percentage like 1, other as human readable bytes, like 10GiB). Low disk space will mark all volumes as ReadOnly.
  -minFreeSpacePercent string
      minimum free disk space (default to 1%). Low disk space will mark all volumes as ReadOnly (deprecated, use minFreeSpace instead). (default "1")
  -mserver string
      comma-separated master servers (default "localhost:9333")
  -options string
      a file of command line options, each line in optionName=optionValue format
  -port int
      http listen port (default 8080)
  -port.grpc int
      grpc listen port
  -port.public int
      port opened to public
  -pprof
      enable pprof http handlers. precludes --memprofile and --cpuprofile
  -preStopSeconds int
      number of seconds between stop send heartbeats and stop volume server (default 10)
  -publicUrl string
      Publicly accessible address
  -rack string
      current volume server's rack name
  -readBufferSizeMB int
      <experimental> larger values can optimize query performance but will increase some memory usage, Use with hasSlowRead normally. (default 4)
  -readMode string
      [local|proxy|redirect] how to deal with non-local volume: 'not found|proxy to remote node|redirect volume location'. (default "proxy")
  -whiteList string
      comma separated Ip addresses having write permission. No limit if empty.
Description:
  start a volume server to provide storage spaces
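The `-minFreeSpace` rule above (a value <= 100 reads as a percentage, anything else as a human-readable byte size like 10GiB) can be sketched as a small helper. This is only a reading of the help text, not SeaweedFS's actual parser:

```shell
# Sketch of how -minFreeSpace values read, per the help text: numeric values
# up to 100 mean a percentage, everything else a human-readable byte size.
interpret_min_free_space() {
  case "$1" in
    ''|*[!0-9]*)
      echo "byte size: $1" ;;                 # e.g. 10GiB
    *)
      if [ "$1" -le 100 ]; then
        echo "percentage: $1%"
      else
        echo "byte size: $1"                  # plain number over 100: bytes
      fi ;;
  esac
}
interpret_min_free_space 10      # -> percentage: 10%
interpret_min_free_space 10GiB   # -> byte size: 10GiB
```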

Key points

-master.garbageThreshold float   # threshold to vacuum and reclaim spaces (default 0.3)
-volume.max string               # maximum number of volumes, count[,count]...; if set to zero, the limit is auto-configured as free disk space divided by volume size (default "8")
-volume.minFreeSpace string      # minimum free disk space (a value <= 100 is a percentage, e.g. 1; anything else is a human-readable byte size, e.g. 10GiB); once the threshold is reached, all volumes are marked ReadOnly (so roughly, writing 30 means 30%)

I tested this, and `-volume.minFreeSpace` is quite aggressive: for example, with a value of 10 (10%), it leaves the system only 10% free space and claims all of the rest up front.
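To put that 10% setting in concrete terms, here is the arithmetic on a hypothetical 500 GiB disk (the numbers are made up for illustration):

```shell
# With -volume.minFreeSpace=10 on a hypothetical 500 GiB disk, roughly 50 GiB
# must stay free; the observed behavior is that the remainder gets claimed.
disk_gib=500
min_free_percent=10
reserved_gib=$(( disk_gib * min_free_percent / 100 ))
claimable_gib=$(( disk_gib - reserved_gib ))
echo "kept free: ${reserved_gib} GiB, claimable: ${claimable_gib} GiB"
# -> kept free: 50 GiB, claimable: 450 GiB
```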


Trying only `-volume.max` to cap the number of volumes (each volume seems to be about 1 GB)

I tried setting it to 20:

    docker run \
      -d -i -t --restart always \
      --name $CONTAINER_NAME \
      -p $MASTER_PORT:9333 \
      -p $FILER_PORT:8888 \
      -v $SCRIPT_LOCATION/mount/masterVolumeFiler/data/:/data/ \
      -v /etc/localtime:/etc/localtime:ro \
      --log-driver=json-file \
      --log-opt max-size=100m \
      --log-opt max-file=3 \
      $IMAGE_NAME:$IMAGE_TAG \
      server -filer -volume.max=20


While files were being uploaded continuously, it expanded the occupied space in stages.
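A back-of-the-envelope check of that setup, taking the roughly-1-GB-per-volume estimate at face value (it is an observation from this test, not a documented limit):

```shell
# 20 volumes at roughly 1 GiB each caps usable space near 20 GiB in this setup.
max_volumes=20
approx_gib_per_volume=1
echo "approximate capacity cap: $(( max_volumes * approx_gib_per_volume )) GiB"
# -> approximate capacity cap: 20 GiB
```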



