HDFS Write Flow Source Code Analysis (Part 2): Server Side


HDFS Write Flow Source Code Analysis

  • 1. Client
  • 2. NameNode Side
    • (1) create
    • (2) addBlock

The environment is Hadoop 3.1.3.

1. Client

HDFS Write Flow Source Code Analysis (Part 1): Client Side

2. NameNode Side

(1) create

First, find the NameNode's RPC server implementation and enter NameNodeRpcServer.create():

```java
public HdfsFileStatus create(String src, FsPermission masked,
    String clientName, EnumSetWritable<CreateFlag> flag,
    boolean createParent, short replication, long blockSize,
    CryptoProtocolVersion[] supportedVersions, String ecPolicyName)
    throws IOException {
  checkNNStartup();
  // IP of the client that issued the request
  String clientMachine = getClientMachine();
  if (stateChangeLog.isDebugEnabled()) {
    stateChangeLog.debug("*DIR* NameNode.create: file "
        + src + " for " + clientName + " at " + clientMachine);
  }
  // check that the path length (8000) and depth (1000) are within limits
  if (!checkPathLength(src)) {
    throw new IOException("create: Pathname too long.  Limit "
        + MAX_PATH_LENGTH + " characters, " + MAX_PATH_DEPTH + " levels.");
  }
  // check that the current NameNode state (active, backup, standby)
  // supports this operation
  namesystem.checkOperation(OperationCategory.WRITE);
  CacheEntryWithPayload cacheEntry = RetryCache.waitForCompletion(retryCache, null);
  if (cacheEntry != null && cacheEntry.isSuccess()) {
    return (HdfsFileStatus) cacheEntry.getPayload();
  }
  HdfsFileStatus status = null;
  try {
    PermissionStatus perm = new PermissionStatus(
        getRemoteUser().getShortUserName(), null, masked);
    // create the file
    status = namesystem.startFile(src, perm, clientName, clientMachine,
        flag.get(), createParent, replication, blockSize, supportedVersions,
        ecPolicyName, cacheEntry != null);
  } finally {
    RetryCache.setState(cacheEntry, status != null, status);
  }
  metrics.incrFilesCreated();
  metrics.incrCreateFileOps();
  return status;
}
```
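Note the checkPathLength(src) guard near the top: it rejects pathnames longer than 8000 characters or deeper than 1000 levels before any namespace work is done. A minimal standalone sketch of that check (the limits match MAX_PATH_LENGTH and MAX_PATH_DEPTH in the source; the class name is ours, and the real code computes depth via Path.depth() rather than counting separators):

```java
// Illustrative sketch of the path length/depth guard in NameNodeRpcServer.create()
class PathCheck {
  static final int MAX_PATH_LENGTH = 8000;  // characters
  static final int MAX_PATH_DEPTH = 1000;   // path components

  static boolean checkPathLength(String src) {
    if (src.length() > MAX_PATH_LENGTH) {
      return false;
    }
    // approximate Path.depth() by counting '/' separators
    int depth = 0;
    for (int i = 0; i < src.length(); i++) {
      if (src.charAt(i) == '/') {
        depth++;
      }
    }
    return depth <= MAX_PATH_DEPTH;
  }
}
```

A path can fail on either dimension independently: a 9000-character flat name is too long, while a 2002-character path of 1001 single-letter components is too deep.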

NameNodeRpcServer.create() creates the file and returns the fileId, permissions, and other file metadata the client needs to construct its output stream. Let us focus on FSNamesystem.startFile():

```java
HdfsFileStatus startFile(String src, PermissionStatus permissions,
    String holder, String clientMachine, EnumSet<CreateFlag> flag,
    boolean createParent, short replication, long blockSize,
    CryptoProtocolVersion[] supportedVersions, String ecPolicyName,
    boolean logRetryCache) throws IOException {
  HdfsFileStatus status;
  try {
    // create the file
    status = startFileInt(src, permissions, holder, clientMachine, flag,
        createParent, replication, blockSize, supportedVersions, ecPolicyName,
        logRetryCache);
  } catch (AccessControlException e) {
    logAuditEvent(false, "create", src);
    throw e;
  }
  logAuditEvent(true, "create", src, status);
  return status;
}
```

There is no need to pay attention to ecPolicy and related parameters; they implement striped storage via erasure coding (EC), which reduces the storage space overhead, and we do not consider that here. Continue into startFileInt():

```java
private HdfsFileStatus startFileInt(String src,
    PermissionStatus permissions, String holder, String clientMachine,
    EnumSet<CreateFlag> flag, boolean createParent, short replication,
    long blockSize, CryptoProtocolVersion[] supportedVersions,
    String ecPolicyName, boolean logRetryCache) throws IOException {
  if (NameNode.stateChangeLog.isDebugEnabled()) {
    StringBuilder builder = new StringBuilder();
    builder.append("DIR* NameSystem.startFile: src=").append(src)
        .append(", holder=").append(holder)
        .append(", clientMachine=").append(clientMachine)
        .append(", createParent=").append(createParent)
        .append(", replication=").append(replication)
        .append(", createFlag=").append(flag)
        .append(", blockSize=").append(blockSize)
        .append(", supportedVersions=")
        .append(Arrays.toString(supportedVersions));
    NameNode.stateChangeLog.debug(builder.toString());
  }
  if (!DFSUtil.isValidName(src) ||                       // is the path valid?
      FSDirectory.isExactReservedName(src) ||            // is the path reserved?
      (FSDirectory.isReservedName(src)                   // same as above
          && !FSDirectory.isReservedRawName(src)         // is it reserved raw?
          && !FSDirectory.isReservedInodesName(src))) {  // is it a reserved inode?
    throw new InvalidPathException(src);
  }
  boolean shouldReplicate = flag.contains(CreateFlag.SHOULD_REPLICATE);
  if (shouldReplicate &&
      (!org.apache.commons.lang.StringUtils.isEmpty(ecPolicyName))) {
    throw new HadoopIllegalArgumentException("SHOULD_REPLICATE flag and " +
        "ecPolicyName are exclusive parameters. Set both is not allowed!");
  }
  INodesInPath iip = null;
  boolean skipSync = true; // until we do something that might create edits
  HdfsFileStatus stat = null;
  BlocksMapUpdateInfo toRemoveBlocks = null;
  checkOperation(OperationCategory.WRITE);
  final FSPermissionChecker pc = getPermissionChecker();
  writeLock();
  try {
    checkOperation(OperationCategory.WRITE);
    checkNameNodeSafeMode("Cannot create file" + src);
    // resolve the inodes on the path; INodesInPath holds the inodes of
    // every level from the root down to the target file
    iip = FSDirWriteFileOp.resolvePathForStartFile(
        dir, pc, src, flag, createParent);
    if (blockSize < minBlockSize) {
      throw new IOException("Specified block size is less than configured" +
          " minimum value (" + DFSConfigKeys.DFS_NAMENODE_MIN_BLOCK_SIZE_KEY
          + "): " + blockSize + " < " + minBlockSize);
    }
    if (shouldReplicate) {
      blockManager.verifyReplication(src, replication, clientMachine);
    } else {
      final ErasureCodingPolicy ecPolicy = FSDirErasureCodingOp
          .getErasureCodingPolicy(this, ecPolicyName, iip);
      if (ecPolicy != null && (!ecPolicy.isReplicationPolicy())) {
        checkErasureCodingSupported("createWithEC");
        if (blockSize < ecPolicy.getCellSize()) {
          throw new IOException("Specified block size (" + blockSize
              + ") is less than the cell size (" + ecPolicy.getCellSize()
              + ") of the erasure coding policy (" + ecPolicy + ").");
        }
      } else {
        // check that the replication factor does not exceed the limit
        // set in the configuration
        blockManager.verifyReplication(src, replication, clientMachine);
      }
    }
    FileEncryptionInfo feInfo = null;
    if (!iip.isRaw() && provider != null) {
      EncryptionKeyInfo ezInfo = FSDirEncryptionZoneOp.getEncryptionKeyInfo(
          this, iip, supportedVersions);
      // if the path has an encryption zone, the lock was released while
      // generating the EDEK.  re-resolve the path to ensure the namesystem
      // and/or EZ has not mutated
      if (ezInfo != null) {
        checkOperation(OperationCategory.WRITE);
        iip = FSDirWriteFileOp.resolvePathForStartFile(
            dir, pc, iip.getPath(), flag, createParent);
        feInfo = FSDirEncryptionZoneOp.getFileEncryptionInfo(dir, iip, ezInfo);
      }
    }
    skipSync = false; // following might generate edits
    toRemoveBlocks = new BlocksMapUpdateInfo();
    // take the write lock on the directory
    dir.writeLock();
    try {
      // create the file
      stat = FSDirWriteFileOp.startFile(this, iip, permissions, holder,
          clientMachine, flag, createParent, replication, blockSize, feInfo,
          toRemoveBlocks, shouldReplicate, ecPolicyName, logRetryCache);
    } catch (IOException e) {
      skipSync = e instanceof StandbyException;
      throw e;
    } finally {
      dir.writeUnlock();
    }
  } finally {
    writeUnlock("create");
    // There might be transactions logged while trying to recover the lease.
    // They need to be sync'ed even when an exception was thrown.
    if (!skipSync) {
      // flush the edit log to disk -- effectively a write-ahead log
      getEditLog().logSync();
      // if an existing file was overwritten, its blocks must be cleaned up
      if (toRemoveBlocks != null) {
        removeBlocks(toRemoveBlocks);
        toRemoveBlocks.clear();
      }
    }
  }
  return stat;
}
```

Focus on FSDirWriteFileOp.startFile():

```java
static HdfsFileStatus startFile(FSNamesystem fsn, INodesInPath iip,
    PermissionStatus permissions, String holder, String clientMachine,
    EnumSet<CreateFlag> flag, boolean createParent,
    short replication, long blockSize,
    FileEncryptionInfo feInfo, INode.BlocksMapUpdateInfo toRemoveBlocks,
    boolean shouldReplicate, String ecPolicyName, boolean logRetryEntry)
    throws IOException {
  assert fsn.hasWriteLock();
  boolean overwrite = flag.contains(CreateFlag.OVERWRITE);
  boolean isLazyPersist = flag.contains(CreateFlag.LAZY_PERSIST);
  final String src = iip.getPath();
  // the directory tree
  FSDirectory fsd = fsn.getFSDirectory();
  // if the target file already exists
  if (iip.getLastINode() != null) {
    // overwrite it
    if (overwrite) {
      List<INode> toRemoveINodes = new ChunkedArrayList<>();
      List<Long> toRemoveUCFiles = new ChunkedArrayList<>();
      // 1. remove the file from the namespace
      // 2. delete the blocks belonging to the file
      // (toRemoveBlocks is cleaned up by the caller once the delete flow
      //  finishes without errors)
      long ret = FSDirDeleteOp.delete(fsd, iip, toRemoveBlocks,
          toRemoveINodes, toRemoveUCFiles, now());
      if (ret >= 0) {
        // drop the last inode from INodesInPath, i.e. the overwritten file
        iip = INodesInPath.replace(iip, iip.length() - 1, null);
        FSDirDeleteOp.incrDeletedFileCount(ret);
        // release the lease and remove the inodes
        fsn.removeLeasesAndINodes(toRemoveUCFiles, toRemoveINodes, true);
      }
    } else {
      // If lease soft limit time is expired, recover the lease
      fsn.recoverLeaseInternal(FSNamesystem.RecoverLeaseOp.CREATE_FILE, iip,
          src, holder, clientMachine, false);
      throw new FileAlreadyExistsException(src + " for client " +
          clientMachine + " already exists");
    }
  }
  // check that the object count (inodes and blocks) is within limits
  fsn.checkFsObjectLimit();
  INodeFile newNode = null;
  INodesInPath parent =
      FSDirMkdirOp.createAncestorDirectories(fsd, iip, permissions);
  if (parent != null) {
    // if the parent directory exists, create the target file
    iip = addFile(fsd, parent, iip.getLastLocalName(), permissions,
        replication, blockSize, holder, clientMachine, shouldReplicate,
        ecPolicyName);
    newNode = iip != null ? iip.getLastINode().asFile() : null;
  }
  if (newNode == null) {
    throw new IOException("Unable to add " + src + " to namespace");
  }
  // take the lease: clientName -> files
  fsn.leaseManager.addLease(
      newNode.getFileUnderConstructionFeature().getClientName(),
      newNode.getId());
  if (feInfo != null) {
    FSDirEncryptionZoneOp.setFileEncryptionInfo(fsd, iip, feInfo,
        XAttrSetFlag.CREATE);
  }
  // set the storage policy
  setNewINodeStoragePolicy(fsd.getBlockManager(), iip, isLazyPersist);
  // write-ahead log
  fsd.getEditLog().logOpenFile(src, newNode, overwrite, logRetryEntry);
  if (NameNode.stateChangeLog.isDebugEnabled()) {
    NameNode.stateChangeLog.debug("DIR* NameSystem.startFile: added " +
        src + " inode " + newNode.getId() + " " + holder);
  }
  return FSDirStatAndListingOp.getFileInfo(fsd, iip, false, false);
}
```
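Note the fsn.leaseManager.addLease(...) call above: HDFS enforces a single-writer/multiple-reader model by granting the creating client a lease on the file under construction. A toy sketch of that idea (this is not the real LeaseManager, which is keyed by client name and also tracks soft/hard expiry limits and lease recovery):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the single-writer rule behind leaseManager.addLease():
// one client holds the lease on an open file; any other writer is rejected.
class ToyLeaseManager {
  private final Map<Long, String> holderByFile = new HashMap<>(); // inode id -> client

  void addLease(String holder, long fileId) {
    holderByFile.put(fileId, holder);
  }

  // throws when a different client tries to write the same open file
  void checkLease(long fileId, String holder) {
    String owner = holderByFile.get(fileId);
    if (owner == null || !owner.equals(holder)) {
      throw new IllegalStateException(
          "client " + holder + " does not hold the lease on inode " + fileId);
    }
  }
}
```

In the real NameNode this check happens again on every addBlock and complete call, which is why a second client racing on the same path gets an exception rather than interleaved writes.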

Continue into addFile():

```java
private static INodesInPath addFile(
    FSDirectory fsd, INodesInPath existing, byte[] localName,
    PermissionStatus permissions, short replication, long preferredBlockSize,
    String clientName, String clientMachine, boolean shouldReplicate,
    String ecPolicyName) throws IOException {
  Preconditions.checkNotNull(existing);
  long modTime = now();
  INodesInPath newiip;
  fsd.writeLock();
  try {
    boolean isStriped = false;
    ErasureCodingPolicy ecPolicy = null;
    if (!shouldReplicate) {
      ecPolicy = FSDirErasureCodingOp.getErasureCodingPolicy(
          fsd.getFSNamesystem(), ecPolicyName, existing);
      if (ecPolicy != null && (!ecPolicy.isReplicationPolicy())) {
        isStriped = true;
      }
    }
    final BlockType blockType = isStriped ?
        BlockType.STRIPED : BlockType.CONTIGUOUS;
    final Short replicationFactor = (!isStriped ? replication : null);
    final Byte ecPolicyID = (isStriped ? ecPolicy.getId() : null);
    // create the inode
    INodeFile newNode = newINodeFile(fsd.allocateNewInodeId(), permissions,
        modTime, modTime, replicationFactor, ecPolicyID, preferredBlockSize,
        blockType);
    newNode.setLocalName(localName);
    newNode.toUnderConstruction(clientName, clientMachine);
    // add the inode to the namespace
    newiip = fsd.addINode(existing, newNode, permissions.getPermission());
  } finally {
    fsd.writeUnlock();
  }
  if (newiip == null) {
    NameNode.stateChangeLog.info("DIR* addFile: failed to add " +
        existing.getPath() + "/" + DFSUtil.bytes2String(localName));
    return null;
  }
  if (NameNode.stateChangeLog.isDebugEnabled()) {
    NameNode.stateChangeLog.debug("DIR* addFile: " +
        DFSUtil.bytes2String(localName) + " is added");
  }
  return newiip;
}
```

In this method, the inode for the target file is created and added to the directory tree.
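Stripped of locking, erasure coding, and logging, the core of addFile() is: allocate a fresh inode id, mark the file as under construction by the creating client, and link it into the namespace. A toy model of those three steps (illustrative only; the real FSDirectory stores a tree of INodes rather than a flat map, and user inode ids start just after a reserved range, so the starting value here is an assumption):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of addFile(): allocate an inode id, record the file as under
// construction, and link it into the namespace (a flat map stands in for
// the real directory tree).
class ToyNamespace {
  // user inodes are numbered after a reserved id range in HDFS;
  // the exact starting value is illustrative
  private long nextInodeId = 16386;
  private final Map<String, Long> files = new HashMap<>();               // path -> inode id
  private final Map<String, String> underConstruction = new HashMap<>(); // path -> client

  // cf. fsd.allocateNewInodeId(), newNode.toUnderConstruction(), fsd.addINode()
  long addFile(String path, String clientName) {
    long id = nextInodeId++;
    files.put(path, id);
    underConstruction.put(path, clientName);
    return id;
  }

  boolean exists(String path) {
    return files.containsKey(path);
  }
}
```

Monotonically increasing inode ids are what make fileId-based RPCs like addBlock robust: the id identifies the file even if its path is renamed between calls.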

(2) addBlock

First, look at the NameNodeRpcServer.addBlock() method, which is the server-side RPC implementation:

```java
public LocatedBlock addBlock(String src, String clientName,
    ExtendedBlock previous, DatanodeInfo[] excludedNodes, long fileId,
    String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags)
    throws IOException {
  // check that the NameNode has fully started
  checkNNStartup();
  // allocate the block and get the DataNodes that will store it
  LocatedBlock locatedBlock = namesystem.getAdditionalBlock(src, fileId,
      clientName, previous, excludedNodes, favoredNodes, addBlockFlags);
  if (locatedBlock != null) {
    metrics.incrAddBlockOps();
  }
  return locatedBlock;
}
```

Enter the namesystem.getAdditionalBlock() method:

```java
LocatedBlock getAdditionalBlock(
    String src, long fileId, String clientName, ExtendedBlock previous,
    DatanodeInfo[] excludedNodes, String[] favoredNodes,
    EnumSet<AddBlockFlag> flags) throws IOException {
  final String operationName = "getAdditionalBlock";
  NameNode.stateChangeLog.debug("BLOCK* getAdditionalBlock: {}  inodeId {}" +
      " for {}", src, fileId, clientName);
  // used to detect whether the current block is a retried block
  LocatedBlock[] onRetryBlock = new LocatedBlock[1];
  FSDirWriteFileOp.ValidateAddBlockResult r;
  // check that the current NameNode state (Active, Backup, Standby)
  // allows read operations
  checkOperation(OperationCategory.READ);
  final FSPermissionChecker pc = getPermissionChecker();
  readLock();
  try {
    checkOperation(OperationCategory.READ);
    // 1. may a block be added?
    // 2. is there a potential retried block?
    // 3. allocate the DataNodes
    r = FSDirWriteFileOp.validateAddBlock(this, pc, src, fileId, clientName,
        previous, onRetryBlock);
  } finally {
    readUnlock(operationName);
  }
  // if this is a retried block, return it directly
  if (r == null) {
    assert onRetryBlock[0] != null : "Retry block is null";
    // This is a retry. Just return the last block.
    return onRetryBlock[0];
  }
  // choose the target storage nodes
  DatanodeStorageInfo[] targets = FSDirWriteFileOp.chooseTargetForNewBlock(
      blockManager, src, excludedNodes, favoredNodes, flags, r);
  checkOperation(OperationCategory.WRITE);
  writeLock();
  LocatedBlock lb;
  try {
    checkOperation(OperationCategory.WRITE);
    // add the block to the blocksMap, record the number of blocks each
    // DataNode is currently transferring, and so on
    lb = FSDirWriteFileOp.storeAllocatedBlock(
        this, src, fileId, clientName, previous, targets);
  } finally {
    writeUnlock(operationName);
  }
  getEditLog().logSync();
  return lb;
}
```

This method does a lot, so let us go through it piece by piece. First, FSDirWriteFileOp.validateAddBlock():

```java
static ValidateAddBlockResult validateAddBlock(
    FSNamesystem fsn, FSPermissionChecker pc,
    String src, long fileId, String clientName,
    ExtendedBlock previous, LocatedBlock[] onRetryBlock) throws IOException {
  final long blockSize;
  final short numTargets;
  final byte storagePolicyID;
  String clientMachine;
  final BlockType blockType;
  // resolve the inodes of every level from the root down to the target file
  INodesInPath iip = fsn.dir.resolvePath(pc, src, fileId);
  /*
   * Analyze the file state:
   * 1. check that the previous block and the current namespace belong to
   *    the same block pool
   * 2. check that the object count (inodes and blocks) is within limits
   * 3. check the lease (single writer, multiple readers)
   * 4. verify, across several cases, that the previous block is valid and
   *    whether this is a retried block
   */
  FileState fileState = analyzeFileState(fsn, iip, fileId, clientName,
      previous, onRetryBlock);
  if (onRetryBlock[0] != null && onRetryBlock[0].getLocations().length > 0) {
    // This is a retry. No need to generate new locations.
    // Use the last block if it has locations.
    return null;
  }
  final INodeFile pendingFile = fileState.inode;
  // check that the previous blocks are all complete
  // and that a new block may be added
  if (!fsn.checkFileProgress(src, pendingFile, false)) {
    throw new NotReplicatedYetException("Not replicated yet: " + src);
  }
  // the file is too large
  if (pendingFile.getBlocks().length >= fsn.maxBlocksPerFile) {
    throw new IOException("File has reached the limit on maximum number of"
        + " blocks (" + DFSConfigKeys.DFS_NAMENODE_MAX_BLOCKS_PER_FILE_KEY
        + "): " + pendingFile.getBlocks().length + " >= "
        + fsn.maxBlocksPerFile);
  }
  blockSize = pendingFile.getPreferredBlockSize();  // block size, 128 MB by default
  clientMachine = pendingFile.getFileUnderConstructionFeature()  // client IP
      .getClientMachine();
  // block type:
  // CONTIGUOUS: contiguous storage, the usual case
  // STRIPED: striped storage with erasure coding, reducing storage space
  blockType = pendingFile.getBlockType();
  ErasureCodingPolicy ecPolicy = null;
  // erasure coding for striped storage
  if (blockType == BlockType.STRIPED) {
    ecPolicy =
        FSDirErasureCodingOp.unprotectedGetErasureCodingPolicy(fsn, iip);
    numTargets = (short) (ecPolicy.getSchema().getNumDataUnits()
        + ecPolicy.getSchema().getNumParityUnits());
  } else {
    // the required number of replicas
    numTargets = pendingFile.getFileReplication();
  }
  storagePolicyID = pendingFile.getStoragePolicyID();
  return new ValidateAddBlockResult(blockSize, numTargets, storagePolicyID,
      clientMachine, blockType, ecPolicy);
}
```

FSDirWriteFileOp.validateAddBlock() mainly validated the file state: that the previous block and the current namespace belong to the same block pool, that the object count (inodes and blocks) is within limits, that the lease is held (single writer, multiple readers), and that, across several cases, the previous block is valid or this is a retried block. It then packaged up the block-related information. Now return to the caller and look at FSDirWriteFileOp.chooseTargetForNewBlock():

```java
static DatanodeStorageInfo[] chooseTargetForNewBlock(
    BlockManager bm, String src, DatanodeInfo[] excludedNodes,
    String[] favoredNodes, EnumSet<AddBlockFlag> flags,
    ValidateAddBlockResult r) throws IOException {
  Node clientNode = null;
  boolean ignoreClientLocality = (flags != null
      && flags.contains(AddBlockFlag.IGNORE_CLIENT_LOCALITY));
  // If client locality is ignored, clientNode remains 'null' to indicate
  // whether to consider the client host itself, since the client may
  // also be a DataNode
  if (!ignoreClientLocality) {
    clientNode = bm.getDatanodeManager().getDatanodeByHost(r.clientMachine);
    if (clientNode == null) {
      clientNode = getClientNode(bm, r.clientMachine);
    }
  }
  // DataNodes to exclude
  Set<Node> excludedNodesSet =
      (excludedNodes == null) ? new HashSet<>()
          : new HashSet<>(Arrays.asList(excludedNodes));
  // preferred DataNodes
  List<String> favoredNodesList =
      (favoredNodes == null) ? Collections.emptyList()
          : Arrays.asList(favoredNodes);
  // choose targets for the new block to be allocated.
  // choose the DataNodes
  return bm.chooseTarget4NewBlock(src, r.numTargets, clientNode,
      excludedNodesSet, r.blockSize,
      favoredNodesList, r.storagePolicyID,
      r.blockType, r.ecPolicy, flags);
}
```

This method mainly chooses the DataNodes that will store the block. Both excludedNodes and favoredNodes are supplied by the client: for example, when the client finds it cannot connect to a DataNode that the NameNode assigned for a block, it adds that DataNode to excludedNodes and calls addBlock again to re-allocate the block, so that a DataNode unreachable from the client is not chosen to hold a replica. Enter bm.chooseTarget4NewBlock().
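Before diving into the placement policy itself, the excluded/favored filtering described above can be sketched in isolation: favored nodes are tried first, excluded nodes are never chosen, and the remaining slots are filled from live nodes. This toy version deliberately ignores everything the real BlockPlacementPolicyDefault also weighs (racks, node load, storage type, and locality to clientNode); the class and method names are ours:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Toy sketch of the excluded/favored filtering that chooseTargetForNewBlock()
// hands to the block placement policy.
class ToyTargetChooser {
  static List<String> chooseTargets(List<String> liveNodes, Set<String> excluded,
                                    List<String> favored, int numTargets) {
    List<String> chosen = new ArrayList<>();
    for (String n : favored) {           // favored nodes first
      if (liveNodes.contains(n) && !excluded.contains(n)
          && chosen.size() < numTargets) {
        chosen.add(n);
      }
    }
    for (String n : liveNodes) {         // then fill from the remaining live nodes
      if (!excluded.contains(n) && !chosen.contains(n)
          && chosen.size() < numTargets) {
        chosen.add(n);
      }
    }
    return chosen;
  }
}
```

This also makes the retry loop above concrete: each failed DataNode the client reports shrinks the candidate set on the next addBlock call, so repeated retries converge on replicas the client can actually reach.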

