An Introduction to the GATK ReadsPathDataSource Class

Tags: java, bioinformatics

GATK (the Genome Analysis Toolkit) is a widely used toolkit for genome analysis. One of its core libraries is htsjdk, which handles high-throughput sequencing data. Within GATK, the ReadsPathDataSource class is responsible for managing and serving reads from high-throughput sequencing data files such as BAM, SAM, and CRAM.

Common Use Cases

  • Data loading: within GATK analysis pipelines, ReadsPathDataSource is routinely used to load sequencing reads from a specified path (a minimal usage sketch follows this list).
  • Data filtering: data can conveniently be pre-filtered as it is loaded through ReadsPathDataSource, for example by restricting traversal to records of interest.
  • Multi-file support: data can be loaded from several files at once, which makes analyzing multiple samples together more convenient.
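
A minimal usage sketch (illustrative only, not taken from the GATK codebase): open a single BAM file and iterate over its reads. The file name sample.bam and the class name are placeholders; try-with-resources works because ReadsDataSource extends AutoCloseable.

import java.nio.file.Path;
import java.nio.file.Paths;

import org.broadinstitute.hellbender.engine.ReadsPathDataSource;
import org.broadinstitute.hellbender.utils.read.GATKRead;

public class ReadsPathDataSourceExample {
    public static void main(String[] args) {
        // Placeholder input; an index is only needed for interval queries, not for plain iteration.
        final Path bam = Paths.get("sample.bam");

        // ReadsDataSource extends AutoCloseable, so the underlying readers are closed automatically.
        try (ReadsPathDataSource readsSource = new ReadsPathDataSource(bam)) {
            for (final GATKRead read : readsSource) {
                // GATKRead exposes the basic record fields; the contig may be null for unmapped reads.
                System.out.println(read.getName() + "\t" + read.getContig() + ":" + read.getStart());
            }
        }
    }
}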

Class Relationships

ReadsPathDataSource implements the ReadsDataSource interface, which in turn extends GATKDataSource<GATKRead> and AutoCloseable; the source for all three types is reproduced below.

ReadsPathDataSource Source Code

package org.broadinstitute.hellbender.engine;

import com.google.common.annotations.VisibleForTesting;
import htsjdk.samtools.MergingSamRecordIterator;
import htsjdk.samtools.SAMException;
import htsjdk.samtools.SAMFileHeader;
import htsjdk.samtools.SAMRecord;
import htsjdk.samtools.SAMSequenceDictionary;
import htsjdk.samtools.SamFileHeaderMerger;
import htsjdk.samtools.SamInputResource;
import htsjdk.samtools.SamReader;
import htsjdk.samtools.SamReaderFactory;
import htsjdk.samtools.util.CloseableIterator;
import htsjdk.samtools.util.IOUtil;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.broadinstitute.hellbender.exceptions.GATKException;
import org.broadinstitute.hellbender.exceptions.UserException;
import org.broadinstitute.hellbender.utils.IntervalUtils;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.Utils;
import org.broadinstitute.hellbender.utils.gcs.BucketUtils;
import org.broadinstitute.hellbender.utils.iterators.SAMRecordToReadIterator;
import org.broadinstitute.hellbender.utils.iterators.SamReaderQueryingIterator;
import org.broadinstitute.hellbender.utils.read.GATKRead;
import org.broadinstitute.hellbender.utils.read.ReadConstants;

import java.io.IOException;
import java.nio.channels.SeekableByteChannel;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

/**
 * Manages traversals and queries over sources of reads which are accessible via {@link Path}s
 * (for now, SAM/BAM/CRAM files only).
 *
 * Two basic operations are available:
 *
 * -Iteration over all reads, optionally restricted to reads that overlap a set of intervals
 * -Targeted queries by one interval at a time
 */
public final class ReadsPathDataSource implements ReadsDataSource {
    private static final Logger logger = LogManager.getLogger(ReadsPathDataSource.class);

    /**
     * Mapping from SamReaders to iterators over the reads from each reader. Only one
     * iterator can be open from a given reader at a time (this is a restriction
     * in htsjdk). Iterator is set to null for a reader if no iteration is currently
     * active on that reader.
     */
    private final Map<SamReader, CloseableIterator<SAMRecord>> readers;

    /**
     * Hang onto the input files so that we can print useful errors about them
     */
    private final Map<SamReader, Path> backingPaths;

    /**
     * Only reads that overlap these intervals (and unmapped reads, if {@link #traverseUnmapped} is set) will be returned
     * during a full iteration. Null if iteration is unbounded.
     *
     * Individual queries are unaffected by these intervals -- only traversals initiated via {@link #iterator} are affected.
     */
    private List<SimpleInterval> intervalsForTraversal;

    /**
     * If true, restrict traversals to unmapped reads (and reads overlapping any {@link #intervalsForTraversal}, if set).
     * False if iteration is unbounded or bounded only by our {@link #intervalsForTraversal}.
     *
     * Note that this setting covers only unmapped reads that have no position -- unmapped reads that are assigned the
     * position of their mates will be returned by queries overlapping that position.
     *
     * Individual queries are unaffected by this setting  -- only traversals initiated via {@link #iterator} are affected.
     */
    private boolean traverseUnmapped;

    /**
     * Used to create a merged Sam header when we're dealing with multiple readers. Null if we only have a single reader.
     */
    private final SamFileHeaderMerger headerMerger;

    /**
     * Are indices available for all files?
     */
    private boolean indicesAvailable;

    /**
     * Has it been closed already.
     */
    private boolean isClosed;

    /**
     * Initialize this data source with a single SAM/BAM file and validation stringency SILENT.
     *
     * @param samFile SAM/BAM file, not null.
     */
    public ReadsPathDataSource( final Path samFile ) {
        this(samFile != null ? Arrays.asList(samFile) : null, (SamReaderFactory)null);
    }

    /**
     * Initialize this data source with multiple SAM/BAM files and validation stringency SILENT.
     *
     * @param samFiles SAM/BAM files, not null.
     */
    public ReadsPathDataSource( final List<Path> samFiles ) {
        this(samFiles, (SamReaderFactory)null);
    }

    /**
     * Initialize this data source with a single SAM/BAM file and a custom SamReaderFactory
     *
     * @param samPath path to SAM/BAM file, not null.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     */
    public ReadsPathDataSource( final Path samPath, SamReaderFactory customSamReaderFactory ) {
        this(samPath != null ? Arrays.asList(samPath) : null, customSamReaderFactory);
    }

    /**
     * Initialize this data source with multiple SAM/BAM files and a custom SamReaderFactory
     *
     * @param samPaths path to SAM/BAM file, not null.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     */
    public ReadsPathDataSource( final List<Path> samPaths, SamReaderFactory customSamReaderFactory ) {
        this(samPaths, null, customSamReaderFactory, 0, 0);
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, and explicit indices for those files.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices ) {
        this(samPaths, samIndices, null, 0, 0);
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, explicit indices for those files,
     * and a custom SamReaderFactory.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices,
                                SamReaderFactory customSamReaderFactory ) {
        this(samPaths, samIndices, customSamReaderFactory, 0, 0);
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, explicit indices for those files,
     * and a custom SamReaderFactory.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     * @param cloudPrefetchBuffer MB size of caching/prefetching wrapper for the data, if on Google Cloud (0 to disable).
     * @param cloudIndexPrefetchBuffer MB size of caching/prefetching wrapper for the index, if on Google Cloud (0 to disable).
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices,
                                SamReaderFactory customSamReaderFactory,
                                int cloudPrefetchBuffer, int cloudIndexPrefetchBuffer) {
        this(samPaths, samIndices, customSamReaderFactory,
                BucketUtils.getPrefetchingWrapper(cloudPrefetchBuffer),
                BucketUtils.getPrefetchingWrapper(cloudIndexPrefetchBuffer) );
    }

    /**
     * Initialize this data source with multiple SAM/BAM/CRAM files, explicit indices for those files,
     * and a custom SamReaderFactory.
     *
     * @param samPaths paths to SAM/BAM/CRAM files, not null
     * @param samIndices indices for all of the SAM/BAM/CRAM files, in the same order as samPaths. May be null,
     *                   in which case index paths are inferred automatically.
     * @param customSamReaderFactory SamReaderFactory to use, if null a default factory with no reference and validation
     *                               stringency SILENT is used.
     * @param cloudWrapper caching/prefetching wrapper for the data, if on Google Cloud.
     * @param cloudIndexWrapper caching/prefetching wrapper for the index, if on Google Cloud.
     */
    public ReadsPathDataSource( final List<Path> samPaths, final List<Path> samIndices,
                                SamReaderFactory customSamReaderFactory,
                                Function<SeekableByteChannel, SeekableByteChannel> cloudWrapper,
                                Function<SeekableByteChannel, SeekableByteChannel> cloudIndexWrapper ) {
        Utils.nonNull(samPaths);
        Utils.nonEmpty(samPaths, "ReadsPathDataSource cannot be created from empty file list");

        if ( samIndices != null && samPaths.size() != samIndices.size() ) {
            throw new UserException(String.format("Must have the same number of BAM/CRAM/SAM paths and indices. Saw %d BAM/CRAM/SAMs but %d indices",
                                                  samPaths.size(), samIndices.size()));
        }

        readers = new LinkedHashMap<>(samPaths.size() * 2);
        backingPaths = new LinkedHashMap<>(samPaths.size() * 2);
        indicesAvailable = true;

        final SamReaderFactory samReaderFactory =
                customSamReaderFactory == null ?
                    SamReaderFactory.makeDefault().validationStringency(ReadConstants.DEFAULT_READ_VALIDATION_STRINGENCY) :
                    customSamReaderFactory;

        int samCount = 0;
        for ( final Path samPath : samPaths ) {
            // Ensure each file can be read
            try {
                IOUtil.assertFileIsReadable(samPath);
            }
            catch ( SAMException|IllegalArgumentException e ) {
                throw new UserException.CouldNotReadInputFile(samPath.toString(), e);
            }

            Function<SeekableByteChannel, SeekableByteChannel> wrapper =
                (BucketUtils.isEligibleForPrefetching(samPath)
                    ? cloudWrapper
                    : Function.identity());
            // if samIndices==null then we'll guess the index name from the file name.
            // If the file's on the cloud, then the search will only consider locations that are also
            // in the cloud.
            Function<SeekableByteChannel, SeekableByteChannel> indexWrapper =
                ((samIndices != null && BucketUtils.isEligibleForPrefetching(samIndices.get(samCount))
                  || (samIndices == null && BucketUtils.isEligibleForPrefetching(samPath)))
                    ? cloudIndexWrapper
                    : Function.identity());

            SamReader reader;
            if ( samIndices == null ) {
                reader = samReaderFactory.open(samPath, wrapper, indexWrapper);
            }
            else {
                final SamInputResource samResource = SamInputResource.of(samPath, wrapper);
                Path indexPath = samIndices.get(samCount);
                samResource.index(indexPath, indexWrapper);
                reader = samReaderFactory.open(samResource);
            }

            // Ensure that each file has an index
            if ( ! reader.hasIndex() ) {
                indicesAvailable = false;
            }

            readers.put(reader, null);
            backingPaths.put(reader, samPath);
            ++samCount;
        }

        // Prepare a header merger only if we have multiple readers
        headerMerger = samPaths.size() > 1 ? createHeaderMerger() : null;
    }

    /**
     * Are indices available for all files?
     */
    public boolean indicesAvailable() {
        return indicesAvailable;
    }

    /**
     * @return true if indices are available for all inputs.
     * This is identical to {@link #indicesAvailable}
     */
    @Override
    public boolean isQueryableByInterval() {
        return indicesAvailable();
    }

    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads that overlap the given intervals,
     * and to unmapped reads if specified.
     *
     * Calls to {@link #query} are not affected by this method.
     *
     * @param intervals Our next full traversal will return reads overlapping these intervals
     * @param traverseUnmapped Our next full traversal will return unmapped reads (this affects only unmapped reads that
     *                         have no position -- unmapped reads that have the position of their mapped mates will be
     *                         included if the interval overlapping that position is included).
     */
    @Override
    public void setTraversalBounds( final List<SimpleInterval> intervals, final boolean traverseUnmapped ) {
        // Set intervalsForTraversal to null if intervals is either null or empty
        this.intervalsForTraversal = intervals != null && ! intervals.isEmpty() ? intervals : null;
        this.traverseUnmapped = traverseUnmapped;

        if ( traversalIsBounded() && ! indicesAvailable ) {
            raiseExceptionForMissingIndex("Traversal by intervals was requested but some input files are not indexed.");
        }
    }

    /**
     * @return True if traversals initiated via {@link #iterator} will be restricted to reads that overlap intervals
     *         as configured via {@link #setTraversalBounds}, otherwise false
     */
    @Override
    public boolean traversalIsBounded() {
        return intervalsForTraversal != null || traverseUnmapped;
    }

    private void raiseExceptionForMissingIndex( String reason ) {
        String commandsToIndex = backingPaths.entrySet().stream()
                .filter(f -> !f.getKey().hasIndex())
                .map(Map.Entry::getValue)
                .map(Path::toAbsolutePath)
                .map(f -> "samtools index " + f)
                .collect(Collectors.joining("\n","\n","\n"));

        throw new UserException(reason + "\nPlease index all input files:\n" + commandsToIndex);
    }

    /**
     * Iterate over all reads in this data source. If intervals were provided via {@link #setTraversalBounds},
     * iteration is limited to reads that overlap that set of intervals.
     *
     * @return An iterator over the reads in this data source, limited to reads that overlap the intervals supplied
     *         via {@link #setTraversalBounds} (if intervals were provided)
     */
    @Override
    public Iterator<GATKRead> iterator() {
        logger.debug("Preparing readers for traversal");
        return prepareIteratorsForTraversal(intervalsForTraversal, traverseUnmapped);
    }

    /**
     * Query reads over a specific interval. This operation is not affected by prior calls to
     * {@link #setTraversalBounds}
     *
     * @param interval The interval over which to query
     * @return Iterator over reads overlapping the query interval
     */
    @Override
    public Iterator<GATKRead> query( final SimpleInterval interval ) {
        if ( ! indicesAvailable ) {
            raiseExceptionForMissingIndex("Cannot query reads data source by interval unless all files are indexed");
        }

        return prepareIteratorsForTraversal(Arrays.asList(interval));
    }

    /**
     * @return An iterator over just the unmapped reads with no assigned position. This operation is not affected
     *         by prior calls to {@link #setTraversalBounds}. The underlying file must be indexed.
     */
    @Override
    public Iterator<GATKRead> queryUnmapped() {
        if ( ! indicesAvailable ) {
            raiseExceptionForMissingIndex("Cannot query reads data source by interval unless all files are indexed");
        }

        return prepareIteratorsForTraversal(null, true);
    }

    /**
     * Returns the SAM header for this data source. Will be a merged header if there are multiple readers.
     * If there is only a single reader, returns its header directly.
     *
     * @return SAM header for this data source
     */
    @Override
    public SAMFileHeader getHeader() {
        return headerMerger != null ? headerMerger.getMergedHeader() : readers.entrySet().iterator().next().getKey().getFileHeader();
    }

    /**
     * Prepare iterators over all readers in response to a request for a complete iteration or query
     *
     * If there are multiple intervals, they must have been optimized using QueryInterval.optimizeIntervals()
     * before calling this method.
     *
     * @param queryIntervals Intervals to bound the iteration (reads must overlap one of these intervals). If null, iteration is unbounded.
     * @return Iterator over all reads in this data source, limited to overlap with the supplied intervals
     */
    private Iterator<GATKRead> prepareIteratorsForTraversal( final List<SimpleInterval> queryIntervals ) {
        return prepareIteratorsForTraversal(queryIntervals, false);
    }

    /**
     * Prepare iterators over all readers in response to a request for a complete iteration or query
     *
     * @param queryIntervals Intervals to bound the iteration (reads must overlap one of these intervals). If null, iteration is unbounded.
     * @return Iterator over all reads in this data source, limited to overlap with the supplied intervals
     */
    private Iterator<GATKRead> prepareIteratorsForTraversal( final List<SimpleInterval> queryIntervals, final boolean queryUnmapped ) {
        // htsjdk requires that only one iterator be open at a time per reader, so close out
        // any previous iterations
        closePreviousIterationsIfNecessary();

        final boolean traversalIsBounded = (queryIntervals != null && ! queryIntervals.isEmpty()) || queryUnmapped;

        // Set up an iterator for each reader, bounded to overlap with the supplied intervals if there are any
        for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
            if (traversalIsBounded) {
                readerEntry.setValue(
                        new SamReaderQueryingIterator(
                                readerEntry.getKey(),
                                readers.size() > 1 ?
                                        getIntervalsOverlappingReader(readerEntry.getKey(), queryIntervals) :
                                        queryIntervals,
                                queryUnmapped));
            } else {
                readerEntry.setValue(readerEntry.getKey().iterator());
            }
        }

        // Create a merging iterator over all readers if necessary. In the case where there's only a single reader,
        // return its iterator directly to avoid the overhead of the merging iterator.
        Iterator<SAMRecord> startingIterator = null;
        if ( readers.size() == 1 ) {
            startingIterator = readers.entrySet().iterator().next().getValue();
        }
        else {
            startingIterator = new MergingSamRecordIterator(headerMerger, readers, true);
        }

        return new SAMRecordToReadIterator(startingIterator);
    }

    /**
     * Reduce the intervals down to only include ones that can actually intersect with this reader
     */
    private List<SimpleInterval> getIntervalsOverlappingReader(
            final SamReader samReader,
            final List<SimpleInterval> queryIntervals ){
        final SAMSequenceDictionary sequenceDictionary = samReader.getFileHeader().getSequenceDictionary();
        return queryIntervals.stream()
                .filter(interval -> IntervalUtils.intervalIsOnDictionaryContig(interval, sequenceDictionary))
                .collect(Collectors.toList());
    }

    /**
     * Create a header merger from the individual SAM/BAM headers in our readers
     *
     * @return a header merger containing all individual headers in this data source
     */
    private SamFileHeaderMerger createHeaderMerger() {
        List<SAMFileHeader> headers = new ArrayList<>(readers.size());
        for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
            headers.add(readerEntry.getKey().getFileHeader());
        }

        SamFileHeaderMerger headerMerger = new SamFileHeaderMerger(identifySortOrder(headers), headers, true);
        return headerMerger;
    }

    @VisibleForTesting
    static SAMFileHeader.SortOrder identifySortOrder( final List<SAMFileHeader> headers ){
        final Set<SAMFileHeader.SortOrder> sortOrders = headers.stream().map(SAMFileHeader::getSortOrder).collect(Collectors.toSet());
        final SAMFileHeader.SortOrder order;
        if (sortOrders.size() == 1) {
            order = sortOrders.iterator().next();
        } else {
            order = SAMFileHeader.SortOrder.unsorted;
            logger.warn("Inputs have different sort orders. Assuming {} sorted reads for all of them.", order);
        }
        return order;
    }

    /**
     * @return true if this {@code ReadsPathDataSource} supports serial iteration (has only non-SAM inputs). If any
     * input has type==SAM_TYPE (is backed by a SamFileReader) this will return false, since SamFileReader
     * doesn't support serial iterators, and can't be serially re-traversed without re-initialization of the
     * underlying reader (and {@code ReadsPathDataSource}.
     */
    public boolean supportsSerialIteration() {
        return !hasSAMInputs();
    }

    /**
     * Shut down this data source permanently, closing all iterations and readers.
     */
    @Override
    public void close() {
        if (isClosed) {
            return;
        }
        isClosed = true;

        closePreviousIterationsIfNecessary();

        try {
            for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
                readerEntry.getKey().close();
            }
        }
        catch ( IOException e ) {
            throw new GATKException("Error closing SAMReader");
        }
    }

    boolean isClosed() {
        return isClosed;
    }

    /**
     * Close any previously-opened iterations over our readers (htsjdk allows only one open iteration per reader).
     */
    private void closePreviousIterationsIfNecessary() {
        for ( Map.Entry<SamReader, CloseableIterator<SAMRecord>> readerEntry : readers.entrySet() ) {
            CloseableIterator<SAMRecord> readerIterator = readerEntry.getValue();
            if ( readerIterator != null ) {
                readerIterator.close();
                readerEntry.setValue(null);
            }
        }
    }

    // Return true if any input is has type==SAM_TYPE (is backed by a SamFileReader) since SamFileReader
    // doesn't support serial iterators and can't be serially re-traversed without re-initialization of the
    // reader
    private boolean hasSAMInputs() {
        return readers.keySet().stream().anyMatch(r -> r.type().equals(SamReader.Type.SAM_TYPE));
    }

    /**
     * Get the sequence dictionary for this ReadsPathDataSource
     *
     * @return SAMSequenceDictionary from the SAMReader backing this if there is only 1 input file, otherwise the merged SAMSequenceDictionary from the merged header
     */
    @Override
    public SAMSequenceDictionary getSequenceDictionary() {
        return getHeader().getSequenceDictionary();
    }
}
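
As a usage note for the class above, the sketch below (illustrative only; the file and contig names are placeholders, and every input is assumed to be indexed) bounds a full traversal with setTraversalBounds and then issues an independent interval query via query(), which is unaffected by the traversal bounds.

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

import org.broadinstitute.hellbender.engine.ReadsPathDataSource;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.read.GATKRead;

public class IntervalQueryExample {
    public static void main(String[] args) {
        final Path bam = Paths.get("sample.bam");   // placeholder; assumed to have an accompanying index

        try (ReadsPathDataSource readsSource = new ReadsPathDataSource(bam)) {
            // Bound the next full traversal (iterator()) to reads overlapping chr20:1,000,000-2,000,000,
            // and also return unmapped reads that have no assigned position.
            final List<SimpleInterval> intervals = Arrays.asList(new SimpleInterval("chr20", 1_000_000, 2_000_000));
            readsSource.setTraversalBounds(intervals, true);

            long inBounds = 0;
            for (final GATKRead read : readsSource) {
                inBounds++;
            }
            System.out.println("Reads in traversal bounds: " + inBounds);

            // Targeted query by a single interval; not affected by the traversal bounds set above.
            final Iterator<GATKRead> query = readsSource.query(new SimpleInterval("chr20", 1_500_000, 1_500_500));
            while (query.hasNext()) {
                System.out.println(query.next().getName());
            }
        }
    }
}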

ReadsDataSource Source Code

package org.broadinstitute.hellbender.engine;

import htsjdk.samtools.SAMFileHeader;
import htsjdk.samtools.SAMSequenceDictionary;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.read.GATKRead;

import java.util.Iterator;
import java.util.List;

/**
 *
 * An interface for managing traversals over sources of reads.
 *
 * Two basic operations are available:
 *
 * -Iteration over all reads, optionally restricted to reads that overlap a set of intervals
 * -Targeted queries by one interval at a time
 */
public interface ReadsDataSource extends GATKDataSource<GATKRead>, AutoCloseable {
    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads that overlap the given intervals,
     * and to unmapped reads if specified.
     *
     * Calls to {@link #query} are not affected by this method.
     *
     * @param intervals Our next full traversal will return reads overlapping these intervals
     * @param traverseUnmapped Our next full traversal will return unmapped reads (this affects only unmapped reads that
     *                         have no position -- unmapped reads that have the position of their mapped mates will be
     *                         included if the interval overlapping that position is included).
     */
    void setTraversalBounds(List<SimpleInterval> intervals, boolean traverseUnmapped);

    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads which overlap the given intervals.
     * Calls to {@link #query} are not affected by setting these intervals.
     *
     * @param intervals Our next full traversal will return only reads overlapping these intervals
     */
    default void setTraversalBounds(List<SimpleInterval> intervals) {
        setTraversalBounds(intervals, false);
    }

    /**
     * Restricts a traversal of this data source via {@link #iterator} to only return reads that overlap the given intervals,
     * and to unmapped reads if specified.
     *
     * Calls to {@link #query} are not affected by this method.
     *
     * @param traversalParameters set of traversal parameters to control which reads get returned by the next call
     *                            to {@link #iterator}
     */
    default void setTraversalBounds(TraversalParameters traversalParameters){
        setTraversalBounds(traversalParameters.getIntervalsForTraversal(), traversalParameters.traverseUnmappedReads());
    }

    /**
     * @return true if traversals initiated via {@link #iterator} will be restricted to reads that overlap intervals
     *         as configured via {@link #setTraversalBounds}, otherwise false
     */
    boolean traversalIsBounded();

    /**
     * @return true if this datasource supports the query() operation otherwise false.
     */
    boolean isQueryableByInterval();

    /**
     * @return An iterator over just the unmapped reads with no assigned position. This operation is not affected
     *         by prior calls to {@link #setTraversalBounds}. The underlying file must be indexed.
     */
    Iterator<GATKRead> queryUnmapped();

    /**
     * Returns the SAM header for this data source.
     *
     * @return SAM header for this data source
     */
    SAMFileHeader getHeader();

    /**
     * Get the sequence dictionary for this ReadsDataSource
     *
     * @return SAMSequenceDictionary for the reads backing this datasource.
     */
    default SAMSequenceDictionary getSequenceDictionary(){
        return getHeader().getSequenceDictionary();
    }

    /**
     * @return true if this {@code ReadsDataSource} supports multiple iterations over the data
     */
    boolean supportsSerialIteration();

    /**
     * Shut down this data source permanently, closing all iterations and readers.
     */
    @Override  //Overriden here to disallow throwing checked exceptions.
    void close();
}
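
Because GATK tools can be written against this interface rather than the concrete class, helper code can stay implementation-agnostic. The hypothetical helper below (not part of GATK) uses only interface methods: the default single-argument setTraversalBounds overload, the iterator inherited from GATKDataSource, and queryUnmapped.

import java.util.Iterator;
import java.util.List;

import org.broadinstitute.hellbender.engine.ReadsDataSource;
import org.broadinstitute.hellbender.utils.SimpleInterval;
import org.broadinstitute.hellbender.utils.read.GATKRead;

public final class ReadsDataSourceUtils {

    // Count reads overlapping the given intervals in any ReadsDataSource implementation.
    // The default setTraversalBounds(List) overload leaves positionless unmapped reads out.
    public static long countOverlappingReads(final ReadsDataSource source, final List<SimpleInterval> intervals) {
        source.setTraversalBounds(intervals);
        long count = 0;
        for (final GATKRead read : source) {
            count++;
        }
        return count;
    }

    // Count unmapped reads with no assigned position; requires indexed inputs.
    public static long countUnmappedReads(final ReadsDataSource source) {
        long count = 0;
        final Iterator<GATKRead> unmapped = source.queryUnmapped();
        while (unmapped.hasNext()) {
            unmapped.next();
            count++;
        }
        return count;
    }
}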

GATKDataSource Source Code

package org.broadinstitute.hellbender.engine;

import org.broadinstitute.hellbender.utils.SimpleInterval;

import java.util.Iterator;

/**
 * A GATKDataSource is something that can be iterated over from start to finish
 * and/or queried by genomic interval. It is not necessarily file-based.
 *
 * @param <T> Type of data in the data source
 */
public interface GATKDataSource<T> extends Iterable<T> {
    Iterator<T> query(final SimpleInterval interval);
}

