stream_2">1. stream一般用法
cudaStream_t stream_;
cudaStreamCreate(&stream_); // create the stream
// ... launch kernels / operators on stream_ ...
cudaStreamSynchronize(stream_); // at the end, wait for all work on stream_ to finish
cudaStreamDestroy(stream_);
- stream: non-blocking mode (created via the WithFlags API)
// stream_flags:
// 1) cudaStreamDefault: the created stream synchronizes with the default stream (stream 0);
//    in effect it is no different from issuing the work on stream 0.
// 2) cudaStreamNonBlocking: the created stream does not synchronize with the default stream
//    (stream 0), i.e. it runs asynchronously to it; the timeline figures in reference [2]
//    visualize the effect nicely.
cudaStreamCreateWithFlags(&cuda_stream_, stream_flags);
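A minimal sketch of the difference, assuming two trivial kernels kernelA/kernelB (hypothetical names): work placed on a cudaStreamNonBlocking stream is free to overlap with work issued to the default stream, whereas a cudaStreamDefault stream is serialized against it.

#include <cuda_runtime.h>

__global__ void kernelA(float* p) { p[threadIdx.x] += 1.0f; }
__global__ void kernelB(float* p) { p[threadIdx.x] *= 2.0f; }

int main() {
    float *a = nullptr, *b = nullptr;
    cudaMalloc(&a, 256 * sizeof(float));
    cudaMalloc(&b, 256 * sizeof(float));

    cudaStream_t nb_stream;
    cudaStreamCreateWithFlags(&nb_stream, cudaStreamNonBlocking);

    kernelA<<<1, 256>>>(a);               // legacy default stream (stream 0)
    kernelB<<<1, 256, 0, nb_stream>>>(b); // non-blocking stream: may overlap with kernelA
    // With cudaStreamDefault instead of cudaStreamNonBlocking, kernelB would not
    // start until kernelA finished.

    cudaDeviceSynchronize(); // wait for both streams
    cudaStreamDestroy(nb_stream);
    cudaFree(a);
    cudaFree(b);
    return 0;
}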
streamevent_21">2. stream与event:
cudaStream_t stream1;
cudaStream_t stream2;
cudaEvent_t event_stream2_wait_stream1_after_kernel2 = nullptr;
cudaEventCreateWithFlags(&event_stream2_wait_stream1_after_kernel2, cudaEventDisableTiming);
cudaStreamCreate(&stream1); // create stream1; think of it as task queue 1
cudaStreamCreateWithFlags(&stream2, cudaStreamNonBlocking); // create stream2; think of it as task queue 2
// kernel1 is enqueued on task queue 1 and waits its turn there; the launch returns
// immediately and the host moves on to the next statement.
kernel1<<<blocks, threads, 0, stream1>>>(n, data);
// At this point we only know kernel1 has been enqueued on queue 1; whether it has
// finished is unknown. kernel2 is now enqueued on queue 1 as well. kernel2 starts
// only after kernel1 completes, because both are on stream1.
kernel2<<<blocks, threads, 0, stream1>>>(n, data);
// Record a marker here, i.e. right after kernel2. If kernel2 on stream1 has finished
// and stream1 has passed this marker, later waits do not block; if kernel2 has not
// finished yet, stream1 has not reached the marker, and later waits must block until
// stream1 passes it.
cudaEventRecord(event_stream2_wait_stream1_after_kernel2, stream1);
// After the record above, the wait below checks whether the marker has been passed
// on stream1. ("Passed" is just an intuitive picture; the exact trigger mechanism is
// unclear, possibly a semaphore-like signal that releases this wait.) If stream1 has
// passed it, the wait returns immediately and kernel3 is dispatched to stream2.
// Because stream2 was created with cudaStreamNonBlocking, it largely runs in
// parallel with stream1.
cudaStreamWaitEvent(stream2, event_stream2_wait_stream1_after_kernel2, 0);
kernel3<<<blocks, threads, 0, stream2>>>(n, data); // this work runs on stream2
cudaStreamSynchronize(stream1);
cudaStreamSynchronize(stream2);
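Once both streams have been synchronized, the event and the streams should be released; as a small follow-up sketch:

cudaEventDestroy(event_stream2_wait_stream1_after_kernel2);
cudaStreamDestroy(stream1);
cudaStreamDestroy(stream2);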
stream_48">3. stream异常的排查
- An illegal-memory error reported at cudaStreamSynchronize(stream) does not necessarily mean the stream itself is at fault. Earlier copies or computations may not have completed, or a synchronization bug may have left later data invalid, and the failure only surfaces at the stream sync. Debugging approach (see also the sketch after this list):
- Print the data before and after the failing point, comparing a failing run against a passing one: do the outputs of the earlier operations already differ (i.e. was the corruption introduced upstream)?
- Use the nsys tool to visualize the streams; compare the failing and passing timelines, looking for differences in which functions execute on each stream and for out-of-bounds accesses.
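To localize the failing call, one option is to check every runtime call and every kernel launch immediately; a minimal sketch, with CUDA_CHECK being a hypothetical helper name:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical helper: report the failing call site instead of a later stream sync.
#define CUDA_CHECK(call)                                                   \
    do {                                                                   \
        cudaError_t err_ = (call);                                         \
        if (err_ != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error '%s' at %s:%d\n",                  \
                    cudaGetErrorString(err_), __FILE__, __LINE__);         \
            exit(EXIT_FAILURE);                                            \
        }                                                                  \
    } while (0)

// Usage pattern:
//   kernel<<<blocks, threads, 0, stream>>>(n, data);
//   CUDA_CHECK(cudaGetLastError());            // catches launch-configuration errors
//   CUDA_CHECK(cudaStreamSynchronize(stream)); // surfaces async errors from the kernel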
stream_53">4. stream的异步与同步行为
https://docs.nvidia.com/cuda/cuda-runtime-api/api-sync-behavior.html#api-sync-behavior__memcpy-async
One point to note: a cudaMemcpyAsync is not necessarily asynchronous. For example, when transferring between host and device with pageable host memory, the "async" copy contains an implicit synchronization, which can lead to blocking or spinning behavior. The relevant excerpt from the CUDA Runtime API documentation:
2. API synchronization behavior
The API provides memcpy/memset functions in both synchronous and asynchronous forms, the latter having an "Async" suffix. This is a misnomer as each function may exhibit synchronous or asynchronous behavior depending on the arguments passed to the function.
Memcpy
In the reference documentation, each memcpy function is categorized as synchronous or asynchronous, corresponding to the definitions below.
**Synchronous**
- For transfers from pageable host memory to device memory, a stream sync is performed before the copy is initiated. The function will return once the pageable buffer has been copied to the staging memory for DMA transfer to device memory, but the DMA to final destination may not have completed.
- For transfers from pinned host memory to device memory, the function is synchronous with respect to the host.
- For transfers from device to either pageable or pinned host memory, the function returns only once the copy has completed.
- For transfers from device memory to device memory, no host-side synchronization is performed.
- For transfers from any host memory to any host memory, the function is fully synchronous with respect to the host.
**Asynchronous**
- For transfers between device memory and pageable host memory, the function might be synchronous with respect to host.
- For transfers from any host memory to any host memory, the function is fully synchronous with respect to the host.
- If pageable memory must first be staged to pinned memory, the driver may synchronize with the stream and stage the copy into pinned memory.
- For all other transfers, the function should be fully asynchronous.
Memset
The cudaMemset functions are asynchronous with respect to the host except when the target memory is pinned host memory. The Async versions are always asynchronous with respect to the host.
Kernel Launches
Kernel launches are asynchronous with respect to the host. Details of concurrent kernel execution and data transfers can be found in the CUDA Programmers Guide.
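A minimal sketch of the pageable-vs-pinned distinction described above (buffer names and sizes are our own): only the copy from pinned memory is truly asynchronous with respect to the host; the pageable copy is staged through an internal pinned buffer and can block or spin.

#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t n = 1 << 20;
    float* pageable = (float*)malloc(n * sizeof(float)); // pageable host memory
    float* pinned = nullptr;
    cudaMallocHost(&pinned, n * sizeof(float));          // pinned (page-locked) host memory
    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Pageable source: the driver stages the data through pinned memory and may
    // synchronize with the stream, so this "async" call can block the host.
    cudaMemcpyAsync(dev, pageable, n * sizeof(float), cudaMemcpyHostToDevice, stream);

    // Pinned source: fully asynchronous with respect to the host; the DMA can
    // overlap with host code and with work on other streams.
    cudaMemcpyAsync(dev, pinned, n * sizeof(float), cudaMemcpyHostToDevice, stream);

    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
    cudaFree(dev);
    cudaFreeHost(pinned);
    free(pageable);
    return 0;
}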