9.7 Building YOLOv10 ONNX Inference in Visual Studio (C++)


1. Environment setup

Before running ONNX inference, the following environments need to be in place (a small sketch for verifying the ONNX Runtime link follows the list):

1. OpenCV: see the post "9.2 c++搭建opencv环境-CSDN博客".

2. LibTorch: see the post "9.4 visualStudio 2022 配置 cuda 和 torch (c++)-CSDN博客".

3. CUDA: see the post "9.4 visualStudio 2022 配置 cuda 和 torch (c++)-CSDN博客".

4. ONNX Runtime: see the post "VS2019配置ONNXRuntime c++环境_microsoft.ml.onnxruntime-CSDN博客".
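Once the four environments are installed, it is worth confirming that the project actually links against the intended ONNX Runtime build before touching any model code. The program below is only a minimal sketch (it is not part of the original project): it prints the runtime version and the available execution providers, so a missing CUDAExecutionProvider shows up immediately.

#include <iostream>
#include "onnxruntime_cxx_api.h"

int main()
{
    // Version string reported by the onnxruntime library the executable actually loads.
    std::cout << "ONNX Runtime " << OrtGetApiBase()->GetVersionString() << std::endl;

    // If the GPU package and CUDA are set up correctly, "CUDAExecutionProvider" is listed here.
    for (const std::string& provider : Ort::GetAvailableProviders())
        std::cout << "  provider: " << provider << std::endl;
    return 0;
}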

2. YOLOv10 C++ code

The code below has been modified in places and was debugged until it ran successfully. The full sources are as follows:

main.cpp

#include <iostream>
//#include <getopt.h>
#include "yolov5v8_dnn.h"
#include "yolov5v8_ort.h"using namespace std;
using namespace cv;void main(int argc, char** argv)
{string img_path = "E:\\vs\\daima\\1_8\\Project1\\x64\\Release\\street.jpg";string model_path = "E:\\vs\\daima\\1_8\\Project1\\x64\\Release\\yolov8n-seg.onnx";string test_cls = "dnn";if (test_cls == "dnn") {// Input the path of model ("yolov8s.onnx" or "yolov5s.onnx") to run Inference with yolov8/yolov5 (ONNX)// Note that in this example the classes are hard-coded and 'classes.txt' is a place holder.Inference inf(model_path, cv::Size(640, 640), "classes.txt", true);cv::Mat frame = cv::imread(img_path);std::vector<Detection> output = inf.runInference(frame);if (output.size() != 0) inf.DrawPred(frame, output);else cout << "Detect Nothing!" << endl;}if (test_cls == "ort") {DCSP_CORE* yoloDetector = new DCSP_CORE;
#ifdef USE_CUDA//DCSP_INIT_PARAM params{ model_path, YOLO_ORIGIN_V5, {640, 640},  0.25, 0.45, 0.5, true }; // GPU FP32 inferenceDCSP_INIT_PARAM params{ model_path, YOLO_ORIGIN_V5_HALF, {640, 640},  0.25, 0.45, 0.5, true }; // GPU FP16 inference
#elseDCSP_INIT_PARAM params{ model_path, YOLO_ORIGIN_V5, {640, 640},0.25, 0.45, 0.5, false };  // CPU inference
#endifyoloDetector->CreateSession(params);cv::Mat img = cv::imread(img_path);std::vector<DCSP_RESULT> res;yoloDetector->RunSession(img, res);if (res.size() != 0) yoloDetector->DrawPred(img, res);else cout << "Detect Nothing!" << endl;}
}

yolov5v8_dnn.cpp

#include "yolov5v8_dnn.h"
using namespace std;

Inference::Inference(const std::string& onnxModelPath, const cv::Size& modelInputShape, const std::string& classesTxtFile, const bool& runWithCuda)
{
    modelPath = onnxModelPath;
    modelShape = modelInputShape;
    classesPath = classesTxtFile;
    cudaEnabled = runWithCuda;
    loadOnnxNetwork();
    // loadClassesFromFile(); The classes are hard-coded for this example
}

std::vector<Detection> Inference::runInference(const cv::Mat& input)
{
    cv::Mat SrcImg = input;
    cv::Mat netInputImg;
    cv::Vec4d params;
    LetterBox(SrcImg, netInputImg, params, cv::Size(modelShape.width, modelShape.height));
    cv::Mat blob;
    cv::dnn::blobFromImage(netInputImg, blob, 1.0 / 255.0, modelShape, cv::Scalar(), true, false);
    net.setInput(blob);

    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());
    if (outputs.size() == 2) RunSegmentation = true;

    int rows = outputs[0].size[1];
    int dimensions = outputs[0].size[2];
    bool yolov8 = false;
    // yolov5 has an output of shape (batchSize, 25200, 85) (Num classes + box[x,y,w,h] + confidence[c])
    // yolov8 has an output of shape (batchSize, 84,  8400) (Num classes + box[x,y,w,h])
    if (dimensions > rows) // Check if the shape[2] is more than shape[1] (yolov8)
    {
        yolov8 = true;
        rows = outputs[0].size[2];
        dimensions = outputs[0].size[1];
        outputs[0] = outputs[0].reshape(1, dimensions);
        cv::transpose(outputs[0], outputs[0]);
    }
    float* data = (float*)outputs[0].data;

    std::vector<int> class_ids;
    std::vector<float> confidences;
    std::vector<cv::Rect> boxes;
    std::vector<vector<float>> picked_proposals;

    for (int i = 0; i < rows; ++i)
    {
        int _segChannels;
        if (yolov8)
        {
            float* classes_scores = data + 4;
            cv::Mat scores(1, classes.size(), CV_32FC1, classes_scores);
            cv::Point class_id;
            double maxClassScore;
            minMaxLoc(scores, 0, &maxClassScore, 0, &class_id);
            if (maxClassScore > modelScoreThreshold)
            {
                if (RunSegmentation) {
                    _segChannels = outputs[1].size[1];
                    vector<float> temp_proto(data + classes.size() + 4, data + classes.size() + 4 + _segChannels);
                    picked_proposals.push_back(temp_proto);
                }
                confidences.push_back(maxClassScore);
                class_ids.push_back(class_id.x);
                float x = (data[0] - params[2]) / params[0];
                float y = (data[1] - params[3]) / params[1];
                float w = data[2] / params[0];
                float h = data[3] / params[1];
                int left = MAX(round(x - 0.5 * w + 0.5), 0);
                int top = MAX(round(y - 0.5 * h + 0.5), 0);
                if ((left + w) > SrcImg.cols) { w = SrcImg.cols - left; }
                if ((top + h) > SrcImg.rows) { h = SrcImg.rows - top; }
                boxes.push_back(cv::Rect(left, top, int(w), int(h)));
            }
        }
        else // yolov5
        {
            float confidence = data[4];
            if (confidence >= modelConfidenceThreshold)
            {
                float* classes_scores = data + 5;
                cv::Mat scores(1, classes.size(), CV_32FC1, classes_scores);
                cv::Point class_id;
                double max_class_score;
                minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
                if (max_class_score > modelScoreThreshold)
                {
                    if (RunSegmentation) {
                        _segChannels = outputs[1].size[1];
                        vector<float> temp_proto(data + classes.size() + 5, data + classes.size() + 5 + _segChannels);
                        picked_proposals.push_back(temp_proto);
                    }
                    confidences.push_back(confidence);
                    class_ids.push_back(class_id.x);
                    float x = (data[0] - params[2]) / params[0];
                    float y = (data[1] - params[3]) / params[1];
                    float w = data[2] / params[0];
                    float h = data[3] / params[1];
                    int left = MAX(round(x - 0.5 * w + 0.5), 0);
                    int top = MAX(round(y - 0.5 * h + 0.5), 0);
                    if ((left + w) > SrcImg.cols) { w = SrcImg.cols - left; }
                    if ((top + h) > SrcImg.rows) { h = SrcImg.rows - top; }
                    boxes.push_back(cv::Rect(left, top, int(w), int(h)));
                }
            }
        }
        data += dimensions;
    }

    std::vector<int> nms_result;
    cv::dnn::NMSBoxes(boxes, confidences, modelScoreThreshold, modelNMSThreshold, nms_result);

    std::vector<Detection> detections{};
    std::vector<vector<float>> temp_mask_proposals;
    for (unsigned long i = 0; i < nms_result.size(); ++i)
    {
        int idx = nms_result[i];
        Detection result;
        result.class_id = class_ids[idx];
        result.confidence = confidences[idx];
        std::random_device rd;
        std::mt19937 gen(rd());
        std::uniform_int_distribution<int> dis(100, 255);
        result.color = cv::Scalar(dis(gen), dis(gen), dis(gen));
        result.className = classes[result.class_id];
        result.box = boxes[idx];
        if (RunSegmentation) temp_mask_proposals.push_back(picked_proposals[idx]);
        if (result.box.width != 0 && result.box.height != 0) detections.push_back(result);
    }

    if (RunSegmentation) {
        cv::Mat mask_proposals;
        for (int i = 0; i < temp_mask_proposals.size(); ++i)
            mask_proposals.push_back(cv::Mat(temp_mask_proposals[i]).t());
        GetMask(mask_proposals, outputs[1], params, SrcImg.size(), detections);
    }
    return detections;
}

void Inference::loadClassesFromFile()
{
    std::ifstream inputFile(classesPath);
    if (inputFile.is_open())
    {
        std::string classLine;
        while (std::getline(inputFile, classLine))
            classes.push_back(classLine);
        inputFile.close();
    }
}

void Inference::loadOnnxNetwork()
{
    net = cv::dnn::readNetFromONNX(modelPath);
    if (cudaEnabled)
    {
        std::cout << "\nRunning on CUDA" << std::endl;
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
    }
    else
    {
        std::cout << "\nRunning on CPU" << std::endl;
        net.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
        net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
    }
}

void Inference::LetterBox(const cv::Mat& image, cv::Mat& outImage, cv::Vec4d& params, const cv::Size& newShape,
    bool autoShape, bool scaleFill, bool scaleUp, int stride, const cv::Scalar& color)
{
    if (false) {
        int maxLen = MAX(image.rows, image.cols);
        outImage = cv::Mat::zeros(cv::Size(maxLen, maxLen), CV_8UC3);
        image.copyTo(outImage(cv::Rect(0, 0, image.cols, image.rows)));
        params[0] = 1;
        params[1] = 1;
        params[3] = 0;
        params[2] = 0;
    }
    cv::Size shape = image.size();
    float r = std::min((float)newShape.height / (float)shape.height,
        (float)newShape.width / (float)shape.width);
    if (!scaleUp)
        r = std::min(r, 1.0f);
    float ratio[2]{ r, r };
    int new_un_pad[2] = { (int)std::round((float)shape.width * r), (int)std::round((float)shape.height * r) };
    auto dw = (float)(newShape.width - new_un_pad[0]);
    auto dh = (float)(newShape.height - new_un_pad[1]);
    if (autoShape)
    {
        dw = (float)((int)dw % stride);
        dh = (float)((int)dh % stride);
    }
    else if (scaleFill)
    {
        dw = 0.0f;
        dh = 0.0f;
        new_un_pad[0] = newShape.width;
        new_un_pad[1] = newShape.height;
        ratio[0] = (float)newShape.width / (float)shape.width;
        ratio[1] = (float)newShape.height / (float)shape.height;
    }
    dw /= 2.0f;
    dh /= 2.0f;
    if (shape.width != new_un_pad[0] && shape.height != new_un_pad[1])
    {
        cv::resize(image, outImage, cv::Size(new_un_pad[0], new_un_pad[1]));
    }
    else {
        outImage = image.clone();
    }
    int top = int(std::round(dh - 0.1f));
    int bottom = int(std::round(dh + 0.1f));
    int left = int(std::round(dw - 0.1f));
    int right = int(std::round(dw + 0.1f));
    params[0] = ratio[0];
    params[1] = ratio[1];
    params[2] = left;
    params[3] = top;
    cv::copyMakeBorder(outImage, outImage, top, bottom, left, right, cv::BORDER_CONSTANT, color);
}

void Inference::GetMask(const cv::Mat& maskProposals, const cv::Mat& mask_protos, const cv::Vec4d& params, const cv::Size& srcImgShape, std::vector<Detection>& output) {
    if (output.size() == 0) return;
    int _segChannels = mask_protos.size[1];
    int _segHeight = mask_protos.size[2];
    int _segWidth = mask_protos.size[3];
    cv::Mat protos = mask_protos.reshape(0, { _segChannels, _segWidth * _segHeight });
    cv::Mat matmulRes = (maskProposals * protos).t();
    cv::Mat masks = matmulRes.reshape(output.size(), { _segHeight, _segWidth });
    vector<cv::Mat> maskChannels;
    split(masks, maskChannels);
    for (int i = 0; i < output.size(); ++i) {
        cv::Mat dest, mask;
        // sigmoid
        cv::exp(-maskChannels[i], dest);
        dest = 1.0 / (1.0 + dest);
        cv::Rect roi(int(params[2] / modelShape.width * _segWidth), int(params[3] / modelShape.height * _segHeight), int(_segWidth - params[2] / 2), int(_segHeight - params[3] / 2));
        dest = dest(roi);
        cv::resize(dest, mask, srcImgShape, cv::INTER_NEAREST);
        // crop
        cv::Rect temp_rect = output[i].box;
        mask = mask(temp_rect) > modelScoreThreshold;
        output[i].boxMask = mask;
    }
}

void Inference::DrawPred(cv::Mat& img, vector<Detection>& result) {
    int detections = result.size();
    std::cout << "Number of detections:" << detections << std::endl;
    cv::Mat mask = img.clone();
    for (int i = 0; i < detections; ++i)
    {
        Detection detection = result[i];
        cv::Rect box = detection.box;
        cv::Scalar color = detection.color;
        // Detection box
        cv::rectangle(img, box, color, 2);
        mask(detection.box).setTo(color, detection.boxMask);
        // Detection box text
        std::string classString = detection.className + ' ' + std::to_string(detection.confidence).substr(0, 4);
        cv::Size textSize = cv::getTextSize(classString, cv::FONT_HERSHEY_DUPLEX, 1, 2, 0);
        cv::Rect textBox(box.x, box.y - 40, textSize.width + 10, textSize.height + 20);
        cv::rectangle(img, textBox, color, cv::FILLED);
        cv::putText(img, classString, cv::Point(box.x + 5, box.y - 10), cv::FONT_HERSHEY_DUPLEX, 1, cv::Scalar(0, 0, 0), 2, 0);
    }
    // Detection mask: overlay the segmentation mask on the original image
    if (RunSegmentation) cv::addWeighted(img, 0.5, mask, 0.5, 0, img);
    cv::imshow("Inference", img);
    cv::imwrite("out.bmp", img);
    cv::waitKey();
    cv::destroyWindow("Inference");
}

yolov5v8_ort.cpp

#define _CRT_SECURE_NO_WARNINGS
#include "yolov5v8_ort.h"
#include <regex>
#include <random>
#define benchmark
using namespace std;DCSP_CORE::DCSP_CORE() {}DCSP_CORE::~DCSP_CORE() {delete session;
}#ifdef USE_CUDA
namespace Ort
{template<>struct TypeToTensorType<half> { static constexpr ONNXTensorElementDataType type = ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT16; };
}
#endiftemplate<typename T>
char* BlobFromImage(cv::Mat& iImg, T& iBlob) {int channels = iImg.channels();int imgHeight = iImg.rows;int imgWidth = iImg.cols;for (int c = 0; c < channels; c++) {for (int h = 0; h < imgHeight; h++) {for (int w = 0; w < imgWidth; w++) {iBlob[c * imgWidth * imgHeight + h * imgWidth + w] = typename std::remove_pointer<T>::type((iImg.at<cv::Vec3b>(h, w)[c]) / 255.0f);}}}return RET_OK;
}char* PreProcess(cv::Mat& iImg, std::vector<int> iImgSize, cv::Mat& oImg) {cv::Mat img = iImg.clone();cv::resize(iImg, oImg, cv::Size(iImgSize.at(0), iImgSize.at(1)));if (img.channels() == 1) {cv::cvtColor(oImg, oImg, cv::COLOR_GRAY2BGR);}cv::cvtColor(oImg, oImg, cv::COLOR_BGR2RGB);return RET_OK;
}void LetterBox(const cv::Mat& image, cv::Mat& outImage, cv::Vec4d& params, const cv::Size& newShape = cv::Size(640, 640),bool autoShape = false, bool scaleFill = false, bool scaleUp = true, int stride = 32, const cv::Scalar& color = cv::Scalar(114, 114, 114))
{if (false) {int maxLen = MAX(image.rows, image.cols);outImage = cv::Mat::zeros(cv::Size(maxLen, maxLen), CV_8UC3);image.copyTo(outImage(cv::Rect(0, 0, image.cols, image.rows)));params[0] = 1;params[1] = 1;params[3] = 0;params[2] = 0;}cv::Size shape = image.size();float r = std::min((float)newShape.height / (float)shape.height,(float)newShape.width / (float)shape.width);if (!scaleUp)r = std::min(r, 1.0f);float ratio[2]{ r, r };int new_un_pad[2] = { (int)std::round((float)shape.width * r),(int)std::round((float)shape.height * r) };auto dw = (float)(newShape.width - new_un_pad[0]);auto dh = (float)(newShape.height - new_un_pad[1]);if (autoShape){dw = (float)((int)dw % stride);dh = (float)((int)dh % stride);}else if (scaleFill){dw = 0.0f;dh = 0.0f;new_un_pad[0] = newShape.width;new_un_pad[1] = newShape.height;ratio[0] = (float)newShape.width / (float)shape.width;ratio[1] = (float)newShape.height / (float)shape.height;}dw /= 2.0f;dh /= 2.0f;if (shape.width != new_un_pad[0] && shape.height != new_un_pad[1]){cv::resize(image, outImage, cv::Size(new_un_pad[0], new_un_pad[1]));}else {outImage = image.clone();}int top = int(std::round(dh - 0.1f));int bottom = int(std::round(dh + 0.1f));int left = int(std::round(dw - 0.1f));int right = int(std::round(dw + 0.1f));params[0] = ratio[0];params[1] = ratio[1];params[2] = left;params[3] = top;cv::copyMakeBorder(outImage, outImage, top, bottom, left, right, cv::BORDER_CONSTANT, color);
}void GetMask(const int* const _seg_params, const float& rectConfidenceThreshold, const cv::Mat& maskProposals, const cv::Mat& mask_protos, const cv::Vec4d& params, const cv::Size& srcImgShape, std::vector<DCSP_RESULT>& output) {int _segChannels = *_seg_params;int _segHeight = *(_seg_params + 1);int _segWidth = *(_seg_params + 2);int _netHeight = *(_seg_params + 3);int _netWidth = *(_seg_params + 4);cv::Mat protos = mask_protos.reshape(0, { _segChannels,_segWidth * _segHeight });cv::Mat matmulRes = (maskProposals * protos).t();cv::Mat masks = matmulRes.reshape(output.size(), { _segHeight,_segWidth });std::vector<cv::Mat> maskChannels;split(masks, maskChannels);for (int i = 0; i < output.size(); ++i) {cv::Mat dest, mask;//sigmoidcv::exp(-maskChannels[i], dest);dest = 1.0 / (1.0 + dest);cv::Rect roi(int(params[2] / _netWidth * _segWidth), int(params[3] / _netHeight * _segHeight), int(_segWidth - params[2] / 2), int(_segHeight - params[3] / 2));dest = dest(roi);cv::resize(dest, mask, srcImgShape, cv::INTER_NEAREST);//cropcv::Rect temp_rect = output[i].box;mask = mask(temp_rect) > rectConfidenceThreshold;output[i].boxMask = mask;}
}void DCSP_CORE::DrawPred(cv::Mat& img, std::vector<DCSP_RESULT>& result) {int detections = result.size();std::cout << "Number of detections:" << detections << std::endl;cv::Mat mask = img.clone();for (int i = 0; i < detections; ++i){DCSP_RESULT detection = result[i];cv::Rect box = detection.box;cv::Scalar color = detection.color;// Detection boxcv::rectangle(img, box, color, 2);mask(detection.box).setTo(color, detection.boxMask);// Detection box textstd::string classString = detection.className + ' ' + std::to_string(detection.confidence).substr(0, 4);cv::Size textSize = cv::getTextSize(classString, cv::FONT_HERSHEY_DUPLEX, 1, 2, 0);cv::Rect textBox(box.x, box.y - 40, textSize.width + 10, textSize.height + 20);cv::rectangle(img, textBox, color, cv::FILLED);cv::putText(img, classString, cv::Point(box.x + 5, box.y - 10), cv::FONT_HERSHEY_DUPLEX, 1, cv::Scalar(0, 0, 0), 2, 0);}// Detection maskif (RunSegmentation) cv::addWeighted(img, 0.5, mask, 0.5, 0, img); //将mask加在原图上面cv::imshow("Inference", img);cv::imwrite("out.bmp", img);cv::waitKey();cv::destroyWindow("Inference");
}char* DCSP_CORE::CreateSession(DCSP_INIT_PARAM& iParams) {char* Ret = RET_OK;std::regex pattern("[\u4e00-\u9fa5]");bool result = std::regex_search(iParams.ModelPath, pattern);if (result) {char str_tmp[] = "[DCSP_ONNX]:Model path error.Change your model path without chinese characters.";Ret = str_tmp;std::cout << Ret << std::endl;return Ret;}try {modelConfidenceThreshold = iParams.modelConfidenceThreshold;rectConfidenceThreshold = iParams.RectConfidenceThreshold;iouThreshold = iParams.iouThreshold;imgSize = iParams.imgSize;modelType = iParams.ModelType;env = Ort::Env(ORT_LOGGING_LEVEL_WARNING, "Yolo");Ort::SessionOptions sessionOption;if (iParams.CudaEnable) {cudaEnable = iParams.CudaEnable;OrtCUDAProviderOptions cudaOption;cudaOption.device_id = 0;sessionOption.AppendExecutionProvider_CUDA(cudaOption);}sessionOption.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);sessionOption.SetIntraOpNumThreads(iParams.IntraOpNumThreads);sessionOption.SetLogSeverityLevel(iParams.LogSeverityLevel);#ifdef _WIN32int ModelPathSize = MultiByteToWideChar(CP_UTF8, 0, iParams.ModelPath.c_str(), static_cast<int>(iParams.ModelPath.length()), nullptr, 0);wchar_t* wide_cstr = new wchar_t[ModelPathSize + 1];MultiByteToWideChar(CP_UTF8, 0, iParams.ModelPath.c_str(), static_cast<int>(iParams.ModelPath.length()), wide_cstr, ModelPathSize);wide_cstr[ModelPathSize] = L'\0';const wchar_t* modelPath = wide_cstr;
#elseconst char* modelPath = iParams.ModelPath.c_str();
#endif // _WIN32session = new Ort::Session(env, modelPath, sessionOption);Ort::AllocatorWithDefaultOptions allocator;size_t inputNodesNum = session->GetInputCount();for (size_t i = 0; i < inputNodesNum; i++) {Ort::AllocatedStringPtr input_node_name = session->GetInputNameAllocated(i, allocator);char* temp_buf = new char[50];strcpy(temp_buf, input_node_name.get());inputNodeNames.push_back(temp_buf);}size_t OutputNodesNum = session->GetOutputCount();for (size_t i = 0; i < OutputNodesNum; i++) {Ort::AllocatedStringPtr output_node_name = session->GetOutputNameAllocated(i, allocator);char* temp_buf = new char[10];strcpy(temp_buf, output_node_name.get());outputNodeNames.push_back(temp_buf);}if (outputNodeNames.size() == 2) RunSegmentation = true;options = Ort::RunOptions{ nullptr };WarmUpSession();return RET_OK;}catch (const std::exception& e) {const char* str1 = "[DCSP_ONNX]:";const char* str2 = e.what();std::string result = std::string(str1) + std::string(str2);char* merged = new char[result.length() + 1];std::strcpy(merged, result.c_str());std::cout << merged << std::endl;delete[] merged;char str_tmps[] = "[DCSP_ONNX]:Create session failed.";char* strs = str_tmps;return strs;}
}char* DCSP_CORE::RunSession(cv::Mat& iImg, std::vector<DCSP_RESULT>& oResult) {
#ifdef benchmarkclock_t starttime_1 = clock();
#endif // benchmarkchar* Ret = RET_OK;cv::Mat processedImg;cv::Vec4d params;//resize图片尺寸,PreProcess是直接resize,LetterBox有padding操作//PreProcess(iImg, imgSize, processedImg);LetterBox(iImg, processedImg, params, cv::Size(imgSize.at(1), imgSize.at(0)));if (modelType < 4) {float* blob = new float[processedImg.total() * 3];BlobFromImage(processedImg, blob);std::vector<int64_t> inputNodeDims = { 1, 3, imgSize.at(0), imgSize.at(1) };TensorProcess(starttime_1, params, iImg, blob, inputNodeDims, oResult);}else {
#ifdef USE_CUDAhalf* blob = new half[processedImg.total() * 3];BlobFromImage(processedImg, blob);std::vector<int64_t> inputNodeDims = { 1,3,imgSize.at(0),imgSize.at(1) };TensorProcess(starttime_1, params, iImg, blob, inputNodeDims, oResult);
#endif}return Ret;
}template<typename N>
char* DCSP_CORE::TensorProcess(clock_t& starttime_1, cv::Vec4d& params, cv::Mat& iImg, N* blob, std::vector<int64_t>& inputNodeDims, std::vector<DCSP_RESULT>& oResult) {Ort::Value inputTensor = Ort::Value::CreateTensor<typename std::remove_pointer<N>::type>(Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU), blob, 3 * imgSize.at(0) * imgSize.at(1), inputNodeDims.data(), inputNodeDims.size());
#ifdef benchmarkclock_t starttime_2 = clock();
#endif // benchmarkauto outputTensor = session->Run(options, inputNodeNames.data(), &inputTensor, 1, outputNodeNames.data(), outputNodeNames.size());
#ifdef benchmarkclock_t starttime_3 = clock();
#endif // benchmarkstd::vector<int64_t> _outputTensorShape;_outputTensorShape = outputTensor[0].GetTensorTypeAndShapeInfo().GetShape();//auto output = outputTensor[0].GetTensorMutableData<typename std::remove_pointer<N>::type>();N* output = outputTensor[0].GetTensorMutableData<N>();delete blob;// yolov5 has an output of shape (batchSize, 25200, 85) (Num classes + box[x,y,w,h] + confidence[c])// yolov8 has an output of shape (batchSize, 84,  8400) (Num classes + box[x,y,w,h])// yolov5int dimensions = _outputTensorShape[1];int rows = _outputTensorShape[2];cv::Mat rowData;if (modelType < 3)rowData = cv::Mat(dimensions, rows, CV_32F, output);elserowData = cv::Mat(dimensions, rows, CV_16S, output);// yolov8if (rows > dimensions) {dimensions = _outputTensorShape[2];rows = _outputTensorShape[1];rowData = rowData.t();}std::vector<int> class_ids;std::vector<float> confidences;std::vector<cv::Rect> boxes;std::vector<std::vector<float>> picked_proposals;N* data = (N*)rowData.data;for (int i = 0; i < dimensions; ++i) {switch (modelType) {case 0://V5_ORIGIN_FP32case 7://V5_ORIGIN_FP16{N confidence = data[4];if (confidence >= modelConfidenceThreshold){cv::Mat scores;if (modelType < 3) scores = cv::Mat(1, classes.size(), CV_32FC1, data + 5);else scores = cv::Mat(1, classes.size(), CV_16SC1, data + 5);cv::Point class_id;double max_class_score;minMaxLoc(scores, 0, &max_class_score, 0, &class_id);max_class_score = *(data + 5 + class_id.x) * confidence;if (max_class_score > rectConfidenceThreshold){if (RunSegmentation) {int _segChannels = outputTensor[1].GetTensorTypeAndShapeInfo().GetShape()[1];std::vector<float> temp_proto(data + classes.size() + 5, data + classes.size() + 5 + _segChannels);picked_proposals.push_back(temp_proto);}confidences.push_back(confidence);class_ids.push_back(class_id.x);float x = (data[0] - params[2]) / params[0];float y = (data[1] - params[3]) / params[1];float w = data[2] / params[0];float h = data[3] / params[1];int left = MAX(round(x - 0.5 * w + 0.5), 0);int top = MAX(round(y - 0.5 * h + 0.5), 0);if ((left + w) > iImg.cols) { w = iImg.cols - left; }if ((top + h) > iImg.rows) { h = iImg.rows - top; }boxes.emplace_back(cv::Rect(left, top, int(w), int(h)));}}break;}case 1://V8_ORIGIN_FP32case 4://V8_ORIGIN_FP16{cv::Mat scores;if (modelType < 3) scores = cv::Mat(1, classes.size(), CV_32FC1, data + 4);else scores = cv::Mat(1, classes.size(), CV_16SC1, data + 4);cv::Point class_id;double maxClassScore;cv::minMaxLoc(scores, 0, &maxClassScore, 0, &class_id);maxClassScore = *(data + 4 + class_id.x);if (maxClassScore > rectConfidenceThreshold) {if (RunSegmentation) {int _segChannels = outputTensor[1].GetTensorTypeAndShapeInfo().GetShape()[1];std::vector<float> temp_proto(data + classes.size() + 4, data + classes.size() + 4 + _segChannels);picked_proposals.push_back(temp_proto);}confidences.push_back(maxClassScore);class_ids.push_back(class_id.x);float x = (data[0] - params[2]) / params[0];float y = (data[1] - params[3]) / params[1];float w = data[2] / params[0];float h = data[3] / params[1];int left = MAX(round(x - 0.5 * w + 0.5), 0);int top = MAX(round(y - 0.5 * h + 0.5), 0);if ((left + w) > iImg.cols) { w = iImg.cols - left; }if ((top + h) > iImg.rows) { h = iImg.rows - top; }boxes.emplace_back(cv::Rect(left, top, int(w), int(h)));}break;}}data += rows;}std::vector<int> nmsResult;cv::dnn::NMSBoxes(boxes, confidences, rectConfidenceThreshold, iouThreshold, nmsResult);std::vector<std::vector<float>> temp_mask_proposals;for (int i = 0; i < nmsResult.size(); ++i) {int idx = 
nmsResult[i];DCSP_RESULT result;result.classId = class_ids[idx];result.confidence = confidences[idx];result.box = boxes[idx];result.className = classes[result.classId];std::random_device rd;std::mt19937 gen(rd());std::uniform_int_distribution<int> dis(100, 255);result.color = cv::Scalar(dis(gen), dis(gen), dis(gen));if (result.box.width != 0 && result.box.height != 0) oResult.push_back(result);if (RunSegmentation) temp_mask_proposals.push_back(picked_proposals[idx]);}if (RunSegmentation) {cv::Mat mask_proposals;for (int i = 0; i < temp_mask_proposals.size(); ++i)mask_proposals.push_back(cv::Mat(temp_mask_proposals[i]).t());std::vector<int64_t> _outputMaskTensorShape;_outputMaskTensorShape = outputTensor[1].GetTensorTypeAndShapeInfo().GetShape();int _segChannels = _outputMaskTensorShape[1];int _segWidth = _outputMaskTensorShape[2];int _segHeight = _outputMaskTensorShape[3];N* pdata = outputTensor[1].GetTensorMutableData<N>();std::vector<float> mask(pdata, pdata + _segChannels * _segWidth * _segHeight);int _seg_params[5] = { _segChannels, _segWidth, _segHeight, inputNodeDims[2], inputNodeDims[3] };cv::Mat mask_protos = cv::Mat(mask);GetMask(_seg_params, rectConfidenceThreshold, mask_proposals, mask_protos, params, iImg.size(), oResult);}#ifdef benchmarkclock_t starttime_4 = clock();double pre_process_time = (double)(starttime_2 - starttime_1) / CLOCKS_PER_SEC * 1000;double process_time = (double)(starttime_3 - starttime_2) / CLOCKS_PER_SEC * 1000;double post_process_time = (double)(starttime_4 - starttime_3) / CLOCKS_PER_SEC * 1000;if (cudaEnable) {std::cout << "[DCSP_ONNX(CUDA)]: " << pre_process_time << "ms pre-process, " << process_time<< "ms inference, " << post_process_time << "ms post-process." << std::endl;}else {std::cout << "[DCSP_ONNX(CPU)]: " << pre_process_time << "ms pre-process, " << process_time<< "ms inference, " << post_process_time << "ms post-process." << std::endl;}
#endif // benchmarkreturn RET_OK;
}char* DCSP_CORE::WarmUpSession() {clock_t starttime_1 = clock();cv::Mat iImg = cv::Mat(cv::Size(imgSize.at(0), imgSize.at(1)), CV_8UC3);cv::Mat processedImg;cv::Vec4d params;//resize图片尺寸,PreProcess是直接resize,LetterBox有padding操作//PreProcess(iImg, imgSize, processedImg);LetterBox(iImg, processedImg, params, cv::Size(imgSize.at(1), imgSize.at(0)));if (modelType < 4) {float* blob = new float[iImg.total() * 3];BlobFromImage(processedImg, blob);std::vector<int64_t> YOLO_input_node_dims = { 1, 3, imgSize.at(0), imgSize.at(1) };Ort::Value input_tensor = Ort::Value::CreateTensor<float>(Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU), blob, 3 * imgSize.at(0) * imgSize.at(1),YOLO_input_node_dims.data(), YOLO_input_node_dims.size());auto output_tensors = session->Run(options, inputNodeNames.data(), &input_tensor, 1, outputNodeNames.data(), outputNodeNames.size());delete[] blob;clock_t starttime_4 = clock();double post_process_time = (double)(starttime_4 - starttime_1) / CLOCKS_PER_SEC * 1000;if (cudaEnable) {std::cout << "[DCSP_ONNX(CUDA)]: " << "Cuda warm-up cost " << post_process_time << " ms. " << std::endl;}}else {
#ifdef USE_CUDAhalf* blob = new half[iImg.total() * 3];BlobFromImage(processedImg, blob);std::vector<int64_t> YOLO_input_node_dims = { 1,3,imgSize.at(0),imgSize.at(1) };Ort::Value input_tensor = Ort::Value::CreateTensor<half>(Ort::MemoryInfo::CreateCpu(OrtDeviceAllocator, OrtMemTypeCPU), blob, 3 * imgSize.at(0) * imgSize.at(1), YOLO_input_node_dims.data(), YOLO_input_node_dims.size());auto output_tensors = session->Run(options, inputNodeNames.data(), &input_tensor, 1, outputNodeNames.data(), outputNodeNames.size());delete[] blob;clock_t starttime_4 = clock();double post_process_time = (double)(starttime_4 - starttime_1) / CLOCKS_PER_SEC * 1000;if (cudaEnable){std::cout << "[DCSP_ONNX(CUDA)]: " << "Cuda warm-up cost " << post_process_time << " ms. " << std::endl;}
#endif}return RET_OK;
}

yolov5v8_dnn.h

#ifndef YOLOV5V8_DNN_H
#define YOLOV5V8_DNN_H

// Cpp native
#include <fstream>
#include <vector>
#include <string>
#include <random>

// OpenCV / DNN / Inference
#include <opencv2/imgproc.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

struct Detection
{
    int class_id{ 0 };
    std::string className{};
    float confidence{ 0.0 };
    cv::Scalar color{};
    cv::Rect box{};
    cv::Mat boxMask;
};

class Inference
{
public:
    Inference(const std::string& onnxModelPath, const cv::Size& modelInputShape = { 640, 640 }, const std::string& classesTxtFile = "", const bool& runWithCuda = true);
    std::vector<Detection> runInference(const cv::Mat& input);
    void DrawPred(cv::Mat& img, std::vector<Detection>& result);

private:
    void loadClassesFromFile();
    void loadOnnxNetwork();
    void LetterBox(const cv::Mat& image, cv::Mat& outImage,
        cv::Vec4d& params, //[ratio_x,ratio_y,dw,dh]
        const cv::Size& newShape = cv::Size(640, 640),
        bool autoShape = false,
        bool scaleFill = false,
        bool scaleUp = true,
        int stride = 32,
        const cv::Scalar& color = cv::Scalar(114, 114, 114));
    void GetMask(const cv::Mat& maskProposals, const cv::Mat& mask_protos, const cv::Vec4d& params, const cv::Size& srcImgShape, std::vector<Detection>& output);

private:
    std::string modelPath{};
    bool cudaEnabled{};
    cv::Size2f modelShape{};
    bool RunSegmentation = false;

    float modelConfidenceThreshold{ 0.25 };
    float modelScoreThreshold{ 0.45 };
    float modelNMSThreshold{ 0.50 };

    bool letterBoxForSquare = true;
    cv::dnn::Net net;
    std::string classesPath{};
    std::vector<std::string> classes{ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant",
        "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
        "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot",
        "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard",
        "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" };
};

#endif // YOLOV5V8_DNN_H

yolov5v8_ort.h

#pragma once

#define    RET_OK nullptr
#define    USE_CUDA

#ifdef _WIN32
#include <Windows.h>
#include <direct.h>
#include <io.h>
#endif

#include <string>
#include <vector>
#include <cstdio>
#include <opencv2/opencv.hpp>
#include "onnxruntime_cxx_api.h"#ifdef USE_CUDA
#include <cuda_fp16.h>
#endif

enum MODEL_TYPE {
    //FLOAT32 MODEL
    YOLO_ORIGIN_V5 = 0,       //support
    YOLO_ORIGIN_V8 = 1,       //support
    YOLO_POSE_V8 = 2,
    YOLO_CLS_V8 = 3,
    YOLO_ORIGIN_V8_HALF = 4,  //support
    YOLO_POSE_V8_HALF = 5,
    YOLO_CLS_V8_HALF = 6,
    YOLO_ORIGIN_V5_HALF = 7   //support
};

typedef struct _DCSP_INIT_PARAM {
    std::string ModelPath;
    MODEL_TYPE ModelType = YOLO_ORIGIN_V8;
    std::vector<int> imgSize = { 640, 640 };
    float modelConfidenceThreshold = 0.25;
    float RectConfidenceThreshold = 0.6;
    float iouThreshold = 0.5;
    bool CudaEnable = false;
    int LogSeverityLevel = 3;
    int IntraOpNumThreads = 1;
} DCSP_INIT_PARAM;

typedef struct _DCSP_RESULT {
    int classId;
    std::string className;
    float confidence;
    cv::Rect box;
    cv::Mat boxMask;       // mask inside the bounding box
    cv::Scalar color;
} DCSP_RESULT;

class DCSP_CORE {
public:
    DCSP_CORE();
    ~DCSP_CORE();

public:
    void DrawPred(cv::Mat& img, std::vector<DCSP_RESULT>& result);
    char* CreateSession(DCSP_INIT_PARAM& iParams);
    char* RunSession(cv::Mat& iImg, std::vector<DCSP_RESULT>& oResult);
    char* WarmUpSession();
    template<typename N>
    char* TensorProcess(clock_t& starttime_1, cv::Vec4d& params, cv::Mat& iImg, N* blob, std::vector<int64_t>& inputNodeDims, std::vector<DCSP_RESULT>& oResult);

    std::vector<std::string> classes{ "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant",
        "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella",
        "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot",
        "hot dog", "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard",
        "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" };

private:
    Ort::Env env;
    Ort::Session* session;
    bool cudaEnable;
    Ort::RunOptions options;
    bool RunSegmentation = false;
    std::vector<const char*> inputNodeNames;
    std::vector<const char*> outputNodeNames;

    MODEL_TYPE modelType;
    std::vector<int> imgSize;
    float modelConfidenceThreshold;
    float rectConfidenceThreshold;
    float iouThreshold;
};

3. getopt file configuration

Configuring getopt is straightforward: we only need to add the two files below to the project. (A short usage sketch follows the two files.)

getopt.h

# ifndef __GETOPT_H_
# define __GETOPT_H_

# ifdef _GETOPT_API
#     undef _GETOPT_API
# endif
//------------------------------------------------------------------------------
# if defined(EXPORTS_GETOPT) && defined(STATIC_GETOPT)
#     error "The preprocessor definitions of EXPORTS_GETOPT and STATIC_GETOPT \
can only be used individually"
# elif defined(STATIC_GETOPT)
#     pragma message("Warning static builds of getopt violate the Lesser GNU \
Public License")
#     define _GETOPT_API
# elif defined(EXPORTS_GETOPT)
#     pragma message("Exporting getopt library")
#     define _GETOPT_API __declspec(dllexport)
# else
#     pragma message("Importing getopt library")
#     define _GETOPT_API __declspec(dllimport)
# endif

# include <tchar.h>
// Standard GNU options
# define null_argument           0 /*Argument Null*/
# define no_argument             0 /*Argument Switch Only*/
# define required_argument       1 /*Argument Required*/
# define optional_argument       2 /*Argument Optional*/
// Shorter Versions of options
# define ARG_NULL 0 /*Argument Null*/
# define ARG_NONE 0 /*Argument Switch Only*/
# define ARG_REQ  1 /*Argument Required*/
# define ARG_OPT  2 /*Argument Optional*/
// Change behavior for C\C++
# ifdef __cplusplus
#     define _BEGIN_EXTERN_C extern "C" {
#     define _END_EXTERN_C }
#     define _GETOPT_THROW throw()
# else
#     define _BEGIN_EXTERN_C
#     define _END_EXTERN_C
#     define _GETOPT_THROW
# endif
_BEGIN_EXTERN_C
extern _GETOPT_API TCHAR* optarg;
extern _GETOPT_API int    optind;
extern _GETOPT_API int    opterr;
extern _GETOPT_API int    optopt;
struct option
{
    /* The predefined macro variable __STDC__ is defined for C++, and it has the
       integer value 0 when it is used in an #if statement, indicating that the
       C++ language is not a proper superset of C, and that the compiler does not
       conform to C. In C, __STDC__ has the integer value 1. */
# if defined (__STDC__) && __STDC__
    const TCHAR* name;
# else
    TCHAR* name;
# endif
    int has_arg;
    int* flag;
    TCHAR val;
};
extern _GETOPT_API int getopt(int argc, TCHAR* const* argv, const TCHAR* optstring) _GETOPT_THROW;
extern _GETOPT_API int getopt_long
(int ___argc, TCHAR* const* ___argv, const TCHAR* __shortopts, const struct option* __longopts, int* __longind) _GETOPT_THROW;
extern _GETOPT_API int getopt_long_only
(int ___argc, TCHAR* const* ___argv, const TCHAR* __shortopts, const struct option* __longopts, int* __longind) _GETOPT_THROW;
// harly.he add for reentrant 12.09/2013
extern _GETOPT_API void getopt_reset() _GETOPT_THROW;
_END_EXTERN_C
// Undefine so the macros are not included
# undef _BEGIN_EXTERN_C
# undef _END_EXTERN_C
# undef _GETOPT_THROW
# undef _GETOPT_API
# endif  // __GETOPT_H_

getopt.c

# ifndef _CRT_SECURE_NO_WARNINGS
#     define _CRT_SECURE_NO_WARNINGS
# endif

# include <stdlib.h>
# include <stdio.h>
# include <tchar.h>
# include "getopt.h"

# ifdef __cplusplus
#     define _GETOPT_THROW throw()
# else
#     define _GETOPT_THROW
# endif

enum ENUM_ORDERING
{
    REQUIRE_ORDER, PERMUTE, RETURN_IN_ORDER
};

struct _getopt_data
{
    int     optind;
    int     opterr;
    int     optopt;
    TCHAR* optarg;
    int     __initialized;
    TCHAR* __nextchar;
    int     __ordering;
    int     __posixly_correct;
    int     __first_nonopt;
    int     __last_nonopt;
};

static struct _getopt_data  getopt_data = { 0, 0, 0, NULL, 0, NULL, 0, 0, 0, 0 };

TCHAR* optarg = NULL;
int     optind = 1;
int     opterr = 1;
int     optopt = _T('?');static void exchange(TCHAR** argv, struct _getopt_data* d)
{int     bottom = d->__first_nonopt;int     middle = d->__last_nonopt;int     top = d->optind;TCHAR* tem;while (top > middle && middle > bottom){if (top - middle > middle - bottom){int len = middle - bottom;register int i;for (i = 0; i < len; i++){tem = argv[bottom + i];argv[bottom + i] = argv[top - (middle - bottom) + i];argv[top - (middle - bottom) + i] = tem;}top -= len;}else{int len = top - middle;register int i;for (i = 0; i < len; i++){tem = argv[bottom + i];argv[bottom + i] = argv[middle + i];argv[middle + i] = tem;}bottom += len;}}d->__first_nonopt += (d->optind - d->__last_nonopt);d->__last_nonopt = d->optind;
}static const TCHAR* _getopt_initialize(const TCHAR* optstring, struct _getopt_data* d, int posixly_correct)
{d->__first_nonopt = d->__last_nonopt = d->optind;d->__nextchar = NULL;d->__posixly_correct = posixly_correct| !!_tgetenv(_T("POSIXLY_CORRECT"));if (optstring[0] == _T('-')){d->__ordering = RETURN_IN_ORDER;++optstring;}else if (optstring[0] == _T('+')){d->__ordering = REQUIRE_ORDER;++optstring;}else if (d->__posixly_correct){d->__ordering = REQUIRE_ORDER;}else{d->__ordering = PERMUTE;}return optstring;
}int _getopt_internal_r(int argc, TCHAR* const* argv, const TCHAR* optstring, const struct option* longopts, int* longind, int long_only, struct _getopt_data* d, int posixly_correct)
{int print_errors = d->opterr;if (argc < 1){return -1;}d->optarg = NULL;if (d->optind == 0 || !d->__initialized){if (d->optind == 0){d->optind = 1;}optstring = _getopt_initialize(optstring, d, posixly_correct);d->__initialized = 1;}else if (optstring[0] == _T('-') || optstring[0] == _T('+')){optstring++;}if (optstring[0] == _T(':')){print_errors = 0;}if (d->__nextchar == NULL || *d->__nextchar == _T('\0')){if (d->__last_nonopt > d->optind){d->__last_nonopt = d->optind;}if (d->__first_nonopt > d->optind){d->__first_nonopt = d->optind;}if (d->__ordering == PERMUTE){if (d->__first_nonopt != d->__last_nonopt&& d->__last_nonopt != d->optind){exchange((TCHAR**)argv, d);}else if (d->__last_nonopt != d->optind){d->__first_nonopt = d->optind;}while (d->optind< argc&& (argv[d->optind][0] != _T('-')|| argv[d->optind][1] == _T('\0'))){d->optind++;}d->__last_nonopt = d->optind;}if (d->optind != argc && !_tcscmp(argv[d->optind], _T("--"))){d->optind++;if (d->__first_nonopt != d->__last_nonopt&& d->__last_nonopt != d->optind){exchange((TCHAR**)argv, d);}else if (d->__first_nonopt == d->__last_nonopt){d->__first_nonopt = d->optind;}d->__last_nonopt = argc;d->optind = argc;}if (d->optind == argc){if (d->__first_nonopt != d->__last_nonopt){d->optind = d->__first_nonopt;}return -1;}if ((argv[d->optind][0] != _T('-')|| argv[d->optind][1] == _T('\0'))){if (d->__ordering == REQUIRE_ORDER){return -1;}d->optarg = argv[d->optind++];return 1;}d->__nextchar = (argv[d->optind]+ 1 + (longopts != NULL&& argv[d->optind][1] == _T('-')));}if (longopts != NULL&& (argv[d->optind][1] == _T('-')|| (long_only && (argv[d->optind][2]|| !_tcschr(optstring, argv[d->optind][1]))))){TCHAR* nameend;const struct option* p;const struct option* pfound = NULL;int                     exact = 0;int                     ambig = 0;int                     indfound = -1;int                     option_index;for (nameend = d->__nextchar;*nameend && *nameend != _T('=');nameend++);for (p = longopts, option_index = 0; p->name; p++, option_index++){if (!_tcsncmp(p->name, d->__nextchar, nameend - d->__nextchar)){if ((unsigned int)(nameend - d->__nextchar)== (unsigned int)_tcslen(p->name)){pfound = p;indfound = option_index;exact = 1;break;}else if (pfound == NULL){pfound = p;indfound = option_index;}else if (long_only|| pfound->has_arg != p->has_arg|| pfound->flag != p->flag|| pfound->val != p->val){ambig = 1;}}}if (ambig && !exact){if (print_errors){_ftprintf(stderr, _T("%s: option '%s' is ambiguous\n"), argv[0], argv[d->optind]);}d->__nextchar += _tcslen(d->__nextchar);d->optind++;d->optopt = 0;return _T('?');}if (pfound != NULL){option_index = indfound;d->optind++;if (*nameend){if (pfound->has_arg){d->optarg = nameend + 1;}else{if (print_errors){if (argv[d->optind - 1][1] == _T('-')){_ftprintf(stderr, _T("%s: option '--%s' doesn't allow ")_T("an argument\n"), argv[0], pfound->name);}else{_ftprintf(stderr, _T("%s: option '%c%s' doesn't allow ")_T("an argument\n"), argv[0], argv[d->optind - 1][0], pfound->name);}}d->__nextchar += _tcslen(d->__nextchar);d->optopt = pfound->val;return _T('?');}}else if (pfound->has_arg == 1){if (d->optind < argc){d->optarg = argv[d->optind++];}else{if (print_errors){_ftprintf(stderr, _T("%s: option '--%s' requires an ")_T("argument\n"), argv[0], pfound->name);}d->__nextchar += _tcslen(d->__nextchar);d->optopt = pfound->val;return optstring[0] == _T(':') ? 
_T(':') : _T('?');}}d->__nextchar += _tcslen(d->__nextchar);if (longind != NULL){*longind = option_index;}if (pfound->flag){*(pfound->flag) = pfound->val;return 0;}return pfound->val;}if (!long_only|| argv[d->optind][1]== _T('-')|| _tcschr(optstring, *d->__nextchar)== NULL){if (print_errors){if (argv[d->optind][1] == _T('-')){/* --option */_ftprintf(stderr, _T("%s: unrecognized option '--%s'\n"), argv[0], d->__nextchar);}else{/* +option or -option */_ftprintf(stderr, _T("%s: unrecognized option '%c%s'\n"), argv[0], argv[d->optind][0], d->__nextchar);}}d->__nextchar = (TCHAR*)_T("");d->optind++;d->optopt = 0;return _T('?');}}{TCHAR   c = *d->__nextchar++;TCHAR* temp = (TCHAR*)_tcschr(optstring, c);if (*d->__nextchar == _T('\0')){++d->optind;}if (temp == NULL || c == _T(':') || c == _T(';')){if (print_errors){_ftprintf(stderr, _T("%s: invalid option -- '%c'\n"), argv[0], c);}d->optopt = c;return _T('?');}if (temp[0] == _T('W') && temp[1] == _T(';')){TCHAR* nameend;const struct option* p;const struct option* pfound = NULL;int                     exact = 0;int                     ambig = 0;int                     indfound = 0;int                     option_index;if (*d->__nextchar != _T('\0')){d->optarg = d->__nextchar;d->optind++;}else if (d->optind == argc){if (print_errors){_ftprintf(stderr, _T("%s: option requires an argument -- '%c'\n"), argv[0], c);}d->optopt = c;if (optstring[0] == _T(':')){c = _T(':');}else{c = _T('?');}return c;}else{d->optarg = argv[d->optind++];}for (d->__nextchar = nameend = d->optarg;*nameend && *nameend != _T('=');nameend++);for (p = longopts, option_index = 0;p->name;p++, option_index++){if (!_tcsncmp(p->name, d->__nextchar, nameend - d->__nextchar)){if ((unsigned int)(nameend - d->__nextchar)== _tcslen(p->name)){pfound = p;indfound = option_index;exact = 1;break;}else if (pfound == NULL){pfound = p;indfound = option_index;}else if (long_only|| pfound->has_arg != p->has_arg|| pfound->flag != p->flag|| pfound->val != p->val){ambig = 1;}}}if (ambig && !exact){if (print_errors){_ftprintf(stderr, _T("%s: option '-W %s' is ambiguous\n"), argv[0], d->optarg);}d->__nextchar += _tcslen(d->__nextchar);d->optind++;return _T('?');}if (pfound != NULL){option_index = indfound;if (*nameend){if (pfound->has_arg){d->optarg = nameend + 1;}else{if (print_errors){_ftprintf(stderr, _T("%s: option '-W %s' doesn't allow ")_T("an argument\n"), argv[0], pfound->name);}d->__nextchar += _tcslen(d->__nextchar);return _T('?');}}else if (pfound->has_arg == 1){if (d->optind < argc){d->optarg = argv[d->optind++];}else{if (print_errors){_ftprintf(stderr, _T("%s: option '-W %s' requires an ")_T("argument\n"), argv[0], pfound->name);}d->__nextchar += _tcslen(d->__nextchar);return optstring[0] == _T(':') ? _T(':') : _T('?');}}else{d->optarg = NULL;}d->__nextchar += _tcslen(d->__nextchar);if (longind != NULL){*longind = option_index;}if (pfound->flag){*(pfound->flag) = pfound->val;return 0;}return pfound->val;}d->__nextchar = NULL;return _T('W');}if (temp[1] == _T(':')){if (temp[2] == _T(':')){if (*d->__nextchar != _T('\0')){d->optarg = d->__nextchar;d->optind++;}else{d->optarg = NULL;}d->__nextchar = NULL;}else{if (*d->__nextchar != _T('\0')){d->optarg = d->__nextchar;d->optind++;}else if (d->optind == argc){if (print_errors){_ftprintf(stderr, _T("%s: option requires an ")_T("argument -- '%c'\n"), argv[0], c);}d->optopt = c;if (optstring[0] == _T(':')){c = _T(':');}else{c = _T('?');}}else{d->optarg = argv[d->optind++];}d->__nextchar = NULL;}}return c;}
}int _getopt_internal(int argc, TCHAR* const* argv, const TCHAR* optstring, const struct option* longopts, int* longind, int long_only, int posixly_correct)
{int result;getopt_data.optind = optind;getopt_data.opterr = opterr;result = _getopt_internal_r(argc, argv, optstring, longopts, longind, long_only, &getopt_data, posixly_correct);optind = getopt_data.optind;optarg = getopt_data.optarg;optopt = getopt_data.optopt;return result;
}int getopt(int argc, TCHAR* const* argv, const TCHAR* optstring) _GETOPT_THROW
{return _getopt_internal(argc, argv, optstring, (const struct option*)0, (int*)0, 0, 0);
}int getopt_long(int argc, TCHAR* const* argv, const TCHAR* options, const struct option* long_options, int* opt_index) _GETOPT_THROW
{return _getopt_internal(argc, argv, options, long_options, opt_index, 0, 0);
}int _getopt_long_r(int argc, TCHAR* const* argv, const TCHAR* options, const struct option* long_options, int* opt_index, struct _getopt_data* d)
{return _getopt_internal_r(argc, argv, options, long_options, opt_index, 0, d, 0);
}int getopt_long_only(int argc, TCHAR* const* argv, const TCHAR* options, const struct option* long_options, int* opt_index) _GETOPT_THROW
{return _getopt_internal(argc, argv, options, long_options, opt_index, 1, 0);
}int _getopt_long_only_r(int argc, TCHAR* const* argv, const TCHAR* options, const struct option* long_options, int* opt_index, struct _getopt_data* d)
{return _getopt_internal_r(argc, argv, options, long_options, opt_index, 1, d, 0);
}void getopt_reset()
{optarg = NULL;optind = 1;opterr = 1;optopt = _T('?');//getopt_data.optind = 0;getopt_data.opterr = 0;getopt_data.optopt = 0;getopt_data.optarg = NULL;getopt_data.__initialized = 0;getopt_data.__nextchar = NULL;getopt_data.__ordering = 0;getopt_data.__posixly_correct = 0;getopt_data.__first_nonopt = 0;getopt_data.__last_nonopt = 0;
}
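The two files above are all that is needed to compile getopt into the project. As a usage sketch only (main.cpp above keeps its getopt include commented out and hard-codes the paths), the hypothetical _tmain below shows how the image and model paths could instead be read from the command line; the option letters -i and -m are arbitrary choices for this illustration.

#include <stdio.h>
#include <string>
#include <tchar.h>
#include "getopt.h"

int _tmain(int argc, TCHAR* argv[])
{
    std::basic_string<TCHAR> img_path, model_path;
    int c;
    // Parse -i <image path> and -m <model path> with the bundled getopt.
    while ((c = getopt(argc, argv, _T("i:m:"))) != -1)
    {
        switch (c)
        {
        case _T('i'): img_path = optarg; break;   // optarg points at the option's argument
        case _T('m'): model_path = optarg; break;
        default: return 1;                        // unknown option: getopt has already printed a warning
        }
    }
    _tprintf(_T("image: %s  model: %s\n"), img_path.c_str(), model_path.c_str());
    return 0;
}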

After completing the steps above, you can run the prediction.

Note: it is best to use OpenCV 4.8.1; other OpenCV versions may report errors at this step.
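If it is unclear which OpenCV build an executable actually picks up, a quick version check such as the minimal sketch below (again, not part of the original project) can save time before debugging model-loading errors.

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    // CV_VERSION is the version of the headers the project was compiled against (expected to be 4.8.1 here);
    // cv::getVersionString() reports the OpenCV library loaded at runtime. The two should match.
    std::cout << "OpenCV headers: " << CV_VERSION << std::endl;
    std::cout << "OpenCV runtime: " << cv::getVersionString() << std::endl;
    return 0;
}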

