How to Run a Custom OCR Model in OpenCV


We first describe how to obtain a custom OCR model, then how to convert your own OCR model so it can be run correctly by the opencv_dnn module, and finally we provide some pre-trained models.

Training Your Own OCR Model

This repository is a good starting point for training your own OCR model. In it, MJSynth + SynthText is used as the training set by default, and you can also configure the model architecture and datasets you need.

Converting an OCR Model to ONNX and Using It with OpenCV DNN

After training is complete, use transform_to_onnx.py to convert the model to ONNX format.

Running on a Webcam

Source code:

'''
Text detection model: https://github.com/argman/EAST
Download link: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1

CRNN Text recognition model taken from here: https://github.com/meijieru/crnn.pytorch
How to convert from pb to onnx:
Using classes from here: https://github.com/meijieru/crnn.pytorch/blob/master/models/crnn.py

More converted onnx text recognition models can be downloaded directly here:
Download link: https://drive.google.com/drive/folders/1cTbQ3nuZG-EKWak6emD_s8_hHXWz7lAr?usp=sharing
And these models taken from here:
https://github.com/clovaai/deep-text-recognition-benchmark

import torch
from models.crnn import CRNN

model = CRNN(32, 1, 37, 256)
model.load_state_dict(torch.load('crnn.pth'))
dummy_input = torch.randn(1, 1, 32, 100)
torch.onnx.export(model, dummy_input, "crnn.onnx", verbose=True)
'''


# Import required modules
import numpy as np
import cv2 as cv
import math
import argparse

############ Add argument parser for command line arguments ############
parser = argparse.ArgumentParser(
    description="Use this script to run the TensorFlow implementation (https://github.com/argman/EAST) of "
                "EAST: An Efficient and Accurate Scene Text Detector (https://arxiv.org/abs/1704.03155v2). "
                "The OCR model can be obtained by converting the pretrained CRNN model to .onnx format from the github repository https://github.com/meijieru/crnn.pytorch "
                "or you can download a trained OCR model directly from https://drive.google.com/drive/folders/1cTbQ3nuZG-EKWak6emD_s8_hHXWz7lAr?usp=sharing")
parser.add_argument('--input',
                    help='Path to input image or video file. Skip this argument to capture frames from a camera.')
parser.add_argument('--model', '-m', required=True,
                    help='Path to a binary .pb file containing the trained detector network.')
parser.add_argument('--ocr', default="crnn.onnx",
                    help='Path to a binary .pb or .onnx file containing the trained recognition network.')
parser.add_argument('--width', type=int, default=320,
                    help='Preprocess input image by resizing to a specific width. It should be a multiple of 32.')
parser.add_argument('--height', type=int, default=320,
                    help='Preprocess input image by resizing to a specific height. It should be a multiple of 32.')
parser.add_argument('--thr', type=float, default=0.5,
                    help='Confidence threshold.')
parser.add_argument('--nms', type=float, default=0.4,
                    help='Non-maximum suppression threshold.')
args = parser.parse_args()


############ Utility functions ############

def fourPointsTransform(frame, vertices):
    vertices = np.asarray(vertices)
    outputSize = (100, 32)
    targetVertices = np.array([
        [0, outputSize[1] - 1],
        [0, 0],
        [outputSize[0] - 1, 0],
        [outputSize[0] - 1, outputSize[1] - 1]], dtype="float32")

    rotationMatrix = cv.getPerspectiveTransform(vertices, targetVertices)
    result = cv.warpPerspective(frame, rotationMatrix, outputSize)
    return result


def decodeText(scores):
    text = ""
    alphabet = "0123456789abcdefghijklmnopqrstuvwxyz"
    for i in range(scores.shape[0]):
        c = np.argmax(scores[i][0])
        if c != 0:
            text += alphabet[c - 1]
        else:
            text += '-'

    # adjacent same letters as well as background text must be removed to get the final output
    char_list = []
    for i in range(len(text)):
        if text[i] != '-' and (not (i > 0 and text[i] == text[i - 1])):
            char_list.append(text[i])
    return ''.join(char_list)


def decodeBoundingBoxes(scores, geometry, scoreThresh):
    detections = []
    confidences = []

    ############ CHECK DIMENSIONS AND SHAPES OF geometry AND scores ############
    assert len(scores.shape) == 4, "Incorrect dimensions of scores"
    assert len(geometry.shape) == 4, "Incorrect dimensions of geometry"
    assert scores.shape[0] == 1, "Invalid dimensions of scores"
    assert geometry.shape[0] == 1, "Invalid dimensions of geometry"
    assert scores.shape[1] == 1, "Invalid dimensions of scores"
    assert geometry.shape[1] == 5, "Invalid dimensions of geometry"
    assert scores.shape[2] == geometry.shape[2], "Invalid dimensions of scores and geometry"
    assert scores.shape[3] == geometry.shape[3], "Invalid dimensions of scores and geometry"
    height = scores.shape[2]
    width = scores.shape[3]
    for y in range(0, height):
        # Extract data from scores
        scoresData = scores[0][0][y]
        x0_data = geometry[0][0][y]
        x1_data = geometry[0][1][y]
        x2_data = geometry[0][2][y]
        x3_data = geometry[0][3][y]
        anglesData = geometry[0][4][y]
        for x in range(0, width):
            score = scoresData[x]

            # If score is lower than threshold score, move to next x
            if (score < scoreThresh):
                continue

            # Calculate offset
            offsetX = x * 4.0
            offsetY = y * 4.0
            angle = anglesData[x]

            # Calculate cos and sin of angle
            cosA = math.cos(angle)
            sinA = math.sin(angle)
            h = x0_data[x] + x2_data[x]
            w = x1_data[x] + x3_data[x]

            # Calculate offset
            offset = ([offsetX + cosA * x1_data[x] + sinA * x2_data[x],
                       offsetY - sinA * x1_data[x] + cosA * x2_data[x]])

            # Find points for rectangle
            p1 = (-sinA * h + offset[0], -cosA * h + offset[1])
            p3 = (-cosA * w + offset[0], sinA * w + offset[1])
            center = (0.5 * (p1[0] + p3[0]), 0.5 * (p1[1] + p3[1]))
            detections.append((center, (w, h), -1 * angle * 180.0 / math.pi))
            confidences.append(float(score))

    # Return detections and confidences
    return [detections, confidences]


def main():
    # Read and store arguments
    confThreshold = args.thr
    nmsThreshold = args.nms
    inpWidth = args.width
    inpHeight = args.height
    modelDetector = args.model
    modelRecognition = args.ocr

    # Load networks
    detector = cv.dnn.readNet(modelDetector)
    recognizer = cv.dnn.readNet(modelRecognition)

    # Create a new named window
    kWinName = "EAST: An Efficient and Accurate Scene Text Detector"
    cv.namedWindow(kWinName, cv.WINDOW_NORMAL)
    outNames = []
    outNames.append("feature_fusion/Conv_7/Sigmoid")
    outNames.append("feature_fusion/concat_3")

    # Open a video file or an image file or a camera stream
    cap = cv.VideoCapture(args.input if args.input else 0)

    tickmeter = cv.TickMeter()
    while cv.waitKey(1) < 0:
        # Read frame
        hasFrame, frame = cap.read()
        if not hasFrame:
            cv.waitKey()
            break

        # Get frame height and width
        height_ = frame.shape[0]
        width_ = frame.shape[1]
        rW = width_ / float(inpWidth)
        rH = height_ / float(inpHeight)

        # Create a 4D blob from frame
        blob = cv.dnn.blobFromImage(frame, 1.0, (inpWidth, inpHeight), (123.68, 116.78, 103.94), True, False)

        # Run the detection model
        detector.setInput(blob)

        tickmeter.start()
        outs = detector.forward(outNames)
        tickmeter.stop()

        # Get scores and geometry
        scores = outs[0]
        geometry = outs[1]
        [boxes, confidences] = decodeBoundingBoxes(scores, geometry, confThreshold)

        # Apply NMS
        indices = cv.dnn.NMSBoxesRotated(boxes, confidences, confThreshold, nmsThreshold)
        for i in indices:
            # get 4 corners of the rotated rect
            vertices = cv.boxPoints(boxes[i])

            # scale the bounding box coordinates based on the respective ratios
            for j in range(4):
                vertices[j][0] *= rW
                vertices[j][1] *= rH

            # get cropped image using perspective transform
            if modelRecognition:
                cropped = fourPointsTransform(frame, vertices)
                cropped = cv.cvtColor(cropped, cv.COLOR_BGR2GRAY)

                # Create a 4D blob from cropped image
                blob = cv.dnn.blobFromImage(cropped, size=(100, 32), mean=127.5, scalefactor=1 / 127.5)
                recognizer.setInput(blob)

                # Run the recognition model
                tickmeter.start()
                result = recognizer.forward()
                tickmeter.stop()

                # decode the result into text
                wordRecognized = decodeText(result)
                cv.putText(frame, wordRecognized, (int(vertices[1][0]), int(vertices[1][1])),
                           cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0))

            for j in range(4):
                p1 = (int(vertices[j][0]), int(vertices[j][1]))
                p2 = (int(vertices[(j + 1) % 4][0]), int(vertices[(j + 1) % 4][1]))
                cv.line(frame, p1, p2, (0, 255, 0), 1)

        # Put efficiency information
        label = 'Inference time: %.2f ms' % (tickmeter.getTimeMilli())
        cv.putText(frame, label, (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0))

        # Display the frame
        cv.imshow(kWinName, frame)

        tickmeter.reset()


if __name__ == "__main__":
    main()
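The geometry decoding inside decodeBoundingBoxes can be illustrated in isolation. The sketch below (decode_cell is an illustrative name, not part of the script) reconstructs one rotated rectangle from EAST's RBOX output for a single feature-map cell: the four distances from the cell's point to the top, right, bottom, and left box edges, plus a rotation angle, with the 4-pixel feature-map stride used above.

```python
import math

def decode_cell(x, y, d0, d1, d2, d3, angle):
    """Decode one EAST RBOX cell into a rotated rect (center, (w, h), angle_deg).

    d0..d3 are distances from the cell's point to the top, right, bottom
    and left box edges; each feature-map cell covers 4 input pixels.
    """
    offsetX, offsetY = x * 4.0, y * 4.0
    cosA, sinA = math.cos(angle), math.sin(angle)
    h = d0 + d2          # box height = top + bottom distances
    w = d1 + d3          # box width  = right + left distances
    # a reference point on the box, from which two opposite corners follow
    offset = (offsetX + cosA * d1 + sinA * d2,
              offsetY - sinA * d1 + cosA * d2)
    p1 = (-sinA * h + offset[0], -cosA * h + offset[1])
    p3 = (-cosA * w + offset[0], sinA * w + offset[1])
    center = (0.5 * (p1[0] + p3[0]), 0.5 * (p1[1] + p3[1]))
    return center, (w, h), -angle * 180.0 / math.pi

# Axis-aligned case (angle = 0): cell (10, 5) is at pixel (40, 20),
# 4 px to top/bottom edges and 10 px to left/right edges,
# so the box is 20x8 and centered exactly on that pixel.
center, size, deg = decode_cell(10, 5, 4, 10, 4, 10, 0.0)
```

For angle = 0 the trigonometry collapses and the center lands on the cell's pixel position, which is a quick sanity check before trusting the rotated cases.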
$ python text_detection.py -m [path_to_text_detect_model] --ocr [path_to_text_recognition_model]
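decodeText in the script above implements greedy CTC decoding: take the argmax class per time step, treat class 0 as the CTC blank, then collapse repeated characters and drop blanks. A minimal standalone sketch of the same idea, operating on a plain list of argmax indices instead of the network's score tensor (ctc_greedy_decode is an illustrative name):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"  # class 0 is the CTC blank

def ctc_greedy_decode(best_classes):
    """Collapse a per-timestep argmax sequence into text.

    best_classes: one class index per time step; 0 means blank,
    k > 0 means ALPHABET[k - 1].
    """
    raw = ['-' if c == 0 else ALPHABET[c - 1] for c in best_classes]
    out = []
    for i, ch in enumerate(raw):
        # keep a character only if it is not blank and not an immediate repeat
        if ch != '-' and not (i > 0 and ch == raw[i - 1]):
            out.append(ch)
    return ''.join(out)

# 'h','h',blank,'e','l','l',blank,'l','o' collapses to "hello":
# the blank between the two 'l' runs is what preserves the double letter.
word = ctc_greedy_decode([18, 18, 0, 15, 22, 22, 0, 22, 25])
```

Note that without the blank class, "hello" could never be produced, since adjacent identical outputs are always merged.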

Pre-Trained ONNX Models

Some pre-trained models can be found at https://drive.google.com/drive/folders/1cTbQ3nuZG-EKWak6emD_s8_hHXWz7lAr?usp=sharing.

The table below shows how they perform on different text recognition datasets:

The performance of the text recognition models was measured with OpenCV DNN alone; it does not include the text detection model.

Model Selection Suggestions

The input of the text recognition model is the output of the text detection model, so the quality of text detection strongly affects the quality of text recognition.

DenseNet_CTC has the fewest parameters and the best FPS, making it suitable for edge devices that are highly sensitive to compute cost. If your computing resources are limited but you want better accuracy, VGG_CTC is a good choice.

CRNN_VGG_BiLSTM_CTC is suitable for scenarios that demand high recognition accuracy.

