1. Related links
https://blog.csdn.net/leiwuhen92/article/details/126419345
EasyOCR recognition model training (important, must-read)
2. Installation
pip install easyocr
3. Testing the official models
The code below calls the model and visualizes the detection results; it builds on EasyOCR with extra functionality to improve text-recognition accuracy and coverage.
https://gitee.com/deng-kan/augmented_easyocr/tree/master
Source code wrapping EasyOCR with extra functionality (important)
Model downloads:
https://www.jaided.ai/easyocr/modelhub/
Using the English-only model
Place the downloaded models under the appropriate path in the user directory (by default EasyOCR looks for models under ~/.EasyOCR/model).
Use the dedicated English model 'en'; using 'en' alone is more accurate than a multi-language setup.
It can also recognize CAPTCHAs, i.e. it handles partially occluded digits and English letters reasonably well.
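As a minimal sketch of how the English-only model might be called (the `recognize` helper and the allowlist string are illustrative, not from the notes; `gpu=False` is assumed for a CPU-only machine):

```python
# Illustrative EasyOCR sketch: English-only model with a character allowlist.
# The allowlist below is a placeholder for CAPTCHA-style digits/letters.
ALLOWLIST = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def recognize(image_path, allowlist=ALLOWLIST):
    """Run EasyOCR on one image and return the recognized strings."""
    import easyocr  # imported lazily so the module loads without easyocr
    reader = easyocr.Reader(['en'], gpu=False)  # 'en' = English-only model
    # readtext returns a list of (bbox, text, confidence) tuples;
    # allowlist restricts the characters the recognizer may output.
    results = reader.readtext(image_path, allowlist=allowlist)
    return [text for _bbox, text, _conf in results]
```

Restricting the output alphabet via `allowlist` is what makes the English-only setup more accurate on digits-and-letters inputs such as CAPTCHAs.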
4. Training and testing on a custom dataset
See the link below; it is essential reading.
https://blog.csdn.net/leiwuhen92/article/details/126419345
EasyOCR recognition model training (important, must-read)
Only the repository used for training needs to be downloaded:
https://github.com/clovaai/deep-text-recognition-benchmark#when-you-need-to-train-on-your-own-dataset-or-non-latin-language-datasets
To train on your own image set, downloading deep-text-recognition-benchmark alone is enough; use it to run the training.
(1) Label-conversion script
import cv2
import csv
import os

input_path = "E:/tupian/ocr2/ocr_data"
output_path = "./data"
ratio = 0.2


def split_list(lst, ratios, num_splits):
    """Split a list into sub-lists by the given ratios.

    :param lst: list to split
    :param ratios: fraction of elements for each sub-list
    :param num_splits: number of sub-lists
    :return: list of sub-lists
    """
    if len(ratios) != num_splits:
        raise ValueError("The length of ratios must equal num_splits.")
    if sum(ratios) != 1:
        raise ValueError("The sum of ratios must be equal to 1.")
    n = len(lst)
    result = []
    start = 0
    for i in range(num_splits):
        end = start + int(n * ratios[i])
        result.append(lst[start:end])
        start = end
    return result


def read_path(input_path, output_path, ratio):
    # Walk all image files in the input directory.
    train_list = []
    img_output_path_train = './data/train/images/'
    img_output_path_valid = './data/valid/images/'
    img_output_path_eval = './data/eval/images/'
    for out_dir in (img_output_path_train, img_output_path_valid,
                    img_output_path_eval):
        if not os.path.exists(out_dir):
            os.makedirs(out_dir)
    for filename in os.listdir(input_path):
        if '.jpg' in filename:
            img_filename = filename
            img_path = input_path + '/' + filename
            txt_path = input_path + '/' + filename.replace('.jpg', '.txt')
            print(img_path)
            print(txt_path)
            # Read the image and copy it into each split folder.
            img = cv2.imread(img_path)
            cv2.imwrite(img_output_path_train + img_filename, img)
            cv2.imwrite(img_output_path_valid + img_filename, img)
            cv2.imwrite(img_output_path_eval + img_filename, img)
            # Read the label from the matching .txt file.
            label = ''
            with open(txt_path, "r", encoding='utf-8') as f:
                data = f.read()  # whole file as one string
            for char in data:
                # if char == ' ':
                #     char = '<space>'
                label = label + char + ' '
            label = label.replace(' \n ', '').replace(' ', '')
            print(label)
            name_label_list = ['images/' + img_filename, label]
            print(name_label_list)
            train_list.append(name_label_list)
    # Split the (image path, label) pairs by the given ratio.
    ratios = [1 - ratio, ratio]
    num_splits = 2
    result = split_list(train_list, ratios, num_splits)
    # gt.txt is written tab-separated (tsv-style); 'w' opens for writing and
    # newline='' prevents csv.writer from emitting a blank line after each row.
    with open('./data/train/gt.txt', 'w', newline='') as tsvfile:
        writer = csv.writer(tsvfile, delimiter='\t')
        writer.writerows(result[0])
    with open('./data/valid/gt.txt', 'w', newline='') as tsvfile:
        writer = csv.writer(tsvfile, delimiter='\t')
        writer.writerows(result[1])
    # Note: eval reuses the valid split here.
    with open('./data/eval/gt.txt', 'w', newline='') as tsvfile:
        writer = csv.writer(tsvfile, delimiter='\t')
        writer.writerows(result[1])


# Note: if the input path includes the home directory, it must be written as
# "/home/..." rather than "~", otherwise the directory will not be found.
read_path(input_path, output_path, ratio)
# print(os.getcwd())
This converts the data into the dataset format required for training: each split folder contains an images/ directory and a gt.txt file.
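For reference, the gt.txt written by the script is tab-separated, one image-path/label pair per line. A minimal illustration with hypothetical file names and labels:

```python
import csv
import io

# Hypothetical (image path, label) pairs in the format the script emits.
rows = [['images/0001.jpg', 'AB12CD'],
        ['images/0002.jpg', 'XY34ZW']]

buf = io.StringIO()
writer = csv.writer(buf, delimiter='\t')  # tab-separated, like gt.txt
writer.writerows(rows)
print(buf.getvalue())
# each line: images/0001.jpg<TAB>AB12CD
```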
(2) Generating the LMDB dataset (important)
Note: generating the LMDB format here cannot be skipped.
python create_lmdb_dataset.py --inputPath data/train --gtFile data/train/gt.txt --outputPath result/train
python create_lmdb_dataset.py --inputPath data/valid --gtFile data/valid/gt.txt --outputPath result/valid
python create_lmdb_dataset.py --inputPath data/eval --gtFile data/eval/gt.txt --outputPath result/eval
The directory structure then needs adjusting: manually move the .mdb files into an MJ-ST folder.
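That manual step might be sketched as follows, assuming create_lmdb_dataset.py leaves data.mdb and lock.mdb (the standard LMDB file pair) directly inside each result/&lt;split&gt; folder; the exact layout is a guess based on the note above:

```python
import os
import shutil

def move_into_mj_st(split_dir):
    """Move the LMDB files of one split into an MJ-ST subfolder."""
    target = os.path.join(split_dir, 'MJ-ST')
    os.makedirs(target, exist_ok=True)
    for name in ('data.mdb', 'lock.mdb'):  # files produced by LMDB
        src = os.path.join(split_dir, name)
        if os.path.exists(src):
            shutil.move(src, os.path.join(target, name))

# Apply to the splits used by training, if they exist.
for split in ('result/train', 'result/valid'):
    if os.path.isdir(split):
        move_into_mj_st(split)
```

The MJ-ST folder name matters because the training command below passes --select_data MJ-ST, which the benchmark code matches against dataset subdirectory names.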
(3) Pretrained model download
https://github.com/WelY1/lp_recognition_TensorRT
(4) Training
Only the training repository deep-text-recognition-benchmark needs to be downloaded:
https://github.com/clovaai/deep-text-recognition-benchmark#when-you-need-to-train-on-your-own-dataset-or-non-latin-language-datasets
Choose the character set to use for training (the --character argument of train.py).
##########
python train.py --train_data result/train --valid_data result/valid --workers 0 --Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC
Use the command below to train from the pretrained model:
############
python train.py --train_data result/train --valid_data result/valid --workers 0 --select_data MJ-ST --batch_ratio 0.5-0.5 --Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC --saved_model None-VGG-BiLSTM-CTC.pth --num_iter 2000 --valInterval 20 --FT
The first training run completed successfully.
(5) Testing
Addition to the test script (write each prediction to a text file):
# Write the recognition result to a text file.
string_txt = img_name.replace('.jpg', '') + '_' + pred + '.txt'
print(string_txt)
with open(string_txt, 'w') as tsvfile:
    tsvfile.write(pred)
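For example, with a hypothetical image name and prediction, the naming scheme above produces:

```python
# Hypothetical inputs, to show the output-file naming only.
img_name = 'demo_1.jpg'
pred = 'AB123'
string_txt = img_name.replace('.jpg', '') + '_' + pred + '.txt'
print(string_txt)  # demo_1_AB123.txt
```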
# Test on the demo images included in the project
python demo.py --Transformation None --FeatureExtraction VGG --SequenceModeling BiLSTM --Prediction CTC --image_folder demo_image/ --saved_model ./saved_models/None-VGG-BiLSTM-CTC-Seed1111/best_accuracy.pth
The test results are good.