Blur Detection

Characteristics of blurry images: edges are soft and gradients vary little.

Traditional Methods

For more methods, see 模糊图像检测-无参考图像的清晰度评价 - 知乎 (zhihu.com).

Laplacian Variance

Works in the spatial domain: blurry images have small gradients. The Laplacian operator measures the second derivative of the image and highlights regions with rapid intensity changes. A low variance of the Laplacian response means the image contains very few edges.
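
For reference, OpenCV's default 3×3 Laplacian aperture (ksize=1) is the kernel shown below; this short sketch (the file name is a placeholder) makes explicit that the score is just a second-derivative filter followed by a variance:

import cv2
import numpy as np

# OpenCV's default (ksize=1) Laplacian aperture: a discrete second-derivative filter
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=np.float64)

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
manual = cv2.filter2D(gray.astype(np.float64), -1, kernel)
builtin = cv2.Laplacian(gray, cv2.CV_64F)

# away from the border the two agree; the blur score is simply the variance of the response
print(np.allclose(manual[1:-1, 1:-1], builtin[1:-1, 1:-1]), builtin.var())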

Implemented with the OpenCV library. The trick is setting the right threshold: set it too low and blurry images slip through undetected; set it too high and sharp images are wrongly flagged as blurry.

Drawbacks: the threshold has to be set by hand, and lighting conditions strongly affect the result. It is sensitive to color content (images made of large solid color blocks are not judged as blurry), and it cannot perceive motion blur.

import cv2

image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
value = cv2.Laplacian(gray, cv2.CV_64F).var()  # variance above the chosen threshold indicates a sharp image
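
A minimal sketch of the thresholding decision described above; the threshold of 100 and the file name are placeholders rather than values from the original, and the threshold must be tuned per scene and lighting:

import cv2

THRESHOLD = 100.0  # placeholder; tune on representative sharp/blurry samples

gray = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name
value = cv2.Laplacian(gray, cv2.CV_64F).var()
print("Blurry" if value < THRESHOLD else "Sharp", "({:.2f})".format(value))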

Fast Fourier Transform

Analyze the image in the frequency domain: blurry images have little high-frequency content.

Uses the matplotlib and numpy libraries (the snippet below only needs numpy, OpenCV and imutils).

Drawbacks: it handles colorful images well but breaks down on darker ones, and it cannot correctly judge motion-blurred photos that still contain sharp local regions.

import numpy as np
import cv2
import imutils

def detect_blur_fft(image, size=60, thresh=10, vis=False):
    # image: input grayscale image we run blur detection on
    # size: radius around the image centre whose FFT-shifted frequencies are zeroed out
    # thresh: value the mean magnitude is compared against to decide whether the image is blurry
    # vis: not used in this trimmed snippet
    (h, w) = image.shape
    (cX, cY) = (int(w / 2.0), int(h / 2.0))                  # derive the centre (x, y) coordinates
    fft = np.fft.fft2(image)                                 # compute the FFT to find the frequency transform
    fftShift = np.fft.fftshift(fft)                          # shift the zero-frequency (DC) component from the top-left corner to the centre
    fftShift[cY - size:cY + size, cX - size:cX + size] = 0   # zero out the centre of the shifted FFT (i.e. remove the low frequencies)
    fftShift = np.fft.ifftshift(fftShift)                    # inverse shift so the low frequencies return to the top-left corner
    recon = np.fft.ifft2(fftShift)                           # inverse FFT
    magnitude = 20 * np.log(np.abs(recon))                   # magnitude spectrum of the reconstruction
    mean = np.mean(magnitude)                                # mean magnitude value
    return (mean, mean <= thresh)                            # the image is considered "blurry" if the mean is below the threshold

image = cv2.imread("dataset/motion_blur2.jpg")
image = imutils.resize(image, width=500)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
(mean, blurry) = detect_blur_fft(gray)
# print(mean, blurry)
image = np.dstack([gray] * 3)
color = (0, 0, 255) if blurry else (0, 255, 0)
text = "Blurry ({:.4f})" if blurry else "Not Blurry ({:.4f})"
text = text.format(mean)
cv2.putText(image, text, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)
print("[INFO] {}".format(text))
# show the output image
# cv2.imshow("Output", image)
# cv2.waitKey(0)

Detecting blur in video

# import the necessary packages
from imutils.video import VideoStream
from pyimagesearch.blur_detector import detect_blur_fft
import argparse
import imutils
import time
import cv2
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-t", "--thresh", type=int, default=10,help="threshold for our blur detector to fire")
args = vars(ap.parse_args())

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
vs = VideoStream(src=0).start()
time.sleep(2.0)
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream and resize it
    # to have a maximum width of 500 pixels
    frame = vs.read()
    frame = imutils.resize(frame, width=500)
    # convert the frame to grayscale and detect blur in it
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    (mean, blurry) = detect_blur_fft(gray, size=60, thresh=args["thresh"], vis=False)
    color = (0, 0, 255) if blurry else (0, 255, 0)
    text = "Blurry ({:.4f})" if blurry else "Not Blurry ({:.4f})"
    text = text.format(mean)
    cv2.putText(frame, text, (10, 25), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)
    # show the output frame
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF
    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()

Reference: OpenCV Fast Fourier Transform (FFT) for blur detection in images and video streams - PyImageSearch

SVD

The image is decomposed with a singular value decomposition, and the ratio of the top few singular values to the sum of all singular values is used as the blur score: blurring suppresses fine detail, so more of the energy concentrates in the leading singular values and the ratio rises.

Drawbacks: sensitive to color, and unable to recognize motion blur.

import cv2
import numpy as np

def get_blur_degree(image_file, sv_num=10):
    img = cv2.imread(image_file, cv2.IMREAD_GRAYSCALE)
    u, s, v = np.linalg.svd(img)       # singular values sorted in descending order
    top_sv = np.sum(s[0:sv_num])       # energy in the top sv_num singular values
    total_sv = np.sum(s)               # total energy
    return top_sv / total_sv           # higher ratio -> blurrier image
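
A minimal usage sketch building on get_blur_degree above; the 0.75 threshold and the file name are placeholders for illustration only and need tuning on real data:

score = get_blur_degree("example.jpg")  # hypothetical path
print("blur degree: {:.3f}".format(score))
if score > 0.75:  # placeholder threshold
    print("likely blurry")
else:
    print("likely sharp")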

Machine Learning

SVM: binary (0/1) classification. First compute operator responses for every image in the dataset (the Laplacian variance and maximum in the code below); the SVM then fits a decision surface in this feature space that separates blurry from sharp images.

Drawbacks: motion blur still isn't recognized, and dull, dark images get classified as blurry. Compared with the traditional methods you no longer have to find a threshold yourself, but performance on low-brightness images is still poor.

Python code

from sklearn.metrics import accuracy_score
from sklearn import svm
import os
import cv2
import numpy as np
import pandas as pd
from tensorflow.keras.preprocessing import image
from sklearn.metrics import accuracy_score
from sklearn.utils import shuffle
from tqdm import tqdm

# loading train data
def get_features(path):
    input_size = (512, 512)
    images = os.listdir(path)
    features = []
    for i in tqdm(images):
        feature = []
        # gray = cv2.imread(path+img, 0)
        # laplacian_var = cv2.Laplacian(gray, cv2.CV_64F).var()
        img = image.load_img(path + i, target_size=input_size)
        # img = image.load_img(path+i)
        gray = cv2.cvtColor(np.asarray(img), cv2.COLOR_BGR2GRAY)
        laplacian = cv2.Laplacian(gray, cv2.CV_64F)
        feature.extend([laplacian.var(), np.amax(laplacian)])
        features.append(feature)
    return pd.DataFrame(features)

path_undis = '../CERTH_ImageBlurDataset/TrainingSet/Undistorted/'
path_art_blur = '../CERTH_ImageBlurDataset/TrainingSet/Artificially-Blurred/'
path_nat_blur = '../CERTH_ImageBlurDataset/TrainingSet/Naturally-Blurred/'

feature_undis = get_features(path_undis)
print('Undistorted DONE')
feature_art_blur = get_features(path_art_blur)
print('Artificially-Blurred DONE')
feature_nat_blur = get_features(path_nat_blur)
print("Naturally-Blurred DONE")
feature_art_blur.to_csv('./data/art_blur.csv', index=False)
feature_nat_blur.to_csv('./data/nat_blur.csv', index=False)
feature_undis.to_csv('./data/undis.csv', index=False)

# uncomment below code if you have pre-calculated features
# feature_art_blur = pd.read_csv('./data/art_blur.csv')
# feature_nat_blur = pd.read_csv('./data/nat_blur.csv')
# feature_undis = pd.read_csv('./data/undis.csv')

# stack the three feature tables into one training frame (pd.concat replaces the removed DataFrame.append)
images = pd.concat([feature_undis, feature_art_blur, feature_nat_blur], ignore_index=True)

x_train = np.array(images)
y_train = np.concatenate((np.zeros((feature_undis.shape[0], )),
                          np.ones((feature_art_blur.shape[0] + feature_nat_blur.shape[0], ))), axis=0)
x_train, y_train = shuffle(x_train, y_train)

# Training model
svm_model = svm.SVC(C=50, kernel='rbf')
svm_model.fit(x_train, y_train)
pred = svm_model.predict(x_train)
print('\nTraining Accuracy:', accuracy_score(y_train, pred))
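
As a rough sketch of how the trained model could be applied to a single new image, mirroring the feature extraction in get_features above (the helper name and file path are hypothetical, and the label convention follows the y_train built above: 0 = undistorted, 1 = blurred):

def predict_blur(image_path, model, input_size=(512, 512)):
    # extract the same two features used for training: Laplacian variance and maximum
    img = image.load_img(image_path, target_size=input_size)
    gray = cv2.cvtColor(np.asarray(img), cv2.COLOR_BGR2GRAY)
    laplacian = cv2.Laplacian(gray, cv2.CV_64F)
    feats = np.array([[laplacian.var(), np.amax(laplacian)]])
    return model.predict(feats)[0]

label = predict_blur("some_photo.jpg", svm_model)  # hypothetical path
print("blurred" if label == 1 else "sharp")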

Reported accuracy on the CERTH dataset is 87.57%; my own run reached 87.56%.

Code: im-vvk/Blur-Image-Detection: Classification of Blurred and Non-Blurred Images using opencv and SVM (github.com)

Reference: Mobile Image Blur Detection with Machine Learning | by Benedikt Brief | snapADDY Tech Blog — Tales from the Devs | Medium

Deep Learning

CNN: three convolutional layers.

Drawback: no real improvement over the SVM, possibly because the network has too few layers.

import numpy as np
import pandas as pd
import os
import pickle
from tensorflow.keras.preprocessing import image
import tensorflow as tf
from tensorflow.keras import utils
from sklearn.metrics import accuracy_score

input_size = (300, 300)

with open('./X_train_300.pkl', 'rb') as picklefile:
    X_train = pickle.load(picklefile)
with open('./y_train_300.pkl', 'rb') as picklefile:
    y_train = pickle.load(picklefile)
with open('./X_test_300.pkl', 'rb') as picklefile:
    X_test = pickle.load(picklefile)
with open('./y_test_300.pkl', 'rb') as picklefile:
    y_test = pickle.load(picklefile)

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(input_size[0], input_size[1], 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (5, 5), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (5, 5), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    # tf.keras.layers.Dense(256, activation='relu'),
    # tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(2, activation='softmax')
])

model.summary()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

traindata = np.stack(X_train)
testdata = np.stack(X_test)
trainlabel = utils.to_categorical(y_train)
testlabel = utils.to_categorical(y_test)

# epochs = 10
# for i in range(epochs):
model.fit(traindata, trainlabel, batch_size=16, epochs=10, validation_data=(testdata, testlabel), verbose=1)
print("Model training complete...")
(loss, accuracy) = model.evaluate(testdata, testlabel, batch_size = 32, verbose = 1)
model.save(r'./model/model300.h5')
print("accuracy: {:.2f}%".format(accuracy * 100))
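
A minimal inference sketch, assuming the model file saved above, that the pickled training arrays were fed to the network as unscaled 300×300 RGB images (if they were normalized, scale the input the same way), and that class index 1 corresponds to "blurred" under the to_categorical labelling; the file name is a placeholder:

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

model = tf.keras.models.load_model('./model/model300.h5')

img = image.load_img("some_photo.jpg", target_size=(300, 300))  # hypothetical path
arr = np.expand_dims(image.img_to_array(img), axis=0)            # shape (1, 300, 300, 3)
probs = model.predict(arr)[0]
# assumed label convention: index 0 = sharp, index 1 = blurred
print("blurred" if np.argmax(probs) == 1 else "sharp")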

Reported accuracy on the CERTH dataset is 87.3%; my own run reached 73.51% on the training set.

Code: akhileshkb/CERTH_ImageBlurDataset: model for CERTH image blur dataset challenge (github.com)

