NIQE: A No-Reference Image Quality Metric — the Natural Image Quality Evaluator


  • NIQE (Natural Image Quality Evaluator)
    • Why NIQE was introduced
    • Background of NIQE
    • The NIQE algorithm step by step
    • Code implementation

NIQE (Natural Image Quality Evaluator)

Why NIQE was introduced

       After an image reconstruction task, image quality is usually evaluated with metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In most tasks these metrics are sufficient to show whether a reconstruction method is effective. In super-resolution, however, especially since generative adversarial networks were introduced, it has been observed that a higher PSNR or SSIM does not necessarily mean better reconstruction quality: an image with high PSNR or SSIM may still contain textures and details that do not match human visual perception. For this reason researchers turned to NIQE as a more effective measure of reconstruction quality.

Background of NIQE

DOI of the original NIQE paper: https://doi.org/10.1109/LSP.2012.2227726
Baidu Netdisk: https://pan.baidu.com/s/1pWk5Zneh5YxpZxlLjFQ2ZA
Access code: 223a

Abbreviations used in the paper:

Abbreviation | Full name
IQA | Image Quality Assessment
NSS | Natural Scene Statistics
FR | Full Reference
NR | No Reference
OA | Opinion Aware
OU | Opinion Unaware
DA | Distortion Aware
DU | Distortion Unaware
MVG | Multivariate Gaussian (model)
GGD | Generalized Gaussian Distribution

Explanation of some of the terms above:

  • Full reference (FR): quality is judged by comparing the distorted image against a natural image with the same content, e.g. PSNR and SSIM.
  • No reference (NR): quality is judged from the distorted image alone.

  • Opinion aware (OA): a no-reference IQA model trained on a database of distorted images that have been rated by humans, together with the associated subjective opinion scores.
  • Opinion unaware (OU): because distorted-image databases with human judgment scores are difficult to obtain in practice, models that do not require training on human-rated distorted images are called opinion unaware. These are the models of greater practical interest.

  • Distortion aware (DA): some algorithms that follow the OU idea may or may not be exposed to distorted images while the IQA model is built or trained. A model trained on (and tuned to) specific distortions is called distortion aware.
  • Distortion unaware (DU): a distortion-unaware model relies only on exposure to natural (pristine) source images, or on models of such images, to guide the quality-assessment process.

        NIQE neither requires prior exposure to distorted images nor training on any human opinion scores: it is a completely new NR OU-DU IQA model.

The NIQE algorithm step by step

       NIQE is based on a set of "quality-aware" features that are fit with an MVG model. The quality-aware features come from a simple but highly regular NSS model. The NIQE index of a test image is then expressed as the distance between the MVG model fit to the NSS features extracted from the test image and the MVG model fit to the quality-aware features extracted from a corpus of natural images. The whole procedure consists of five steps:

  1. Spatial-domain NSS feature extraction
     A classical spatial-domain NSS model computes the mean-subtracted contrast-normalized (MSCN) coefficients:
     $$\hat{I}(i,j)=\frac{I(i,j)-\mu(i,j)}{\sigma(i,j)+1}\tag{1}$$
     where $i\in\{1,2,\dots,M\}$ and $j\in\{1,2,\dots,N\}$ are spatial indices, $M$ and $N$ are the image dimensions, and
     $$\mu(i,j)=\sum_{k=-K}^{K}\sum_{l=-L}^{L}w_{k,l}\,I(i+k,j+l)\tag{2}$$
     $$\sigma(i,j)=\sqrt{\sum_{k=-K}^{K}\sum_{l=-L}^{L}w_{k,l}\,\big[I(i+k,j+l)-\mu(i,j)\big]^{2}}\tag{3}$$
     In (2) and (3), $\{w_{k,l}\mid k=-K,\dots,K;\ l=-L,\dots,L\}$ is a 2D circularly symmetric Gaussian weighting function, sampled out to 3 standard deviations ($K=L=3$) and rescaled to unit volume.

  2. Patch selection
           Once the coefficients in (1) have been computed, the image is partitioned into $P\times P$ patches and specific NSS features are computed from the coefficients of each patch. The local deviation field (3) was often ignored in earlier NSS-based image analysis, yet it carries rich information about local image structure and can be used to quantify local sharpness. (Aesthetically, a sharper image tends to look better, while smoothing and blur represent a potential loss of visual information.) Label the $P\times P$ patches $b=1,2,\dots,B$ and compute the average local deviation of each patch $b$ directly as
     $$\delta(b)=\sum_{(i,j)\in\text{patch }b}\sigma(i,j)\tag{4}$$
     where $\delta$ measures local activity (sharpness).
           After the sharpness of each patch has been computed, only patches whose $\delta$ exceeds a threshold $T$ are selected (i.e. $\delta>T$). The threshold $T$ is obtained as a fraction $p$ of the peak patch sharpness; the original paper uses $p=0.75$, and according to the authors the results are stable for $p$ in a small range $[0.6,0.9]$.

  3. Patch characterization
           Earlier NSS-based image quality research has shown that a generalized Gaussian distribution (GGD) effectively captures the statistics of the coefficients (1) of both natural and distorted images.
     The zero-mean GGD is
     $$f(x;\alpha,\beta)=\frac{\alpha}{2\beta\,\Gamma(1/\alpha)}\exp\!\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right)\tag{5}$$
     where the gamma function $\Gamma(\cdot)$ is
     $$\Gamma(a)=\int_{0}^{\infty}t^{a-1}e^{-t}\,dt\quad(a>0)\tag{6}$$
           The GGD parameters $(\alpha,\beta)$ can be estimated with a moment-matching approach.
           In addition, the products of pairs of neighboring coefficients of (1) are well modeled by a zero-mode asymmetric generalized Gaussian distribution (AGGD):
     $$f(x;\gamma,\beta_l,\beta_r)=\begin{cases}\dfrac{\gamma}{(\beta_l+\beta_r)\Gamma(1/\gamma)}\exp\!\left(-\left(\dfrac{-x}{\beta_l}\right)^{\gamma}\right)&x<0\\[1ex]\dfrac{\gamma}{(\beta_l+\beta_r)\Gamma(1/\gamma)}\exp\!\left(-\left(\dfrac{x}{\beta_r}\right)^{\gamma}\right)&x\ge 0\end{cases}\tag{7}$$
     The AGGD parameters $(\gamma,\beta_l,\beta_r)$ can be estimated with the method described in "Multiscale skewed heavy tailed model for texture analysis". The mean of the AGGD is also useful:
     $$\eta=(\beta_r-\beta_l)\frac{\Gamma(2/\gamma)}{\Gamma(1/\gamma)}\tag{8}$$
     Fitting (5) to the MSCN coefficients of a patch yields 2 parameters, and fitting the AGGD (7)-(8) to the products of neighboring coefficients along four orientations yields 4 parameters each, giving 18 features per patch at each scale.

  4. MVG model
           A simple model of the NSS features is obtained by fitting the features of natural image patches with a multivariate Gaussian (MVG) density:
     $$f_X(x_1,\dots,x_k)=\frac{1}{(2\pi)^{k/2}|\Sigma|^{1/2}}\exp\!\left(-\frac{1}{2}(x-\nu)^{T}\Sigma^{-1}(x-\nu)\right)\tag{9}$$
     where $(x_1,\dots,x_k)$ are the NSS features obtained from (5)-(8), and $\nu$ and $\Sigma$ are the mean vector and covariance matrix of the MVG model, estimated by standard maximum likelihood.

  5. The NIQE index
           The quality of a distorted image is expressed as the distance between the MVG model of the natural NSS features and the MVG model fit to the features extracted from the distorted image:
     $$D(\nu_1,\nu_2,\Sigma_1,\Sigma_2)=\sqrt{(\nu_1-\nu_2)^{T}\left(\frac{\Sigma_1+\Sigma_2}{2}\right)^{-1}(\nu_1-\nu_2)}\tag{10}$$
     where $\nu_1,\nu_2$ and $\Sigma_1,\Sigma_2$ are the mean vectors and covariance matrices of the natural MVG model and of the distorted image's MVG model, respectively; a lower distance indicates statistics closer to those of natural images. (A minimal end-to-end sketch follows this list.)
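The MATLAB sketch below illustrates how steps 1, 2 and 5 fit together. It is a minimal sketch for orientation only: the input file name and the model statistics at the end are placeholders (assumptions), not part of the reference implementation given in the next section.

% --- Step 1: MSCN coefficients, eqs. (1)-(3) ---
im     = double(rgb2gray(imread('example.png')));   % hypothetical RGB input image
window = fspecial('gaussian',7,7/6);                % 2D circularly symmetric Gaussian
window = window/sum(window(:));                     % rescale to unit volume
mu     = imfilter(im,window,'replicate');           % local mean, eq. (2)
sigma  = sqrt(abs(imfilter(im.^2,window,'replicate') - mu.^2));  % local deviation, eq. (3)
mscn   = (im - mu)./(sigma + 1);                    % MSCN coefficients, eq. (1)

% --- Step 2: sharpness-based patch selection, eq. (4) ---
P         = 96;                                     % patch size used by the authors
sharpness = blkproc(sigma,[P P],@(x) mean2(x));     % average local deviation per patch
T         = 0.75*max(sharpness(:));                 % threshold with p = 0.75
% Only patches with sharpness > T contribute features; steps 3-4 fit the
% GGD/AGGD features of mscn on those patches and then an MVG model (see the code below).

% --- Step 5: NIQE index, eq. (10) ---
% mu_pris/cov_pris describe the pristine corpus, mu_dist/cov_dist the test image.
% Placeholder values are used here only to show the distance computation.
mu_pris  = zeros(1,36);    cov_pris = eye(36);      % placeholder natural model (18 features x 2 scales)
mu_dist  = 0.1*ones(1,36); cov_dist = eye(36);      % placeholder distorted model
d    = mu_pris - mu_dist;
niqe = sqrt(d*pinv((cov_pris + cov_dist)/2)*d');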

Code implementation

The algorithm above is implemented in MATLAB; there are seven files:

  1. estimateaggdparam.m
function [alpha betal betar] = estimateaggdparam(vec)
% Moment-matching estimate of the AGGD parameters (shape alpha and the
% left/right scale parameters betal, betar) of the samples in vec.
gam   = 0.2:0.001:10;                 % candidate shape parameters
r_gam = ((gamma(2./gam)).^2)./(gamma(1./gam).*gamma(3./gam));
leftstd  = sqrt(mean((vec(vec<0)).^2));
rightstd = sqrt(mean((vec(vec>0)).^2));
gammahat = leftstd/rightstd;
rhat     = (mean(abs(vec)))^2/mean((vec).^2);
rhatnorm = (rhat*(gammahat^3 +1)*(gammahat+1))/((gammahat^2 +1)^2);
% Pick the candidate shape whose theoretical ratio best matches rhatnorm
[min_difference, array_position] = min((r_gam - rhatnorm).^2);
alpha = gam(array_position);
betal = leftstd *sqrt(gamma(1/alpha)/gamma(3/alpha));
betar = rightstd*sqrt(gamma(1/alpha)/gamma(3/alpha));
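A quick sanity check (synthetic data, purely illustrative): for zero-mean Gaussian samples the estimated shape parameter should come out close to 2, with betal and betar nearly equal.

vec = randn(1e5,1);                               % synthetic symmetric samples
[alpha, betal, betar] = estimateaggdparam(vec);
fprintf('alpha = %.2f, betal = %.2f, betar = %.2f\n', alpha, betal, betar);
% Expected: alpha near 2 and betal approximately equal to betar.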
  2. estimatemodelparam.m
function  [mu_prisparam cov_prisparam]  = estimatemodelparam(folderpath,...
    blocksizerow,blocksizecol,blockrowoverlap,blockcoloverlap,sh_th)
% Input
% folderpath      - Folder containing the pristine images
% blocksizerow    - Height of the blocks in to which image is divided
% blocksizecol    - Width of the blocks in to which image is divided
% blockrowoverlap - Amount of vertical overlap between blocks
% blockcoloverlap - Amount of horizontal overlap between blocks
% sh_th           - The sharpness threshold level
% Output
% mu_prisparam  - mean of multivariate Gaussian model
% cov_prisparam - covariance of multivariate Gaussian model
% Example call
% [mu_prisparam cov_prisparam] = estimatemodelparam('pristine',96,96,0,0,0.75);
%----------------------------------------------------------------
% Find the names of images in the folder
current = pwd;
cd(sprintf('%s',folderpath))
names        = ls;
names        = names(3:end,:);
cd(current)
% ---------------------------------------------------------------
% Number of features
% 18 features at each scale
featnum      = 18;
% ---------------------------------------------------------------
% Make the directory for storing the features
mkdir(sprintf('local_risquee_prisfeatures'))
% ---------------------------------------------------------------
% Compute pristine image features
for itr = 1:size(names,1)
itr
im               = imread(sprintf('%s\\%s',folderpath,names(itr,:)));
if(size(im,3)==3)
im               = rgb2gray(im);
end
im               = double(im);
[row col]        = size(im);
block_rownum     = floor(row/blocksizerow);
block_colnum     = floor(col/blocksizecol);
im               = im(1:block_rownum*blocksizerow, ...
                      1:block_colnum*blocksizecol);
window           = fspecial('gaussian',7,7/6);
window           = window/sum(sum(window));
scalenum         = 2;
warning('off')
feat = [];
for itr_scale = 1:scalenum
mu        = imfilter(im,window,'replicate');
mu_sq     = mu.*mu;
sigma     = sqrt(abs(imfilter(im.*im,window,'replicate') - mu_sq));
structdis = (im-mu)./(sigma+1);
feat_scale       = blkproc(structdis,[blocksizerow/itr_scale blocksizecol/itr_scale], ...
    [blockrowoverlap/itr_scale blockcoloverlap/itr_scale], ...
    @computefeature);
feat_scale       = reshape(feat_scale,[featnum ...
    size(feat_scale,1)*size(feat_scale,2)/featnum]);
feat_scale       = feat_scale';
if(itr_scale == 1)
sharpness = blkproc(sigma,[blocksizerow blocksizecol], ...
    [blockrowoverlap blockcoloverlap],@computemean);
sharpness = sharpness(:);
end
feat = [feat feat_scale];
im   = imresize(im,0.5);
end
save(sprintf('local_risquee_prisfeatures\\prisfeatures_local%d.mat',...
    itr),'feat','sharpness');
end
%----------------------------------------------
% Load pristine image features
prisparam    = [];
current      = pwd;
cd(sprintf('%s','local_risquee_prisfeatures'))
names        = ls;
names        = names(3:end,:);
cd(current)
for itr = 1:size(names,1)
% Load the features and keep only those of sufficiently sharp patches
load(sprintf('local_risquee_prisfeatures\\%s',strtrim(names(itr,:))));
IX               = find(sharpness(:) > sh_th*max(sharpness(:)));
feat             = feat(IX,:);
prisparam        = [prisparam; feat];
end
%----------------------------------------------
% Compute model parameters
mu_prisparam       = nanmean(prisparam);
cov_prisparam      = nancov(prisparam);
%----------------------------------------------
% Save features in the mat file
save('modelparameters_new.mat','mu_prisparam','cov_prisparam');
%----------------------------------------------
  3. convert_shave_image.m
function shaved = convert_shave_image(input_image,shave_width)
% Converting to y channel only
image_ychannel = rgb2ycbcr(input_image);
image_ychannel = image_ychannel(:,:,1);
% Shaving image
shaved = image_ychannel(1+shave_width:end-shave_width,...
    1+shave_width:end-shave_width);
end
  4. computemean.m
function val = computemean(patch)
val = mean2(patch);
  5. computequality.m
function  quality = computequality(im,blocksizerow,blocksizecol,...
    blockrowoverlap,blockcoloverlap,mu_prisparam,cov_prisparam)
% Input
% im              - Image whose quality needs to be computed
% blocksizerow    - Height of the blocks in to which image is divided
% blocksizecol    - Width of the blocks in to which image is divided
% blockrowoverlap - Amount of vertical overlap between blocks
% blockcoloverlap - Amount of horizontal overlap between blocks
% mu_prisparam    - mean of multivariate Gaussian model
% cov_prisparam   - covariance of multivariate Gaussian model
% For good performance, it is advisable to make the multivariate Gaussian model
% using same size patches as the distorted image is divided in to
% Output
% quality      - Quality of the input distorted image
% Example call
% quality = computequality(im,96,96,0,0,mu_prisparam,cov_prisparam)
% ---------------------------------------------------------------
% Number of features
% 18 features at each scale
featnum      = 18;
%----------------------------------------------------------------
% Compute features
if(size(im,3)==3)
im               = rgb2gray(im);
end
im               = double(im);
[row col]        = size(im);
block_rownum     = floor(row/blocksizerow);
block_colnum     = floor(col/blocksizecol);
im               = im(1:block_rownum*blocksizerow,1:block_colnum*blocksizecol);
[row col]        = size(im);
block_rownum     = floor(row/blocksizerow);
block_colnum     = floor(col/blocksizecol);
im               = im(1:block_rownum*blocksizerow, ...
                      1:block_colnum*blocksizecol);
window           = fspecial('gaussian',7,7/6);
window           = window/sum(sum(window));
scalenum         = 2;
warning('off')
feat             = [];
for itr_scale = 1:scalenum
mu        = imfilter(im,window,'replicate');
mu_sq     = mu.*mu;
sigma     = sqrt(abs(imfilter(im.*im,window,'replicate') - mu_sq));
structdis = (im-mu)./(sigma+1);
feat_scale = blkproc(structdis,[blocksizerow/itr_scale blocksizecol/itr_scale], ...
    [blockrowoverlap/itr_scale blockcoloverlap/itr_scale], ...
    @computefeature);
feat_scale = reshape(feat_scale,[featnum ...
    size(feat_scale,1)*size(feat_scale,2)/featnum]);
feat_scale = feat_scale';
if(itr_scale == 1)
sharpness = blkproc(sigma,[blocksizerow blocksizecol], ...
    [blockrowoverlap blockcoloverlap],@computemean);
sharpness = sharpness(:);
end
feat = [feat feat_scale];
im   = imresize(im,0.5);
end
% Fit a MVG model to distorted patch features
distparam     = feat;
mu_distparam  = nanmean(distparam);
cov_distparam = nancov(distparam);
% Compute quality
invcov_param  = pinv((cov_prisparam+cov_distparam)/2);
quality = sqrt((mu_prisparam-mu_distparam)* ...
    invcov_param*(mu_prisparam-mu_distparam)');
  6. computefeature.m
function feat = computefeature(structdis)
% Input  - MSCN coefficients
% Output - Compute the 18 dimensional feature vector
feat = [];
% Fit the MSCN coefficients themselves: 2 features (shape, average of left/right scales)
[alpha betal betar] = estimateaggdparam(structdis(:));
feat = [feat;alpha;(betal+betar)/2];
% AGGD fits of the products of neighboring coefficients along 4 orientations:
% 4 features each (shape, mean, left scale, right scale)
shifts = [ 0 1;1 0 ;1 1;1 -1];
for itr_shift = 1:4
shifted_structdis = circshift(structdis,shifts(itr_shift,:));
pair = structdis(:).*shifted_structdis(:);
[alpha betal betar] = estimateaggdparam(pair);
meanparam = (betar-betal)*(gamma(2/alpha)/gamma(1/alpha));
feat = [feat;alpha;meanparam;betal;betar];
end
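A quick check on synthetic data (purely illustrative): each patch should yield a feature vector of length 2 + 4 x 4 = 18, matching featnum = 18 used in estimatemodelparam.m and computequality.m.

patch = randn(96);            % stand-in for a 96x96 block of MSCN coefficients
f = computefeature(patch);
disp(numel(f))                % prints 18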
  7. calc_NIQE.m
function NIQE = calc_scores(input_image_path,shave_width)
%% Loading model
load modelparameters.mat
blocksizerow    = 96;
blocksizecol    = 96;
blockrowoverlap = 0;
blockcoloverlap = 0;
%% Calculating scores
NIQE = [];
input_image = convert_shave_image(imread(input_image_path),shave_width);
% Calculating scores
NIQE = computequality(input_image,blocksizerow,blocksizecol,...
    blockrowoverlap,blockcoloverlap,mu_prisparam,cov_prisparam);
end
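A typical call might look as follows; the image path and the shave width are hypothetical. Since MATLAB dispatches on the file name, saving the function above as calc_NIQE.m means it is invoked as calc_NIQE even though its function line says calc_scores. The model file loaded inside (modelparameters.mat, or a model you build yourself with estimatemodelparam, which saves modelparameters_new.mat) must be on the MATLAB path under the name the load statement expects, and the input image is assumed to be RGB because convert_shave_image calls rgb2ycbcr.

niqe_score = calc_NIQE('results/sr_output.png', 4);   % hypothetical path and border width
fprintf('NIQE = %.3f (lower is better)\n', niqe_score);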

The model parameters for the natural-image corpus are stored in modelparameters.mat, which can be obtained here:

Baidu Netdisk: https://pan.baidu.com/s/1XVNJCQdWFkbQBKQAq89Y-w
Access code: 4x3o

