Xiao Zhenzhong's blog: http://39.108.216.13:8090/display/~xiaozhenzhong/Image-Matting+and+Background+Blur
Closed-form image matting explained: http://blog.csdn.net/edesignerj/article/details/53349663 (this post uses an input image and a scribble image; the scribble image can be made in Photoshop with brush hardness set to 100)
Paper: A. Levin, D. Lischinski, and Y. Weiss. "A Closed-Form Solution to Natural Image Matting." IEEE TPAMI.
A. Levin's homepage: http://webee.technion.ac.il/people/anat.levin/
Original code: MATLAB.
Python version: https://github.com/MarcoForte/closed-form-matting (Python 3.5+, scipy, numpy, matplotlib, sklearn, opencv-python). Enter the folder and run: python closed_form_matting.py (7.5 s for this image).
C++ version: https://github.com/Rnandani/Natural-image-matting
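At its core, the closed-form method reduces to one sparse linear solve: with matting Laplacian L, a diagonal matrix D marking scribbled pixels, and scribble values s, alpha solves (L + λD)α = λDs. A minimal sketch on a 1D toy problem (a simple chain Laplacian stands in for the paper's matting Laplacian; the names and the λ value are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_alpha(L, scribble_mask, scribble_values, lam=100.0):
    """Solve (L + lam*D) alpha = lam * D * s, the closed-form matting system."""
    D = sp.diags(scribble_mask.astype(float))   # 1 on scribbled pixels
    b = lam * scribble_mask * scribble_values   # constrained alpha values
    return spsolve((L + lam * D).tocsc(), b)

# Toy 1D "image": a chain Laplacian stands in for the matting Laplacian.
n = 9
main = 2.0 * np.ones(n); main[0] = main[-1] = 1.0
L = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1])

mask = np.zeros(n); mask[0] = mask[-1] = 1.0    # scribbles at both ends
vals = np.zeros(n); vals[-1] = 1.0              # background = 0, foreground = 1

alpha = solve_alpha(L, mask, vals)
print(np.round(alpha, 3))   # smoothly interpolates from 0 to 1
```

Away from the scribbles the solution is harmonic under the chain Laplacian, so it ramps linearly between the two constraints, which is exactly the interpolation behavior the paper generalizes to images.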
Background blur algorithm, basics of blurring: http://blog.csdn.net/edesignerj/article/details/53349663
Classic Bayesian matting explained: http://blog.csdn.net/baimafujinji/article/details/72863106?locationNum=2&fps=1 (MATLAB)
This post modifies the code shipped with Michael Rubinstein's original, dropping the GUI and keeping only the part that computes the alpha channel from the source image and trimap.
Original paper: A Bayesian Approach to Digital Matting. CVPR, 2001. Homepage: http://grail.cs.washington.edu/projects/digital-matting/image-matting/
Original code: below are results from running Michael Rubinstein's source code. It still cannot process the troll image (not that it fails outright: the image is so large that processing would take roughly several hundred hours).
Python version: https://github.com/MarcoForte/bayesian-matting (Python 3.5+) (Python port of the classic Bayesian algorithm). Works on the gandolf image, but switching to the troll image seems to hang in an infinite loop (probably not actually a loop, just a very long runtime).
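Once Bayesian matting has estimates of the foreground color F and background color B for a pixel, its alpha step is just the projection of the observed color C onto the F-B color line. A sketch of only that step (the full algorithm alternates it with MAP estimation of F and B from Gaussian color clusters; the function name is illustrative):

```python
import numpy as np

def alpha_from_fb(C, F, B, eps=1e-8):
    """Project pixel color C onto the line between foreground F and background B."""
    C, F, B = (np.asarray(x, dtype=float) for x in (C, F, B))
    alpha = np.dot(C - B, F - B) / (np.dot(F - B, F - B) + eps)
    return float(np.clip(alpha, 0.0, 1.0))

F = [0.9, 0.1, 0.1]   # reddish foreground sample
B = [0.1, 0.1, 0.9]   # bluish background sample
C = [0.5, 0.1, 0.5]   # observed mixed pixel
print(round(alpha_from_fb(C, F, B), 4))   # 0.5: C sits halfway between B and F
```

This also explains the runtime behavior noted above: the cost is dominated not by this projection but by re-fitting the local Gaussian clusters for every unknown pixel, which scales badly on a huge image like troll.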
Trent's column: http://blog.csdn.net/Trent1985
Photoshop tutorial for background blur: http://www.ps-xxw.cn/tupianchuli/5930.html
Google AR project Tango: http://blog.csdn.net/a369414641/article/details/53437674
Simple background blur (circular, horizontal, vertical) with Java code: http://blog.csdn.net/a369414641/article/details/53437674
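Once a matte exists, background blur itself is plain compositing: blur the whole frame, then blend the sharp original back in where alpha is high. A minimal numpy/scipy sketch (not the linked Java code; names and the sigma value are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_background(img, alpha, sigma=5.0):
    """Sharp foreground composited over a blurred copy of the frame.
    img: HxWx3 float array; alpha: HxW matte in [0, 1]."""
    blurred = np.stack([gaussian_filter(img[..., c], sigma) for c in range(3)],
                       axis=-1)
    a = alpha[..., None]                    # broadcast the matte over channels
    return a * img + (1.0 - a) * blurred

# Sanity check: with alpha == 1 everywhere, the output is the untouched input.
img = np.random.rand(32, 32, 3)
out = blur_background(img, np.ones((32, 32)))
print(np.allclose(out, img))   # True
```

Swapping the Gaussian for a directional or radial kernel gives the horizontal/vertical/circular variants the linked post describes.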
Code:
KNN matting: https://github.com/dingzeyuli/knn-matting (Linux, MATLAB) (CVPR 2012)
On Linux, run install.sh directly to download the dependencies, then run run_demo.m. Test image GT04.png (800*563): time < 5 s (MATLAB 2016b). (Running test images such as 0103.png averages about 2.4 s.)
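KNN matting replaces closed-form matting's local-window affinity with a K-nearest-neighbour graph over per-pixel (color, position) features. A rough sketch of building such an affinity with sklearn (the repo's exact feature weighting and the final Laplacian solve are omitted; parameters are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from sklearn.neighbors import NearestNeighbors

def knn_affinity(img, k=10, xy_weight=1.0):
    """Sparse symmetric KNN affinity over [R, G, B, x, y] features."""
    h, w, _ = img.shape
    y, x = np.mgrid[0:h, 0:w]
    feat = np.concatenate(
        [img.reshape(-1, 3),
         xy_weight * np.stack([x.ravel() / w, y.ravel() / h], axis=1)], axis=1)
    nn = NearestNeighbors(n_neighbors=k).fit(feat)
    dist, idx = nn.kneighbors(feat)          # k neighbours per pixel
    n = h * w
    rows = np.repeat(np.arange(n), k)
    vals = 1.0 - dist.ravel() / dist.max()   # turn distances into similarities
    A = sp.csr_matrix((vals, (rows, idx.ravel())), shape=(n, n))
    return (A + A.T) / 2                     # symmetrize the graph

A = knn_affinity(np.random.rand(8, 8, 3), k=4)
print(A.shape)   # (64, 64)
```

Because neighbours can be far apart in image space, this affinity propagates alpha across disconnected regions of similar color, which is the paper's main advantage over window-based Laplacians.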
Paper: Shared Sampling for Real-Time Alpha Matting (2010) (MATLAB)
Original code: http://inf.ufrgs.br/~eslgastal/SharedMatting/ (CUDA 3.2 + Linux 64-bit + GPU capability > 1.0 + Qt 4 + Boost 1.4)
Run the prebuilt executable on Linux; MATLAB is used to refine the result (the author only ships an executable, which apparently cannot be modified).
Enter the folder and run: ./SharedMatting -i GT04.png -t GT04_trimap.png -g GT04_gt.png -b moon.jpg (real time; the optimization step takes almost 9 seconds)
Or run: ./SharedMatting and select the input image and trimap manually.
Run: time ./SharedMatting -i GT04.png -t GT04_trimap.png -a GT04_ALPHA.png ----real: 0.174s
Modified C++ version of the source: https://github.com/np-csu/AlphaMatting (C++ + OpenCV port of the original) ----- could not get it to run; some files seem to be missing.
Results of the author's executable: (before optimization) (after optimization). The optimization uses the getLaplacian.m function from the closed-form matting source.
Alpha matting on macOS: https://github.com/volvet/AlphaMatting (macOS, C++) (not tried)
(Blog: a real-time matting algorithm: http://blog.csdn.net/volvet/article/details/51713766?locationNum=1&fps=1)
C++ code runtime: about 1.9 s for a 640*480 test image (environment: CLion + Linux)
Global matting: https://github.com/atilimcetin/global-matting (C++, Windows)
Paper: He, Kaiming, et al. "A global sampling method for alpha matting." CVPR 2011, pages 2049-2056.
Build a Visual Studio 2015 project with OpenCV 3.1 and download the guided filter. Debug build: 1501 s; Release build: about 700 ms per 640*480 portrait test image.
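The guided filter used here to refine the matte can be sketched in a few lines of numpy following He et al.'s grayscale formulation (this is an illustrative reimplementation, not the code the repo downloads):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-4):
    """Edge-preserving smoothing of p, guided by image I (both HxW floats)."""
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size)   # box filter over the window
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)                 # per-window linear coefficients
    b = mp - a * mI
    return mean(a) * I + mean(b)               # q = a*I + b, averaged

# On a constant guide, the filter degenerates to plain box smoothing of p.
I = np.ones((32, 32))
p = np.random.rand(32, 32)
q = guided_filter(I, p)
print(np.allclose(q, uniform_filter(uniform_filter(p, 17), 17)))   # True
```

The local linear model q = a*I + b is what lets the filtered matte snap to edges in the guide image, which is why global matting uses it as a cheap final refinement instead of a full Laplacian solve.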
Deep Image Matting: https://github.com/Joker316701882/Deep-Image-Matting (Python)
(TensorFlow implementation of the paper 'Deep Image Matting') Paper: Ning Xu, Brian Price, Scott Cohen, Thomas Huang. Deep Image Matting. 2017.
gSLICr: real-time super-pixel segmentation: https://github.com/carlren/gSLICr (C++; Ubuntu 14.04; Win8, Visual Studio) (2015)
The camera cannot be opened (only the Astra Pro can be opened as a plain external webcam; the Astra can only be opened with the OpenNI driver).
Robust matting: https://github.com/wangchuan/RobustMatting (OpenCV 3.2, Eigen, VS2015) (2017)
Download the source, create a project, add the required Eigen libraries to the resource files, and build the .exe in Release mode. Run: Robust_Matting.exe GT04-image.png GT04-trimap.png troll_alpha.png; about 58 s per image (test images such as 0103.png average about 3 s).
Reference: J. Wang and M. Cohen. Optimized color sampling for robust matting. CVPR, 2007.
Poisson matting: https://github.com/MarcoForte/poisson-matting (Python 3.5 or 2.7, Windows)
J. Sun, J. Jia, C.-K. Tang, H.-Y. Shum. Poisson matting. ACM Trans. Graph. 23(3), 315-321, August 2004.
Install the required libraries: scipy, numpy, matplotlib, opencv, numba, pillow. Run: python poisson_matting.py; about 0.58 s per image.
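Poisson matting solves Δα = div(∇I/(F−B)) inside the unknown band, with α clamped to 0/1 in the known regions. A toy sketch assuming F−B ≡ 1, so the right-hand side reduces to the discrete Laplacian of the gray image, solved by Jacobi iteration (illustrative only, not the repo's implementation):

```python
import numpy as np

def poisson_matte(gray, known, alpha0, n_iter=3000):
    """Jacobi-solve lap(alpha) = lap(gray), with alpha fixed where `known`.
    Corresponds to Poisson matting with F - B == 1 on a toy grayscale image."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    alpha = alpha0.copy()
    for _ in range(n_iter):
        nb = (np.roll(alpha, 1, 0) + np.roll(alpha, -1, 0) +
              np.roll(alpha, 1, 1) + np.roll(alpha, -1, 1))
        alpha = np.where(known, alpha0, (nb - lap) / 4.0)
    return np.clip(alpha, 0.0, 1.0)

# Toy scene: horizontal gray ramp; left column is known background (alpha=0),
# right column known foreground (alpha=1). The solver recovers the ramp.
gray = np.tile(np.linspace(0.0, 1.0, 12), (12, 1))
known = np.zeros(gray.shape, dtype=bool)
known[:, 0] = known[:, -1] = True
alpha0 = np.zeros_like(gray)
alpha0[:, -1] = 1.0
alpha = poisson_matte(gray, known, alpha0)
print(np.abs(alpha - gray).max() < 1e-2)   # True
```

The real method estimates F−B by smoothing colors sampled from the trimap boundary; the global structure (a Poisson solve with Dirichlet constraints from the trimap) is the same.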
Mishima matting: https://github.com/MarcoForte/mishima-matting (Python 3.5) (2017)
Dependencies: scipy, numpy, matplotlib. Run: python mishima_matting.py. Runtime for an image: 82.47 s (no numba acceleration).
Automatic trimap generation: auto-portrait-matting: https://github.com/aromazyl/auto-portrait-matting (automatic trimap generation via HOG + SVM + GrabCut, Linux)
Download the source and build. Error: test.cc:8:25: fatal error: gtest/gtest.h: No such file or directory. The header is indeed missing: apt-cache search gtest ----> install libgtest-dev. Run make again -----> error: cannot find -lopencv_imgcodecs -lopencv_videoio -lopencv_shape -lgtest. Fixing gtest:
sudo apt-get install cmake libgtest-dev
cd /usr/src/gtest
sudo cmake CMakeLists.txt
sudo make
copy or symlink libgtest.a and libgtest_main.a to your /usr/lib folder
sudo cp *.a /usr/lib
Or create a symlink: https://stackoverflow.com/questions/21201668/eclpse-cdt-gtest-setup-errorcannot-find-lgtest
make now succeeds. Running the executable pops up a window, but it is unclear how to use it.
Clothing recognition: https://github.com/LangYH/ClothingRecognition
Qt build error: the current version is 4.8 and needs to be upgraded to 5.4 (the project calls files that only ship with the 5.4/5.6 libraries).
GrabCut with a 4-channel RGBD input: https://github.com/Morde-kaiser/GrabCut-RGBD (improved OpenCV GrabCut, Windows)
An improvement on OpenCV's GrabCut that fuses depth information into the segmentation. The input is a 4-channel matrix whose fourth channel is the depth map. GrabCut explained: http://blog.csdn.net/zouxy09/article/details/8534954
ID-photo conversion module (portrait-master) (GrabCut + matting algorithms): https://github.com/EthanLauAL/portrait (Windows, VS C++; Linux) (2014)
Running on Linux, the OpenCV library is not found: main_camera.cc:(.text+0x19f): undefined reference to 'cv::VideoCapture::VideoCapture(int)'
orbbec@orbbec:/usr/lib/pkgconfig$ pkg-config --cflags opencv-2.4.13
-I/usr/local/include/opencv -I/usr/local/include
orbbec@orbbec:/usr/lib/pkgconfig$ pkg-config --cflags opencv-3.3.1.pc
-I/usr/local/OpenCV3/include/opencv -I/usr/local/OpenCV3/include
pkg-config can find the installed packages, but the undefined references remain. The problem is the Makefile: it puts the libraries in the wrong order in the link command. Linking the relevant libraries separately in each subdirectory makes the build succeed and produces the executables.
Result: plug in a camera, grab an arbitrary frame, and convert it to an ID photo (quality is poor against complex backgrounds).
Automatic Portrait Segmentation for Image Stylization:
Paper and code download: http://xiaoyongshen.me/webpage_portrait/index.html
Paper: Automatic Portrait Segmentation for Image Stylization, CVPR 2016 (Caffe, MATLAB, matio 1.5.11).
Download matio: https://github.com/tbeu/matio#21-dependencies (installation failed). Error:
/media/orbbec/工作啊!!/PROJECT/Image_Matting/Automatic_Portrait_Segmentation/caffe-portraitseg/include/caffe/util/cudnn.hpp(60): error: identifier "cudnnTensor4dDescriptor_t" is undefined
15 errors detected in the compilation of "/tmp/tmpxft_00003587_00000000-7_conv_layer.cpp1.ii".
/media/orbbec/工作啊!!/PROJECT/Image_Matting/Automatic_Portrait_Segmentation/caffe-portraitseg/build/src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_conv_layer.cu.o
src/caffe/CMakeFiles/caffe.dir/build.make:552: recipe for target ‘src/caffe/CMakeFiles/cuda_compile.dir/layers/cuda_compile_generated_conv_layer.cu.o’ failed
On the cuDNN version problem: https://github.com/BVLC/caffe/issues/1792
cuDNN download link: https://developer.nvidia.com/rdp/cudnn-archive (version mismatch; version 1 should be downloaded)
TensorFlow version: https://github.com/PetroWu/AutoPortraitMatting
Research: Image_matting_based_on_alpha_value:
Paper homepage: http://www.cs.unc.edu/~lguan/Research.files/Research.htm#IM (MATLAB)
Paper: Li Guan, "Algorithms of Object Extraction in Digital Images based on Alpha value", Zhejiang University, Hangzhou, Jun. 2004.
README:This is a GUI demo for four image matting algorithms.
Four algorithms are in four separate .m files and are easy to extract for specific use.
Usage: You need to have MATLAB 6.0 or above. (I tested the code in 6.0 and 7.0.) Just run "Matting.m" and the help window in the GUI will lead you step by step.
Running the GUI fails with: Error while evaluating UIControl Callback
Deep Automatic Portrait Matting: by the same author as Automatic Portrait Segmentation for Image Stylization:
http://xiaoyongshen.me/webpages/webpage_automatting/
deep auto = auto portrait +softmax
A summary of image matting methods:
Existing matting methods can be categorized as propagation-based or sampling-based.
Propagation-based methods treat the problem as interpolating the unknown alpha values from the known regions.
The interpolation can be done by solving an affinity matrix (Poisson matting; random walks for interactive alpha-matting; the closed-form solution; new appearance models; fast matting using large-kernel matting Laplacian matrices),
optimizing Markov Random Fields [18] (an iterative optimization approach for unified image segmentation and matting),
or by computing geodesic distance [2].
These methods mainly rely on the image's continuity to estimate the alpha matte, and do not explicitly account for the foreground and background colors. They have shown success in many cases, but may fail when the foreground has long and thin structures or holes. Their performance can be improved when combined with sampling-based methods.
Sampling-based methods first estimate the foreground and background colors and then compute the alpha matte.
Earlier methods like Ruzon and Tomasi's work [alpha estimation] and Bayesian Matting [6] fit a parametric model to the color distributions, but they are less valid when the image does not satisfy the model.
Recent sampling-based methods [robust matting, improving color-model, shared matting] are mostly non-parametric: they pick color samples from the known regions to estimate the unknown alpha values. These methods perform well on the condition that the true foreground and background colors are in the sample set. However, the true foreground/background colors are not always covered, because these methods only collect samples near each unknown pixel, and the number of samples is rather limited.