Chapter 6: Image Retrieval and Search Based on Image Descriptors

boomwu · 2022-07-12



Much like the human eye and brain, OpenCV can detect the main features of an image and extract them into image descriptors.

This chapter covers how to detect image features with OpenCV and how to use those features for image matching and detection. We take a number of images, detect their main features, and use homography to determine whether one image is present inside another.

6.1 Feature detection algorithms

The most commonly used feature detection and extraction algorithms in OpenCV (a short sketch after the two lists below shows how each of them is instantiated in code):

Harris: detects corners

SIFT: detects blobs

SURF: detects blobs

FAST: detects corners

BRIEF: detects blobs

ORB: combines the oriented FAST algorithm with the rotation-aware BRIEF algorithm

Feature matching methods:

Brute-force matching

FLANN-based matching
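As a quick reference, here is a minimal sketch of how each detector, descriptor and matcher above is instantiated. It assumes an opencv-contrib-python build: SIFT, SURF and BRIEF live in the xfeatures2d module there, and SURF additionally requires the non-free modules to be enabled.

import cv2
import numpy as np

img = cv2.imread('images/chess_board.png')                  # any test image
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

harris = cv2.cornerHarris(gray, 2, 23, 0.04)                # corner response map (a function, not a class)
sift = cv2.xfeatures2d.SIFT_create()                        # DoG keypoints + SIFT descriptors
surf = cv2.xfeatures2d.SURF_create(8000)                    # Fast Hessian keypoints + SURF descriptors
fast = cv2.FastFeatureDetector_create()                     # FAST corner detector (keypoints only)
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()   # BRIEF descriptors only
orb = cv2.ORB_create()                                      # oriented FAST + rotated BRIEF

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)       # brute-force matcher (Hamming for binary descriptors)
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5),   # FLANN matcher with a kd-tree index
                              dict(checks=50))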

 

6.1.1 Defining features

A feature is a region of an image that is meaningful, distinctive and easy to recognize. Corners and high-density regions therefore make good features, while heavily repeated patterns and low-density regions do not. Edges, which separate an image into two regions, can also be considered good features, as can blobs (regions that differ greatly from their surroundings).

 

 

Detecting corner features:

import cv2
import numpy as np

# Load the image and convert it to float32 grayscale, as required by cornerHarris.
img = cv2.imread("images/chess_board.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)

# Neighborhood blockSize=2, Sobel aperture ksize=23, Harris free parameter k=0.04.
dst = cv2.cornerHarris(gray, 2, 23, 0.04)

# Mark every pixel whose corner response exceeds 1% of the maximum in green.
img[dst > 0.01 * dst.max()] = [0, 255, 0]

while True:
    cv2.imshow('corners', img)
    if cv2.waitKey(84) & 0xff == ord('q'):
        break
cv2.destroyAllWindows()

 

6.1.2 Feature extraction and description using DoG and SIFT

Using SIFT to obtain an image annotated with its keypoints:

import cv2

imgpath = r".\images\varese.jpg"
img = cv2.imread(imgpath)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# SIFT lives in xfeatures2d in opencv-contrib builds before 4.4;
# in OpenCV >= 4.4 it is also available as cv2.SIFT_create().
sift = cv2.xfeatures2d.SIFT_create()
keypoints, descriptor = sift.detectAndCompute(gray, None)

# DRAW_RICH_KEYPOINTS draws each keypoint's scale and orientation.
img = cv2.drawKeypoints(image=img, outImage=img, keypoints=keypoints,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS,
                        color=(51, 163, 236))
cv2.imshow('sift_keypoints', img)
cv2.waitKey()
cv2.destroyAllWindows()

 

6.1.3 Extracting and detecting features with the Fast Hessian algorithm and SURF

The SURF feature detector is several times faster than SIFT and builds on SIFT's ideas.

In OpenCV, SURF is a class that uses the Fast Hessian algorithm to detect keypoints and then extracts their descriptors.

import cv2

imgpath = r".\images\varese.jpg"
img = cv2.imread(imgpath)
alg = "SURF"

def fd(algorithm):
    # Simple factory that returns the requested feature detector.
    if algorithm == "SIFT":
        return cv2.xfeatures2d.SIFT_create()
    if algorithm == "SURF":
        # 8000 is the Hessian threshold: the higher it is, the fewer (and
        # stronger) keypoints are kept. SURF needs an opencv-contrib build
        # compiled with the non-free modules enabled.
        return cv2.xfeatures2d.SURF_create(float(8000))

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
fd_alg = fd(alg)
keypoints, descriptor = fd_alg.detectAndCompute(gray, None)

# flags=4 is cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS.
img = cv2.drawKeypoints(image=img, outImage=img, keypoints=keypoints,
                        flags=4, color=(51, 163, 236))
cv2.imshow('surf_keypoints', img)
cv2.waitKey()
cv2.destroyAllWindows()

 

6.1.4 ORB-based feature detection and matching

ORB is a newer algorithm intended as an alternative to SIFT and SURF.

ORB combines the FAST keypoint detection technique with the BRIEF descriptor technique.

1. FAST (Features from Accelerated Segment Test)

FAST draws a circle of 16 pixels around a candidate pixel and compares each pixel on the circle with the value of the center pixel plus (or minus) a threshold; if there is a contiguous run of circle pixels (typically at least 9 of the 16) that are all brighter or all darker than that value, the center pixel is treated as a corner.
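A minimal sketch of running FAST on its own, assuming any grayscale test image (the path below is just a placeholder):

import cv2

img = cv2.imread('images/chess_board.png', cv2.IMREAD_GRAYSCALE)  # placeholder image

# threshold: how much brighter/darker a circle pixel must be than the center;
# nonmaxSuppression discards weaker responses clustered around the same corner.
fast = cv2.FastFeatureDetector_create(threshold=20, nonmaxSuppression=True)
kp = fast.detect(img, None)          # FAST produces keypoints only, no descriptors

out = cv2.drawKeypoints(img, kp, None, color=(51, 163, 236))
cv2.imshow('fast_keypoints', out)
cv2.waitKey()
cv2.destroyAllWindows()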

2. BRIEF (Binary Robust Independent Elementary Features) is not a feature detection algorithm; it is only a descriptor.
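Because BRIEF only describes keypoints, it has to be paired with a separate detector. A minimal sketch, assuming the opencv-contrib xfeatures2d module (the STAR/CenSurE detector used here is just one common pairing, not the book's choice):

import cv2

img = cv2.imread('images/chess_board.png', cv2.IMREAD_GRAYSCALE)  # placeholder image

star = cv2.xfeatures2d.StarDetector_create()                # detector: finds the keypoints
brief = cv2.xfeatures2d.BriefDescriptorExtractor_create()   # descriptor: describes them

kp = star.detect(img, None)
kp, des = brief.compute(img, kp)     # one 32-byte binary descriptor per keypoint
print(brief.descriptorSize(), des.shape)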

3. Brute-force matching

Brute-force matching is a descriptor-matching method that compares two sets of descriptors and produces a list of matches. It is called brute-force because the algorithm involves no optimization: every feature in the first set of descriptors is compared with every feature in the second.

6.1.5 ORB feature matching

import cv2
from matplotlib import pyplot as plt

# The query logo and the scene that may contain it, both in grayscale.
img1 = cv2.imread('images/manowar_logo.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('images/manowar_single.jpg', cv2.IMREAD_GRAYSCALE)

# ORB keypoints and binary descriptors for both images.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matcher with Hamming distance (appropriate for binary
# descriptors such as ORB); crossCheck keeps only mutual best matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# Draw the 40 best matches; flags=2 hides unmatched keypoints.
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:40], None, flags=2)
plt.imshow(img3), plt.show()

6.1.6 K-Nearest Neighbor (KNN) matching (errored when run)

Only the matching step of the previous example needs to change:
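A minimal sketch of that change, reusing img1, img2, kp1, kp2, des1 and des2 from the ORB example in 6.1.5. Note that crossCheck must stay at its default of False when using knnMatch; keeping it True is a common cause of the "not enough values to unpack" error hinted at in the section title:

# k=2 returns the two best candidates for every descriptor in des1.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep a match only when it is clearly better than the
# runner-up. drawMatchesKnn expects a list of lists of DMatch objects.
good = [[m] for m, n in matches if m.distance < 0.75 * n.distance]

img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)
plt.imshow(img3), plt.show()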

6.1.7 FLANN-based matching

FLANN stands for Fast Library for Approximate Nearest Neighbors.

import cv2
from matplotlib import pyplot as plt

# The flag 0 is cv2.IMREAD_GRAYSCALE.
queryImage = cv2.imread('images/bathory_album.jpg', 0)
trainingImage = cv2.imread('images/vinyls.jpg', 0)

# SIFT keypoints and descriptors for both images.
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(queryImage, None)
kp2, des2 = sift.detectAndCompute(trainingImage, None)

# FLANN parameters: a kd-tree index with 5 trees, traversed with 50 checks
# per query. (In FLANN's enum the kd-tree index is 1; 0 selects plain
# linear search.)
FLANN_INDEX_KDTREE = 1
indexParams = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
searchParams = dict(checks=50)
flann = cv2.FlannBasedMatcher(indexParams, searchParams)
matches = flann.knnMatch(des1, des2, k=2)

# One [draw_first, draw_second] flag pair per knn match.
matchesMask = [[0, 0] for i in range(len(matches))]

# Lowe's ratio test: draw a match only if it is clearly better than the
# second-best candidate.
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]

drawParams = dict(matchColor=(0, 255, 0), singlePointColor=(255, 0, 0),
                  matchesMask=matchesMask, flags=0)
resultImage = cv2.drawMatchesKnn(queryImage, kp1, trainingImage, kp2,
                                 matches, None, **drawParams)
plt.imshow(resultImage), plt.show()

 

6.1.8 Homography matching with FLANN

Homography describes the condition under which two images can still be matched to each other when one of them is a perspective distortion of the other.

import numpy as np
import cv2
from matplotlib import pyplot as plt

# Minimum number of good matches required to estimate a homography.
MIN_MATCH_COUNT = 10

img1 = cv2.imread('images/bb.jpg', 0)
img2 = cv2.imread('images/color2_small.jpg', 0)

sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# kd-tree index (1 in FLANN's enum) with 5 trees, 50 checks per query.
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)

flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Lowe's ratio test to discard ambiguous matches.
good = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good.append(m)

if len(good) > MIN_MATCH_COUNT:
    # Coordinates of the good matches in each image, shaped for findHomography.
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Estimate the perspective transform with RANSAC (5.0 px reprojection threshold).
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchesMask = mask.ravel().tolist()

    # Project the corners of img1 into img2 and outline the detected region.
    h, w = img1.shape
    pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)
    img2 = cv2.polylines(img2, [np.int32(dst)], True, 255, 3, cv2.LINE_AA)
else:
    print("Not enough matches are found")
    matchesMask = None

draw_params = dict(matchColor=(0, 255, 0), singlePointColor=None,
                   matchesMask=matchesMask, flags=2)

img3 = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **draw_params)
plt.imshow(img3, 'gray'), plt.show()







 

 



6.1.9 Sample application: tattoo forensics

# 1. generate_description.py

import cv2
import numpy as np
from os import walk
from os.path import join

def create_descriptors(folder):
    # Collect every file name in the folder, then compute and save SIFT
    # descriptors for each image.
    files = []
    for (dirpath, dirnames, filenames) in walk(folder):
        files.extend(filenames)
    for f in files:
        save_descriptor(folder, f, cv2.xfeatures2d.SIFT_create())

def save_descriptor(folder, image_path, feature_detector):
    # Load the image in grayscale, compute its descriptors and store them
    # next to the image as a NumPy .npy file.
    img = cv2.imread(join(folder, image_path), 0)
    keypoints, descriptors = feature_detector.detectAndCompute(img, None)
    descriptor_file = image_path.replace("jpg", "npy")
    np.save(join(folder, descriptor_file), descriptors)

dir = 'anchors'
create_descriptors(dir)


 


 


# 2. scan_and_match.py

from os.path import join
from os import walk
import numpy as np
import cv2

folder = "anchors"
query = cv2.imread(join(folder, "tattoo_seed.jpg"), 0)

# Gather the saved .npy descriptor files, skipping the query's own.
files = []
descriptors = []
for (dirpath, dirnames, filenames) in walk(folder):
    files.extend(filenames)
for f in files:
    if f.endswith("npy") and f != "tattoo_seed.npy":
        descriptors.append(f)
print(descriptors)

# Descriptors of the query (seed) tattoo.
sift = cv2.xfeatures2d.SIFT_create()
query_kp, query_ds = sift.detectAndCompute(query, None)

# FLANN matcher with a kd-tree index (1 in FLANN's enum).
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

MIN_MATCH_COUNT = 10
potential_culprits = {}

print(">> Initiating picture scan...")
for d in descriptors:
    print("-------- analyzing %s --------" % d)
    matches = flann.knnMatch(query_ds, np.load(join(folder, d)), k=2)
    good = []
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            good.append(m)
    if len(good) > MIN_MATCH_COUNT:
        print("%s is a match! (%d)" % (d, len(good)))
    else:
        print("%s is not a match" % d)
    potential_culprits[d] = len(good)

# Pick the descriptor file with the highest number of good matches.
max_matches = None
potential_suspect = None
for culprit, matches in potential_culprits.items():
    if max_matches is None or matches > max_matches:
        max_matches = matches
        potential_suspect = culprit
print("Potential suspect is %s" % potential_suspect.replace("npy", "").upper())





 

 

 

 
