[OpenCV] Gaussian Mixture Background Modeling

OpenCV implements two versions of the Gaussian Mixture-based Background/Foreground Segmentation Algorithm [1-2]. The calling interface is straightforward and the results are quite good.

BackgroundSubtractorMOG usage example

#include <opencv2/opencv.hpp>
using namespace cv;

int main(){
	VideoCapture video("1.avi");
	Mat frame, mask;
	int frameNum = 0;
	// history=20, nmixtures=10, backgroundRatio=0.5, noiseSigma=0 (use the default noise sigma)
	BackgroundSubtractorMOG bgSubtractor(20, 10, 0.5, 0);
	while(true){
		video >> frame;
		if(frame.empty()) break;          // stop at the end of the video
		++frameNum;
		bgSubtractor(frame, mask, 0.001); // update the model and compute the foreground mask
		imshow("mask", mask);
		waitKey(10);
	}
	return 0;
}

You can construct the object with either the default constructor or the parameterized one:

BackgroundSubtractorMOG::BackgroundSubtractorMOG()
BackgroundSubtractorMOG::BackgroundSubtractorMOG(int history, int nmixtures, 
double backgroundRatio, double noiseSigma=0)
Here history is the number of history frames used, nmixtures is the number of Gaussian components in the mixture, backgroundRatio is the background ratio, and noiseSigma is the noise strength (standard deviation).

The only calling interface is the overloaded operator():

void BackgroundSubtractorMOG::operator()(InputArray image, OutputArray fgmask, double learningRate=0)
Here image is the current frame, fgmask is the output foreground mask, and learningRate is the learning rate of the background model.
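
To see what learningRate does in practice, here is a minimal sketch (assuming the OpenCV 2.4-style API and a hypothetical input file test.avi; all parameter values are illustrative) that runs two models side by side with different rates. The faster one adapts sooner, so objects that stop moving are absorbed into its background more quickly.

#include <opencv2/opencv.hpp>
using namespace cv;

int main(){
	VideoCapture cap("test.avi");                    // hypothetical input file
	if(!cap.isOpened()) return -1;
	BackgroundSubtractorMOG mogSlow(200, 5, 0.7, 0); // illustrative parameters
	BackgroundSubtractorMOG mogFast(200, 5, 0.7, 0);
	Mat frame, maskSlow, maskFast;
	while(cap.read(frame)){
		mogSlow(frame, maskSlow, 0.001);             // small learningRate: slow adaptation
		mogFast(frame, maskFast, 0.05);              // larger learningRate: fast adaptation
		imshow("slow model", maskSlow);
		imshow("fast model", maskFast);
		if(waitKey(30) >= 0) break;                  // any key stops the demo
	}
	return 0;
}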

Below is a screenshot of foreground/background detection with BackgroundSubtractorMOG.


BackgroundSubtractorMOG2 usage example

#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main(){
	VideoCapture video("1.avi");
	Mat frame, mask;
	int frameNum = 0;
	// history=20, varThreshold=16, shadow detection enabled
	BackgroundSubtractorMOG2 bgSubtractor(20, 16, true);
	while(true){
		video >> frame;
		if(frame.empty()) break;          // stop at the end of the video
		++frameNum;
		bgSubtractor(frame, mask, 0.001); // update the model and compute the foreground mask
		cout << frameNum << endl;
		imshow("mask", mask);
		waitKey(10);
	}
	return 0;
}

Likewise, both a default constructor and a parameterized constructor are available:

BackgroundSubtractorMOG2::BackgroundSubtractorMOG2()
BackgroundSubtractorMOG2::BackgroundSubtractorMOG2(int history, 
float varThreshold, bool bShadowDetection=true )
history has the same meaning as above; varThreshold is the threshold on the squared Mahalanobis distance used to decide whether a pixel is well described by the background model (this value does not affect the background update rate); bShadowDetection toggles shadow detection (when it is enabled, shadow pixels are marked with the value 127 in the mask).

Per-frame detection is again invoked through the overloaded operator():

void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
The parameters have the same meaning as in BackgroundSubtractorMOG::operator(); the default learningRate of -1 lets the algorithm choose the learning rate automatically.
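
Because shadow pixels are written into fgmask as 127 when bShadowDetection is enabled, it is often convenient to threshold the mask and keep only the confident foreground. A minimal sketch, again assuming a hypothetical test.avi and illustrative parameter values:

#include <opencv2/opencv.hpp>
using namespace cv;

int main(){
	VideoCapture cap("test.avi");                 // hypothetical input file
	if(!cap.isOpened()) return -1;
	BackgroundSubtractorMOG2 mog2(500, 16, true); // illustrative parameters, shadows on
	Mat frame, mask, fgOnly;
	while(cap.read(frame)){
		mog2(frame, mask, -1);                    // -1: learning rate chosen automatically
		// Shadow pixels are marked 127; keep only the definite foreground (255).
		threshold(mask, fgOnly, 200, 255, THRESH_BINARY);
		imshow("mask with shadows", mask);
		imshow("foreground only", fgOnly);
		if(waitKey(30) >= 0) break;
	}
	return 0;
}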

BackgroundSubtractorMOG2 also provides a getBackgroundImage() function that returns the current background image:

void BackgroundSubtractorMOG2::getBackgroundImage(OutputArray backgroundImage)
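
Continuing the sketch above (mog2 is the same subtractor object), the current background estimate can be retrieved at any point inside the processing loop, right after the per-frame operator() call:

	// after mog2(frame, mask, -1) for the current frame:
	Mat background;
	mog2.getBackgroundImage(background);   // current per-pixel background estimate
	if(!background.empty())
		imshow("background", background);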

In addition, the OpenCV reference manual mentions that several other model-related parameters can be tuned after the object has been created. Annoyingly, OpenCV declares these members as protected and provides no accessor interface, so to change them you have to modify the source files and add accessors yourself (a minimal sketch of such an accessor follows the declaration below).

protected:
    Size frameSize;
    int frameType;
    Mat bgmodel;
    Mat bgmodelUsedModes;//keep track of number of modes per pixel
    int nframes;
    int history;
    int nmixtures;
    //! here it is the maximum allowed number of mixture components.
    //! Actual number is determined dynamically per pixel
    double varThreshold;
    // threshold on the squared Mahalanobis distance to decide if it is well described
    // by the background model or not. Related to Cthr from the paper.
    // This does not influence the update of the background. A typical value could be 4 sigma
    // and that is varThreshold=4*4=16; Corresponds to Tb in the paper.
    /////////////////////////
    // less important parameters - things you might change but be careful
    ////////////////////////
    float backgroundRatio;
    // corresponds to fTB=1-cf from the paper
    // TB - threshold when the component becomes significant enough to be included into
    // the background model. It is the TB=1-cf from the paper. So I use cf=0.1 => TB=0.9.
    // For alpha=0.001 it means that the mode should exist for approximately 105 frames before
    // it is considered foreground
    // float noiseSigma;
    float varThresholdGen;
    //corresponds to Tg - threshold on the squared Mahalan. dist. to decide
    //when a sample is close to the existing components. If it is not close
    //to any a new component will be generated. I use 3 sigma => Tg=3*3=9.
    //Smaller Tg leads to more generated components and a higher Tg might
    //lead to a small number of components but they can grow too large
    float fVarInit;
    float fVarMin;
    float fVarMax;
    //initial variance  for the newly generated components.
    //It will influence the speed of adaptation. A good guess should be made.
    //A simple way is to estimate the typical standard deviation from the images.
    //I used here 10 as a reasonable value
    // min and max can be used to further control the variance
    float fCT;//CT - complexity reduction prior
    //this is related to the number of samples needed to accept that a component
    //actually exists. We use CT=0.05 of all the samples. By setting CT=0 you get
    //the standard Stauffer&Grimson algorithm (maybe not exact but very similar)
    //shadow detection parameters
    bool bShadowDetection;//default 1 - do shadow detection
    unsigned char nShadowDetection;//do shadow detection - insert this value as the detection result - 127 default value
    float fTau;
    // Tau - shadow threshold. The shadow is detected if the pixel is darker
    //version of the background. Tau is a threshold on how much darker the shadow can be.
    //Tau= 0.5 means that if pixel is more than 2 times darker then it is not shadow
    //See: Prati, Mikic, Trivedi, Cucchiara, "Detecting Moving Shadows...", IEEE PAMI, 2003.
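
If you do patch the sources as described above, one minimal and purely illustrative option is to add inline accessors to the class declaration (the method names below are hypothetical, not part of the official API) and rebuild the video module:

public:
	// Hypothetical accessors added inside class BackgroundSubtractorMOG2:
	void  setShadowThreshold(float tau) { fTau = tau; }      // fTau: shadow threshold (Tau)
	float getShadowThreshold() const    { return fTau; }
	void  setVarInit(float v)           { fVarInit = v; }    // initial variance for new components
	float getVarInit() const            { return fVarInit; }
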
Below are the foreground and background detected with BackgroundSubtractorMOG2:

References:

[1] KaewTraKulPong, Pakorn, and Richard Bowden. "An improved adaptive background mixture model for real-time tracking with shadow detection." Video-Based Surveillance Systems. Springer US, 2002. 135-144.
[2] Zivkovic, Zoran. "Improved adaptive Gaussian mixture model for background subtraction." Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on. Vol. 2. IEEE, 2004.