# Random Ferns for patch description

#### Introduction

• Let us consider the problem of object recognition in which we have to decide whether a given image patch contains the desired object or not.
• One way to achieve this is to characterize the texture around key-points across images acquired under widely varying poses and lighting conditions.

Due to its robustness to partial occlusions and its computational efficiency, recognition of image patches extracted around detected key-points is crucial for many vision problems.
• One way to achieve this is to compute a statistical model for the object/patch and then estimate whether a given patch has a high probability of being sampled from the object model.
• Let us build a mathematical representation of the patch to be classified by representing it in terms of a feature vector.
• Let $f$ be the set of features computed over the patch. The maximum a posteriori class is \begin{eqnarray} \hat{c} = \arg\max_{c_i} P(C = c_i | f)\\ P(C = c_i | f) = \frac{P(f | C=c_i) P(C=c_i)}{P(f)} \end{eqnarray} Since $P(f)$ does not depend on the class, and assuming a uniform prior $P(C = c_i)$, this reduces to \begin{eqnarray} \hat{c} = \arg\max_{c_i} P(f | C=c_i) \end{eqnarray} For real-time applications we require features that can be easily computed and encoded. The concept of Local Binary Patterns has become popular for texture analysis and patch description due to its small memory and computational requirements.
• With local binary features, each patch is described by a bit string. The value of each bit is derived by performing a binary test on the image patch.
• The tests can in principle be any binary tests that effectively encode the information present in the patch as 0 or 1.
• Typically each binary feature is determined by an intensity comparison, either between two different pixel locations, or between the image and a transformed version of the image at a specified pixel location.
• A set of binary tests constitutes the feature vector.
• However, these features are very simple, and hence a large number of them may be required to accurately describe a patch.
• If we modelled the full joint probability distribution over N features, we would need to store $2^N$ entries per class. If instead we assume independence between features (the Naive Bayes assumption), we only need to store N entries per class, but this completely ignores any correlation between features.
• We consider the randomized fern algorithm, which is a compromise between these extremes: it randomly selects groups of S features each out of the total pool of N features.
• All the features within a group are considered dependent, and a joint distribution is constructed over them.
• Each group is called a fern, and for each fern $f_k$ we can compute $P(f_k | C = c_i)$. Thus we have K random classifiers, each providing the class-conditional probability $P(f_k | C = c_i)$ of its feature vector $f_k$.
• Each fern is thus constructed from a random subset of S features.
• $f_k$ is the binary feature vector of fern k. Since each group contains S features and we maintain a joint PDF over them, each fern requires $2^S$ entries, and since there are K groups the total number of entries required is $K*2^S$.
• The pixel locations used in the binary tests are selected randomly (drawn from a normal distribution over the patch), and the grouping of features into ferns is also performed randomly.
• We approximate each fern's joint PDF with a histogram. Each fern consists of S binary features.
• Since the features are binary, the total number of bins in the joint histogram is $2^S$.
• For example, for a fern with 2 features the histogram bins are indexed as 00, 01, 10, 11. Each bin can thus be indexed by an integer value.
• During the training phase we estimate the class-conditional probabilities, which requires estimating $2^S*K$ parameters per class.

An ensemble technique is used to combine the results of the K classifiers \begin{eqnarray} \hat{c} = \arg\max_{c_i} \frac{1}{K} \sum_{k=1}^{K} P(f_k | C=c_i) \end{eqnarray}
• An important property of this modelling scheme is that it does not require us to store the training data.
• We can also perform incremental learning, since we only have to update the counts in the statistical model.
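To make the bookkeeping above concrete, here is a minimal, self-contained sketch of a fern ensemble in C++. It is illustrative, not the OpenVision implementation: the names ($FernEnsemble$, $learn$, $score$) are invented, and a Laplace prior of one count per bin is added to avoid zero probabilities, which the text above does not discuss.

```cpp
#include <vector>

// Toy semi-naive Bayes fern ensemble: K ferns of S binary features each.
// counts[c][k * numIndices + idx] holds the number of training samples of
// class c whose fern k produced the histogram bin idx.
struct FernEnsemble {
    int numFerns, numFeatures, numIndices; // numIndices = 2^numFeatures
    std::vector<std::vector<double>> counts;
    std::vector<double> classTotals;

    FernEnsemble(int K, int S, int numClasses)
        : numFerns(K), numFeatures(S), numIndices(1 << S),
          counts(numClasses, std::vector<double>(K * (1 << S), 1.0)), // Laplace prior (assumption)
          classTotals(numClasses, 0.0) {}

    // fernIndices[k] is the integer formed by the S binary tests of fern k
    void learn(const std::vector<int>& fernIndices, int c) {
        for (int k = 0; k < numFerns; k++)
            counts[c][k * numIndices + fernIndices[k]] += 1.0;
        classTotals[c] += 1.0;
    }

    // average of P(f_k | C = c) over all K ferns
    double score(const std::vector<int>& fernIndices, int c) const {
        double s = 0.0;
        double norm = classTotals[c] + numIndices; // smoothed denominator
        for (int k = 0; k < numFerns; k++)
            s += counts[c][k * numIndices + fernIndices[k]] / norm;
        return s / numFerns;
    }

    // argmax over classes of the averaged class-conditional probability
    int classify(const std::vector<int>& fernIndices) const {
        int best = 0;
        for (int c = 1; c < (int)counts.size(); c++)
            if (score(fernIndices, c) > score(fernIndices, best)) best = c;
        return best;
    }
};
```

Note that classification only touches $K$ histogram bins per class, which is what makes ferns fast at run time.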

#### Implementation Details

• The points at which the features are computed are stored in a vector of length $numFerns*2*numFeatures$, where each element is of type $Point2f$.
• As mentioned above, to maintain the joint PDF we keep a joint histogram with $2^{numFeatures}$ bins for each of the $numFerns$ groups. The data is stored in a single flat structure of type std::vector.
• Three such vectors are maintained: the positive distribution, the negative distribution and the posterior probability distribution.
• Each location in the joint PDF can be addressed by an integer, i.e. the feature vector extracted for each fern/group is of integer type.
• Whenever we learn a positive/negative sample, we increment the count of the bin indexed by the integer-valued feature. Since the feature vector has length $numFerns$, the vectors are indexed as $i*numIndices+features[i]$, where $numIndices=2^{numFeatures}$, $i$ denotes the fern and $features[i]$ is its integer feature value.
• Thus, to maintain the histogram, the bin index corresponding to the decimal equivalent of the binary features is incremented.
• The positive vector is updated upon learning a positive sample, the negative vector upon learning a negative sample, and the posterior probability vector is updated after either update; it contains the confidence/probability that the binary feature belongs to a positive patch.
• Given a binary feature vector, to compute the probability that it represents a positive sample we simply look up the posterior probability $P(F_k | C=c_i)$ of each group and average over all groups $k=1\ldots numFerns$.
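The flat-vector layout and update rule described above can be sketched as follows. The names mirror the text ($numFerns$, $numIndices$, $features[i]$) but the code is an illustrative reconstruction, not the repository source; the posterior here is taken as the simple ratio positive/(positive+negative).

```cpp
#include <vector>

// Three parallel vectors of length numFerns * numIndices, indexed as
// i * numIndices + features[i], as described in the text above.
struct FernTables {
    int numFerns, numIndices; // numIndices = 2^numFeatures
    std::vector<float> positive, negative, posteriors;

    FernTables(int ferns, int indices)
        : numFerns(ferns), numIndices(indices),
          positive(ferns * indices, 0.f),
          negative(ferns * indices, 0.f),
          posteriors(ferns * indices, 0.f) {}

    // features[i] is the integer-valued binary feature of fern i;
    // learning a sample increments one bin per fern and refreshes its posterior
    void update(const std::vector<int>& features, bool positiveSample) {
        for (int i = 0; i < numFerns; i++) {
            int idx = i * numIndices + features[i];
            if (positiveSample) positive[idx] += 1.f;
            else                negative[idx] += 1.f;
            float p = positive[idx], n = negative[idx];
            posteriors[idx] = (p + n > 0.f) ? p / (p + n) : 0.f;
        }
    }

    // average posterior over all ferns for the given feature vector
    float measure(const std::vector<int>& features) const {
        float sum = 0.f;
        for (int i = 0; i < numFerns; i++)
            sum += posteriors[i * numIndices + features[i]];
        return sum / numFerns;
    }
};
```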

#### Code

• The code can be found in the git repo https://github.com/pi19404/OpenVision/ in the $ImgFeatures/randomferns.cpp$ and $ImgFeatures/randomferns.hpp$ files.
• Some important functions are also provided below.
• The class $RandomFerns$ is the main class of the file.

# Uniform LBP Features and Spatial Histogram Computation

#### Introduction

In an earlier article we computed an LBP image. Each pixel in this image can still take $2^8$ possible values. The idea of uniform LBP is to perform a lossy encoding that eliminates some irrelevant data and retains the useful information, which takes on 58 possible values. In this article we will look at uniform LBP and the computation of a spatial histogram of LBP, which can be used as a feature for image description.
• Uniform LBP patterns are those which have at most 2 contiguous regions (one of 0's and one of 1's) in the circular bit string, while non-uniform LBP patterns have more.
• Thus we need a mapping which assigns each of the $2^8$ possible codes to one of the 58 encoded uniform LBP values.
• All the non-uniform codes are assigned to a single additional value.
• The uniform LBPs can be viewed as encoding corners, lines and ramp gradients.
• The non-uniform LBPs are assumed to be irrelevant and can be ignored.
• A uniform binary pattern can be identified by performing a circular bit-wise traversal and checking that the number of bit transitions is at most 2.
• This is done by first circularly shifting the given code right/left and performing an XOR operation with the original code.
• Since we only need to consider 8-bit numbers, when working with integers we must perform the circular shift and mask off everything above the 8th bit.
• The number of bit transitions is then simply the number of set bits in the result.
• The code for this can be written as follows.
• The naive method of counting bits iterates over every bit position; since we are using 8-bit words, it takes 8 iterations of the loop.
• Brian Kernighan's method instead performs only as many iterations as there are set bits.
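A possible implementation of the uniformity test, using the circular-shift/XOR trick and Kernighan's bit count (a sketch; the function names are illustrative):

```cpp
#include <cstdint>

// 8-bit circular left shift: the MSB wraps around to the LSB
static inline uint8_t circularShiftLeft(uint8_t code) {
    return (uint8_t)((code << 1) | (code >> 7));
}

// Brian Kernighan's method: v &= v - 1 clears the lowest set bit, so the
// loop runs once per set bit rather than once per bit position.
static inline int countSetBits(uint8_t v) {
    int count = 0;
    while (v) { v &= (uint8_t)(v - 1); count++; }
    return count;
}

// A code is uniform if the circular bit string has at most 2 transitions.
// XOR-ing the code with its circular shift sets a bit at every transition.
static inline bool isUniform(uint8_t code) {
    uint8_t transitions = (uint8_t)(code ^ circularShiftLeft(code));
    return countSetBits(transitions) <= 2;
}
```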
• The result can be precomputed for every possible input code to determine whether an LBP code is uniform or not.
• The next task is to map the LBP codes to one of the 58 uniform codes.
• The encoding will be done along the rows as per the figure below.
• Since we know all the possible input codes beforehand, we can prepare a lookup table to check whether a code is uniform or not.
• If it is a uniform code, we need to map it to one of the 58 possible codes.
• To do this we iterate over all numbers from $0$ to $2^8-1$, check whether each is uniform, and assign it one of the 58 possible codes.
• Thus we modify the existing LBP image code to emit only uniform LBP codes in the destination LBP image by performing a simple lookup operation.
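Putting the pieces together, a lookup table mapping all 256 codes to 58 uniform labels plus one shared non-uniform label might be built as below. The numbering convention here (uniform codes labelled in increasing order of their value) is an assumption, since the figure mentioned above is not reproduced here.

```cpp
#include <cstdint>
#include <vector>

// Build a 256-entry LUT: each uniform 8-bit code gets its own label 0..57;
// every non-uniform code maps to the single shared label 58.
std::vector<int> buildUniformLUT() {
    std::vector<int> lut(256, 58); // default: the shared non-uniform bin
    int next = 0;
    for (int code = 0; code < 256; code++) {
        uint8_t c = (uint8_t)code;
        uint8_t shifted = (uint8_t)((c << 1) | (c >> 7)); // circular left shift
        uint8_t transitions = (uint8_t)(c ^ shifted);     // one set bit per transition
        int bits = 0;
        while (transitions) { transitions &= (uint8_t)(transitions - 1); bits++; }
        if (bits <= 2) lut[code] = next++; // uniform: assign the next label
    }
    return lut;
}
```

With the table in hand, encoding an LBP image is a single array lookup per pixel.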
• The next task is to compute a spatial histogram; the histogram may be computed over the entire image or by dividing the image into grids.
• The LBP spatial histogram can be used as a texture descriptor.
• The LBP image is a gradient image in some sense; it encodes information about different types of gradients:
• The LBP pattern can be used to identify isolated corners or flat regions (all 0's or all 1's).
• The LBP pattern can be used to identify a corner (a continuous run of 0's or 1's of length 5-8, and its rotated versions).
• The LBP pattern can be used to identify an edge (a continuous run of 0's or 1's of length 4, and its rotated versions).
• The LBP pattern can be used to identify a horizontal or vertical edge (a vertical/horizontal run of 0's and 1's).
• The LBP pattern can be used to identify a line end (10000000 and its rotated versions).
• An LBP pattern with 2 continuous 1's can be considered a horizontal or vertical line.
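As a sketch of the spatial histogram computation, the following divides a uniform-coded LBP image into a grid of cells and concatenates the per-cell histograms. The 59-bin convention (58 uniform labels plus 1 non-uniform bin), the row-major image layout and the per-cell normalisation are assumptions of this sketch.

```cpp
#include <vector>

// Concatenated per-cell histograms of a uniform-coded LBP image.
// lbpImage holds one code in [0, numBins) per pixel, row-major.
std::vector<float> spatialHistogram(const std::vector<int>& lbpImage,
                                    int width, int height,
                                    int gridX, int gridY, int numBins = 59) {
    std::vector<float> hist(gridX * gridY * numBins, 0.f);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // map the pixel to its grid cell, then to that cell's histogram bin
            int cell = (y * gridY / height) * gridX + (x * gridX / width);
            hist[cell * numBins + lbpImage[y * width + x]] += 1.f;
        }
    }
    // normalise each cell by its pixel count so cells are comparable
    int cellPixels = (width / gridX) * (height / gridY);
    for (float& h : hist) h /= (float)cellPixels;
    return hist;
}
```

Setting gridX = gridY = 1 recovers a single histogram over the whole image.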

#### Code

• The code can be found in the git repo https://github.com/pi19404/OpenVision/ in the ImgFeatures/lbpfeatures.cpp and ImgFeatures/lbpfeatures.hpp files.

#### References

• Description of interest regions with local binary patterns, by Heikkilä, Pietikäinen & Schmid.
• Rotation invariant image description with local binary pattern histogram Fourier features, by Ahonen, Matas, He & Pietikäinen.
• Multiresolution gray-scale and rotation invariant texture classification with Local Binary Patterns, by Ojala, Pietikäinen & Mäenpää.

# Integral Images for computing Mean and Variance

#### Introduction

• The integral image is a quick and effective way of calculating the sum of pixel values in a given image, or in a rectangular subset of it.
• In this article we will assume that the concept of the integral image is known, and proceed to see how it can be used to compute the mean and variance of an image patch.
• Given the integral representation $S$ of an image, the sum of the pixel values in a rectangular region R with vertices A, B, C, D is given by $I = S(A) + S(D) - S(B) - S(C)$.
• Dividing this quantity by the number of pixels N gives us the mean value of the pixels in the region: $\mu = \frac{I}{N}$.
• Let us also consider the squared integral image. To obtain this, all the pixel values in the image are squared before the integral image is computed.
• Consider the variance over a rectangular region: \begin{eqnarray} \sigma^2 = \frac{1}{N}\sum_i (x_i-\mu)^2 = \frac{1}{N}\sum_i x_i^2 - \frac{2\mu}{N}\sum_i x_i + \mu^2 = \frac{1}{N}\sum_i x_i^2 - \mu^2 \end{eqnarray} The summation $\sum_i x_i^2$ can be obtained from the squared integral image, and $\mu$ from the integral image as above.
• This enables us to compute the variance of a rectangular patch of the image.
• A similar method can be employed to compute the variance term in the denominator of the normalized cross-correlation formula.
• The above code can be found in the git repo https://github.com/pi19404/OpenVision/tree/master/ in the ImgFeatures/integralImage.cpp and ImgFeatures/integralImage.hpp files.
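As a minimal sketch of the technique (illustrative, not the repository code), the following builds the integral and squared-integral tables and computes the mean and variance of any rectangle in constant time. The extra zero row and column of padding simplify the four-corner arithmetic.

```cpp
#include <vector>

// Integral and squared-integral images over a row-major double image.
// Tables are (height+1) x (width+1) with a zero border at row 0 / column 0.
struct IntegralImages {
    int w, h;
    std::vector<double> sum, sqSum;

    IntegralImages(const std::vector<double>& img, int width, int height)
        : w(width), h(height),
          sum((width + 1) * (height + 1), 0.0),
          sqSum((width + 1) * (height + 1), 0.0) {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double v = img[y * w + x];
                int i = (y + 1) * (w + 1) + (x + 1);
                // running-sum recurrence: left + up - upper-left + pixel
                sum[i]   = v     + sum[i - 1]   + sum[i - (w + 1)]   - sum[i - (w + 1) - 1];
                sqSum[i] = v * v + sqSum[i - 1] + sqSum[i - (w + 1)] - sqSum[i - (w + 1) - 1];
            }
    }

    // S(A) + S(D) - S(B) - S(C) over the rectangle [x, x+rw) x [y, y+rh)
    static double rectSum(const std::vector<double>& t, int w1,
                          int x, int y, int rw, int rh) {
        return t[(y + rh) * w1 + (x + rw)] + t[y * w1 + x]
             - t[y * w1 + (x + rw)] - t[(y + rh) * w1 + x];
    }

    double mean(int x, int y, int rw, int rh) const {
        return rectSum(sum, w + 1, x, y, rw, rh) / (rw * rh);
    }

    // variance = E[x^2] - mu^2, both terms read from the tables in O(1)
    double variance(int x, int y, int rw, int rh) const {
        double m = mean(x, y, rw, rh);
        return rectSum(sqSum, w + 1, x, y, rw, rh) / (rw * rh) - m * m;
    }
};
```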