How to set up an animated image processing pipeline...?
It returns a 'maybe face' image segment, and the algorithm then determines whether a face is present by locating the eyes. These eye locations are then used to create a more accurate bounding box for the face. Before any processing is done, the 'maybe face' region must be normalized: a convolution with a Gaussian kernel (1), followed by a contrast-stretching operation (2).
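The two normalization steps above can be sketched as follows. This is a minimal illustration using plain NumPy, assuming the 'maybe face' segment is a 2-D grayscale float array; the function names and kernel parameters are hypothetical, not from the original pipeline.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian, normalized to sum to 1 (applied separably below)
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def normalize_face(img):
    """img: 2-D float array in [0, 255] (grayscale 'maybe face' patch)."""
    k = gaussian_kernel()
    # (1) Gaussian smoothing: separable convolution along rows, then columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    # (2) Contrast stretching: map [min, max] linearly onto [0, 255]
    lo, hi = blurred.min(), blurred.max()
    return (blurred - lo) / (hi - lo + 1e-9) * 255.0
```

The separable convolution is a common optimization: two 1-D passes give the same result as a full 2-D Gaussian at lower cost. The small epsilon guards against a flat (zero-range) patch.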
After normalization, two new filters are applied. One is a Laplacian filter that finds sharp contrast changes within a circular region (detecting the red-eye effect, sharp eye reflections, and pupils when clearly visible); the other is a simple binary threshold that keeps only pixels whose intensity falls in the lowest 5% of the image histogram (detecting the dark pupils). These two filters find many points on an average image, so the results need to be consolidated and rapidly filtered.
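A rough sketch of the two filters, again assuming a 2-D grayscale float array; the 3x3 Laplacian stencil, the response threshold, and the function name are illustrative assumptions, not the exact filters used by the algorithm.

```python
import numpy as np

# Standard 3x3 discrete Laplacian stencil (assumption: the pipeline may
# use a larger, circular kernel instead)
LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def candidate_eye_points(img, percentile=5.0, lap_thresh=50.0):
    """img: 2-D float grayscale array. Returns (reflection_mask, pupil_mask)."""
    # Laplacian response: large magnitude at sharp contrast changes,
    # e.g. specular eye reflections or red-eye spots
    lap = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            lap += LAPLACIAN[dy + 1, dx + 1] * np.roll(np.roll(img, dy, 0), dx, 1)
    reflection_mask = np.abs(lap) > lap_thresh

    # Binary threshold: keep only the darkest 5% of pixels (pupil candidates)
    cutoff = np.percentile(img, percentile)
    pupil_mask = img <= cutoff
    return reflection_mask, pupil_mask
```

Both masks typically fire on many spurious points (hair, shadows, nostrils), which is why the text notes that the results must then be consolidated and filtered, for instance by clustering nearby hits and rejecting pairs with implausible geometry.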