To explain simple face detection, I wrote five posts, each covering one technique in the face detection process. When you combine all five steps, you will be able to detect faces in pictures or video streams.
The steps described in these posts are:
- Filter out skin color
- Reduce noise
- Mark the different blobs
- Create the boxes that will mark the faces
- Select the boxes with the right proportions and draw
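The five steps above can be sketched as one pipeline. This is only an outline: the skin filter is a condensed version of the `GetBlobsByYCbCr` method from this post, and the commented-out method names for parts 2-5 are placeholders, not code from the series.

```csharp
using System;

// Sketch of the five-step pipeline from this series. Only the skin
// filter is real here (condensed from this post); the commented-out
// method names are placeholders for the techniques in parts 2-5.
public static class FaceDetectionPipeline
{
    public static void Detect(int[] pixels, int width, int height)
    {
        FilterSkin(pixels, width, height);          // part 1: this post
        // ReduceNoise(pixels, width, height);      // part 2: reduce noise
        // var labels = LabelBlobs(pixels, ...);    // part 3: mark blobs
        // var boxes = CreateBoxes(labels, ...);    // part 4: bounding boxes
        // DrawBoxes(SelectByProportion(boxes));    // part 5: proportions + draw
    }

    // Condensed skin filter: white where the YCbCr value is in the skin range.
    public static void FilterSkin(int[] pixels, int width, int height)
    {
        for (int i = 0; i < pixels.Length; i++)
        {
            byte r = (byte)((pixels[i] >> 16) & 0xFF);
            byte g = (byte)((pixels[i] >> 8) & 0xFF);
            byte b = (byte)(pixels[i] & 0xFF);
            float y  = 0.299f * r + 0.587f * g + 0.114f * b;
            float cb = 128f - 0.169f * r - 0.332f * g + 0.500f * b;
            float cr = 128f + 0.500f * r - 0.419f * g - 0.081f * b;
            bool skin = y > 80 && y < 240 && cb > 90 && cb < 125
                        && cr > 135 && cr < 180;
            pixels[i] = skin ? unchecked((int)0xFFFFFFFF) : 0;
        }
    }
}
```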
This post teaches you how to filter out the colors in an image that are most likely skin. Detecting skin is a difficult subject, because skin color varies a lot from person to person and with the amount and color of the light. The best way to compensate for this is to apply a gamma correction first, but since we keep this simple and plain, that will not be discussed in these posts. To detect skin color, we will use the YCbCr color space. YCbCr consists of a luminance component (Y), which is the grayscale image, and two chroma components (Cb and Cr), which describe how far the color deviates from grayscale toward blue and red. In YCbCr it is a lot easier to determine which values are most likely skin, not only because chroma red lies close to skin tones, but also because sensible minimum and maximum values are easier to pick.
On the internet there are many different values that claim to be the best. Of course, they depend on the lighting, the hardware, and the ethnicity of the person whose skin should be detected. The values I picked were selected for the demo pictures I used and should work for most pictures taken with a Nokia Lumia in daylight. These values are:
Y: 80-240 Cb: 90-125 Cr: 135-180
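To see these ranges in action, here is a quick worked example that converts a single RGB value to YCbCr and tests it against the ranges. The sample color (200, 150, 120) is an arbitrary skin-like tone chosen for illustration, not a value from the post.

```csharp
using System;

// Worked example: convert one RGB value to YCbCr and test it against
// the skin ranges above. (200, 150, 120) is an arbitrary skin-like tone.
class YCbCrExample
{
    static void Main()
    {
        byte r = 200, g = 150, b = 120;

        float y  = 0.299f * r + 0.587f * g + 0.114f * b;         // ≈ 161.5
        float cb = 128f - 0.169f * r - 0.332f * g + 0.500f * b;  // ≈ 104.4
        float cr = 128f + 0.500f * r - 0.419f * g - 0.081f * b;  // ≈ 155.4

        // All three components fall inside the skin ranges, so this
        // pixel would be marked white by the filter.
        bool isSkin = y > 80 && y < 240 && cb > 90 && cb < 125
                      && cr > 135 && cr < 180;
        Console.WriteLine($"Y={y:F1} Cb={cb:F1} Cr={cr:F1} skin={isSkin}");
    }
}
```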
When we take this to code, it is nothing more than checking whether each pixel's color values fall between those bounds. If so, we make the pixel white; otherwise we make it black.
/// <summary>
/// Creates blobs in the picture based on YCbCr color values
/// </summary>
/// <param name="pixels">The pixels</param>
/// <param name="width">The width</param>
/// <param name="height">The height</param>
/// <param name="yMin">The minimum Y value a pixel should have</param>
/// <param name="yMax">The maximum Y value a pixel should have</param>
/// <param name="cbMin">The minimum Cb value a pixel should have</param>
/// <param name="cbMax">The maximum Cb value a pixel should have</param>
/// <param name="crMin">The minimum Cr value a pixel should have</param>
/// <param name="crMax">The maximum Cr value a pixel should have</param>
/// <exception cref="ArgumentException">Will occur when the width and height do not match the length of the pixel array</exception>
/// <exception cref="ArgumentNullException">Will occur when one of the arguments is null</exception>
public static void GetBlobsByYCbCr(int[] pixels, int width, int height,
    int yMin = 80, int yMax = 240,
    int cbMin = 90, int cbMax = 125,
    int crMin = 135, int crMax = 180)
{
    if (pixels == null)
        throw new ArgumentNullException("pixels");
    if (width * height != pixels.Length)
        throw new ArgumentException("The pixel array does not match width and height");

    for (int i = 0; i < pixels.Length; i++)
    {
        //byte a = (byte)(pixels[i] >> 24);
        byte r = (byte)((pixels[i] & 0x00ff0000) >> 16);
        byte g = (byte)((pixels[i] & 0x0000ff00) >> 8);
        byte b = (byte)(pixels[i] & 0x000000ff);

        // Luminance (grayscale)
        float Y = 0.299f * r + 0.587f * g + 0.114f * b;
        // Chroma blue
        float Cb = 128f - 0.169f * r - 0.332f * g + 0.500f * b;
        // Chroma red
        float Cr = 128f + 0.500f * r - 0.419f * g - 0.081f * b;

        // Clamp to the 0-255 byte range
        Y = Y < 0 ? (byte)0 : Y > 255 ? (byte)255 : (byte)Y;
        Cb = Cb < 0 ? (byte)0 : Cb > 255 ? (byte)255 : (byte)Cb;
        Cr = Cr < 0 ? (byte)0 : Cr > 255 ? (byte)255 : (byte)Cr;

        if (Y > yMin && Y < yMax &&
            Cr > crMin && Cr < crMax &&
            Cb > cbMin && Cb < cbMax)
            pixels[i] = 0xFF << 24 | 0xFF << 16 | 0xFF << 8 | 0xFF;
        else
            pixels[i] = 0;
    }
}
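The method assumes 32-bit ARGB pixels packed into an `int`, as the `WriteableBitmap` pixel array on Windows Phone provides them: alpha in the top byte, then red, green, and blue. A minimal sketch of packing and unpacking that layout (the `Pack` helper is just for illustration):

```csharp
using System;

// The pixel array holds 32-bit ARGB values: alpha in the top byte,
// then red, green, blue. Packing and unpacking with shifts and masks.
// Pack is a hypothetical helper for illustration, not part of the post.
class ArgbExample
{
    public static int Pack(byte a, byte r, byte g, byte b)
        => (a << 24) | (r << 16) | (g << 8) | b;

    static void Main()
    {
        int pixel = Pack(0xFF, 200, 150, 120);

        // Same extraction as in GetBlobsByYCbCr above
        byte r = (byte)((pixel & 0x00FF0000) >> 16);
        byte g = (byte)((pixel & 0x0000FF00) >> 8);
        byte b = (byte)(pixel & 0x000000FF);

        Console.WriteLine($"r={r} g={g} b={b}");  // r=200 g=150 b=120
    }
}
```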